Not so long ago, most mortgages were of the 30-year fixed-rate variety. Shopping was simple: find the lowest monthly payment. Now they come in countless forms. Even experts have trouble comparing them, and a low initial monthly payment can be a misleading guide to total costs (and risks). A main cause of the mortgage crisis is that borrowers did not understand the terms of their loans. Even those who tried to read the fine print felt their eyes glazing over, especially after their mortgage broker assured them that they had an amazing deal. Thaler & Sunstein conclude that regulators therefore need to help people manage complexity and resist temptation. They reject the option of requiring simplicity, on the grounds that this would stifle innovation, and instead recommend improving disclosure.
Yet growing complexity on the borrowers’ side was trivial compared with what was going on at the banks. Mortgages used to be held by the banks that initiated the loans. Now they are sliced into mortgage-backed securities, which include arcane derivative products.
--- Human frailty caused this crisis, Financial Times, 11 November 2008. Thanks to Andrew Sterling for the link.
I’ve been thinking about bounded rationality for some time; see the Hard Intangibles thread. It’s one of the fundamental challenges of managing complex adaptive systems. Like many others, I recommended disclosure (aka transparency) as a key tool for internet governance; see e.g. my Internet Governance as Forestry paper.
However, the more I think about transparency, the more skeptical I become. I’ve concluded that in finance, at least, the problem isn’t disclosure but intelligibility; see e.g. my post From transparency to intelligibility in regulating finance. While it’s true that eliminating complexity could stifle innovation, that’s a false choice; nobody’s proposing to eliminate either complexity or innovation. It’s our infatuation with novelty, as well as our greed, that got us into this problem, and we have to manage our urges in both respects.
I suspect that one can make the intelligibility argument just as well for computing & communications as for finance – though the lack of a Comms Chernobyl will make it harder to sell the idea in that industry.
Transparency/intelligibility need not be mandatory; companies should be able to choose obscurity. However, a company that chooses to keep its activities shrouded should bear the cost of increased regulatory scrutiny – and perhaps higher expectations regarding performance against public-interest mandates. Comcast need not explain exactly how it manages its network; but if it chooses obscurity, it should face tougher network neutrality expectations. Microsoft does not have to disclose its interfaces; but if it chooses obscurity, it should face a tougher antitrust test. Google need not explain how it uses DoubleClick data to improve ad click-throughs; but if it chooses obscurity, it should face tougher privacy protection requirements.
My “transparency to intelligibility” post proposed algorithmic complexity as a way to measure intelligibility. That’s not the only method. Another (prompted by the financial crisis and by teetering software stacks) is an abstraction ladder: the more steps between a derivative and its underlying asset, the higher it sits on the abstraction ladder, and the less intelligible and more risky it should be deemed to be. In computing & communications, as in finance, the abstraction ladder counts the number of rungs up from atoms. The networking stack is an example: from wires in the ground one climbs up to links, networks, sessions, and applications. On the premise that atoms are easier to observe than bits, and that piling up inscrutable and unstable combinations is easier the higher you go, services at higher layers would be subject to closer regulatory scrutiny, other things (like market concentration) being equal.
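To make these two metrics concrete, here is a rough sketch of my own (not something proposed in the earlier post, and not any regulator's actual toolkit): a compression ratio as a crude stand-in for algorithmic complexity, and a rung count over a made-up dependency map as the abstraction ladder. The product names and the BUILT_ON map below are invented purely for illustration.

```python
# A toy sketch of two intelligibility metrics (illustrative only).
# compression_ratio() uses zlib as a crude proxy for algorithmic complexity;
# ladder_height() counts rungs on a hypothetical abstraction ladder.
import zlib

def compression_ratio(description: str) -> float:
    """Crude proxy for algorithmic complexity: the less a product's
    description compresses, the less regular (and arguably less
    intelligible) it is. Returns a value roughly in (0, 1]."""
    raw = description.encode("utf-8")
    return len(zlib.compress(raw, 9)) / max(1, len(raw))

# Hypothetical dependency map: each product or layer names what it is built on;
# None marks the physical substrate or underlying asset ("atoms").
BUILT_ON = {
    "wires": None,
    "links": "wires",
    "networks": "links",
    "sessions": "networks",
    "applications": "sessions",
    "mortgage": None,
    "mortgage-backed security": "mortgage",
    "CDO": "mortgage-backed security",
    "CDO-squared": "CDO",
}

def ladder_height(item: str, built_on: dict = BUILT_ON) -> int:
    """Number of rungs between an item and the atoms it ultimately rests on."""
    rungs = 0
    while built_on[item] is not None:
        item = built_on[item]
        rungs += 1
    return rungs

if __name__ == "__main__":
    for product in ("mortgage", "CDO-squared", "wires", "applications"):
        print(f"{product}: {ladder_height(product)} rungs up from atoms")
```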
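On this toy map, a CDO-squared sits three rungs above its underlying mortgages and an application-layer service four rungs above the wires in the ground; other things being equal, a rung-counting rule would point the closer regulatory scrutiny at those upper rungs rather than at the mortgages and wires below.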