Tuesday, December 20, 2011

The limits of mechanistic understanding

Jonah Lehrer's Wired article "Trials and Errors: Why Science Is Failing Us" uses the story of a failed drug, torcetrapib, to illustrate the difficulty of understanding complex systems. It begins with a critique of mechanistic reductionism.

The story of torcetrapib is a tale of mistaken causation. Pfizer was operating on the assumption that raising levels of HDL cholesterol and lowering LDL would lead to a predictable outcome: Improved cardiovascular health. Less arterial plaque. Cleaner pipes. But that didn’t happen.
[...]
Pfizer invested more than $1 billion in the development of the drug and $90 million to expand the factory that would manufacture the compound. Because scientists understood the individual steps of the cholesterol pathway at such a precise level, they assumed they also understood how it worked as a whole.

This assumption—that understanding a system’s constituent parts means we also understand the causes within the system—is not limited to the pharmaceutical industry or even to biology. It defines modern science. In general, we believe that the so-called problem of causation can be cured by more information, by our ceaseless accumulation of facts. Scientists refer to this process as reductionism. By breaking down a process, we can see how everything fits together; the complex mystery is distilled into a list of ingredients.
After discussing the problems involved in establishing causation, the article argues that science of the past few decades has pragmatically sidestepped these problems by relying on statistics, substituting the establishment of correlation for the establishment of causality.

But here’s the bad news: The reliance on correlations has entered an age of diminishing returns. At least two major factors contribute to this trend. First, all of the easy causes have been found, which means that scientists are now forced to search for ever-subtler correlations, mining that mountain of facts for the tiniest of associations. Is that a new cause? Or just a statistical mistake? The line is getting finer; science is getting harder. Second—and this is the biggy—searching for correlations is a terrible way of dealing with the primary subject of much modern research: those complex networks at the center of life. While correlations help us track the relationship between independent measurements, such as the link between smoking and cancer, they are much less effective at making sense of systems in which the variables cannot be isolated. Such situations require that we understand every interaction before we can reliably understand any of them. Given the byzantine nature of biology, this can often be a daunting hurdle, requiring that researchers map not only the complete cholesterol pathway but also the ways in which it is plugged into other pathways. (The neglect of these secondary and even tertiary interactions begins to explain the failure of torcetrapib, which had unintended effects on blood pressure. It also helps explain the success of Lipitor, which seems to have a secondary effect of reducing inflammation.) Unfortunately, we often shrug off this dizzying intricacy, searching instead for the simplest of correlations. It’s the cognitive equivalent of bringing a knife to a gunfight.
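The "statistical mistake" worry in that passage is easy to demonstrate. Here is a minimal Python sketch (the sample size and variable count are made up for illustration): correlate an outcome against hundreds of pure-noise variables, and at the conventional p < 0.05 threshold roughly 25 of the 500 will look "significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_subjects = 100   # sample size (illustrative)
n_variables = 500  # candidate "causes" mined from the data (illustrative)

# Outcome and candidate predictors are all pure noise:
# there is no true association anywhere in this data set.
outcome = rng.normal(size=n_subjects)
candidates = rng.normal(size=(n_subjects, n_variables))

# Correlate every candidate with the outcome and count the "significant" ones.
false_positives = 0
for j in range(n_variables):
    r, p = stats.pearsonr(candidates[:, j], outcome)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_variables} pure-noise variables "
      f"look 'significant' at p < 0.05")
# Expect roughly 0.05 * 500 = 25 spurious hits.
```

This is the classic multiple-comparisons problem: the more associations you mine, the more spurious "causes" you find.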
The piece ends with a paragraph that links back to an earlier discussion of the role of perception in establishing causation and hints at the importance of distinguishing between the known and the unknown.
And yet, we must never forget that our causal beliefs are defined by their limitations. For too long, we’ve pretended that the old problem of causality can be cured by our shiny new knowledge. If only we devote more resources to research or dissect the system at a more fundamental level or search for ever more subtle correlations, we can discover how it all works. But a cause is not a fact, and it never will be; the things we can see will always be bracketed by what we cannot. And this is why, even when we know everything about everything, we’ll still be telling stories about why it happened. It’s mystery all the way down.

Wednesday, December 14, 2011

Economics and shifting stability states

BBC News has an interesting collection of economic graphs from 2011 put together by top economists. My personal favorite is shown below, along with its caption.

"For a long time the perception was that the creation of the euro meant sovereign risk was effectively the same across all countries. That of course proved to be wrong. The Lehman's crisis and financial meltdown that followed affected the deficits and debt levels of different countries in different ways. Interestingly it is much the same countries now with very high yields as it was pre-euro, suggesting little has changed fundamentally in a decade." VICKY PRYCE, SENIOR MANAGING DIRECTOR FTI

[Graph: euro-area sovereign bond yields, pre-euro through 2011]

This seems like a classic example of shifting stability states, with interesting implications for managing socio-ecological systems, if you think of the adoption of the euro as the creation of a meso-level institutional structure (larger than the individual participating states, but not encompassing the entire global economy). Conceived that way, the new institutional structure temporarily managed to equalize sovereign risk, but a distant disturbance in the larger system (the Lehman bankruptcy) undid that equilibrium and shifted control of the system back to the higher (global) level. A toy illustration of the idea follows below.
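For readers who haven't met the ecology jargon, "shifting stability states" can be made concrete with a toy dynamical system. The Python sketch below is purely illustrative (an arbitrary double-well potential; nothing here is calibrated to the euro case): the system sits in one basin of attraction indefinitely, until a single large enough shock tips it into the other basin, where it then stays.

```python
# A minimal sketch of shifting stability states (an illustration, not a
# model of the eurozone): a one-dimensional system with the double-well
# potential V(x) = x**4/4 - x**2/2 has two stable states, x = -1 and x = +1.

def drift(x):
    """dx/dt = -V'(x) = x - x**3 for the double-well potential."""
    return x - x**3

dt, steps = 0.01, 4000
x = 1.0  # start settled in one basin (think: "risk looks equal everywhere")
for t in range(steps):
    x += drift(x) * dt
    if t == 2000:
        x -= 2.5  # one large external shock (the distant disturbance)

print(f"final state: x = {x:.2f}")
# Small shocks would decay back toward +1; this one tips the system into
# the other basin, and it settles near -1 and stays there.
```

The point of the toy model is only that a system can absorb small disturbances indefinitely and still be flipped into a qualitatively different regime by one large one, which is the pattern the bond-yield graph suggests.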

Saturday, December 3, 2011

A few items of interest