I linked to this interview with Robert Sapolsky a couple of weeks ago, but after glancing through it again, I felt it worth highlighting two paragraphs (both for your interest and so I can find them again). First, on the evolutionary purpose of the teenage brain:
What I’ve been thinking might actually be going on is that adolescence is something unavoidable that emerges not because it’s so cool and adaptive, but because the adaptive thing is to wait a long, long time before you have fully wired up your frontal cortex. Why might that be the case? Alright, so we’re born with our genome, the combination of your mother and father’s genes that wind up in that first fertilized egg, and that’s it. That’s your genetic legacy. Every cell in your body is destined to have that exact same genome. That turns out not to be true in all sorts of interesting ways, but what that also means is that when you’re thinking about what genes have to do with the brain and behavior, by definition, critically, if the frontal cortex is the last part of the brain to develop, it’s the part of the brain least shaped by genes, and most sculpted by the environment and experience. And I think basically the only way you can have a species that is as complex and socially resilient and socially context dependent and all those amazing things we do, the only way you can pull that off is to have a frontal cortex whose development just bears the imprint of everything you experienced along the way—in effect, that’s been freed to whatever extent the genes are deterministic, which is not very. I think ironically what the evolution of the frontal cortex has been about is genetic evolution to free it as much as possible from the straitjacket of genes.
Second, on reductionism in neurobiology:
[R]eductionism doesn’t actually tell you a whole lot about how this stuff works. I mean reductionism is perfect for, like, telling you why your clock is broken. What you do is you break it down to its component parts. You find the part that’s got a tooth missing from the gear. I guess there’s not a clock on earth that works this way anymore, but your Renaissance clock. You fix the missing tooth, you put it back, you add the pieces back together and it works. The way to understand a complicated system is to understand its component parts. The way in which that steps away from the ideology is that the component parts are the genes and the neurotransmitters and the hormones and the early experience. Okay, so that’s a more sophisticated version of reductionism. You’ve got to be reductive about lots of different domains. But nonetheless, even that more multidisciplinary version of reductionism isn’t going to work because that’s not how complex systems work, and humans are a complex system. You’ve got these emergent non-linear chaotic properties. What’s another way of saying that? If you knew every individual’s genome and exactly which gene was active at which point, are you going to be able to predict who’s going to do what next? Absolutely not. If you added in knowing the levels of every hormone in their body at that point, if you added in… it doesn’t work that way. The reductionism breaks down in the same way that, with a cloud that isn’t producing enough rain during a drought or something, the solution isn’t to study half the cloud and then get a research grant to study a quarter of the cloud and smaller and smaller pieces and finally understand the reductive basis of the non-rain and add it all up together. That’s not how clouds work when they don’t rain. Humans are more like clouds than they are like clocks. We’re not reductive in that way, which is the case for any complex system.
And if you haven’t read the full interview, do it.
Links this week:
- Academic urban legends spreading through sloppy citation. In PhD land, I have constantly found myself following citation chains that don’t lead to what they claim.
- Some progress in the replication wars. I’ll post about some of the specific examples over coming months.
- The evolutionary emergence of property rights (ungated working paper). HT: Ben Southwood
- Attribute substitution in charities – the evaluability bias. HT: Alex Gyani
- Peter Turchin reviews Richard Wrangham’s Catching Fire: How Cooking Made Us Human.
- Arnold Kling on Nicholas Wade. Comments and pointer from here.
- Polygenic modelling in cattle breeding. Humans next.
- An interesting debate on Cato Unbound this month – the libertarian case for a basic income guarantee.
Go to any behavioural science conference, event or presentation, and there is a high probability you will hear about “the jam study”. Last week’s excellent MSiX was no exception, with at least three references that I can recall. The story is wonderfully simple and I have, at times, been mildly sympathetic to the idea. However, it is time for this story to be retired, or at least heavily qualified in light of the research that has occurred in the intervening years.
As a start, what is the jam study? In 2000, Mark Lepper and Sheena Iyengar published their findings (ungated pdf) about the response of consumers to displays of jam in an upmarket grocery store. Their paper also contained similar experiments involving choice of chocolate and essay questions, but those experiments have not gained the same reputation.
On two Saturdays, they set up tasting displays of either six or 24 jars of jam. Consumers could taste as many jams as they wished and, if they approached the tasting table, also received a $1 discount coupon to buy the jam. The large display of 24 jams did a better job of attracting initial interest: 60 per cent of the people who passed it stopped, compared with 40 per cent for the six-jam display. But only three per cent of those who stopped at the 24-jam display purchased any jam, compared with almost 30 per cent of those who stopped at the six-jam display.
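Combining the stopping and purchase rates gives a per-passer-by conversion rate, which makes the reversal starker. A minimal sketch using the figures quoted above:

```python
# Figures from the jam study as quoted above:
# 24-jam display: 60% of passers-by stopped, 3% of stoppers bought.
# six-jam display: 40% of passers-by stopped, ~30% of stoppers bought.

def conversion_rate(stop_rate, buy_rate_given_stop):
    """Share of all passers-by who end up buying."""
    return stop_rate * buy_rate_given_stop

large = conversion_rate(0.60, 0.03)  # 24-jam display
small = conversion_rate(0.40, 0.30)  # six-jam display

print(f"24 jams: {large:.1%} of passers-by buy")   # 1.8%
print(f"6 jams:  {small:.1%} of passers-by buy")   # 12.0%
```

So although the large display attracted half again as many browsers, the small display converted roughly six to seven times as many passers-by into buyers.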
This result has been one of the centrepieces of the argument that more choice is not necessarily good. The larger display seemed to reduce consumer motivation to buy the product. The theories around this concept and the associated idea that more choice does not make us happy are often labelled the choice overload hypothesis or the paradox of choice.
Fast-forward 10 years to another paper, this one by Benjamin Scheibehenne, Rainer Greifeneder and Peter Todd. They surveyed the literature on the choice overload hypothesis – there is plenty. And across the basket of studies, evidence of choice overload does not emerge so clearly. In some cases, choice increases purchases. In others it reduces them. Scheibehenne and friends determined that the mean effect size of changing the number of choices across the studies was effectively zero.
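To see how a near-zero mean effect can emerge from studies pointing in both directions, here is a sketch of a sample-size-weighted average over entirely made-up effect sizes (illustrative numbers only, not Scheibehenne et al.’s data):

```python
# Hypothetical study-level effect sizes: positive means more choice
# increased purchases, negative means choice overload. Illustrative only.
studies = [
    ("A", 0.45, 80),    # (label, effect size d, sample size n)
    ("B", -0.50, 90),
    ("C", 0.10, 120),
    ("D", -0.08, 150),
]

total_n = sum(n for _, _, n in studies)
mean_d = sum(d * n for _, d, n in studies) / total_n

print(f"weighted mean effect: {mean_d:+.3f}")  # -0.020
```

Individually sizeable effects in opposite directions wash out in the average, which is why a mean near zero is consistent with choice set size mattering a great deal in particular settings.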
More pointedly, the reviewed studies included a few attempts to replicate the jam study results. An experiment using jam in an upscale German supermarket found no effect. Other experiments found no effect of choice set size using chocolates or jelly beans. There were small differences in study design between these and the original jam study (as original authors are always quick to point out when replications fail), but if studies are so sensitive to study design and so hard to replicate, it seems foolhardy to extrapolate the results of the original study too far.
That is not to say that there is not something interesting going on here. Scheibehenne and friends suggest that there may be a set of restrictive conditions under which choice overload occurs. These conditions might involve the complexity (and not the size) of the choice, the lack of dominant alternatives, assortment of options, time pressure or the distribution of product quality. These considerations are not issues of the size of the choice itself but the way the choice is undertaken. And since the jam study appears tough to replicate, these conditions might be particularly narrow. Still, the common refrain of making it easy for customers – as recommended for dealing with choice overload issues – holds for most of them. But they suggest different and more subtle solutions than simply reducing choice.
There are a lot of interesting studies floating around on choice overload – from decisions about turning off life-support (ungated pdf) to retirement savings (ungated pdf) – and the message is not always the same. But reading through them makes it clear that the jam study is just the tip of an iceberg and not necessarily representative of what lies beneath.
Finally, when Tim Harford wrote about these studies several years ago, he pointed out another often neglected argument about the importance of choice. It is only because we have choice that we are offered any good products at all, with companies incentivised to compete for us as customers. Even if choice has negative consequences, a world without choice might be worse.
Links this week:
- Some gold from Robert Sapolsky – what is going on in teenage brains? Plus a bonus interview.
- The latest issue of Nautilus (the source of the Sapolsky material) contains a lot of other good material – fruit and vegetables trying to kill you and chaos in the brain among them. I recommend scanning the table of contents.
- The changing dynamics of marriage inequality.
- Andrea Castillo with an introduction to the neoreaction (including some “homebrewed evolutionists”).
- Geoffrey Miller has teamed up with Tucker Max and is offering dating advice informed by evolution.
Yesterday was day one of the Marketing Science Ideas Xchange (MSiX). As I mentioned in a previous post, it has been an interesting opportunity to see behavioural science outside of the academic and economics environments I am used to. There were a lot of interesting presentations, and a lot of good books were mentioned along the way.
First, a couple of blasts from the past: Claude Hopkins’s Scientific Advertising (if the one dollar Amazon price is prohibitive, it doesn’t take much searching to find some free pdf versions) and Vance Packard’s The Hidden Persuaders. The idea of injecting more science into advertising is not new.
The usual behavioural science texts got plenty of mentions, particularly Daniel Kahneman’s Thinking, Fast and Slow. System 1 and System 2 thinking were regular frames for the speakers (and in today’s workshops). Richard Thaler and Cass Sunstein’s Nudge and Dan Ariely’s Predictably Irrational also got the expected mentions.
The first three speakers had an evolutionary thread in parts of their talks (nice to see), so naturally a few books I have plugged before came up. Rory Sutherland put up his reading list from Verge, which includes Paul Seabright’s The Company of Strangers, Jonathan Haidt’s The Righteous Mind, Robert Kurzban’s Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind and Robert Frank’s The Darwin Economy. All highly recommended, as is the rest of Sutherland’s reading pile, although I haven’t read Stuart Sutherland’s Irrationality: the enemy within, which I suppose will get added to my list.
Another book I had not come across before was Iain McGilchrist’s The Master and His Emissary: The Divided Brain and the Making of the Western World, which looks interesting.
Uri Gneezy and John List got a solid mention from the last presenter, Liam Smith from Monash University’s BehaviourWorks Australia. Gneezy and List’s new book The Why Axis: Hidden Motives and the Undiscovered Economics of Everyday Life is also sitting on my reading pile.
Outside of the presentations, a few other interesting books came up in conversation. They included Jim Manzi’s Uncontrolled: The Surprising Payoff of Trial-and-Error for Business, Politics, and Society, which should be on your reading list. One of my favourite books, Christopher Buckley’s Thank You for Smoking, was also mentioned, which was unsurprising considering the potential sin industry clients of many of the conference attendees – and if you do read it, rip out the last couple of pages. While Barry Schwartz’s book The Paradox of Choice was not specifically mentioned, the phrase was regularly used.
Finally, Adam Ferrier, the conference curator, has a book out – The Advertising Effect: How to Change Behaviour. After organising a conference myself earlier in the year, I feel for him – many rewards but so much effort.
Since I first came across it, I have been a fan of Gerd Gigerenzer’s work. But I have always been slightly perplexed by the effort he expends framing his work in opposition to behavioural science and “nudges”. Most behavioural science aficionados who are aware of Gigerenzer’s work are fans of it, and you can appreciate behavioural science and Gigerenzer without suffering from two conflicting ideas in your mind.
In a recent LSE lecture about his new book Risk Savvy: How to Make Good Decisions (which sits unread in my reading pile), Gigerenzer again has a few swipes at Daniel Kahneman and friends. The blurb for the podcast gives a taste. A set of coercive government interventions are listed, none of which are nudges, and it is suggested that we need risk savvy citizens who won’t be scared into surrendering their freedom. Slotted between these is the suggestion that some people see a need for “nudging”.
Gigerenzer does provide a different angle to the behavioural science agenda. His work has provided strong evidence for the accuracy of heuristics and shown that many of our so-called irrational decisions make sense from the perspective of the environment where they were designed (evolved). But his work doesn’t undermine the fact that many decisions are made outside of the environment where they originated – those fast, frugal and well-shaped heuristics have not stopped us getting fat, spending huge amounts on unused gym memberships and failing to save for retirement. Gigerenzer’s work provides depth to the behavioural analysis, rather than undermining it, and points to a richer set of potential solutions.
When Gigerenzer starts throwing around solutions, the difference between his approach and nudging becomes even hazier. In the LSE lecture he suggests that doctors be trained to present risks to patients in a certain way. That doesn’t seem much different from a typical nudge, although here it is the way the previously statistically illiterate doctors present the information that nudges patient behaviour.
One other interesting point in the lecture is when Gigerenzer speaks about the failure of breast cancer screening to cut deaths, and the presentation of results in deceptive ways designed to increase screening rates. He proposes presenting information as natural frequencies, which would likely reduce the rate of screening. But what of screening that doesn’t have deleterious side-effects from false positives on the same scale as breast cancer screening? Should it be presented as Gigerenzer proposes, or in alternative ways more likely to induce screening? There has been no shortage of work in behavioural science designed to increase screening rates, particularly given the other biases and barriers that need to be overcome. I prefer Gigerenzer’s approach, but can see the arguments that would be mounted for the other side.
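Gigerenzer’s natural-frequency format is easy to sketch. The screening numbers below (1 per cent prevalence, 90 per cent sensitivity, 9 per cent false-positive rate) are rough illustrative figures of my own, not those from the lecture:

```python
def natural_frequencies(prevalence, sensitivity, false_pos_rate, population=1000):
    """Re-express test statistics as whole-number counts in a reference population."""
    sick = round(population * prevalence)
    healthy = population - sick
    true_pos = round(sick * sensitivity)
    false_pos = round(healthy * false_pos_rate)
    return sick, true_pos, false_pos

# Illustrative screening numbers (assumed for the example, not from the lecture):
# 1% prevalence, 90% sensitivity, 9% false-positive rate.
sick, tp, fp = natural_frequencies(0.01, 0.90, 0.09)
print(f"Of 1,000 people screened, {sick} have the disease; {tp} of them test positive.")
print(f"Of the {1000 - sick} without it, {fp} also test positive.")
print(f"A positive result therefore means a {tp}/{tp + fp} ({tp / (tp + fp):.0%}) chance of disease.")
```

Presented this way, it is obvious that most positive results are false positives, which is the point Gigerenzer argues gets obscured when the same information is given as conditional probabilities.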
Otherwise, Gigerenzer’s speech channels Nassim Taleb on financial markets, before hinting at some of the very interesting work he is doing with the Bank of England. It’s generally worth a listen.