A week of links

Links this week:

  1. Side effect warnings increase sales by building trust. Similar effects for disclosing conflicts of interest (ungated pdf).
  2. Absorbing information on paper versus Kindle. Even without digital search, I often find it easier to find favourite passages in the physical form.
  3. Humans aren’t the only ones fighting wars.
  4. I pointed out a couple of weeks ago that Geoffrey Miller had joined forces with Tucker Max to give sex and dating advice. Their reading list is very good, even if you’re not after any advice. Their suggestions as to which movies might provide insight are quite amusing.
  5. Twin research.

Shaping the brain and humans as complex systems

I linked to this interview with Robert Sapolsky a couple of weeks ago, but after glancing through it again, I felt it worth highlighting two paragraphs (both for your interest and so I can find them again). First, on the evolutionary purpose of the teenage brain:

What I’ve been thinking might actually be going on is that adolescence is something unavoidable that emerges not because it’s so cool and adaptive, but because the adaptive thing is wait a long, long time before you have fully wired up your frontal cortex. Why might that be the case? Alright, so we’re born with our genome, the combination of your mother and father’s genes, that wind up in that first fertilized egg and that’s it. That’s your genetic legacy. Every cell in your body is destined to have that exact same genome. That turns out not to be true in all sorts of interesting ways, but what that also means is that when you’re thinking about what genes have to do with the brain behavior, by definition critically, if the frontal cortex is the last part of the brain to develop it’s the part of the brain least shaped by genes, and most sculpted by the environment and experience. And I think basically the only way you can have a species that is as complex and socially resilient and socially context dependent and all those amazing things we do, the only way you can pull that off is to have a frontal cortex whose development just bears the imprint of everything you experienced along the way—in effect, that’s been freed from whatever extent the genes are deterministic, which is not very. I think ironically what the evolution of the frontal cortex has been about is genetic evolution to free it as much as possible from the straitjacket of genes.

Second, on reductionism in neurobiology:

[R]eductionism doesn’t actually tell you a whole lot about how this stuff works. I mean reductionism is perfect for like telling you why your clock is broken. What you do is you break it down to its component parts. You find the part that’s got a tooth missing from the gear. I guess there’s not a clock on earth that works this way anymore, but your Renaissance clock. You fix the missing tooth, you put it back, you add the pieces back together and it works. The way to understand a complicated system is to understand its component parts. The way in which that steps away from the ideology is the component parts of the genes and the nerve transmitters and the hormones and the early experience. Okay, so that’s a more sophisticated version of reductionism. You got to be reductive about lots of different domains. But nonetheless, even that more multidisciplinary version of reductionism isn’t going to work because that’s not how complex systems work and humans are a complex system. You got these emergent non-linear chaotic properties. What’s that another way of saying? If you knew every individual’s genome and exactly which gene was active at which point, are you going to be able to predict who’s going to do what next? Absolutely not. If you added in knowing the levels of every hormone in their body at that point, if you added in… it doesn’t work that way. The reductionism breaks down because the reductionism breaks down in the same way that like a cloud that isn’t producing enough rain during a drought or something, the solution isn’t to study half the cloud and then get a research grant to study a quarter of the cloud and smaller, smaller pieces and finally understand the reductive basis of the non-rain and add it up together. That’s not how clouds work when they don’t rain. Humans are more like clouds than they are like clocks. We’re not reductive in that way, which is the case for any complex system.

And if you haven’t read the full interview, do it.

A week of links

Links this week:

  1. Academic urban legends spreading through sloppy citation. In PhD land, I have constantly found myself following citation chains that don’t lead to what they claim.
  2. Some progress in the replication wars. I’ll post about some of the specific examples over coming months.
  3. The evolutionary emergence of property rights (ungated working paper). HT: Ben Southwood
  4. Attribute substitution in charities – the evaluability bias. HT: Alex Gyani
  5. Peter Turchin reviews Richard Wrangham’s Catching Fire: How Cooking Made Us Human.
  6. Arnold Kling on Nicholas Wade. Comments and pointer from here.
  7. Polygenic modelling in cattle breeding. Humans next.
  8. An interesting debate on Cato Unbound this month – the libertarian case for a basic income guarantee.

Not the jam study again

Go to any behavioural science conference, event or presentation, and there is a high probability you will hear about “the jam study”. Last week’s excellent MSiX was no exception, with at least three references that I can recall. The story is wonderfully simple and I have, at times, been mildly sympathetic to the idea. However, it is time for this story to be retired, or at least heavily qualified in light of the research conducted in the intervening years.

As a start, what is the jam study? In 2000, Mark Lepper and Sheena Iyengar published their findings (ungated pdf) about the response of consumers to displays of jam in an upmarket grocery store. Their paper also contained similar experiments involving choice of chocolate and essay questions, but those experiments have not gained the same reputation.

On two Saturdays, they set up tasting displays of either six or 24 jars of jam. Consumers could taste as many jams as they wished, and anyone who approached the tasting table also received a $1 discount coupon to buy the jam. The large display of 24 jams did a better job of attracting initial interest: 60 per cent of the people who passed it stopped, compared with 40 per cent at the six-jam display. But only three per cent of those who stopped at the 24-jam display purchased any jam, compared with almost 30 per cent of those who stopped at the six-jam display.
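The headline figures above can be combined into a single comparison: the share of all passers-by who ended up buying. A quick sketch of that arithmetic, using only the reported percentages (the paper's raw counts are not reproduced here):

```python
def purchases_per_passerby(stop_rate, purchase_rate):
    """Share of all passers-by who stop at the display and then buy."""
    return stop_rate * purchase_rate

# 24-jam display: 60% stop, 3% of stoppers buy
large_display = purchases_per_passerby(0.60, 0.03)

# six-jam display: 40% stop, ~30% of stoppers buy
small_display = purchases_per_passerby(0.40, 0.30)

print(f"24-jam display: {large_display:.1%} of passers-by bought")  # 1.8%
print(f"six-jam display: {small_display:.1%} of passers-by bought")  # 12.0%
```

So despite attracting more initial interest, the large display converted well under a fifth as many passers-by into buyers.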

This result has been one of the centrepieces of the argument that more choice is not necessarily good. The larger display seemed to reduce consumer motivation to buy the product. The theories around this concept and the associated idea that more choice does not make us happy are often labelled the choice overload hypothesis or the paradox of choice.

Fast-forward 10 years to another paper, this one by Benjamin Scheibehenne, Rainer Greifeneder and Peter Todd. They surveyed the literature on the choice overload hypothesis – and there is plenty of it. Across that basket of studies, evidence of choice overload does not emerge so clearly. In some cases, more choice increases purchases; in others it reduces them. Scheibehenne and friends determined that the mean effect size of changing the number of choices across the studies was effectively zero.
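The meta-analytic point is worth spelling out: individual studies can show sizeable effects in either direction while the mean effect across studies sits near zero. A minimal sketch, using hypothetical effect sizes (Cohen's d) purely for illustration – these are not figures from the Scheibehenne et al. review:

```python
from statistics import mean, stdev

# Hypothetical per-study effect sizes: positive = more choice helped sales,
# negative = more choice hurt sales. Individually sizeable, jointly a wash.
effect_sizes = [0.45, -0.38, 0.21, -0.30, 0.05, -0.02]

print(f"mean effect across studies: {mean(effect_sizes):+.3f}")
print(f"spread across studies (sd): {stdev(effect_sizes):.3f}")
```

A near-zero mean with a wide spread is exactly the pattern that suggests moderating conditions rather than no effect at all.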

More pointedly, the reviewed studies included a few attempts to replicate the jam study results. An experiment using jam in an upscale German supermarket found no effect. Other experiments found no effect of choice size using chocolates or jelly beans. There were small differences in study design between these and the original jam study (as original authors are always quick to point out when replications fail), but if studies are so sensitive to study design and hard to replicate, it seems foolhardy to extrapolate the results of the original study too far.

That is not to say that there is nothing interesting going on here. Scheibehenne and friends suggest that there may be a restrictive set of conditions under which choice overload occurs. These conditions might involve the complexity (rather than the size) of the choice, the lack of a dominant alternative, the assortment of options, time pressure or the distribution of product quality – features of how the choice is presented rather than of its size. And since the jam study appears tough to replicate, these conditions might be particularly narrow. Still, the common refrain of making things easy for customers – the standard recommendation for dealing with choice overload – holds under most of these conditions, although they point to different and more subtle solutions than simply reducing choice.

There are a lot of interesting studies floating around on choice overload – from decisions about turning off life-support (ungated pdf) to retirement savings (ungated pdf) – and the message is not always the same. But reading through them makes it clear that the jam study is just the tip of an iceberg and not necessarily representative of what lies beneath.

Finally, when Tim Harford wrote about these studies several years ago, he pointed out another often neglected argument about the importance of choice. It is only because we have choice that we are offered any good products at all, with companies incentivised to compete for us as customers. Even if choice has negative consequences, a world without choice might be worse.

A week of links

Links this week:

  1. Some gold from Robert Sapolsky – what is going on in teenage brains? Plus a bonus interview.
  2. The latest issue of Nautilus (the source of the Sapolsky material) contains a lot of other good material – fruit and vegetables trying to kill you and chaos in the brain among them. I recommend scanning the table of contents.
  3. The changing dynamics of marriage inequality.
  4. Andrea Castillo with an introduction to the neoreaction (including some “homebrewed evolutionists”).
  5. Geoffrey Miller has teamed up with Tucker Max and is offering dating advice informed by evolution.

Our visual system predicts the future

I am reading John Coates’s thus far excellent The Hour Between Dog and Wolf: How Risk Taking Transforms Us, Body and Mind. There are many highlights and interesting pieces, the below being one of them.

First, we do not see in real-time:

When light hits our retina, the photons must be translated into a chemical signal, and then into an electrical signal that can be carried along nerve fibers. The electrical signal must then travel to the very back of the brain, to an area called the visual cortex, and then project forward again, along two separate pathways, one processing the identity of the objects we see, the “what” stream, as some researchers call it, and the other processing the location and motion of the objects, the “where” stream. These streams must then combine to form a unified image, and only then does this stream emerge into conscious awareness. The whole process is a surprisingly slow one, taking … up to one tenth of a second. Such a delay, though brief, leaves us constantly one step behind events.

So how does our body deal with this problem? How could you catch a ball or dodge a projectile if your vision is behind time?

[T]he brain’s visual circuits have devised an ingenious way of helping us. The brain anticipates the actual location of the object, and moves the visual image we end up seeing to this hypothetical new location. In other words, your visual system fast-forwards what you see.

Very cool concept, but how would you show this?

Neuroscientists … have recorded the visual fast-forwarding by means of an experiment investigating what is called the “flash-lag effect.” In this experiment a person is shown an object, say a blue circle, with another circle inside it, a yellow one. The small yellow circle flashes on and off, so what you see is a blue circle with a yellow circle blinking inside it. Then the blue circle with the yellow one inside starts moving around your computer screen. What you should see is a moving blue circle with a blinking yellow one inside it. But you do not. Instead you see a blue circle moving around the screen with a blinking yellow circle trailing about a quarter of an inch behind it. What is going on is this: while the blue circle is moving, your brain advances the image to its anticipated actual location, given the one-tenth-of-a-second time lag between viewing it and being aware of it. But the yellow circle, blinking on and off, cannot be anticipated, so it is not advanced. It thus appears to be left behind by the fast-forwarded blue circle.

A quick scan of the Wikipedia page on the flash-lag effect suggests there are a few competing explanations, but it’s an interesting idea all the same. It would explain that feeling of disbelief when a batter swings at and misses a ball that moves unexpectedly in the air. They would have seen it in precisely the place they swung.
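The batting example can be checked with back-of-the-envelope arithmetic: with a one-tenth-of-a-second processing lag, how far does a ball travel between hitting the retina and reaching awareness? The delivery speed below is an assumed illustrative figure (roughly a fast bowler or a hard-throwing pitcher), not a number from Coates's book:

```python
LAG_SECONDS = 0.1      # visual processing delay quoted by Coates
BALL_SPEED_KMH = 140   # assumed illustrative delivery speed

# Convert km/h to m/s, then multiply by the lag to get distance travelled
ball_speed_ms = BALL_SPEED_KMH * 1000 / 3600
distance = ball_speed_ms * LAG_SECONDS

print(f"{ball_speed_ms:.1f} m/s -> {distance:.1f} m travelled during the lag")
```

Nearly four metres of travel inside the lag – without the visual system's fast-forwarding, hitting the ball at all would be remarkable.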

The below video provides a visual illustration.