Rosenzweig’s Left Brain, Right Stuff: How Leaders Make Winning Decisions

I was triggered to write my recent posts on overconfidence and the illusion of control – pointing to doubts about the pervasiveness of these “biases” – by Phil Rosenzweig’s entertaining Left Brain, Right Stuff: How Leaders Make Winning Decisions. Part of the book’s value comes from Rosenzweig’s examination of classic behavioural findings, as those recent posts show. But his larger point concerns the application of behavioural findings to real-world decision making.

Rosenzweig’s starting point is that laboratory experiments have greatly added to our understanding of how people make decisions. By carefully controlling the setup, researchers can isolate individual factors affecting decisions and tease out where decision making might go wrong (replication crisis notwithstanding). One result of this body of work is the famous catalogue of heuristics and biases – the ways we depart from the model of the perfectly rational decision maker.

Some of this work has been applied with good results to areas such as public policy, finance and the forecasting of political and economic events. Predictable errors in how people make decisions have been demonstrated, and in some cases substantial changes in behaviour have been generated by changing the decision environment.

But as Rosenzweig argues – and this is the punchline of the book – this research does not easily translate to many areas of decision making. Laboratory experiments typically involve choices between options that cannot be influenced, involve absolute payoffs, provide quick feedback, and are made by individuals rather than leaders. Change any of these elements, and crude applications of the laboratory findings to the outside world can go wrong. In particular, we should be careful not to treat decisions about outcomes we can influence, made in competition with others, as though they were mere predictions of an event.

Let’s take the first element: whether outcomes can be influenced. Professional golfers believe they sink around 70 per cent of their six-foot putts, compared to an actual success rate closer to 55 per cent. This is typically labelled overconfidence and treated as an error (although see my recent post on overconfidence).

Now, is this irrational? Not necessarily, suggests Rosenzweig, as the holder of the belief can influence the outcome. Thinking you are better at sinking six-foot putts than you actually are increases the chance that you will sink them.
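To make that logic concrete, here is a minimal sketch – my own illustration, nothing from the book – that assumes a simple link in which the actual make probability is a skill floor plus a boost rising with the believed make probability. The floor, the boost and the linear form are all made-up parameters.

```python
import random

# Toy self-efficacy model (assumed for illustration, not from the book):
# actual make probability = an assumed skill floor + a boost rising with belief.
SKILL_FLOOR = 0.45    # assumed
BELIEF_BOOST = 0.20   # assumed

def make_probability(belief: float) -> float:
    """Assumed link between the believed and the actual make rate."""
    return min(1.0, SKILL_FLOOR + BELIEF_BOOST * belief)

def observed_make_rate(belief: float, putts: int = 100_000) -> float:
    """Simulate many putts at the make probability implied by a belief."""
    p = make_probability(belief)
    return sum(random.random() < p for _ in range(putts)) / putts

random.seed(1)
for belief in (0.55, 0.70):   # a "calibrated" belief vs the pros' inflated one
    print(f"believes {belief:.0%} -> makes {observed_make_rate(belief):.1%}")
```

Under this assumed link, the golfer holding the inflated 70 per cent belief sinks more putts than the one holding the “accurate” 55 per cent belief, so labelling the gap an error misses what the belief is doing.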

In one experiment, participants putted toward a hole that was made to look bigger or smaller using lighting to create an optical illusion. Putting from a little under six feet, the (amateur) participants sank almost twice as many putts when aiming at the larger-looking hole. They were more likely to sink the putts when the task appeared easier.

This points to the question of whether we want to ward off biases. Debiasing might be good practice if you can’t influence the outcome, but if it’s up to you to make something happen, that “bias” might be an important part of making it happen.

More broadly, there is evidence that positive illusions allow us to take action, cope with adversity and persevere in the face of competition. Positive people have more friends and stronger social bonds, suggesting a “healthy” person is not necessarily someone who sees the world exactly as it is.

Confidence may also be required to lead people. If confidence is what inspires others to perform, a seemingly inflated level of it may be necessary rather than excessive. As Rosenzweig notes, getting people to believe they can perform is the supreme act of leadership.

A similar story about the application of laboratory findings concerns the difference between relative and absolute payoffs. When payoffs are relative – when only finishing ahead of rivals counts – playing it safe may guarantee failure. The person who comes out ahead will almost always be the one who takes the bigger risk, meaning an exaggerated level of confidence may be essential in some arenas – although, as Rosenzweig argues, the “excessive” risk may be calculated.
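A quick Monte Carlo sketch shows why. Everything below – the field size, the score distributions, the winner-take-all rule – is my own assumption for illustration rather than anything from the book: one high-variance player with a lower average score competes against nine low-variance players, and only first place pays.

```python
import random

# Winner-take-all contest (illustrative assumptions): the risky player has a
# LOWER expected score (95 vs 100) but much higher variance (sd 30 vs 5).
random.seed(0)
FIELD = 10        # one risky player and nine safe players
TRIALS = 100_000

risky_wins = 0
for _ in range(TRIALS):
    risky_score = random.gauss(95, 30)                 # low mean, high variance
    safe_scores = [random.gauss(100, 5) for _ in range(FIELD - 1)]
    if risky_score > max(safe_scores):                 # only first place pays
        risky_wins += 1

print(f"fair share: {1 / FIELD:.0%}; risky player wins {risky_wins / TRIALS:.1%}")
```

With these assumed numbers the risky player takes first place roughly a third of the time – well over a ten per cent fair share – while each safe player wins well under theirs. When only relative position pays, variance matters more than the mean.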

One section of the book focuses on people starting new ventures. Given the high failure rates – around 50 per cent of new businesses gone within five years, depending on the study – entrepreneurs are commonly described as overconfident or naive. Sometimes their “reckless ambition” and “blind faith” are praised as necessary for the broader economic benefits that flow from new business formation. (We rarely hear people lamenting that we aren’t starting enough businesses.)

Rosenzweig points to evidence that calls this view into question – from entrepreneurs behaving as persistent tinkerers rather than bold, arrogant visionaries, to the constrained losses they incur in the event of failure. While there are certainly some wildly overconfident entrepreneurs, the closure of a business should not always be read as a failure, nor overconfidence as its cause. There are many types of error – in calculation, memory, motor skills, tactics and so on – and even good decisions sometimes turn out badly. Besides, as many as 92 per cent of firms close with no debt, and 25 per cent close with a profit.

Rosenzweig also notes evidence that, at least in experimental settings, entrepreneurs enter markets at less than optimal rates. As noted in my recent post on overconfidence, people tend to overplace themselves relative to the rest of the population on easy tasks (most drivers believe they are above average), but underplace themselves on hard tasks. Don Moore and colleagues found a similar effect in experiments on firm entry: excess entry when the industry appeared an easy one in which to compete, but too little entry when it appeared difficult. Hubristic entrepreneurs did not flood into every market, and myopia about one’s own and competing firms’ abilities appears a better explanation for the pattern than overconfidence, as the sketch below illustrates.
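This is a stylised rendering of that myopic entry rule – my own construction, not Moore’s actual experimental design. Each potential entrant enters if their own absolute score clears a threshold, ignoring that an easy task is just as easy for the competition; the player count, market capacity, score distribution and threshold are all assumed.

```python
import random

# Myopic entry (stylised, illustrative assumptions throughout): entrants judge
# success by their own ABSOLUTE score, neglecting that task ease shifts
# everyone's scores alike. Only the top CAPACITY entrants are rewarded.
random.seed(2)
N_PLAYERS = 100
CAPACITY = 20      # assumed number of entrants the market can support
THRESHOLD = 0.5    # assumed entry rule: enter if your own score clears this

def entrants(task_ease: float) -> int:
    """Count who enters when personal skill is shifted by task ease."""
    scores = [random.random() * 0.5 + task_ease for _ in range(N_PLAYERS)]
    return sum(score > THRESHOLD for score in scores)

for label, ease in (("easy task", 0.35), ("hard task", 0.05)):
    print(f"{label}: {entrants(ease)} enter (market supports {CAPACITY})")
```

Because success is relative, entry near the market’s capacity would be right in both conditions; the myopic rule instead floods the easy market (around 70 entrants here) and starves the hard one (around 10) – the pattern of excess and insufficient entry Moore and colleagues report.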

There is the occasional part of the book that falls flat for me – the section on the limitations of mathematical models and some of the storytelling around massive one-off decisions – but it is generally a fine book.

* Listen to Russ Roberts interview Rosenzweig on EconTalk for a summary of some of the themes of the book.

One comment

  1. The proud ignorance of social scientists, humanities professionals/economists and B-school folks about behaviour overall – let alone primate, let alone human, behaviour – is astounding but predictable. Magical thinking always sells best – especially in academia, B-schools and to policy makers.

    There are no medically, biologically or physiologically coherent models or theories of simple-organism behaviour. We don’t know what makes worms behave, “decide” or act. So to even speculate about mammal, primate and human behaviour is simply deeply dishonest – and silly.

    It does appear that all actions, across species, are “decided” in 140 ms, completely unconsciously and instinctively. Medically, this leaves no time for the elaborate pop-culture beliefs and ideas about human exceptionalism and “executive” anything. There is nothing evidence-based or biologically sound about any of the ideas in this post.

    Still, silly, smarmy ideas always sell best. Finally, how does one write any professional piece without citations!? Duh.
