This article was first published in the Association for Business Psychology newsletter on April 20th, just after the UK general election was called, but it was written beforehand. I’ve consequently made a light edit to reflect last week’s events.
2016: The Year of the Underdog
By most objective standards, 2016 was a pretty remarkable year. In sport, we saw 2000-1 outsiders Leicester City winning the Premier League, and supposedly cursed perennial losers the Chicago Cubs winning baseball’s World Series. In culture, we unexpectedly lost a number of iconic figures, particularly in popular music. And in politics, we saw Britain vote to leave the EU and Donald Trump become President of the US – both of which were seemingly unthinkable last January.
The unique property of political events is that our view of their likelihood (and that of the bookies) is largely determined by one thing: polling. Now that UK Prime Minister Theresa May has called a snap election, media coverage will contain rolling polling updates predicting the outcome until the moment the results are confirmed. For those of us concerned with human behaviour, and how it often differs from our intentions as described to others, the consistent failure of pollsters to accurately predict recent elections is an interesting phenomenon. In addition to the Brexit poll and the US Election, the 2015 UK and Israeli General Elections and this week’s Dutch election were also badly forecast by polling. By contrast, in sport, events like Leicester’s win are the exception rather than the rule – the bookmakers always win in the long run (as I know to my cost).
But in polling, if anything, the problem seems to be getting worse, not better. President Trump has caught on to this, and has even used it to discredit his worst-ever approval ratings.
Understandably, the polling industry is concerned about dwindling credibility. As are those who have built careers on using this data.
At the Ideas42 Behavioural Summit in New York last October (just before the November presidential election), I saw prominent US polling expert Nate Silver – creator of the excellent FiveThirtyEight website – talk about the sophisticated predictive model he and his team built based on aggregated polling data. This correctly predicted the outcome in 49 of 50 states in the 2008 Presidential Election, and all 50 (plus the District of Columbia) in 2012.
Then, Silver pointed out that this model gave an increasing (but still slight) chance every day of Trump winning the electoral college, but losing the popular vote (it was about a 10-15% chance on that day, and up to 25% by election day).
FiveThirtyEight eventually predicted a 71% chance of Clinton winning, although that was a significantly lower likelihood than other forecasters gave. In October they were putting the odds of Trump winning at almost exactly the same as those of the Chicago Cubs winning their own two-horse race, which the Cubs duly did the week before the election.
I guess we should have taken that as a sign…
The Chicago Cubs’ most famous fan, Bill Murray, celebrates their World Series win – after a short, 108-year wait
So why were these predictions wrong? At the conference (and more recently in posts such as this) Silver was at pains to point out that their “model” of behaviour is exactly that. It is designed as a forecast, and has a margin of error built in – just like their baseball model. It is not a perfect predictor of actual behaviour (if such a thing could ever exist), and he pointed out that (for example) the Brexit result was well within an acceptable level of polling error in most cases. The error was in assuming that, because all the polls predicted a narrow win for ‘Remain’, the probability of that outcome was far higher than the narrow margin implied. He expressed disbelief at some of the odds being offered by bookmakers for a ‘Leave’ outcome ahead of the vote, especially given the UK’s track record of polling inaccuracies.
This applies to predicting sports events too, of course, but there you are predicting the behaviour of perhaps 18-30 players at a time. With an election, it’s several million people, so the potential for error is several orders of magnitude greater.
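Silver’s point about polling error versus win probability can be illustrated with a toy calculation (my own sketch, not FiveThirtyEight’s actual model; the 52% share and the 4-point polling error are purely illustrative numbers). If the true vote share is modelled as normally distributed around the polled share, a narrow polled lead still leaves the trailing side a far-from-negligible chance:

```python
from statistics import NormalDist

def win_probability(polled_share: float, polling_error_sd: float) -> float:
    """Probability that a side's true vote share exceeds 50%, modelling the
    true share as Normal(polled_share, polling_error_sd). A crude one-poll
    model, purely to show why a narrow lead is not a near-certain win."""
    true_share = NormalDist(mu=polled_share, sigma=polling_error_sd)
    return 1.0 - true_share.cdf(50.0)

# 'Remain' polling at 52% with an assumed 4-point historical polling error:
p_remain = win_probability(52.0, 4.0)
print(f"Remain: {p_remain:.0%}, Leave: {1.0 - p_remain:.0%}")
```

Under these (made-up) inputs, ‘Remain’ wins only about 69% of the time – nothing like the near-certainty many bookmakers priced in.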
But this doesn’t fully explain why the polls are consistently inaccurate, nor why this is especially true recently. Many theories have been put forward. Is it simply ‘shy Tories/Leavers/Trumpers’ (i.e. people embarrassed or uncomfortable declaring a preference for certain candidates or parties misleading the pollsters)? Bad methodologies? Or sampling errors?
Those with knowledge of behavioural economics may nod sagely at this point and say: “Well, if you ask people to account for their actions, of course it will be wrong.” If our actions are often unthinking and irrational, as behavioural economics has consistently proven, then how can people consciously account for them with any degree of accuracy?
This fundamental problem with polling (and traditional market research in general) was neatly articulated by Nobel Prize-winning behavioural economist Daniel Kahneman in conversation with Silver in New York. He pointed out that there is a key difference between building models of behaviour and how people behave in the real world: “The problem with algorithms is you feed the same data in twice and you get the same answer. The same is not true of people.”
But surely voting (one hopes) is a considered, rational, conscious (i.e. ‘system two’) act? Simple human irrationality can’t explain the error. What purpose would it serve people to lie to a pollster? Or is it being done subconsciously, without them realising?
A Voting Heuristic
My hypothesis is that there is a specific set of heuristics and biases that helps to explain the incorrectly forecast elections, which only kick in at the polling booth and which (I would argue) have been further strengthened by the unpredicted political events of 2015/16. Like most heuristics, they are not open to introspection, so can’t be used to rationalise actions; they are ‘noise’ affecting decision-making (as Kahneman categorised heuristics in New York) that people may not be comfortable with in any case. And that’s why the polls don’t reflect them.
Let me explain.
Behavioural economics tells us that many behaviours are influenced by self-serving biases. We will often behave in ways that benefit ourselves and our ‘in-group’ (people like us), but our need to maintain self-esteem encourages us to represent our behaviour in ways that justify our actions favourably.
We would rather not confront the fact that we often behave selfishly, because it harms our ego and makes us feel bad (“I won’t give my spare change to that beggar because how do I know if he’s genuine? Besides, I give plenty to charity anyway”). Everyone likes to think they are behaving (largely) for the good of others, and we seek to rationalise our actions in ways that are consistent with that self-image. After all, someone who genuinely didn’t think that way would lack empathy – which borders on the sociopathic.
Professor Dan Ariely at Duke University in the US (and colleagues) has conducted a huge amount of research in this area, especially on dishonesty and ‘self-signalling’ (effectively priming our own behaviour through projected self-image). As an example, committing one act of mild dishonesty (wearing counterfeit goods, for example) can then lead us to behave dishonestly in more overt ways (like cheating on a test). But we won’t acknowledge this when confronted with it, because we want to maintain a positive self-image, and will post-rationalise our behaviour accordingly.
In short, we all regularly cheat ourselves about our levels of selfishness, because it is critical to maintaining our self-esteem. We have all experienced occasions where we have been surprised by the levels of self-interest displayed by others (“who keeps stealing from the office fridge?”).
However, the national scale of elections only amplifies this effect: because voting is framed as a civic duty (and a privilege reserved for democracies), the cognitive dissonance of behaving in a self-interested way is much more salient. And, as in so many areas of life, when that dissonance kicks in, people ignore evidence contradicting their behaviour and misrepresent their intentions.
The (Self-Serving) Ayes Have It
I think it is possible to view many votes cast in the UK and US General Elections, and the EU Referendum, in terms of this kind of self-serving bias. Perhaps more so than any other elections in recent memory. Consequently that need to avoid cognitive dissonance based on preservation of our own self-image led many to (subconsciously) deceive the pollsters.
At the EU Referendum, a vote for Leave was ‘Putting Britain First’, as the campaign slogan had it. The entire premise was that a vote for your ‘in-group’, i.e. the British, would put your interests ahead of those of others.
And in the US… Trump couldn’t have been more clear about his protectionist, pro-US stance. In fact it was the only clear policy intention. ‘Make America Great Again’ – if you are American, there couldn’t be a clearer appeal to the self-serving bias of putting America ahead of the needs of the rest of the world. We’re now seeing that manifested in his defiantly protectionist trade and immigration policies.
The 2015 General Election had a bigger blue/yellow divide than ever before – and than predicted
The 2015 UK Election is more complex, but explicable in the same way. The polling errors were twofold: under-estimated votes for the Conservatives in England (a recurring error across elections) and for the Scottish National Party in Scotland. The Conservative manifesto was largely based around cuts to public services that were going to disadvantage certain groups and benefit others.
Following the 2014 independence referendum, the SNP manifesto commitments largely focussed on how they would seek to increase public services and defy these cuts. So Scots could confidently vote SNP, safe in the knowledge that (a) it would not lead to independence in the short term, and (b) it wouldn’t mean the same cuts to public services that the rest of the UK would endure. A win-win if you were Scottish, but inherently self-serving for Scotland versus the rest of the UK.
Consequently votes for both were self-interested to varying degrees, depending on the perceived ‘in-group’.
These contexts were starker than they had been for some time, and consequently the potential for polling error was higher than normal (in my view). But the dissonance of voting to benefit their ‘in-group’ to the detriment of others may have been too much psychologically for many voters to bear, so they either misdirected the pollsters or genuinely couldn’t articulate their subconscious intentions.
But Why Isn’t Everyone Doing It?
But if we’re all subject to this kind of bias, why isn’t it manifest in everyone’s (stated) voting preferences? Wouldn’t it be as true of Remain/Clinton/Labour voters too? Well, maybe it is – it’s just that if the self-serving bias can be more easily post-rationalised as not being self-serving, it won’t create a dissonance (and therefore a predictive error). Some of the data on voting trends supports this.
The commonality in Leave/Trump votes is that both are strongly correlated with level of education. In short, if you had some form of higher education you were much less likely to vote for either (hence all the discussion of ‘intellectual bubbles’ and ‘echo chambers’).
A difference in where the line between ‘in-group’ and ‘out-group’ is drawn, derived from education level, may therefore determine the nature of the bias. If education and the pursuit of knowledge are about exploring and understanding things that sit outside your immediate environment, then for the university educated the ‘in-group’ is wider. In a literal sense, going to university often means leaving your immediate vicinity, friends and family, perhaps even going abroad. Those whose interests you hold dear are more likely to be from farther afield, and your concept of ‘self-interest’ is likely to include people from across Europe, or outside the US. It is well known that the majority of Americans don’t hold a passport – I would love to know how that statistic differs between Trump and Clinton voters.
We know how you voted…
Thus a higher-educated person, voting on the basis of a self-serving bias, would be more likely to confidently state a preference for Clinton or Remain without dissonance – as it would still benefit their ‘in-group’ without being overtly detrimental to ‘out-groups’, because that ‘in-group’ is much more likely to include people from outside their immediate vicinity.
So what do we do about it?
In summary, I think it is as simple as this. Everyone largely votes in their self-interest, but we maintain the illusion that we vote for the benefit of others, to fit with our self-image as good citizens. For some, the cognitive dissonance of voting in a way which is patently self-interested leads us to misattribute our actions, be it through post-rationalising or (in a minority of cases) misleading others about how we will vote. Hence the polling errors – magnified by the specific contexts of the recent elections.
Will this problem persist, or is it a 2016 phenomenon? Whatever the merits of increased globalisation, the world is getting smaller, and our exposure to, and association with, other cultures should (one would assume) gradually increase. As that perceived in-group grows, the scope of the self-serving bias should become wider and more inclusive. But if politicians continue to pursue protectionist policies, that dissonance may only get greater.
By contrast, recent events in the Netherlands perhaps also indicate that when that dissonance is made plain, then behaviour may change accordingly. For the upcoming UK general election, the key policy ‘battlegrounds’ will determine whether this bias comes into play. If it does become a second Brexit vote as some are predicting, we may see some interesting effects – under-representation of votes for the Liberal Democrats for example, as they are (currently) the only major overtly pro-Remain party.
One thing is for certain: recent events reinforce the importance of recognising, and accounting for, the subconscious drivers of human behaviour – and of how we survey them. Simply asking people how they intend to behave is not going to be effective; we need to use other (more implicit) techniques to identify how they will actually behave on election day. At CSG, we use these techniques to more accurately gauge how people will behave, and (where possible) assess and test actual, rather than claimed, behaviour.
Importantly, it also emphasises that the only way to avoid the ‘echo chambers’ that lead to a perception of common consensus (when none exists) is to respect that others’ behaviour may be influenced by a completely different context.
 An additional, sobering, fact that their model threw up: if the vote were only conducted by white men (as it was before 1920, with black voters only being enfranchised in 1965), then Trump would win all but 10-12 states.
 There is a (valid) counter-argument that the very act of voting itself is a denial of self-interest – in that people vote for who they think is best at running the country, rather than who is going to best serve their personal interests. Whilst some may vote on this basis I would suggest this is naïvely optimistic at best, and is a more accurate reflection of post-decision rationalisation than most people’s real motivation for voting.
 Though of course this has now changed post-Brexit, as the SNP are now actively pursuing a second referendum on the basis of Scottish opposition to leaving the EU (Scotland voted 62-38 in favour of Remain).
 There’s also potentially a ‘mere exposure’ effect going on here i.e. simply meeting people from outside the US or from Europe may influence your behaviour accordingly, by generating more empathy with those groups.