At noon on the day of the EU referendum, Populus released a prediction poll. Strikingly, this showed a slightly larger Remain lead than any other published poll – and therefore turned out to be furthest from the result.
When we published the poll, we put it in this context:
“Populus has not published a voting intention poll since the universal failure of the polls at the general election last May. We took a conscious decision to stand back and look seriously at the causes of the problem and to conduct a series of reviews and experiments in order to produce polls that provide consistent results and in which we have faith…
Since the political cycle produces few opportunities to test methodological theory against real life, we have decided to put our final referendum poll – conducted separately from and independently of the Remain campaign for whom we have been working – into the public domain. We do not do so lightly, but we have always believed in being transparent in our methods and wanted to be open about our current view of the state of public opinion.
What we reproduce here is a product of tighter quotas, including balancing our sample by social attitudes, perceived national identity, political interests and recent voting behaviour as well as by traditional political and demographic methods. These results also reflect not only people’s stated likelihood to vote but also their propensity to do so based on their characteristics as well as the underlying attitudes of those who are likely to have voted but who have not stated a referendum preference.”
Since the referendum we have, of course, intensively analysed why our poll was so wrong. We have peeled back the data, layer by layer, to look at the effect of each of the different methodological stages and the weightings that we applied to the raw data.
We conclude that some of our innovations were right. In particular, ensuring that poll samples define their national identity (‘English/Scottish/Welsh only’, ‘English/Scottish/Welsh more than British’, ‘British only’ etc.) in the same proportions as the population proved important. For polling on political issues, online samples tend to include too many respondents who align their national identity with England (or Wales) rather than Britain – and these people are disproportionately hostile to the EU. The polling organisation that ended up closest to the final result introduced this important change to its methods during the referendum campaign, informed by Populus analysis published in March (‘Polls Apart’).
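The identity-balancing step amounts to post-stratification: each respondent is weighted by the population share of their national-identity category divided by that category's share in the sample. The sketch below uses invented categories and proportions purely to illustrate the mechanism; it is not Populus's actual weighting scheme.

```python
# Hypothetical post-stratification by national identity: weight each
# respondent by (population share) / (sample share) of their identity
# category. All categories and figures below are invented.

population_share = {
    "English more than British": 0.25,
    "Equally English and British": 0.45,
    "British more than English": 0.30,
}

# Online samples tend to over-represent the 'English more than
# British' group, so its weight comes out below 1.
sample_counts = {
    "English more than British": 400,
    "Equally English and British": 400,
    "British more than English": 200,
}

n = sum(sample_counts.values())
weights = {cat: population_share[cat] / (count / n)
           for cat, count in sample_counts.items()}

for cat, w in weights.items():
    print(f"{cat}: weight {w:.3f}")
```

With these invented figures the over-represented, more Eurosceptic identity group is weighted down (below 1) and the under-represented groups weighted up, which is the direction of correction described above.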
But two methodological steps were wrong – and caused us to overstate support for remaining in the EU by over 6%. These were the way we tried to take account of how undecided voters would end up voting, and the way we estimated the likelihood to vote of different groups.
Polls at the end of campaigns are treated as predictions of the result, not just snapshots of opinion at a particular moment in time.
In these final polls, therefore, we have to take account of how the people who still say ‘don’t know’ when asked the voting intention question will vote – if they vote at all.
We also have to come to a conclusion about who will actually vote. People tend to overstate how likely they are to vote: in our poll, the average answer on a 0-10 scale of likelihood to vote was 8.55, implying a turnout of over 85%. And in a referendum with turnout of around 70%, which 70% of people actually vote makes a huge difference to the result and its scale.
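The arithmetic behind that implied turnout figure is simply the mean of the 0-10 answers divided by ten, as this small sketch (with made-up responses) shows:

```python
# Taking stated likelihood to vote (0-10 scale) at face value:
# implied turnout = mean answer / 10. The responses are invented.
stated_likelihood = [10, 10, 9, 8, 10, 7, 10, 5, 9, 8]

implied_turnout = sum(stated_likelihood) / (10 * len(stated_likelihood))
print(f"implied turnout: {implied_turnout:.0%}")  # far above ~70% actual
```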
One of the clear lessons from the failure of the polls at the 2015 general election was that underlying attitudes (particularly preference for Cameron over Miliband and, on the economy, for the Conservatives over Labour) proved a better indicator of how people ended up voting than their declared voting intention.
We applied this insight to undecided referendum voters – and this was a significant factor in our poll being wrong.
We took the voters in our poll who said they were very likely to vote in the referendum but still hadn’t decided between ‘Remain’ and ‘Leave’, and allocated them to one side or the other on the basis of their answers to these questions, which regression analysis had shown to be strong predictors of a Remain or Leave vote:
‘Is Britain stronger by being in the EU, or weaker by being in the EU?’
‘Would remaining in the EU or leaving the EU be most risky for you personally?’
‘Would remaining in the EU or leaving the EU be most risky for you and your family?’
Those giving consistent ‘Remain’ answers or ‘Leave’ answers to these questions were reallocated accordingly. Far more undecided voters aligned with the Remain argument on these questions, so this reallocation increased the Remain vote in our poll – and made it more wrong.
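In outline, the reallocation rule works as sketched below. The data structures and field names are our invention for illustration; this is not the production analysis code:

```python
# Sketch of the reallocation rule described above: an undecided but
# likely voter who gives consistent 'Remain' (or 'Leave') answers to
# all three attitude questions is allocated to that side; anyone with
# mixed answers stays undecided. Field names are hypothetical.

ATTITUDE_QUESTIONS = ["stronger_or_weaker", "risk_to_you", "risk_to_family"]

def allocate_undecided(answers):
    """Return 'Remain' or 'Leave' for consistent answers, else None."""
    sides = {answers.get(q) for q in ATTITUDE_QUESTIONS}
    if sides == {"Remain"}:
        return "Remain"
    if sides == {"Leave"}:
        return "Leave"
    return None  # mixed or missing answers: leave unallocated

print(allocate_undecided(
    {q: "Remain" for q in ATTITUDE_QUESTIONS}))   # Remain
print(allocate_undecided(
    {"stronger_or_weaker": "Remain",
     "risk_to_you": "Leave",
     "risk_to_family": "Remain"}))                # None
```

Because far more undecideds gave consistent ‘Remain’ answers, a rule of this shape mechanically shifts the published headline figure towards Remain.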
Past elections and referendums have shown that estimating turnout – and who will actually vote – solely on the basis of how likely respondents say they are to vote is highly prone to error.
Populus decided to experiment with a different approach entirely. Our detailed analysis of the 2015 general election identified very strong relationships between demographic factors and likelihood to vote. Although we noted that these factors were not consistent over successive general elections, since the last general election was only a year ago, we decided to apply a ‘demographic propensity’ analysis to our EU referendum poll. This, in effect, assumed that even if the percentage turnout was not the same as at the general election, any increase in turnout (which we always assumed very likely) would occur fairly evenly across all demographic groups. The turnout weighting had the effect of further increasing the apparent Remain vote share in our poll – and, therefore, making our poll more wrong still.
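The effect of a demographic propensity model of this kind can be sketched as follows: each respondent is weighted by the turnout rate their demographic group recorded at the previous general election, which builds in the assumption that any rise in turnout is spread evenly across groups. All group labels, rates and respondents below are invented for illustration.

```python
# Hypothetical demographic propensity-to-vote weighting: weight each
# respondent by their group's turnout at the last general election.
# This bakes in the assumption that turnout rises evenly across groups.

GROUP_TURNOUT_GE = {  # invented general-election turnout rates by age
    "18-24": 0.43,
    "25-49": 0.62,
    "50-64": 0.74,
    "65+": 0.78,
}

respondents = [("18-24", "Leave"), ("65+", "Remain"), ("50-64", "Remain")]

weighted = {"Remain": 0.0, "Leave": 0.0}
for group, vote in respondents:
    weighted[vote] += GROUP_TURNOUT_GE[group]

total = sum(weighted.values())
shares = {vote: w / total for vote, w in weighted.items()}
print(shares)
```

If the groups that actually turned out in unusually large numbers (here, the low-propensity groups) lean one way, a model like this will systematically misweight them – which is the failure mode described below.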
A key factor in the referendum result was significant differential turnout, which our poll did not reflect. Overall turnout was about 8% higher than at the 2015 general election. But turnout in Scotland, which needed to be a key stronghold if Remain was to win, was lower than at the general election. The increase in turnout in London – the bedrock of the Remain vote – was much lower than in the rest of England. Among those who didn’t vote at the general election, but did vote in the referendum, blue-collar Eurosceptics vastly outnumbered pro-EU 18-24s.
Having now studied turnout at the referendum and compared it to our analysis of the demographic composition of the voting electorate at previous referendums and general elections, we have concluded that turnout patterns are so different that a demographically based propensity-to-vote model is unlikely ever to produce an accurate picture of turnout other than by sheer luck.
We will continue to examine these methodological challenges in producing accurate snapshots and predictions of how the country will vote. We will not publish another such poll until we are confident that it is right.