How do polls work?

Opinion polls have had a rough ride politically over the last few years.  Despite the industry getting it broadly right in the Independence Referendum of 2014, the General Election of 2015 shook it to its core as the Conservatives marched to victory despite all the predictions.

Last year’s political shocks of Brexit and Trump have also weakened belief in the polls, even though the final polls in each case showed the races were close towards the end.

Polls are still our best way of gauging public opinion, though, and for the vast majority of the time they are accurate within the margins they claim to be.

To trust polls, you need to understand where they come from – and the scientific method behind the numbers.

Here’s a quick taster of how most polling organisations reach their conclusions:

 

Polling the field

Of course, all polls start by asking lots of people for their opinions on various questions.  Here in the UK, the standard number of people included in a final poll’s results is 1,000, whereas in America it can be lower, at around 500.

That figure only counts the people who actually responded; 10-20 times that number may be contacted to take part but decline or ignore the request.

The way these people are asked varies depending on what method a pollster uses: telephone, online or face-to-face interviewing.

Telephone polls are perhaps still the most common: pollsters randomly select phone numbers from the directory and ask whoever answers to take part.

Pollsters include: Ipsos MORI, ICM, ComRes

Online pollsters are obviously growing in their presence, and their methods are a bit different.  Most of them have a “panel” of people who sign up to take surveys with them.  When these pollsters ask questions, they put out a request to a random selection of their panel to take part.

Pollsters include: YouGov, BMG, Opinium, Survation

Face-to-face pollsters are on the way out, but still used in some contexts.  These involve interviewers calling on randomly selected houses up and down the country and asking people to take part in the survey at their own home, where the questions are put to them in person.

Pollsters include: Kantar Public UK

You’ll notice all three of these methods include randomly selecting participants, which is part of the science of conducting polls.  By randomly sampling the population, you have a higher probability that the group you end up asking questions to will be representative of the general public.
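To make the idea of random sampling concrete, here is a minimal, purely illustrative Python sketch (not any pollster’s actual code, and the population figures are made up): it draws 1,000 people at random from a simulated population and shows that the sample’s make-up usually lands close to the population’s, though rarely exactly on it.

```python
# Illustrative sketch: a random sample of ~1,000 people tends to mirror
# the make-up of the wider population it was drawn from.
import random

random.seed(42)

# Hypothetical population of 1,000,000 people, 51% of whom are women
population = ["woman"] * 510_000 + ["man"] * 490_000

sample = random.sample(population, 1000)  # simple random sample of 1,000

share_women = sample.count("woman") / len(sample)
print(f"Women in population: 51.0%  |  Women in sample: {share_women:.1%}")
# Typically prints something close to 51%, but rarely exactly 51% --
# which is why pollsters go on to use quotas and weighting.
```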

Even though random sampling works well in most scientific fields, politics has traditionally been shaped so much by demographics that pollsters take an extra step to make sure the results of their questions reflect the different identity groups that make up the population.

Quotas

When you think about it, there are so many ways to cut across a population and divide it into groups.  The most basic are gender, age, education level and income, and we know from the Census and other national data roughly what proportion of the population falls into each subcategory of these groups.

So when pollsters get their responses from people, they try to make sure that the mix of people they ask reflects this.  For example, pollsters will aim to have a 50-50 split between men and women in their polls to reflect the differing opinions men and women have when it comes to politics.

The problem is that with random sampling you cannot rely on getting an exact 50-50 split.  It’s possible, but not guaranteed – and this becomes even harder with the more complicated groupings political pollsters use, such as finding a proportional sample of a political party’s supporters, i.e. making sure that 40% of the sample are Tory voters because the Tories got 40% of the vote at the last General Election.

It might be possible to get a perfectly proportional sample, but that would mean interviewing thousands upon thousands of people.  Taken to its logical conclusion, the only way to get a perfectly proportional sample is to interview everyone, which is essentially an election.

So pollsters settle for a group of about 1,000 people, which gives broadly proportional results.  Probability theory tells us that if you ask 1,000 people, the responses you get to a question will have a margin of error of about 3 percentage points (at the conventional 95% confidence level).

To give you an example of how this works, imagine we asked 1,000 people if they preferred chocolate or strawberry ice cream.  56% come back preferring chocolate, 44% prefer strawberry.  Because of the margin of error we have from only interviewing 1,000 people, those numbers could be out by 3 points either way.  This means as many as 59% of people might prefer chocolate, but equally it could be as low as 53%.
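If you want to see where the “about 3%” figure comes from, here is a short sketch using the standard margin-of-error formula for a proportion at 95% confidence. It assumes a simple random sample (real polls adjust for their own design), and the ice cream numbers are just the made-up example above.

```python
# Illustrative sketch: margin of error for a proportion p from n respondents,
# at 95% confidence (z ~= 1.96). Assumes a simple random sample.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for an observed proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
for flavour, p in [("chocolate", 0.56), ("strawberry", 0.44)]:
    print(f"{flavour}: {p:.0%} +/- {margin_of_error(p, n):.1%}")
# chocolate: 56% +/- 3.1%
# strawberry: 44% +/- 3.1%
```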

For politics, though, we take one final step to make sure that, even though we don’t have a perfectly proportional sample, the final results sit as close as possible to the demographic make-up of the country.

Weighting

At this stage we have all our results, and they are broadly proportional to the population – but still not exactly right.

What we now do is mathematically change the weight of some responses to make sure that the total “value” of each demographic group is equal to its proportional share of the population.

For example, say our interview of 1,000 people ended up asking 550 men and only 450 women.  If we multiply each woman’s answer by about 1.11 (that is, 500/450) and each man’s by about 0.91 (500/550), it’ll be as though 500 men and 500 women answered the poll – making it proportional.
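Here is a minimal sketch of that calculation, using the same made-up numbers (550 men, 450 women, and a known 50-50 population split): each group’s weight is simply its target count divided by the number actually interviewed.

```python
# Illustrative sketch of demographic weighting: scale each respondent's answer
# so every group counts for its share of the population, not its share of the
# sample. Numbers match the gender example above.
target_share = {"men": 0.50, "women": 0.50}   # known population split
sample_count = {"men": 550, "women": 450}     # who actually answered
n = sum(sample_count.values())

weights = {
    group: (target_share[group] * n) / sample_count[group]
    for group in sample_count
}
print(weights)
# {'men': 0.909..., 'women': 1.111...}
# 550 men x 0.91 ~= 500 "effective" men; 450 women x 1.11 ~= 500 "effective" women
```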

This weighting is commonly done on a lot of different features of the poll results, to try to mimic the population that would actually turn out to vote at an election.  We know not everyone votes at elections, and we know which groups tend to vote less than others, so it makes sense to make our poll results reflect those people rather than the population in general.

So this means that poll results are weighted by things like age (as older people tend to vote more), income (as those from lower-income households tend to vote less) and, more recently, education.

Generally in the UK, because we have a relatively wide party system and some degree of loyalty to those parties, pollsters also weight by party support, in the same way they set a quota of party supporters.  In modern Scotland and Britain, pollsters also look to the results of the Indyref and Brexit votes to make sure the proportion of each side’s voters in the sample is representative of the political make-up of the country.
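To show how weighting on more than one variable at once might look, here is a deliberately simplified sketch: each respondent’s weight is the product of the target/sample ratios for their gender and their (hypothetical) EU referendum vote. Real pollsters typically use an iterative procedure known as “raking” rather than this one-pass multiplication, and the respondent list here is just toy data, but the underlying idea is the same.

```python
# Highly simplified sketch of weighting on multiple variables at once.
# Each respondent's weight is the product of target/sample ratios for
# each weighting variable. (Real pollsters usually rake instead.)
from collections import Counter

respondents = [
    {"gender": "woman", "ref_vote": "Remain"},
    {"gender": "woman", "ref_vote": "Leave"},
    {"gender": "man",   "ref_vote": "Leave"},
    {"gender": "man",   "ref_vote": "Remain"},
    # ... imagine ~1,000 of these in a real poll
]

targets = {
    "gender":   {"woman": 0.50, "man": 0.50},
    "ref_vote": {"Leave": 0.52, "Remain": 0.48},  # 2016 EU referendum result
}

n = len(respondents)
for variable, shares in targets.items():
    counts = Counter(r[variable] for r in respondents)
    for r in respondents:
        target_count = shares[r[variable]] * n
        r.setdefault("weight", 1.0)
        r["weight"] *= target_count / counts[r[variable]]

for r in respondents:
    print(r["gender"], r["ref_vote"], round(r["weight"], 2))
```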

 

After all this, the final numbers come out.  Short of publishing each individual respondent’s answers, all major UK pollsters publish data tables alongside their poll findings to show exactly how they weighted the results from the people they asked – making the process transparent and clear to all.  That means there can be no accusations of the numbers being “fudged”, or of them coming from nowhere.

Polling is important because it lets the public see where the rest of the country stands on certain issues, and lets political parties and campaigns shape their message to what the public wants.  In a democracy as large as ours, with millions of people, polls are one of the only ways the voices of the masses can truly be heard in the years between elections.

Understanding them might take a bit of effort to begin with, but I hope this post has shown the method behind the magic of polling, and why polls deserve to be trusted rather than criticised.
