
The importance of polls and the East Portland election (part one)

Published: Monday | March 25, 2019 | 12:00 AM | Don Anderson/Contributor
Jamaica Labour Party supporters gather in Port Antonio, Portland, on nomination day, March 15, in support of their candidate in the April 4 by-election, Ann-Marie Vaz.
People’s National Party (PNP) supporters during a political rally in Port Antonio, Portland, to introduce the party’s candidate for the April 4 East Portland by-election, Damion Crawford.

Political polls have become a very important aspect of the run-up to elections worldwide, and no less so in Jamaica. In the United States (US) presidential election of 2016, I assessed the number of polls conducted during the six months leading up to the November vote. Counting just those conducted by major polling houses in the US, more than 180 such polls were done over the period, an average of 30 per month. Incidentally, the large majority of these had Hillary Clinton winning the election, a situation that did not materialise, giving rise to questions about the usefulness of polls. The fact is that polls do very well at assessing how people are likely to vote in an election, but they cannot adequately measure last-minute extraneous factors, which have been shown to cause swings away from the data collected and analysed up to the week before an election. Unless a polling team is in the field two days before an election, with little time to analyse and assess the information collected, predictions on the outcome are generally made without the benefit of capturing last-minute interventions by one party or another.

Correct perspective on sample size

Much has been made of the sample sizes used to measure likely outcomes of elections. Very few of the 180 polls evaluated during the US presidential election involved interviews with more than 1,500 voters. Indeed, a significant number of them used sample sizes of between 500 and 1,000. The population of the US? Roughly 330 million. Clearly, there is some sharp science associated with sampling.

The reality is that sample size is not the most critical factor in determining the accuracy or reliability of a poll. While the size of the sample is no doubt important, far more important is the representativeness of the sample. As long as the sample closely reflects the make-up of the overall population from which it is drawn, it is said to be representative of that population, and the findings can reliably be projected to the population as a whole. In essence, one does not need to eat the whole bowl of soup to know how it tastes, but rather needs to ensure that the first and subsequent spoonfuls are properly stirred. The first spoonful will give the consumer a clear indication of how the rest of the soup tastes. Subsequent spoonfuls, properly stirred, will enhance the appreciation of the taste. Eating the whole bowl satisfies the hunger, but it does not increase the appreciation of the taste. The same is true for sampling in polling. A large sample of 2,000 interviews that is not representative of the population from which it is chosen is less reliable than a sample of 400 persons that mirrors the make-up of that population.
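The soup analogy has a precise statistical counterpart: for a well-stirred (random) sample, the margin of error depends almost entirely on the size of the sample, not on the size of the population it is drawn from. A minimal sketch, using the standard formula for a proportion at roughly 95 per cent confidence with the finite-population correction (the numbers are illustrative, not taken from any actual poll):

```python
import math

def margin_of_error(n, population=None, p=0.5, z=1.96):
    """Margin of error for an estimated proportion p from a simple
    random sample of size n, at ~95% confidence (z = 1.96).
    If a population size is given, apply the finite-population
    correction; for large populations it changes almost nothing."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if population is not None:
        moe *= math.sqrt((population - n) / (population - 1))
    return moe

# Sample size matters: 400 interviews give roughly +/-4.9 points,
# 2,000 interviews roughly +/-2.2 points.
print(round(margin_of_error(400) * 100, 1))
print(round(margin_of_error(2000) * 100, 1))

# Population size barely matters: 1,000 interviews drawn from
# 350 million people or from 2 million give near-identical margins.
print(round(margin_of_error(1000, population=350_000_000) * 100, 2))
print(round(margin_of_error(1000, population=2_000_000) * 100, 2))
```

This is why a US national poll of 1,000 respondents and a Jamaican constituency poll of the same size carry similar sampling margins, provided each sample is properly drawn.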

Margin of error assessment

This leads to the calculation of the margin of error. Pollsters and researchers talk of a margin of error. What this tells you is that the data presented is always an approximation of the actual situation, and the calculation tells you how far the estimate may be from the true value. Clearly, the larger the margin of error, the less accurate the data and the more it may vary from what is actual. Margin of error, then, is an important factor in assessing the accuracy of a poll. Quite often, the pollster will indicate that the data has a margin of error of plus or minus three per cent at the 95 per cent confidence level. This means that the pollster is 95 per cent confident that the reported figure is within plus or minus three per cent of the true figure. In practice, it is highly unlikely that any poll will achieve a margin of error much lower than plus or minus three per cent; beyond that point, even substantially larger samples bring only marginal improvement.
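The quoted figures fit together arithmetically: a margin of plus or minus three per cent at the 95 per cent confidence level corresponds to a simple random sample of roughly 1,100 respondents, which is why reputable polls rarely need to be much larger. A sketch of the standard calculation, assuming the worst-case split of p = 0.5:

```python
import math

def required_sample_size(moe, p=0.5, z=1.96):
    """Smallest simple random sample whose margin of error for a
    proportion p is at most `moe`, at ~95% confidence (z = 1.96)."""
    return math.ceil(z**2 * p * (1 - p) / moe**2)

print(required_sample_size(0.03))  # about 1,068 respondents for +/-3%
print(required_sample_size(0.05))  # about 385 respondents for +/-5%
```

Note the steep trade-off: halving the margin of error requires roughly quadrupling the sample, which quickly becomes uneconomical.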

Politicians use this margin of error in different ways, depending, for example, on how close the reported voting intention is. If the poll reports that 39 per cent of the people say they will vote for Candidate A and 36 per cent say they will vote for Candidate B (a gap of three percentage points), the candidate with the 39 per cent support will generally say it could be a six-point gap, taking comfort from the margin of error. The candidate trailing by three points will normally say that it means it is a dead heat. In reality, both readings are defensible. That three-point gap could mean either a dead heat (minus three points) or, based on the calculated margin of error of plus or minus three per cent, a gap of six points (plus three points). Rarely will the candidate with the three-point lead think it could be a dead heat, and conversely, rarely will the candidate trailing by three points think the gap could be six.
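The two readings in the example are the same arithmetic applied optimistically and pessimistically: shift the reported gap down or up by the margin of error. A sketch of that reasoning, with the article's illustrative numbers:

```python
def gap_range(candidate_a, candidate_b, moe):
    """Range of plausible gaps given each candidate's reported share
    and the poll's margin of error, following the article's reading:
    the reported gap shifted down or up by the margin of error,
    floored at zero (a dead heat)."""
    gap = candidate_a - candidate_b
    return max(gap - moe, 0), gap + moe

# 39% vs 36% with a +/-3-point margin of error:
low, high = gap_range(39, 36, 3)
print(low, high)  # 0 6 -- anything from a dead heat to a six-point lead
```

Each candidate simply quotes the end of the range that suits them.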

Value of trend of data/predictive tool

The real value of polling lies not in a single poll but in the data read over a series of polls. A single poll is nothing more than a snapshot at a point in time. A series of polls conducted at consistent intervals, using the same basic questions, is a great projective and predictive tool. In the 2007 election, for example, which was held in September, I started to poll in January. At that point, the ruling People’s National Party (PNP) had a nine percentage point lead over the opposition Jamaica Labour Party (JLP). I did six polls over the period, between January and September, with each one showing the PNP’s support dwindling gradually and the gap between the two narrowing. But it was not until I did the last poll, two days before the election, that the lines crossed for the first time and the JLP held a one per cent lead over the PNP. That consistent trend over the nine months allowed me to predict the outcome of the election on CVM TV the day before the election. Aided by polls conducted in marginal constituencies, I predicted that the JLP would win the election by 32 seats to the PNP’s 28, representing a margin of one per cent. That was precisely the outcome of that election.

The bandwagon effect

Polling results can have a serious positive or negative effect on supporters, depending on how the data look.

Scenario 1: The poll indicates that the gap between contending candidates or parties is two percentage points. This is a dead heat and, based on the margin of error of plus or minus three percentage points, means, in essence, that either could win. That result, however, would need to be weighed against trend data, if available. This has the effect of intensifying the campaign as both sides work to tip the scale in their favour. Both teams are galvanised.

Scenario 2: The poll indicates that the gap between the two parties or candidates is seven per cent. Especially if this poll is conducted close to the election, it generally has a twofold effect. The supporters of the party that is seven percentage points ahead are more galvanised to go out to drive home that advantage. The campaign is energised, and campaigning intensifies.

On the other hand, the support for the party or candidate trailing by seven per cent tends to wane. Supporters are convinced that they cannot win, so why bother to go out to vote? This is not to discount the possibility of a reverse effect, where the trailing party doubles its effort and the leading party becomes complacent. More often than not, however, the former situation prevails. This is the bandwagon effect and is often one of the factors that impacts the ‘accuracy’ of political polling.

Extraneous factors: vote buying, intensification of economic activity

While polls scientifically measure how people are likely to behave on election day, they quite often cannot capture the impact of extraneous factors such as vote buying and intensified economic activity immediately prior to the election. In our context, both political parties have been said to have engaged in vote buying over several elections. Our evaluation suggests that the electorate has become more mature and smarter with each election. There was a point in time when a party could be virtually assured that voters would comply with financial arrangements made to determine their action on election day. Today, there is some level of uncertainty that they will be as compliant as expected. But, for sure, it is said that inducements, generally of a financial nature, are offered to citizens either to vote or not to vote. The challenge for the pollster is that while the majority of those who receive inducements will honestly tell the pollster whom they plan to vote for, there is a fringe element that remains cagey, doubtful that their answers will be kept confidential, and either does not respond to this question or responds based on how they gauge the interviewer. In a very close election, this fringe could matter.

Don Anderson is a marketing consultant and political pollster.