Biased predictions to blame for exit poll fiasco
I have never been a fan of exit and opinion polls. In fact, I have consistently said that we shouldn’t take them seriously; they should be regarded as game shows. Thus, I am not at all astonished by the exit poll fiasco in the just-concluded Lok Sabha elections.
Usually, people are bewildered by the disparate predictions of exit polls. The uncertainty persists until the EVM votes are counted. But it was a different scenario this time. Nearly all exit polls got it wrong.
These days, dozens of such polls are conducted by various organisations, with sample sizes ranging from a few thousand to a few lakh. I have even heard that the organisers of one recent exit poll claimed a sample size of one crore! But how are these polls actually carried out? How do the investigators question the voters? No exit or opinion poll surveyor has ever asked me whom I would vote for or had voted for. I asked my acquaintances and, to my surprise and regret, none of them had ever been approached by a pollster either. Of course, such a low-probability event might simply have happened to us by chance.

How, then, do pollsters around the world make their predictions? It is primarily a statistical task, for sure. One must choose a sample size that keeps the prediction within a predetermined margin of error, say 3 per cent. One requires a sampling frame, random selection of respondents and proportionate representation across society. And then one must analyse the data appropriately, using auxiliary information such as the voter list or other demographic data.
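For the sample-size step, a back-of-the-envelope calculation is standard: under simple random sampling, the worst-case margin of error depends essentially on the number of respondents, not on the size of the electorate. A minimal sketch in Python (the function and the numbers are illustrative, not any pollster’s actual workings):

```python
import math

def required_sample_size(margin=0.03, z=1.96, p=0.5):
    """Worst-case sample size for estimating a vote share under
    simple random sampling.

    margin: desired margin of error (0.03 = 3 per cent)
    z:      normal quantile for the confidence level (1.96 ~ 95 per cent)
    p:      assumed true share; p = 0.5 maximises the variance p * (1 - p)
    """
    return math.ceil(z * z * p * (1 - p) / (margin * margin))

print(required_sample_size(0.03))  # about 1,068 respondents for +/- 3%
print(required_sample_size(0.01))  # about 9,604 respondents for +/- 1%
```

Barely 1,100 properly chosen respondents already achieve a 3 per cent margin; a one-crore sample adds almost nothing and, as we shall see, cannot repair a biased selection.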
However, since the majority of the pollsters don’t disclose information about their sampling frame, sampling technique and summary statistics across several margins, we are unable to determine where they make grave mistakes and how those mistakes accumulate to produce a blunder.
Ideally, an exit poll should survey voters just as they leave the polling booths, and the booths themselves ought to be chosen at random with appropriate weightage. Every fifth or tenth person leaving a chosen booth is then questioned about his or her vote; this is called ‘systematic sampling’. The investigator must politely turn away overly enthusiastic passers-by who want to offer their opinions out of turn. Do our exit polls adhere to these standards? Do the investigators cover voters at different times of the day? Do the pollsters visit sensitive booths, or booths in the remotest areas of the country, in the right proportion? Usually, we don’t know the answers. The pollsters provide only the estimated magic numbers. To us, everything is merely the output of a black box whose quality we cannot assess.
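A minimal sketch of that selection rule, with a hypothetical booth (the names and sizes are invented for illustration):

```python
import random

def systematic_sample(exit_order, k=10):
    """Interview every k-th voter leaving a booth (systematic sampling).

    exit_order: voters in the order they leave the booth
    k:          sampling interval, e.g. 10 for every tenth voter
    A random starting offset in [0, k) gives every voter the same
    chance of selection; volunteers who butt in are simply not counted.
    """
    start = random.randrange(k)
    return exit_order[start::k]

# Hypothetical booth with 500 voters: roughly 50 are interviewed.
booth = [f"voter_{i}" for i in range(500)]
print(len(systematic_sample(booth, k=10)))
```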
What about sample sizes? The claim of a one-crore sample by a polling organisation brought to mind a prediction made during the 1936 US presidential election by a reputed magazine, Literary Digest. Its poll, with a sample size of 24 lakh, showed the Democratic candidate, incumbent President Franklin D Roosevelt, receiving only 43 per cent of the votes and the Republican nominee, Alfred Landon, getting 57 per cent. In reality, the outcome was almost the exact reverse, and then some: Landon received only 38 per cent of the votes, while Roosevelt got an overwhelming 62 per cent. A retrospective examination found a serious selection bias: the people polled were drawn from lists of magazine subscribers, club memberships and telephone directories, all of which at the time were markers of relative wealth. The sample also suffered from significant non-response bias: of the one crore people the magazine approached, only 24 per cent replied. That, however, was an opinion poll. The first exit poll would come 31 years later, in 1967, when pollster Warren Mitofsky of CBS News conducted one during the Kentucky governor’s election.
In any event, both exit and opinion polls have consistently produced inaccurate results across the globe. Politicians of all hues would concur that these predictions can’t be blindly believed.
The key point is that poll predictions can go horribly wrong even when pollsters adhere diligently to statistical rules. The reason is that the exercise is as much psychological as statistical: researchers are attempting to gauge respondents’ political inclinations, a very delicate subject. In a deeply politicised society like ours, it is more delicate still. Would everyone feel comfortable telling an anonymous pollster what they really think about politics and voting, especially if their choice ran counter to the prevailing wind? If responses are taken at face value, the predictions can be glaringly inaccurate. However, one does not need to know who, specifically, is lying; a reasonable estimate of the overall proportions of people supporting the various parties would suffice. Do our pollsters apply any suitable statistical filtering to the responses to make sense of their raw numbers? I have never heard a pollster claim to have used any such filtering methodology in his or her predictions.
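To make the idea of filtering concrete, here is one toy model, entirely illustrative and not attributed to any pollster: if past elections suggest that a fixed fraction of a party’s supporters conceal their preference, the observed share can simply be inverted.

```python
def corrected_share(observed_share, concealment_rate):
    """Invert a simple concealment model to recover a party's true share.

    Toy model: a known fraction of the party's supporters refuse to
    admit their choice, so observed = true * (1 - concealment_rate)
    and therefore true = observed / (1 - concealment_rate).
    """
    return observed_share / (1 - concealment_rate)

# Hypothetical: the raw poll shows 36 per cent for a party, but past
# elections suggest ~10 per cent of its supporters conceal their vote.
print(round(corrected_share(0.36, 0.10), 2))  # -> 0.4, i.e. 40 per cent
```

Real corrections would be estimated from past polling errors rather than assumed, but even this crude model shows how far raw answers can sit from the truth.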
Let’s acknowledge that supporters of a particular political party may be reluctant to participate in these kinds of surveys; supporters of the Opposition parties would then be over-represented, artificially boosting their share of the sample. In that scenario, one should expect a skewed, biased prediction. In the UK, pollsters are well aware of the ‘shy Tory factor’, the reluctance of Conservative voters to reveal their preference in such surveys. A similar ‘shy Trump factor’ surfaced after Donald Trump’s election in 2016. Do our pollsters know which party’s supporters are reluctant to participate in these kinds of exercises? Do they adjust their sampling design and weighting to address this important issue?
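One standard remedy, sketched below with invented numbers, is to reweight responses so that each demographic group counts in proportion to its known share of the electorate (post-stratification). Whether our pollsters apply anything of the sort, and against which party’s reticence, is precisely what they do not tell us.

```python
def poststratify(sample_counts, population_shares):
    """Reweight raw poll counts so each demographic group contributes in
    proportion to its known share of the electorate (post-stratification).

    sample_counts:     {group: {party: number of respondents}}
    population_shares: {group: share of the electorate}, summing to 1
    Returns estimated vote shares by party.
    """
    shares = {}
    for group, counts in sample_counts.items():
        total = sum(counts.values())
        for party, n in counts.items():
            shares[party] = (shares.get(party, 0.0)
                             + population_shares[group] * n / total)
    return shares

# Hypothetical: urban voters answered far more readily, so the raw
# sample is 70 per cent urban although the electorate is 40 per cent.
sample = {"urban": {"A": 420, "B": 280}, "rural": {"A": 90, "B": 210}}
print(poststratify(sample, {"urban": 0.40, "rural": 0.60}))
# Raw sample puts party A at 51 per cent; reweighting puts it at 42.
```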
Following its dismal 1936 prediction, Literary Digest never conducted the exercise again. Perhaps it understood that this was not its task. Nonetheless, in the run-up to the Assembly elections scheduled for later this year, we will be inundated with poll predictions. And if some of them turn out to be true, by chance, their ‘success’ will no doubt be applauded.