Votes, Lies, and Surveys

Orlando Roncesvalles, Visiting Professor of Economics at Silliman University, weighs in on the Holmes-Virola debate* in his monthly column at Dumaguete MetroPost.

Letter from Dumaguete
May 5, 2022

Do statistics tell the truth?

How credible are pre-election survey results? This is the crux of a debate between a pollster and a respected authority on statistics and polling methods. Pulse Asia (PA) has published a survey showing a voter preference for Bongbong Marcos over Leni Robredo. Dr. Romulo Virola analyzed the survey, identified its methodological “flaws,” and concluded that the survey is “biased” against Robredo. Ronald Holmes of PA disagrees.

The debate revolves around an important element of surveys that has a technical meaning — the question of the ‘representativeness’ of the sample used in the survey. The sample is of course a subset of the entire universe of voters. Practically all election polls use a sample of 1,000 to 2,400 ‘likely voters’ to represent a total voting population that typically numbers in the millions — 240 million in the case of the US, or 65 million here in the Philippines. Although the sample is relatively small, its size can be justified if the sample is drawn using techniques — usually some form of ‘randomization’ — that ensure representativeness. The problem is usually not the size of the sample but its composition. The important question is whether we can be confident that the behavior of the sample conforms to the behavior of the larger population.
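To see why so small a sample can suffice, consider the textbook margin of error for a simple random sample, which depends on the sample size and not on the size of the voting population. A back-of-the-envelope sketch in Python (my own illustration, not Pulse Asia's computation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # Approximate 95% margin of error for a simple random sample of
    # size n at an observed proportion p (p = 0.5 is the worst case).
    # This measures sampling noise only; it says nothing about bias
    # from a non-representative sample.
    return z * math.sqrt(p * (1 - p) / n)

for n in (1000, 1200, 2400):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 1000: +/- 3.1%
# n = 1200: +/- 2.8%
# n = 2400: +/- 2.0%
```

The same two or three points of error apply whether the population is 65 million or 240 million; that is the sense in which a small sample can be ‘justified.’ The guarantee, however, assumes the randomization actually worked.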

If the sample does not represent or replicate the likely behavior of the overall population of voters, the survey is said to exhibit ‘sampling bias.’ The bias results from the systematic exclusion or inclusion of some members of the population, and is said to be the main reason why some polls have failed to accurately predict the outcome of an election.

The debate between Virola and Holmes centers on the distribution of the survey sample among the various socio-economic classes (A through E) and among age groups in the Pulse Asia sample. Both sides acknowledge that the A, B, and C classes (those with high incomes, typically with college educations) are under-represented in the PA survey, while the D and E classes are over-represented. Both sides also see that the youth (those in the 18-41 age group) are under-represented while the older group (58 and over) is over-represented.

Virola corrects for these under- and over-representations by making assumptions about the true voter preferences of the various groups. He assumes that the 18-41 youth group favors Robredo by a 55-45 margin, from which he asserts that the “biggest source of possible bias on the PA survey in favor of Marcos is the underrepresentation of the youth.” He also thinks that support for Robredo is higher among the higher-income and college-educated voters. Ronald Holmes of PA disagrees with Virola on the basis that other surveys, also conducted by PA, show that Marcos “has a marginally or significantly higher support” in the groups that Virola assumed would be more supportive of Robredo.
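The arithmetic behind such a correction is ordinary re-weighting: take each group's stated preference, but weight it by the group's true share of the voting population rather than by its share of the sample. A sketch in Python, where the shares and the non-youth preferences are hypothetical numbers of my own; only the 55-45 youth split echoes Virola's stated assumption:

```python
# Post-stratification sketch: re-weight group preferences by true
# population shares instead of sample shares. All shares below are
# hypothetical, for illustration only.

# age group: (sample_share, population_share, share_for_Marcos)
groups = {
    "18-41": (0.35, 0.55, 0.45),  # under-represented; 55-45 for Robredo
    "42-57": (0.30, 0.30, 0.55),
    "58+":   (0.35, 0.15, 0.65),  # over-represented
}

raw      = sum(s * p for s, _, p in groups.values())  # as surveyed
adjusted = sum(t * p for _, t, p in groups.values())  # re-weighted

print(f"raw topline for Marcos:      {raw:.1%}")       # 55.0%
print(f"adjusted topline for Marcos: {adjusted:.1%}")  # 51.0%
```

In this toy example, four points of the lead vanish merely from restoring the youth to their true weight. That is the shape of Virola's argument; whether the real numbers move that way depends entirely on the group preferences one assumes, which is precisely what Holmes disputes.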

My own assessment of the debate is that Holmes is correct if we count only what we can see, whereas Virola may be correct if we could count what PA did not count. In other words, it is a debate about the preferences of voters who were not included in the sample. (It is a debate akin to an Agatha Christie murder mystery. Can we say that the butler did it if no one has seen him do it?) It seems that this debate cannot easily be settled.

Incidentally, Holmes states that under PA’s sampling method, “probabilistically selected respondents come from various socio-demographic groups.” On its face, this is not controversial — after all, no matter who the respondents are, they will naturally come from various or different groups. Holmes uses the term “probabilistically selected respondents” because the PA method involves an element of randomization (see below).

The issue then boils down to sampling bias. The most famous example of sampling bias is the one that led pollsters to predict (wrongly) a victory for Dewey over Truman in the 1948 American presidential election. One oft-cited source of sampling bias then was the reliance on telephone surveys: Dewey supporters were more likely to have telephones, and this skewed the poll results accordingly.

Avoiding sampling bias is not easy because the choice of the sample must not depend on criteria that are known — from independent research — to affect the behavior (responses) of the sample relative to that of the population. The conventional scientific approach is to choose the sample on a randomized basis, which is easier said than done. For example, the pollster may have access to the official voters’ list, and then use a random number generator to choose the respondents. What happens if a chosen respondent cannot be reached?
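In code, the clean version of that approach is trivial; the hard part is what to do in the field when a respondent cannot be reached. A sketch of my own (not PA's actual procedure), with a hypothetical reachable() check standing in for the interviewer's attempts:

```python
import random

def draw_sample(voters_list, n, seed=None):
    # Simple random sample from an official voters' list.
    # Sketch only: real polls layer strata and clusters on top of this.
    rng = random.Random(seed)
    return rng.sample(voters_list, n)

def field_survey(sample, reachable, backups):
    # Substitute a documented backup for each unreachable respondent.
    # Caveat: if reachability correlates with vote preference, every
    # substitution quietly reintroduces sampling bias.
    interviewed = []
    for voter in sample:
        while not reachable(voter) and backups:
            voter = backups.pop()
        if reachable(voter):
            interviewed.append(voter)
    return interviewed

voters = [f"voter-{i}" for i in range(10_000)]  # hypothetical list
chosen = draw_sample(voters, 1200, seed=42)
```

The caveat in the second function answers the question above: substitution rules look innocent, but they are exactly where bias sneaks back in.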

Another approach is to ask potential respondents questions that may reveal a bias, and to exclude them on the basis of their answers; this approach is akin to jury selection in American courtroom trials, but it requires a high level of transparency (and integrity) in the methods of the pollster.

The PA methodology appears to be a mix of choosing a pre-determined number of respondents by region, drilling down within each region to local government units, and ultimately to households chosen through a random process. There seems to be no safeguard against sampling bias, other than possibly a ‘re-weighting’ of the raw data so that certain subgroups are said to be neither over- nor under-represented.
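Such a multistage design might look roughly as follows; the frame, the quotas, and the household lists are entirely made up for illustration:

```python
import random

rng = random.Random(2022)

# Hypothetical sampling frame (illustration only): region -> LGU -> households.
frame = {
    "Region A": {"City 1": [f"hh-{i}" for i in range(500)],
                 "Town 2": [f"hh-{i}" for i in range(200)]},
    "Region B": {"City 3": [f"hh-{i}" for i in range(400)]},
}
quotas = {"Region A": 6, "Region B": 4}  # pre-determined respondents per region

sample = []
for region, n in quotas.items():
    lgus = frame[region]
    for _ in range(n):
        lgu = rng.choice(list(lgus))       # drill down to an LGU
        household = rng.choice(lgus[lgu])  # then to a household
        sample.append((region, lgu, household))

print(sample[:3])
```

Randomization enters only at the later stages; the regional quotas are fixed in advance, so the re-weighting step must repair whatever imbalance the quotas and any field substitutions leave behind.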

Do voters pay attention to poll surveys? Voters in the Robredo camp appear to be worried. They may wonder if the poll surveys are reliable. The pollsters themselves acknowledge that surveys are only a snapshot on the path toward elections — even if the sample used is representative, voters can and do change their minds on the eve of elections. One commentator has noted that election surveys are not likely to influence most voters, either because they are unaware of the survey results or because they make up their own minds anyway. Still, supporters of either candidate can take their cue from surveys (regardless of, or adjusted for, ‘flaws’) in order to work harder for their candidate.

A credible election result arguably requires a winning margin of at least 1 percent of total votes. On a full turnout, that suggests something like 650,000 votes. If a survey of 2,400 respondents gives a candidate a 10 percent vote margin, the question that matters is whether that is a good enough basis to ‘predict’ the outcome.
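The raw arithmetic, for readers who want to check it (the sampling-error figure comes from the earlier sketch):

```python
turnout = 65_000_000             # assumed full voter turnout
print(f"{0.01 * turnout:,.0f}")  # 1 percent margin:    650,000 votes
print(f"{0.10 * turnout:,.0f}")  # 10 percent margin: 6,500,000 votes

# At n = 2,400 the sampling-only margin of error is about +/-2 points
# per candidate, so a 10-point lead dwarfs pure sampling noise. But the
# lead is only as trustworthy as the sample is representative.
```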

If there were no margin of error, the 10 percent lead would translate into 6.5 million votes; given the margins of error, could those 6.5 million votes “disappear”? I suggest that yes, these projected votes may come or go because of: (1) human error in conducting the surveys (something that pollsters concede and is also borne out by historical accounts of poll ‘blunders’); (2) the influence of ‘extraneous’ factors such as fear or simple non-responsiveness on the part of respondents (in other words, respondents may lie or decline to participate); or (3) the influence of shenanigans like vote-buying and cheating in whatever form (there is a suggestion that the sampling method may be vulnerable to ‘trolling’ if interested parties are able to track the target respondents).

Is the true margin of error of a survey independently discoverable? I do not know. Voter attitudes (not necessarily their preferences) can be inferred from what have become known as Google Trends ‘polls’ that supposedly also predict the outcome of elections. Search activity is correlated with voter attitudes, and may provide collateral evidence on the unreliability of a parallel traditional poll survey. It is worth noting, however, that although Google Trends appears to ‘predict’ a Leni victory, such ‘polls’ have their own sampling bias — they include only those with access to the internet.
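The simplest form of such collateral evidence is the correlation between a search-interest series and the poll toplines over the same weeks. A toy check in Python, with made-up weekly numbers (illustration only, not actual Google Trends data):

```python
import pandas as pd

# Hypothetical weekly series: relative search interest for a candidate
# versus that candidate's poll topline over the same five weeks.
search = pd.Series([40, 48, 55, 61, 70], name="search_interest")
polls  = pd.Series([15, 16, 18, 21, 24], name="poll_share")

print(search.corr(polls))  # Pearson correlation of the two series
```

A strong correlation would not prove either source right; a divergence, though, would be a red flag worth investigating.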

The conclusion for the voters is then one of uncertainty. It appears that voters should do what the pollsters have been saying all along: Don’t go by the polls, vote your conscience, and if there’s no cheating, democracy wins. How can I say this?

Let me quote Dr. Mahar Mangahas from his academic paper in 2009. He said:

“SWS [the polling firm headed by Mangahas] never contends that a survey of a sample of the votes can judge the accuracy of a full count of the votes. On the contrary, the reverse is true: it is the full count that judges the quality of a sample survey. In the Philippines, it is far better to judge official results by comparing them with the parallel counts of the non-governmental [elections watch organization] than with sample surveys.”

Mangahas is perfectly correct on this score. A sample is just a sample. The proof of the pudding is in the voting, and that doesn’t take place until May 9.


* “On disinformation regarding our pre-election surveys” by Ronald D. Holmes, President, Pulse Asia Research: https://drive.google.com/file/d/136jgTARQ8HBahOephd1pqtfVY3i3x2Gx/view

“Statistically Speaking v2.0…..Leni Could Win If the ‘Flaws’ of the Pulse Asia Survey Were Rectified!!!” by Romulo A. Virola: https://www.facebook.com/romulo.virola/posts/pfbid02kLjm8VYzCyhmTMgLpUaneevEyVa8rV8s5hajFhghmcyMBaDuo3cMrSHEw6hyc4qAl
