Pollsters and Polling: Still Standing


Photo credit: UNO Political Science Department

When Donald Trump upset Hillary Clinton in the 2016 election, pollsters were chastised for “getting it wrong.” The national polls had Clinton slightly ahead of Trump, but the Republican nominee ended up winning the election and becoming president. If people were skeptical and questioned the accuracy of polling before the election, they became downright suspicious and hostile to the entire process of opinion polling afterwards. In short, critics concluded that pollsters and their polls cannot be trusted.

But what if I told you that the pollsters did get it right in 2016? In fact, polling error at the national level was lower in 2016 than it was four years earlier. In 2016, the final average reported by RealClearPolitics (RCP) just prior to the election had Clinton up by 3 percentage points. After all the votes were counted, she won the national vote by 2 percentage points, a polling error of just 1 point. Compare that with the 2012 election, where the RCP average had Obama up by less than 1 percent, but he won by 4 percentage points. There was more polling error in 2012 than in 2016, yet no one complained about the discrepancy. Why no criticism in 2012, but a huge outcry in 2016? The expectation was that Obama would win a second term, and he was re-elected. Four years later, all the models pointed to Clinton winning, but Trump triumphed. As a result, pollsters and the polling industry were maligned for delivering results that did not match the expected outcome.

A post-election review of the polls found that while the national polls were accurate, state polls were not. State polls tend to be undertaken less frequently than national polls, and they are often conducted by smaller polling organizations. Incorrect state polls proved to be a problem because it is the Electoral College, not the popular vote, that determines who wins the presidency. Trump won the presidency by narrowly carrying Michigan, Pennsylvania, and Wisconsin, yet polls showed Clinton leading, if barely, in those states. Moreover, those states had voted Democratic for president six elections running.

Perhaps the biggest single problem with 2016 state polls was a decision by many of them not to “weight” for educational attainment. Weighting refers to adjusting a sample so that the sample’s demographic characteristics more accurately reflect the population’s overall characteristics.


Recent electoral outcomes have been increasingly driven by a few key demographic factors, notably educational attainment. Exit polls from 2016 and 2018 show that voters without a college education backed Trump and the Republicans, while college-educated voters backed the Democrats. In addition, the college-educated are more likely to respond to polls than those without a college education. Because the state polls did not weight for educational attainment, they produced results that skewed in favor of Clinton. Pollsters now weight for educational attainment.

Understanding Polling

As the 2020 presidential campaign moves forward, citizens can expect to be inundated with the results of public opinion polls sponsored by the media, by political campaigns, and by many private and non-profit organizations. As such, citizens need to be astute consumers of polls and be aware of the factors that affect the poll results so they do not accept or reject them too quickly or uncritically.

Let us start with a basic definition of the process. Survey research, or polling, is a method for gathering information about a large number of people by interviewing only a few of them. Suppose, for example, a researcher wants to learn the level of support for candidates in an upcoming election. One population of interest in Louisiana would be the state’s 2.9 million registered voters. It is not feasible to ask each and every registered voter who they will vote for. Using the principle of sampling, researchers can instead ask a subset of that population who they support and, within a margin of error and at a stated level of confidence, generalize the results to the larger population. In short, we draw a sample from the larger population, interview that sample, and infer the results back to the population.

Sampling

One question that persistently arises in discussions of opinion polling is how a sample of 1,000 respondents can accurately represent the views of millions of adults. The answer comes from sampling theory. Of the many aspects of survey research, sampling arouses the greatest skepticism among Americans. Yet sampling is something most of us are already familiar with, even if we do not realize it. When you visit the doctor for your yearly physical, one routine procedure is a blood test. A healthcare worker draws a vial of blood to be examined for deficiencies or diseases. They do not drain all the blood from your body to learn about its condition; they extract and examine a “sample.” A sample of the total amount of blood is sufficient to produce accurate results because it has properties identical to those of the blood remaining in your body.

The same principle applies in public opinion polling. Sampling is the selection of a subset of respondents from a larger population. When the sampling process is conducted properly, the subset will be representative of that larger population; that is, the sample will demographically mirror the population. Probability sampling is widely cited as the number one characteristic that makes a poll scientific: it allows the researcher to calculate a margin of error and to generalize the results of the sample to the larger population from which it was selected. Non-probability samples, by contrast, are used in polls that allow respondents to select themselves into the sample, and they should be viewed with caution. Radio call-in shows and unscientific opt-in Internet polls, for example, may attract large numbers of respondents, but those respondents may not be representative of the larger public. Any effort to generalize beyond these self-selected samples is misleading.

The preeminent method of probability sampling is simple random sampling. Random sampling requires that each person in the population have a known and equal chance of being selected. If every individual has an equal chance of selection, the result will be a sample that accurately reflects the larger population from which it was drawn. For example, we know from the Louisiana Secretary of State’s voter file that females comprise 56% of registered voters in Louisiana. If a sample of 10,000 registered voters is randomly selected from the list of 2.9 million registered voters, females will make up roughly 56% of that sample. Other demographic categories, such as age and race, will also be accurately reflected. (The full list of people from which the sample is drawn and contacted for interviews is called the sampling frame.)
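For readers who want to see the principle in action, here is a small Python sketch (an illustration, not the author’s method) using a hypothetical voter file with the 56% female share cited above:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical voter file: 56% female, 44% male, mirroring the figures above.
population = ["F"] * 56_000 + ["M"] * 44_000

# Simple random sample of 10,000 voters; every voter has an equal chance.
sample = random.sample(population, 10_000)

pct_female = sample.count("F") / len(sample) * 100
print(f"Share of females in sample: {pct_female:.1f}%")  # close to 56%
```

Because every voter has the same chance of selection, the sample’s gender split lands within a fraction of a point of the population’s, without anyone ever examining the full list.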

But once the interviewing process is completed and you have your 1,000 respondents, what if there are biases in response rates? For example, what if women are more likely to respond than men? Since we know the true population proportion for gender, the researcher uses weights to bring the final sample numbers into line with the overall population values. If females constitute 60% of the sample but 56% of the population, we weight each female respondent by 0.56/0.60 ≈ 0.93, thereby reducing the effective percentage of females in the sample to 56% (0.93 x 0.60 ≈ 0.56); male respondents are correspondingly weighted up by 0.44/0.40 = 1.10.
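The weighting arithmetic described above can be checked in a few lines of Python (a sketch of the standard population-share-over-sample-share rule, not any pollster’s proprietary procedure):

```python
# Each respondent's weight = population share / sample share for their group.
population_share = {"F": 0.56, "M": 0.44}
sample_share = {"F": 0.60, "M": 0.40}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # F is weighted down (~0.93), M is weighted up (~1.10)

# Applying the weight restores the population's true female share:
weighted_female_share = weights["F"] * sample_share["F"]
print(f"{weighted_female_share:.2f}")  # 0.56
```

Real polls weight on several variables at once (gender, age, race, education), but each adjustment follows this same logic.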

Question Wording

Another factor that citizens should consider when examining a poll is question wording. Once we know who we are going to interview, we set about to frame the questions we will ask. The questions we ask determine the answers we get. In polling, words matter. Citizens should be on the lookout for the use of a loaded word or an inflammatory phrase in a survey question since that can affect the pattern of responses. For example, a researcher is interested in learning what Americans think about federal financial assistance to individuals. How the question is worded will influence responses. The pollster may ask Americans whether they favor or oppose government programs providing financial assistance to individuals who are struggling economically. Or the pollster may ask Americans whether they favor or oppose federal welfare programs for the poor. Few people support welfare, but more might support assistance to those who are economically struggling. As you can see, one can easily construct questions that will generate the responses they want. Citizens should pay close attention to how the question was worded. Ask a question one way and you may get one answer; slant the wording differently, and you may get a different one.

Margin of Error

The next aspect of opinion polling that mystifies people is the margin of error in the results. The margin of error tells you by how many percentage points your results can be expected to differ from the true population value. For example, a colleague and I conducted a telephone poll of likely voters last October to learn who they supported in the runoff election for governor. We had 722 likely voters respond to the poll, which yielded a margin of error of +/- 3.6% at a confidence level of 95%. The poll found that 50% preferred John Bel Edwards and 47% supported Eddie Rispone.

The margin of error is determined by the sample size. Generally speaking, the more people responding to a poll, the smaller the margin of error, but the gains diminish as the sample grows. Even at 4,000 or 5,000 respondents the margin of error is still around 1.5 percentage points, and shrinking it much further requires disproportionately larger samples. So it is fairly common for polls to survey about 1,000 respondents, which yields a margin of error of roughly +/- 3%. Since polls cost money, researchers need to balance accuracy with affordability.
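Under the standard assumption of simple random sampling, these figures follow from the textbook formula for the margin of error of a proportion (a sketch for illustration; the poll cited above may have used more refined adjustments):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p with sample size n.
    p = 0.5 is the conservative (worst-case) assumption."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes mentioned in the article, plus a larger one for comparison.
for n in (722, 1000, 4000, 10000):
    print(f"n = {n:>6}: +/- {margin_of_error(n) * 100:.1f}%")
```

Note that n = 722 produces the +/- 3.6% figure from the governor’s-race poll, and n = 1,000 the familiar +/- 3%; because the error shrinks with the square root of n, quadrupling the sample only halves the margin.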

As previously mentioned, the results from our telephone poll had a margin of error of +/- 3.6%. That meant the support for Edwards ranged from 46.4% to 53.6% while support for Rispone ranged from 43.4% to 50.6%. The margin of error was larger than the gap between the two candidates. Although the poll showed Edwards ahead, the race was best described as too close to call.

Margins of error and confidence levels reflect the fact that there is room for error in polls. Pollsters recognize that any survey will differ from the true population value by some amount. A poll with a 3% margin of error at 95% confidence means that, were the poll repeated many times, the results would fall within 3 percentage points of the true population value in about 95 of every 100 samples.

Reporting a margin of error is part of the polling process. The goal of every legitimate pollster is to be as accurate as possible, but we hedge our bets a bit since error can be introduced at various stages of the process. For instance, a poll may suffer from coverage error, where some groups of people have no opportunity to be included; measurement error, where a flawed questionnaire does not measure what it was intended to measure; non-response error, where the people who cannot be reached or decline to participate differ systematically from those who respond; and interviewer effects, where the interviewer can, consciously or not, influence how the respondent answers the questions.

Conclusion

Unfortunately, many people simply dismiss polling because “nobody asked me my opinion,” or “poll results can be twisted,” or they have “doubts about limited sample size.” Nonetheless, polls and surveys are all around us, and they are not going away. They are conducted on almost any conceivable topic, be it a genuine public policy issue, the approval levels of elected officials, the performance of the American economy, or something more lighthearted, such as which soft drink is favored or who should be inducted into the Rock and Roll Hall of Fame.

Should citizens be skeptical of polls? Of course. Not all polls are created equal; some are conducted to manipulate and manufacture public opinion rather than to inform and educate the public. Citizens should evaluate polls carefully by looking under the hood: who conducted the poll, when and how it was conducted, whether it used a probability or non-probability sample, how large the sample was, what the margin of error is, who was interviewed, and how the questions were worded. In the interest of transparency, credible polls will disclose this information.

Finally, the state of public opinion polling is strong, even with the perceived misfire in 2016. Critics will say pollsters got it wrong, but, in fact, it is the critics who have it wrong. Are response rates down from previous years? Yes. Are polls still accurate? Yes. According to Nate Silver at FiveThirtyEight, the overall accuracy of the 2016 polls was only slightly below average by historical standards. Moreover, American election polls since 2016 have been about as accurate as they have always been. The polling industry is as healthy and as accurate as it has ever been.
