Introduction to Political Science

5.5 How Do We Measure Public Opinion?

Learning Outcomes

By the end of this section, you will be able to:

  • Describe different methods of measuring public opinion.
  • Explain the shortcomings of these methods.

Earlier in this chapter, we discussed how writing to public officials is an important facet of political participation. Before we had a good way to measure public opinion, constituent letters were one of the few ways officials could gauge how the public felt. The advent of public opinion polls provided a scientific way of identifying and measuring opinions. Social scientist Jean Converse, in her history of the field, writes that surveys can be traced back 2,000 years but were forged in the 20th century as a way to understand mass populations and societies and to gain insight into elites.127 Over time, polls and surveys have become more precise through careful sampling and improved techniques. A sample is a group selected by a researcher to represent the characteristics of the entire population, and because we can never poll the entire population, getting the right sample is crucial to the accuracy of any poll. But how can we accurately gauge the opinions of the whole country from a sample of only 1,400 or 2,000 people? The way the sample is drawn affects its accuracy. In the most common method, probability sampling, researchers randomly choose the sample from the larger population. This method requires that every member of the population have an equal chance of being selected, which allows researchers to make generalizations about the larger population. If a researcher chooses people at random from a population, their views are likely to match the opinions of the population as a whole. These types of samples are often generated through random digit dialing, in which respondents are chosen at random by a computerized phone number generator. Researchers then use these randomly generated phone numbers to reach people at home and ask them about their opinions. While random digit dialing has been the standard approach for decades, the decline of landlines, the widespread adoption of cell phones, and the amount of time people spend away from home at work have all reduced pollsters’ reliance on home-based phone numbers. A Los Angeles Times article found that Internet-based surveys and automated interviewing systems (as opposed to live pollsters) were particularly accurate and may reflect a shift in how researchers measure public opinion moving forward.128
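
To see why random selection allows generalization, consider a minimal simulation. This is only a sketch under invented assumptions: it posits a hypothetical population of 1,000,000 people in which exactly 54 percent hold a given opinion, then draws a 1,500-person simple random sample, standing in for the random digit dialing described above.

```python
import random

random.seed(42)

# Hypothetical population: 1,000,000 people, 54% of whom approve of a policy.
# (The population size and the 54% figure are invented for illustration.)
POPULATION_SIZE = 1_000_000
TRUE_SUPPORT = 0.54
population = [random.random() < TRUE_SUPPORT for _ in range(POPULATION_SIZE)]

# Probability sample: every member of the population has an equal chance of
# selection, analogous to dialing randomly generated phone numbers.
sample = random.sample(population, k=1500)

estimate = sum(sample) / len(sample)
print(f"True support in population: {TRUE_SUPPORT:.1%}")
print(f"Estimate from 1,500-person random sample: {estimate:.1%}")
```

Because every member of the hypothetical population has the same chance of being selected, the estimate from 1,500 respondents typically lands within a couple of percentage points of the true 54 percent, which is what makes generalization to the whole population defensible.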

The difficulty of reaching people for polls is not just an American phenomenon. Researchers in Japan have found steep decreases in responses to nationally conducted surveys, with the steepest declines in metropolitan areas and among younger demographics. Scholars point to increased commute times, longer work hours, and higher mobility among younger Japanese as contributing to this problem.129 Sampling in countries facing violence or instability can be a serious—and dangerous—problem for pollsters. James Bell, director of international survey research for the Pew Research Center, notes that when Pew conducted polls during civil unrest in Ukraine and Venezuela, polls needed to be conducted face-to-face rather than by phone. In addition, sometimes the data acquired in polls must be processed locally if pollsters cannot immediately evacuate the area.130

What Can I Do?

The Importance of Empirical and Quantitative Skills

Figure 5.12 This line graph shows support for political parties in New Zealand between 2009 and 2012, according to various political polls: the National Party received the most support, at around 50 percent, followed by the Labour Party at around 30 percent and the Green Party at around 10 percent. (credit: “File:NZ opinion polls 2009-2011 -parties.png” Mark Payne, Denmark/Wikimedia Commons, CC BY 3.0)

Everyone loves a good public opinion poll. As you’ll see in other chapters, polling data are used throughout society, by the media, by candidates running for office, and even to decide what gets included in legislation. However, understanding what the data are telling us is a skill that must be developed, and it can be applied in a wide range of areas. Looking at a set of numerical data or observable facts and reaching an informed conclusion about what is happening, such as whether a group of voters prefers a certain candidate or whether a group of residents wants a park built in their town, is really no different from determining whether a group of consumers prefers brand X bread or brand Y bread. In the modern digital era, we have a wealth of information at our fingertips, and being able to understand and interpret that information is a skill that is becoming fundamental in today’s workforce.

There are also “nontraditional” sampling methods, which may be less scientific but offer certain benefits. One nontraditional method is a convenience sample, which, as the name suggests, is a sample based on convenience rather than probability. If you do not have the funds to create a poll based on a probability sample and random digit dialing, you might instead ask your classmates or coworkers to respond to your survey with their opinions on the last election. While this method is both convenient and easy, we cannot extrapolate much from the information beyond the sample from which it is drawn. Another polling method is cluster sampling, in which researchers divide the overall population into clusters based on characteristics such as shared cities or schools and then randomly select people to poll from within a subset of those clusters. This type of sampling is cheaper than full probability sampling, but because people within the same cluster tend to resemble one another, the results are not quite as representative as those of a simple random sample drawn from the entire population.
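
A short sketch can make the two-stage logic of cluster sampling concrete. The city names, sizes, and sample counts below are invented for illustration; the point is simply that researchers first randomly select clusters and then randomly select respondents within them.

```python
import random

random.seed(7)

# Hypothetical sampling frame: voters grouped by city (the clusters).
# City names and population sizes are invented for illustration.
cities = {
    "Springfield": [f"Springfield resident {i}" for i in range(5000)],
    "Riverton":    [f"Riverton resident {i}" for i in range(3000)],
    "Lakeside":    [f"Lakeside resident {i}" for i in range(4000)],
    "Hillsboro":   [f"Hillsboro resident {i}" for i in range(2000)],
}

# Stage 1: randomly select a subset of clusters (here, 2 of the 4 cities).
chosen_cities = random.sample(list(cities), k=2)

# Stage 2: randomly select respondents within each chosen cluster.
respondents = []
for city in chosen_cities:
    respondents.extend(random.sample(cities[city], k=100))

print("Clusters selected:", chosen_cities)
print("Total respondents:", len(respondents))
```

Interviewing 100 people in each of two cities is cheaper than locating 200 respondents scattered across all four, but because residents of the same city tend to resemble one another, the design usually yields less precision than a simple random sample of the same size.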

How reliable are polls? One of the most basic issues with a poll is a sample that is too small, which leads to sampling error. Generally speaking, the larger the sample, the smaller the chance of error. A typical sample of 1,500 people has a sampling error of approximately 2.6 percent, which is generally considered an “acceptable” margin of error in public opinion polling. This means that if 60 percent of 1,500 respondents say the LA Lakers are their favorite NBA team, then, allowing for sampling error, the true figure in the population could be anywhere between 57.4 percent and 62.6 percent. The smaller the sample, the larger the error.
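
The figures in this paragraph follow from the standard margin-of-error formula for a sample proportion. The sketch below is illustrative: it applies the usual 95 percent formula, plus the conservative 1/sqrt(n) shortcut that rounds to roughly the 2.6 percent cited above, to the 1,500-person Lakers example.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (p = 0.5 is the worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1500
moe = margin_of_error(n)           # about 0.025, i.e., roughly 2.5 points
conservative = 1 / math.sqrt(n)    # shortcut, about 0.026, i.e., ~2.6 points

observed = 0.60                    # 60% name the Lakers as their favorite team
low, high = observed - conservative, observed + conservative

print(f"Margin of error (exact formula): {moe:.1%}")
print(f"Margin of error (1/sqrt(n) shortcut): {conservative:.1%}")
print(f"Likely range for true support: {low:.1%} to {high:.1%}")
```

Because the error shrinks only with the square root of the sample size, cutting the margin of error in half requires roughly four times as many respondents, which is why pollsters rarely go far beyond samples of a few thousand.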

The methods by which respondents are contacted can also affect a poll’s accuracy. According to the Centers for Disease Control and Prevention (CDC), in 2020, 83 percent of Americans aged 30–34, 74.5 percent of those aged 35–44, and almost 60 percent of those aged 45–64 used cell phones exclusively.131 This trend away from landlines can contribute to selection bias, whereby the sample drawn is not representative of the population being studied. In this case, any sample drawn from people using landlines would probably skew heavily toward individuals who are much older and those who are likely to be at home more often.
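
Selection bias of this kind can be illustrated with a simple simulation. Every number below is invented for illustration; the sketch only assumes that older respondents are both more likely to keep a landline and more likely to approve of some hypothetical policy, so a landline-only sample overstates overall approval.

```python
import random

random.seed(3)

# Invented illustration: two age groups with different landline rates and
# different approval rates for some hypothetical policy.
groups = {
    # group: (share of population, probability of a landline, approval rate)
    "under 45":    (0.55, 0.15, 0.40),
    "45 and over": (0.45, 0.60, 0.65),
}

population = []
for _, (share, landline_rate, approval) in groups.items():
    for _ in range(int(share * 100_000)):
        population.append({
            "has_landline": random.random() < landline_rate,
            "approves": random.random() < approval,
        })

everyone = sum(p["approves"] for p in population) / len(population)
landline_only = [p for p in population if p["has_landline"]]
landline_est = sum(p["approves"] for p in landline_only) / len(landline_only)

print(f"True approval in the full population: {everyone:.1%}")
print(f"Approval among landline households only: {landline_est:.1%}")
```

The landline-only estimate comes out several points too high, and no increase in sample size fixes it, because the error comes from who can be reached rather than from how many are reached.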

The design of the survey itself can limit a poll’s accuracy. Question wording, interviewer bias, and response bias can all lead to measurement error, or limitations in response validity caused by problems in survey design. Questions should be worded in a straightforward manner in order to solicit a truthful response. Studies have shown that alterations in question wording, also known as question wording effects, change how people respond to polls and surveys. For example, University of Chicago Professor Kenneth Rasinski found that even the slightest changes in wording altered people’s support for government spending,132 while Cornell University Professor Jonathon Schuldt, Indiana University Professor Sara Konrath, and University of Southern California Professor Norbert Schwarz found that responses changed depending on whether a question used the term “global warming” or the term “climate change.”133 Bias that stems from the identity of the individual conducting the interview, known as interviewer bias, can also change how people respond. For example, Princeton University Professor Daniel Katz found that the social class of the interviewer had an effect on survey responses,134 while a study of breast cancer patients found that response rates to surveys were higher when the race of the person administering the survey was the same as that of the respondent.135 Similar effects have been found for interviewer gender: people sometimes respond differently based on the gender of the person conducting the survey.136

Inaccuracies can also arise from response bias, when respondents inaccurately report their true opinions for one reason or another. One famous example of response bias is called the “Bradley effect.” This theory refers to a phenomenon observed in the 1982 California gubernatorial race between Tom Bradley, a Black man, and George Deukmejian, a White man of Armenian descent. In polls leading up to this race, Bradley was shown to be in the lead, but he ultimately lost by a narrow margin. The theory behind the Bradley effect is that White voters are unlikely to admit to bias against minority candidates, and as such, polls may overestimate support for a minority candidate. Also known as social desirability bias, this type of response bias occurs when respondents give the answer they think they should give, and not what they really feel.
