
Why Polling on the 2020 Presidential Election Missed the Mark

Senator Susan Collins did not lead in a single publicly released poll during the final four months of her re-election campaign in Maine. But Ms. Collins, a Republican, won the election comfortably.

Senator Thom Tillis, a North Carolina Republican, trailed in almost every poll conducted in his race. He won, too.

And most polls underestimated President Trump’s strength in Iowa, Florida, Michigan, Texas, Wisconsin and elsewhere. Instead of winning a landslide, as the polls suggested, Joseph R. Biden Jr. beat Mr. Trump by less than two percentage points in the states that decided the election.

For the second straight presidential election, the polling industry missed the mark. The miss was not as blatant as in 2016, when polls suggested Mr. Trump would lose, nor was it as large as it appeared it might be on election night. Once all the votes are counted, the polls will have correctly pointed to the winner of the presidential campaign in 48 states — all but Florida and North Carolina — and correctly signaled that Mr. Biden would win.

But this year’s problems are still alarming, both to people inside the industry and to the millions of Americans who follow presidential polls with a passion once reserved for stock prices, sports scores and lottery numbers. The misses are especially vexing because pollsters spent much of the last four years trying to fix the central problem of 2016 — the underestimation of the Republican vote in multiple states — and they failed.

The Latest Polls Compared With Election Results


State        Final 2020 Poll Avg.    2020 Result
U.S. †       Biden +8                Biden +5
N.H.         Biden +11               Biden +7
Wis.         Biden +10               Biden <1
Minn.        Biden +10               Biden +7
Mich.        Biden +8                Biden +3
Nev.         Biden +6                Biden +3
Pa. †        Biden +5                Biden +1
Neb. 2 *     Biden +5                Biden +7
Maine 2 *    Biden +3                Trump +7
Ariz.        Biden +3                Biden <1
Fla.         Biden +2                Trump +3
N.C.         Biden +2                Trump +1
Ga.          Biden +2                Biden <1
Ohio         Trump <1                Trump +8
Iowa         Trump +1                Trump +8
Texas        Trump +2                Trump +6

† These reflect Times estimates of the final vote margin once all votes are counted. * In Maine and Nebraska, two electoral votes are apportioned to the winner of the state popular vote, and the rest of the votes are given to the winner of the popular vote in each congressional district.

“This was a bad year for polling,” David Shor, a data scientist who advises Democratic campaigns, said. Douglas Rivers, the chief scientist of YouGov, a global polling firm, said, “We’re obviously going to have a black eye on this.”

The problems spanned the public polls that voters see and the private polls that campaigns use. Internal polls, conducted for both Democratic and Republican candidates, tended to make Republican candidates look weaker than they were.

In response, polling firms are asking whether they need to accelerate their shift to new research methods, such as surveying people by text message. And media organizations including The New York Times, which financially support and promote polls, are re-evaluating how they portray polls in future coverage. Some editors believe the best approach may be to give them less prominent coverage, despite intense interest from readers and despite the dominant role polls play in shaping campaign strategies.

This year’s misleading polls had real-world effects, for both political parties. The Trump campaign pulled back from campaigning in Michigan and Wisconsin, reducing visits and advertising, and lost both only narrowly. In Arizona, a Republican strategist who worked on Senator Martha McSally’s re-election campaign said that public polling showing her far behind “probably cost us $4 or $5 million” in donations. Ms. McSally lost to Mark Kelly by less than three percentage points.

Mr. Biden spent valuable time visiting Iowa and Ohio in the campaign’s final days, only to lose both soundly. Democrats also poured money into races that may never have been winnable, like the South Carolina Senate race, while paying less attention to some of their House incumbents who party leaders wrongly thought were safe. The party ended up losing seats.

“District-level polling has rarely led us — or the parties and groups investing in House races — so astray,” David Wasserman of the Cook Political Report, a nonpartisan publication that analyzes races, wrote last week.


The full explanation for the misses will not be knowable for months, until the election results are finalized, detailed poll data is released by survey firms, and public voter files — showing exactly who voted — are also released. Once this information is available, some of the immediate postelection criticism of polls may end up looking even worse than the polls themselves, academic researchers caution.

But the available facts already point to some likely conclusions:

  • People’s decreasing willingness to respond to polls — thanks partly to caller ID — has reduced average poll response rates to only 6 percent in recent years, according to the Pew Research Center, from above 50 percent in many polls during the 1980s. At today’s level, pollsters cannot easily construct a sample of respondents who resemble the population.

  • Some types of voters seem less willing to respond to polls than others, perhaps because they are less trusting of institutions, and these voters seem to lean Republican.

  • The polling industry tried to fix this problem after 2016, by ensuring that polling samples included enough white working-class voters in 2020. But that is not enough if response rates also vary within groups — for instance, if the white or Hispanic working-class voters who respond to polls have a different political profile than those who do not respond. (A brief simulation after this list illustrates the effect.)

  • This year’s polls may have suffered from pandemic-related problems that will not repeat in the future, including a potential turnout decline among Democratic voters who feared contracting the coronavirus at a polling place.

  • A much-hyped theory that Trump supporters lie to pollsters appears to be wrong or insignificant. Polls did not underestimate his support more in liberal areas, where supporting Mr. Trump can be less socially acceptable, than in conservative areas.

  • In what may be the most complex pattern, polls underestimated the support of multiple Senate Republican candidates even more than Mr. Trump. This means the polls missed a disproportionate number of Americans who voted for both Mr. Biden and a Republican Senate candidate — and that the problems do not simply involve Mr. Trump’s base.
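
To see why that within-group pattern matters, consider a small simulation. This is a hypothetical illustration with invented numbers, not a reconstruction of any real survey: it assumes a pollster weights two education groups to exactly their true share of the electorate, but that Republicans within each group answer surveys slightly less often than Democrats.

```python
import random

random.seed(0)

# Hypothetical electorate: two education groups whose sizes the pollster knows
# exactly. The only assumption is that, within each group, Republicans respond
# to surveys slightly less often than Democrats (5% vs. 6% response rates).
groups = [
    # (name, share of electorate, true Republican share of the group)
    ("college", 0.40, 0.40),
    ("non-college", 0.60, 0.55),
]
RESPONSE_RATE = {"Republican": 0.05, "Democrat": 0.06}

N = 200_000  # simulated voters
respondents = []
for name, share, rep_share in groups:
    for _ in range(int(N * share)):
        party = "Republican" if random.random() < rep_share else "Democrat"
        if random.random() < RESPONSE_RATE[party]:
            respondents.append((name, party))

# Weight each group back to its true share of the electorate (the standard
# post-2016 fix) and estimate the Republican share of the vote.
estimate = 0.0
for name, share, _ in groups:
    answered = [party for g, party in respondents if g == name]
    estimate += share * (answered.count("Republican") / len(answered))

truth = sum(share * rep_share for _, share, rep_share in groups)
print(f"true Republican share:    {truth:.1%}")     # 49.0%
print(f"weighted poll estimate:   {estimate:.1%}")  # roughly 4 to 5 points too low
```

Under these assumed numbers the poll misses by several points even though every demographic weight is exactly right; that is roughly the size of the error many swing-state polls showed this year.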

Defenders of the polling industry point out that the final national error may not be very different from the historical average — and that polls can never be perfect, given the difficulty of capturing the mood of a large, diverse country. National polling averages showed Mr. Biden with a lead of about eight percentage points. Once all the votes are counted, he appears likely to win the popular vote by about four or five percentage points.

“The problem is,” Patrick Murray, the director of polling at Monmouth University, said, “even if the polls end up being significantly closer to the final results when it’s counted, most people remember how they felt the morning after.”

Regardless, there are reasons for concern: Polling now seems to suffer from some systemic problems, which create a misleading picture of the country’s politics.

This problem has sprung up at the same time that a deeply polarized country has become more intensely interested in politics than it once was. The share of eligible voters who turned out this year may have reached the highest level since 1900. Book sales about politics have soared, as have ratings for television news and subscriptions to publications that cover politics. Many people crave polls, which can have an addictive quality, especially during an election that both parties described as an existential battle for America’s future.

Politics has become a high-stakes spectator sport at the same time that the country’s ability to understand it has weakened.


Pollsters have been grappling with some of the same challenges since the creation of the industry in the early 20th century.

One of the first polls to receive widespread attention came from The Literary Digest, a magazine, and it was published days before the 1916 presidential election. The magazine asked readers in 3,000 communities to mail in sample ballots and then reported that President Woodrow Wilson was in a stronger position than his Republican opponent, Charles Evans Hughes.

Wilson won the election, and The Literary Digest poll became a national phenomenon, correctly pointing to the winners from 1920 through 1932. In 1936, though, the poll showed that Alf Landon, the Republican nominee, would easily defeat Franklin D. Roosevelt. Instead, Roosevelt won in a landslide, and the error changed the industry.

Although The Literary Digest’s sample of 2.4 million respondents was enormous, it was not representative. The magazine’s circulation skewed toward affluent Americans, who were more hostile to Roosevelt and the New Deal than most voters. A less prominent pollster that year, George Gallup, had surveyed far fewer people — about 50,000 — but he had been careful to ensure they matched the country’s demographic mix. Mr. Gallup correctly predicted a Roosevelt win.

Mr. Gallup’s methods shaped the industry, which today consists of dozens of organizations, with a wide range in quality. Typically, a poll surveys hundreds or a few thousand people and then extrapolates their answers to represent the broader population. If a poll cannot reach enough people in a certain demographic group — say, white Catholics or older Black men — it counts those it does reach in the group more heavily.
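
The weighting step works roughly like the sketch below, which uses invented numbers; in practice the population shares would come from census data or voter files.

```python
# A minimal sketch of demographic weighting, with invented numbers.
population_share = {"college": 0.40, "non-college": 0.60}   # known benchmark shares

# A hypothetical poll of 1,000 respondents that over-represents college graduates.
sample_count = {"college": 600, "non-college": 400}
support_for_a = {"college": 0.58, "non-college": 0.44}      # share backing Candidate A

n = sum(sample_count.values())

# Each respondent's weight = group's population share / group's sample share,
# so people in the scarce group count more heavily in the final estimate.
weight = {g: population_share[g] / (sample_count[g] / n) for g in sample_count}

unweighted = sum(sample_count[g] * support_for_a[g] for g in sample_count) / n
weighted = sum(sample_count[g] * weight[g] * support_for_a[g] for g in sample_count) / n

print(f"weights: {weight}")                      # college ~0.67, non-college 1.5
print(f"unweighted estimate: {unweighted:.1%}")  # 52.4%, skewed by over-sampling
print(f"weighted estimate:   {weighted:.1%}")    # 49.6%, matched to the population mix
```

The catch, as the rest of this article explains, is that weighting of this kind corrects only for who is missing across groups, not for who is missing within them.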

Still, the Gallup methods were not perfect, for some of the same reasons that polls have struggled recently. Within some demographic groups, Gallup turned out to be interviewing more Republican voters than Democratic ones, and it overestimated the Republican vote share in 1936, 1940 and 1944.

In retrospect, a front-page article in The New York Times on the Sunday before the 1936 election is a telling case study. Written by Arthur Krock, the newspaper’s Washington bureau chief, the article asked more than 200 experts to forecast the result. The article’s headline read: “Experts Predict Roosevelt Victory With Probably 406 Electoral Votes.” In the House and Senate, Republicans would likely make gains, the experts said.

That forecast was roughly consistent with the Gallup poll — and it ended up being badly off. Roosevelt won 523 electoral votes, and Democrats made big gains in Congress. But neither the Gallup error nor the misleading Times survey attracted much attention, because they had still forecast a Roosevelt victory.

“This was merely due to luck,” Dennis DeTurck, a mathematician at the University of Pennsylvania, has pointed out. “The spread between the candidates was large enough to cover the error.”


In 1948, the pollsters’ luck ran out. They continued to overestimate the Republican vote share, reporting throughout the campaign that President Harry Truman trailed his Republican opponent, Thomas Dewey. This time, the election was close enough that the polls pointed to the wrong winner, contributing to perhaps the most famous error in modern journalism, The Chicago Tribune’s banner headline “Dewey Defeats Truman.”

That error led to a new overhaul of polling. Pollsters redoubled their efforts to build samples that were representative of the country. They were not perfect: Presidential polls in the 1950s and ’60s missed by about four percentage points on average, similar to this year’s miss, according to the website FiveThirtyEight.

“Polling has always been challenging,” Nate Silver, FiveThirtyEight’s editor in chief, said.

But polls still tended to point correctly to the winner of the presidential race in those years, partly because Americans were so willing to respond to polls.

People tended to answer their telephone when it rang. They also tended to trust major institutions, like the government, the media and higher education. And they did not have a constant source of entertainment in their pocket — a smartphone — that an extended telephone survey kept them from using, as Mr. Shor, the data scientist, said.

“Decades ago, most people would be happy to answer the door to a stranger or answer the phone to a stranger,” Courtney Kennedy, the director of survey research at Pew, said, “and those days are long gone.”


When response rates began falling in the 1980s and 1990s, many people in the polling industry worried that they were facing a crisis: How could they accurately measure Americans’ opinions if many refused to respond to surveys?

But no crisis materialized. If anything, surveys became somewhat more accurate, as pollsters refined their methods. Once researchers analyzed the data, they landed on an explanation for why low response rates were manageable: “Whether somebody was going to participate in a survey was not really related to the things surveys tended to measure,” Ms. Kennedy, of Pew, said.

Some groups, like college graduates and politically engaged people, were more willing to respond to polls. But the differences did not break down along partisan lines. Similar numbers of Democratic and Republican voters were declining to answer questions, which allowed the polls to be accurate.

Then came the election of 2016.

Nearly every poll showed Hillary Clinton to be leading Mr. Trump in Michigan, Pennsylvania and Wisconsin. The leads were big enough that her campaign paid relatively little attention to those states. But she lost all three narrowly, giving Mr. Trump a stunning victory.

Afterward, pollsters dug into their data, comparing it to precinct-by-precinct election results and to voter files, which show who voted but not how they voted. In early 2017, the leading industry group, the American Association of Public Opinion Research, or AAPOR, released its conclusions.

One factor was largely unsolvable: Late-deciding voters, accurately identified as undecided in polls, broke strongly for Mr. Trump. Many may have been swayed by James Comey, the F.B.I. director, who nine days before the election sent a letter to Congress announcing that he was again looking into Mrs. Clinton’s use of a private email server as secretary of state.


A second problem was that many polls failed to include enough white voters without college degrees. In the late 20th century and early 2000s, missing some of these voters had not been a big problem, because white college graduates and white non-graduates voted similarly. In 2016, these voters shifted to Mr. Trump, and the polls had failed to capture it.

The report by AAPOR offered an optimistic conclusion. The national polls had been close to correct, overestimating Mrs. Clinton’s vote share by only about one percentage point. That was well within the range of historical polling errors. And there were obvious steps the industry could take to improve in the future, by including more working-class voters or weighting the ones who responded more heavily.

Perhaps most important, the polling association argued, the 2016 experience did not suggest a systematic problem in which polls favored one party. In some years, like 2012, polls slightly underestimated the Democratic share, and in other years, like 2016, they slightly underestimated the Republican share. The report said the direction of those misses was “essentially random.”

The midterm elections of the following year, 2018, initially seemed to support this conclusion. The polls correctly suggested that Democrats would sweep to victory in the House, while Republicans would retain the Senate. State polls were off by an average of about four percentage points, which was historically normal.

The underlying details contained some reasons for concern, though. While polls in some liberal states, like California and Massachusetts, had underestimated the Democrats’ vote share in 2018, polls in several swing states and conservative states, including Florida, Georgia, Michigan and Pennsylvania, again underestimated the Republican share.

For the second time since Mr. Trump’s entry into politics, the polls had somehow failed to reach enough Republican voters in the swing states that decide modern presidential elections. A third election — his re-election campaign — was looming in 2020, and it was one that millions of Americans, both his supporters and critics, would be following passionately.


By the final weeks of this year’s campaign, the polls seemed to be telling a clear story: Mr. Biden had led Mr. Trump by a significant margin for the entire race, and the lead had widened since the summer. Some combination of the coronavirus, Mr. Trump’s reaction to police brutality and his erratic behavior at the first debate had put Mr. Biden within reach of the most lopsided presidential win since Ronald Reagan’s in 1984.

In the campaign’s final 10 days, a Wall Street Journal/NBC News poll showed Mr. Biden up by 10 percentage points nationwide, as did a YouGov poll. Fox News put the lead at eight points, and CNN at 12. Other polls also showed the lead to be at least eight points.

In Wisconsin, which Mr. Trump had won narrowly in 2016, a YouGov poll found Mr. Biden up by nine percentage points. A New York Times/Siena College poll showed the lead was 11 points, while Morning Consult said 13 points. A Washington Post/ABC News poll reported the lead was 17 points. In congressional races, the Cook Political Report called the Democrats clear favorites to retake the Senate and gain seats in the House.

That, of course, did not happen. Mr. Biden won Wisconsin by less than one percentage point, while Republicans fared much better in congressional races than polls had suggested.

Even Republican-aligned firms struggled. The Trafalgar Group, for example, was closer than other pollsters in some states. But Trafalgar fared worse than the competition in other states, wrongly suggesting that Mr. Trump was ahead in Arizona, Michigan and Pennsylvania. Because polling challenges evidently varied by region, reporting more Republican-friendly results everywhere did not solve the problem.


Since Election Day, some campaign operatives have claimed that their private polls were more accurate than the public ones. They have offered no proof, however, and the behavior of campaigns, including both Mr. Biden’s and Mr. Trump’s, suggests private polls were also inaccurate.

What went wrong? There are almost certainly multiple causes, pollsters and political scientists say. One possibility is that the pandemic may have led to an unexpected falloff in Election Day voting among Democrats, given that the party emphasized mail voting. Another is that Democratic voters, energized by the Trump presidency and bored during the pandemic, became newly excited to respond to polls.

But the most likely explanation remains an unwillingness among some Republican voters to answer surveys. This problem may have become more acute during Mr. Trump’s presidency, because he frequently told his supporters not to trust the media.

“I think when all the votes are counted, what we are going to see is a far smaller polling error, potentially even minimal, in many of the states where the presidential was competitive,” Jefrey Pollock, a Democratic pollster and president of the Global Strategy Group, said. But he acknowledged that some polls were off and added, “As professionals, we have to question whether a segment of the electorate has opted out of talking to us.”

B.J. Martino, a partner at the Tarrance Group, which works for Republicans, said, “If there is an underlying issue, it’s not getting those folks on the phone to begin with.” Paul Maslin, a Democratic pollster, said: “They don’t trust the news media. They don’t trust elites. They don’t trust scientists. They don’t trust academics. They don’t trust experts.”


These voters do not fit any one demographic group, which is part of why they are so difficult to reach. Instead, they appear to be a distinct slice of voters within many demographic groups. Imagine if, say, the independent, Hispanic, middle-aged, working-class women who were willing to answer a survey also happened to lean more Democratic than the same demographic profile of voters who were unwilling. And then imagine that the pattern holds only in certain states.

Even if pollsters constructed samples with the right mix of groups — by race, gender, age, income, education, religion and party registration — they might not capture the electorate’s mood. Those demographic factors, Mr. Shor said, “are not enough to predict partisanship anymore.”

In some ways, the problem is new. It is a reflection of modern technology, political polarization and more. In other ways, though, the problem has existed since the 1930s, when polls also undercounted segments of the working-class vote.

One difference is that those undercounted voters leaned Democratic at the time, which led polls to understate the strength of Roosevelt and Truman. The partisan effect has since flipped, with the white working class now backing Republicans, but the underlying dynamic has remained the same.

Pollsters managed to fix the problem after their “Dewey Defeats Truman” reckoning. The question is how they can do it again now, when survey response rates have fallen well below 10 percent.


Perhaps the one pollster who has emerged from the last few years with the best reputation is J. Ann Selzer, who runs a firm in West Des Moines, Iowa, and who conducts polls with The Des Moines Register.

This year, while other polls were showing a tossup in both Iowa’s Senate and presidential races, Selzer & Co. reported on the campaign’s final weekend that Mr. Trump held a seven-point lead and Senator Joni Ernst, the Republican incumbent, held a four-point lead.

They both went on to win by somewhat more, which suggests that Ms. Selzer is not immune from the current problems. But in both 2016 and 2020, her final polls correctly showed that Iowa had changed from a tossup state to a largely Republican one.

One of her methods, she said in an interview, is keeping her surveys short, because there are differences between voters who are willing to talk at length to a pollster and those who are not. Most of her surveys last less than 15 minutes. In her final survey before an election, she tries to keep the interviews under eight minutes.

“There’s a self-selection in people’s willingness to talk to polls,” she said. She recalled conducting a 45-minute-long survey for a private client years ago about Transcendental Meditation. “Our finding was that about half the people we talked to had an experience with Transcendental Meditation,” she said. “Do you think that’s true?”

The polling industry group, AAPOR, had announced months before the election that it would conduct a post-mortem analysis for 2020. This analysis — and others, done by individual pollsters — will probably shape the specific measures that pollsters take.

Regardless, there is unlikely to be any single step that fixes the polls’ recent anti-Republican lean. Instead, pollsters are likely to try a mix of many small measures, like Ms. Selzer’s short interviews.

One option is to create new screening questions about whether respondents trust other people and major institutions — and then weight less trusting respondents more heavily in a poll’s final results. Pew, in recent years, has asked questions about whether people spend time volunteering, as one measure of trust.
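
In practice, that would mean adding one more weighting dimension on top of the demographic ones. The sketch below is purely hypothetical: it assumes a high-response benchmark survey can tell pollsters what share of adults are “low-trust,” and then weights those respondents up, much as scarce demographic groups already are.

```python
# Hypothetical trust-based adjustment; both figures are assumptions for illustration.
benchmark_low_trust = 0.35  # assumed share of adults who express low trust in institutions
sample_low_trust = 0.20     # share of the poll's respondents who screen as low-trust

# Applied on top of the usual demographic weights.
low_trust_weight = benchmark_low_trust / sample_low_trust               # 1.75
high_trust_weight = (1 - benchmark_low_trust) / (1 - sample_low_trust)  # about 0.81

print(f"low-trust respondents weighted up by {low_trust_weight:.2f}x")
print(f"high-trust respondents weighted down to {high_trust_weight:.2f}x")
```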

Another is to expand the use of text messages and other forms of nonvoice communication, like Facebook messages, in surveying people. “We’re going to see more diversity in polling methodologies,” said Kevin Collins, the co-founder of Survey160, which collects data through text messaging.


Some pollsters also wonder whether the problems may recede when Mr. Trump is not on the ballot. But he seems unlikely to be the only cause of errant polling, given how badly many congressional surveys missed the mark this year. In Arizona, Georgia, Maine, Michigan, North Carolina and South Carolina, the final publicly released Senate polls missed by more on average than the final presidential polls.

A separate set of changes may involve how the media present polling and whether publications spend as much money on it in the future. “The media that sponsor polls should demand better results because their reputations are on the line,” James A. Baker III, the former secretary of state, wrote in The Wall Street Journal this week.

Among other questions, editors are grappling with the best way to convey the inherent uncertainty in polls.

Mr. Silver, then an independent blogger, created a breakthrough in 2008 when he began writing about every available poll, focusing on the swing states and talking about the probability of one candidate beating another. Before that, most publications had focused on national polls and largely ignored those done by competing publications.

The New York Times published Mr. Silver’s blog, FiveThirtyEight, from 2010 through 2013, and it is now part of ABC News. Other publications have since taken a similar approach, creating their own probabilistic models.

Mr. Silver and others have tried to emphasize the uncertainty in polls, by giving both candidates in a race a percentage that reflects their likelihood of winning. But many people seem to struggle to make sense of these probabilities in a one-time event like an election. They see that a candidate has a 71 percent chance of winning, as FiveThirtyEight gave Mrs. Clinton in 2016, and incorrectly think it is akin to a guarantee.
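
One way to see what a figure like 71 percent does and does not mean is to build a toy forecast. The sketch below is not FiveThirtyEight’s model or any pollster’s actual method; it simply assumes a three-point polling lead and a normally distributed polling error, then counts how often the leader wins.

```python
import random

random.seed(0)

poll_lead = 3.0          # assumed lead in the polling average, in percentage points
polling_error_sd = 5.0   # assumed standard deviation of the polling error, in points

simulations = 100_000
leader_wins = sum(
    1 for _ in range(simulations)
    if poll_lead + random.gauss(0, polling_error_sd) > 0
)

probability = leader_wins / simulations
print(f"leader's win probability: {probability:.0%}")             # about 73% here
print(f"trailing candidate wins in about {1 - probability:.0%} of simulations")
```

With these assumptions the leader wins about 73 percent of the time and still loses more than one simulation in four, which is a long way from a guarantee.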

This year, FiveThirtyEight used a dot to portray each percentage-point possibility, stressing that either candidate could win.


The Times, which in 2016 had given Mrs. Clinton an 85 percent chance of winning, did not create a probabilistic model this year. It instead published a table that included both polling averages and a column showing the likely result if each state’s polls were as wrong as they had been in 2016. That column looks better in hindsight than the polling averages.

Virtually nobody thinks polling is going away. It is too important in a democracy, Mr. Collins, of Survey160, said. It guides campaign strategies and politicians’ policy choices. And no alternative method of election analysis has a track record anywhere near as good as polling’s imperfect one.

The only short-term solution, some people believe, is for pollsters and the media to emphasize — and for Americans to recognize — that polling can be misleading. Even an aggregate picture, from dozens of polls, can be meaningfully off, especially in an intensely divided political era.

“There’s something in U.S. culture that has developed a fetish with quantitative forecasting,” Dahlia Scheindlin, a Tel Aviv-based pollster who has worked in 15 countries, said.

If nothing else, the polling of the last four years may have given Americans a better understanding that they should not take polls literally. “The narrative around polling has to change,” Cornell Belcher, a Democratic pollster, said, “because it’s misinforming and it’s setting polling up to fail.”

Reporting was contributed by Thomas Kaplan, Annie Karni, Giovanni Russonello and Matina Stevis-Gridneff.
