What the Biggest Political Polling Mistakes in History Can Teach Us About Surveys

Polls have been used to try to predict the outcome of important political races for nearly two centuries. The first poll was a modest straw poll conducted by the Harrisburg Pennsylvanian newspaper in an attempt to predict the 1824 presidential election. Today, as we approach the 2014 midterm elections, our national attention has turned to the ever-increasing number of polls devoted to forecasting election results at the local, state, and federal levels. These high-stakes political fortune tellers use advanced statistical modeling and sampling methods to make the most accurate predictions possible. However, history has seen its share of polling blunders, and although no poll will ever be able to predict an election with 100% certainty, you can learn from the textbook survey mistakes made by botched political polls of the past.

1. The 1936 Literary Digest Poll – Literary Digest had a formidable record up until 1936. The magazine called the winner of each presidential election correctly in 1920, 1924, 1928, and 1932. However, its streak was about to end in a big way. The magazine polled two million of its subscribers and arrived at the conclusion that Republican challenger Alf Landon would triumph over President Franklin Delano Roosevelt, the Democratic incumbent.

Where they went wrong: Sure, Literary Digest polled a lot of people, but the people the magazine polled were not representative of the majority of Americans during the Great Depression. Subscribers to the Digest were wealthier than the average person—affluent enough to buy magazines, have a telephone, and own a car. As we all know, a good portion of President Roosevelt’s voters were not. He won the election with a convincing 63% of the vote.

Survey Lesson: When conducting survey market research, it’s likely impossible to reach your entire desired population. Don’t panic: you can still get a representative sample by using one of four different random sampling methods. Read our blog posts about Simple Random Sampling, Systematic Sampling, Cluster Sampling, and Stratified Sampling for more information.
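For readers who want to see the difference between the four methods in practice, here is a minimal sketch in Python. The population, its size, and the income brackets are all hypothetical placeholders; a real survey would draw from an actual sampling frame.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sampling frame: 1,000 people tagged with an income bracket
population = [{"id": i, "income": random.choice(["low", "middle", "high"])}
              for i in range(1000)]

# 1. Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, k=100)

# 2. Systematic sampling: pick every k-th member after a random start.
step = len(population) // 100
start = random.randrange(step)
systematic = population[start::step][:100]

# 3. Cluster sampling: divide the frame into groups, then sample whole groups.
clusters = [population[i:i + 50] for i in range(0, len(population), 50)]
sampled_clusters = random.sample(clusters, k=2)
cluster_sample = [person for group in sampled_clusters for person in group]

# 4. Stratified sampling: sample within each income bracket in proportion
#    to that bracket's share of the whole population.
stratified = []
for bracket in ("low", "middle", "high"):
    stratum = [p for p in population if p["income"] == bracket]
    share = round(100 * len(stratum) / len(population))
    stratified.extend(random.sample(stratum, k=min(share, len(stratum))))
```

Each approach yields a sample of roughly 100 people, but stratified sampling is the one that guarantees every income bracket is represented in proportion—exactly the property Literary Digest’s subscriber list lacked.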

2. The 1948 Gallup Poll – In 1948, Gallup reported that Thomas Dewey would beat Harry Truman in the presidential race, calling 45% of the vote for Dewey and 41% for Truman. The margin seemed decisive enough to prompt newspapers to get a jump on the next morning’s headlines—in fact, the Chicago Tribune overzealously went to press with a front-page story declaring the race for Dewey.

Where they went wrong: In the end, Truman won 50% of the vote compared to Dewey’s 45%. Truman famously posed with the incorrect headline, and Gallup was forced to explain the reasoning behind its mistake. As it turns out, the company stopped polling during the last few weeks of the campaign, figuring voters weren’t likely to change their minds.

Survey Lesson: Don’t make any assumptions until you collect all of your data. If you’ve given respondents two weeks to complete your survey, wait until the full amount of time has passed before you begin to act on the results.

3. The 1996 Arizona Primary – During the 1996 Republican presidential primary, three major TV networks used exit polls to determine that candidate Pat Buchanan would win Arizona, followed by Steve Forbes and the eventual nominee, Kansas Senator Bob Dole. However, once the votes were tallied, it was Forbes who was victorious, with Dole in second and Buchanan in third.

Where they went wrong: Little did pollsters know, Buchanan supporters were actively seeking them out in an attempt to influence late voters by making a Buchanan win seem inevitable. The particular polling service that all three networks relied on was not aware of a “Buchanan bias” at the time.

Survey Lesson: This poll fell victim to an all-too-common self-selection bias. Results were skewed because pollsters heard overwhelmingly from a small minority, thus ruining the poll’s “random” sample. While survey samples are always subject to the free will of respondents, emailing your survey to respondents rather than just leaving it up on your website for respondents to find is a good way to minimize self-selection bias. Emails are still vulnerable to self-selection bias, but at least you’re able to measure the number of non-responses and adjust accordingly.
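The “measure and adjust” step above can be sketched as a quick calculation. All of the figures below are hypothetical; the point is simply that knowing who was invited lets you compute a response rate and re-weight an under-represented group.

```python
# Hypothetical email campaign figures
invitations_sent = 2000
responses_received = 340

# Because every invitation was tracked, the response rate is measurable.
response_rate = responses_received / invitations_sent  # 0.17, i.e. 17%

# If a known subgroup is under-represented among respondents, a simple
# post-stratification weight can compensate. The weight is the subgroup's
# share of the target population divided by its share of respondents
# (both shares here are assumptions for illustration).
population_share = 0.40   # subgroup's share of the target population
respondent_share = 0.25   # subgroup's share of actual respondents
weight = population_share / respondent_share  # each such respondent counts 1.6x
```

A web-intercept survey offers no equivalent denominator: with no record of who saw the invitation, neither the response rate nor the weight can be computed.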

Like political polling, collecting feedback for business purposes is all about asking the right questions of the right people at the right time. To make sure your survey doesn’t meet the same fate as these storied polls, download our complimentary eBook, Crimes in Survey Design.

Written by Alli Whalen
