Exit polls are surveys given to voters who have just cast their ballots, consisting of a long list of questions about who they are, which candidates they supported, and why. In 2004, the exit polls indicated that John Kerry beat George Bush, sparking a controversy over exit poll accuracy and election fraud that continues to this day.
With the 2008 election now upon us, here are a few things to keep in mind about exit polls:
Exit polls are not half-baked random samples of voters.
This year’s national exit polling will be funded by the National Election Pool (NEP), a consortium of CBS, ABC, NBC, CNN, FOX, and the AP. It will involve some 3,000 employees and over 100,000 respondents, and employ a methodology based on four decades of experience. The polling will cover the presidential and senatorial races, as well as House of Representatives elections in those states with a single statewide congressional district.
Since the 2004 fiasco, it has become somewhat trendy to denigrate the significance of exit polls, but historically, they are far more reliable than pre-election polls. In some fledgling democracies, exit polls have been a significant part of the international monitoring process designed to prevent election fraud. Here in the U.S., the TV networks use them to call the elections as early as possible, while giving their commentators something to talk about in between commercials.
No one has to admit anything to an exit pollster.
This year, much has been made of the Bradley effect, where people lie to pollsters rather than admit that they’re going with the white guy. The Bradley case involved pre-election phone polls in 1982, but exit pollsters saw the same effect in the 1990 Virginia gubernatorial race. In that contest, face-to-face interviews with voters indicated that an African-American Democrat named Doug Wilder had run away with the election. In fact, he had barely won. Nowadays, the exit poll is a written questionnaire filled out anonymously and placed in a box. Oddly enough, some people still lie. But the Bradley/Wilder effect is at least minimized.
The 2004 and 2006 exit poll numbers currently posted on network news sites are not the same numbers that caused all the commotion.
The posted exit poll numbers were changed the day after the election to conform with the official election results, just as they will be this year. This has caused a lot of confusion, and led many people to conclude that the media is trying to hide something. In fact, there is nothing sinister about it. Adjusting the numbers to match the official tally is standard procedure for the polling company, which is simply trying to make its demographic data more accurate. Suppose, for instance, that exit polls revealed that most of the people who voted for Candidate A thought that the Iraq war was the most important issue, while Candidate B’s supporters were chiefly concerned with the economy. If exit poll respondents overwhelmingly indicated that they voted for Candidate B, then we would conclude that many more people care about the economy than the war. But if the official count indicates that Candidate B just squeaked by, or actually lost, that means that voters don’t care about the economy as much as we thought.
The polling company, which has no interest in challenging the validity of the election, assumes that the official count is right, and “corrects” its final numbers to conform with that count so that the demographic data shifts to reflect the true mindset of the electorate.
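As a rough illustration of how that correction shifts the demographic picture, here is a minimal sketch. All of the numbers--the candidates’ poll and official shares, and the issue breakdown--are invented for the example, not actual NEP figures:

```python
# Hypothetical illustration of "correcting" exit poll crosstabs to match
# the official count. Every number here is invented for the example.

# Raw exit poll: fraction of respondents per candidate, and the
# dominant issue each candidate's voters reported.
poll_share = {"A": 0.45, "B": 0.55}          # raw exit poll vote shares
top_issue  = {"A": "war", "B": "economy"}    # top issue per voter group

official_share = {"A": 0.51, "B": 0.49}      # shares from the official count

# Each candidate's respondents are rescaled so the adjusted poll
# reproduces the official result.
adjustment = {c: official_share[c] / poll_share[c] for c in poll_share}

# Implied share of the electorate naming the economy as the top issue,
# before and after the adjustment.
raw_economy      = poll_share["B"]                    # 55% in the raw poll
adjusted_economy = poll_share["B"] * adjustment["B"]  # 49% after adjusting

print(f"raw poll:  {raw_economy:.0%} name the economy as the top issue")
print(f"adjusted:  {adjusted_economy:.0%} do")
```

In this toy case, the raw poll would suggest that 55% of voters cared most about the economy; once the numbers are forced to match an official count where Candidate B got only 49%, the implied concern for the economy drops with it.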
Exit poll results that leak out early on election day are essentially meaningless.
Those who seek to use exit polls to detect election fraud are only interested in the last set of figures that the NEP provides its subscribers after the polls have closed, but before the official results are in. These numbers have not yet been adjusted to conform to the official count, but they have been weighted--an all-important step in the exit poll process.
Weighting is complicated, but it works something like this: Suppose that historical voting patterns and recent trends indicate that about 10% of the electorate will be women under 30, while another 10% will be men over 65. Ideally, each group would account for about 10% of the exit poll surveys. Now imagine that most of the pollsters are male college students earning a little extra money, and for some strange reason, they end up surveying a lot of young women and not very many old men. If the young women tend to prefer a different candidate than the old men, which is usually the case, then simply adding up the surveys would indicate that the young women’s candidate was doing quite a bit better than he actually was.
So the polling company weights the responses; that is, they give less weight to the young women’s responses, since there should have been fewer of them, and more to the old men’s, since there should have been more of them.
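The arithmetic behind that kind of weighting can be sketched in a few lines. The turnout targets, sample counts, and candidate preferences below are all invented for the example--real NEP weighting is far more elaborate:

```python
# Hypothetical sketch of demographic weighting. All figures are invented.

# Expected share of the electorate per group (from history and trends).
target = {"women_under_30": 0.10, "men_over_65": 0.10, "everyone_else": 0.80}

# Raw surveys collected: young women oversampled, older men undersampled.
sample = {"women_under_30": 200, "men_over_65": 50, "everyone_else": 750}
n = sum(sample.values())  # 1,000 surveys in total

# Share of each group voting for Candidate X (invented).
pct_for_x = {"women_under_30": 0.70, "men_over_65": 0.30, "everyone_else": 0.50}

# Unweighted estimate: just add up the surveys.
raw = sum(sample[g] * pct_for_x[g] for g in sample) / n

# Weighted estimate: each survey counts in proportion to its group's
# expected share of the electorate, not its share of the sample.
weight = {g: target[g] / (sample[g] / n) for g in sample}
weighted = sum(sample[g] * weight[g] * pct_for_x[g] for g in sample) / n

print(f"unweighted: {raw:.1%} for Candidate X")      # skewed by oversampling
print(f"weighted:   {weighted:.1%} for Candidate X")
```

Because the oversampled young women favor Candidate X, simply adding up the surveys inflates his support (53% in this toy case); downweighting their responses and upweighting the old men’s brings the estimate back to 50%.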
In reality, the exit pollsters are not all randy young men--in 2004 most of them were women--and they are trained to approach voters at specific intervals: every third voter, or every tenth voter, for example, depending on the size of their assigned precinct. But sometimes unsupervised temporary workers don’t quite do what they’re told.
In any case, it’s impossible for a voluntary survey to perfectly reflect the demographics of the electorate, so exit poll responses are weighted for a number of categories, including age, gender, and party affiliation. The math gets complex, but weighting is considered essential to the accuracy of the poll.