This year, large counts of undervotes in Michigan suggest this as a possible explanation.
2. Statistics

Exit polls are the best independent check that we have. Broadcast and internet news media contract with polling companies to ask voters whom they voted for, in a specially selected set of sample precincts chosen to mirror the demographics of the entire state. Exit polling is a mature, sophisticated science, routinely used for international verification of elections in Europe and Latin America. But in America, polling companies use exit polls not to check the validity of the reported results but to predict them on election night. So exit poll methodology is calibrated by learning from "mistakes" in previous elections. This means that any systematic corruption of the election machinery eventually corrupts the exit poll as well. A more immediate problem: exit poll raw numbers are never* reported to the public. They are blended with officially reported results during the course of the night after an election, so that news sources can offer their best prediction of the outcome.
For example, the results that are displayed right after the polls close may be based completely on exit polls; but two hours later when 2/3 of the precincts have reported their numbers, the same display on the same web site will contain different numbers, consisting of a weighted average of exit polls and reported results that reflects the best projection that polling scientists are able to make with that available information. They are still reported as "exit polls," but the numbers hardly represent an independent check. And by morning of the following day, the reported "exit poll" numbers are actually dominated by the official results, supporting the illusion that there is good agreement between the polls and the official results.
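To make the blending concrete, here is a minimal sketch of how a published "exit poll" figure drifts toward the official count as precincts report. The weighting rule and the vote shares are invented for illustration; the networks' actual blending algorithms are not public.

```python
# Toy model of the blending described above. The weighting rule and
# all numbers are invented; the real projection models are proprietary.

exit_poll_share = 0.53   # candidate's share in the raw exit poll
official_share = 0.49    # candidate's share in the official returns

for pct_reporting in (0.00, 0.33, 0.67, 1.00):
    # Assumed rule: the weight on official returns grows with the
    # fraction of precincts that have reported.
    w = pct_reporting
    blended = (1 - w) * exit_poll_share + w * official_share
    print(f"{pct_reporting:4.0%} reporting -> published 'exit poll' = {blended:.3f}")
```

By the end of the night the "exit poll" on the screen is mostly official count, which is why the morning-after numbers appear to agree so well with the reported results.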
Statisticians and citizen activists in the election integrity movement have learned to capture screenshots of exit polls the moment the polls close, minimizing the opportunity for contamination by official returns. But the truth is that we never know what we are getting, and polling companies consistently decline our requests for raw numbers or accounts of their methodology.
There is also an inherent problem with exit polls: participation is, of course, voluntary, and "response bias" is the name for the possibility that Republicans are more likely than Democrats to say "yes" to the pollster, or vice versa. I have personally planned and analyzed two independent exit polling projects (in 2006 and 2016), and I can testify that this is a very real problem, that it can run in either direction, and that it is difficult to predict or control for.
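A toy calculation shows how little differential response it takes to produce an apparently alarming discrepancy. All of the rates below are invented for illustration:

```python
# Toy calculation: differential response rates skew an exit poll even
# when the count is honest. All numbers are invented for illustration.

true_dem_share = 0.50    # actual share of voters who voted Democratic
resp_rate_dem = 0.55     # fraction of Democratic voters who agree to be polled
resp_rate_rep = 0.45     # fraction of Republican voters who agree

dem_responses = true_dem_share * resp_rate_dem
rep_responses = (1 - true_dem_share) * resp_rate_rep
polled_dem_share = dem_responses / (dem_responses + rep_responses)

print(f"true Dem share: {true_dem_share:.1%}, polled Dem share: {polled_dem_share:.1%}")
# -> 50.0% vs 55.0%: a five-point "discrepancy" from response bias alone
```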
So the question for the statisticians: When exit poll results differ significantly from the reported election results, how can we know if we are looking at a corrupted election or at response bias?
Answer: Wherever we can, we try to do controlled experiments, comparing performance of the same poll in two places or two races on the same ballot, in circumstances where we are confident that one of the two is unlikely to be stolen, and so can be referenced as a control.
Ingenuity is the soul of this art. My favorite example is a self-funded, independent poll of the 2006 general election, commissioned and funded by Election Defense Alliance, for which I was an analyst. [Fingerprints of Election Theft] This was not an exit poll but a telephone poll. The total response rate for such polls is typically between 1 and 4% (even in those days, before robo-calling was quite so ubiquitous), so response bias might be expected to be an insurmountable problem. To control for it, we compared hotly contested races with races where a wide margin was expected. Our premise was that if a race is expected to be a lopsided victory for one side or the other, the perpetrators would not be motivated to try to steal it; it is races projected to be tight (within 10%) that provide plausible targets. EDA's Jonathan Simon searched the country for Congressional Districts where there was at least one competitive and one non-competitive race on the same ballot. For example, there might be a Congressman with long tenure who was widely expected to prevail, but a Governor's race with no incumbent and a tight contest between a Democrat and a Republican. Because we had the raw data, we were able to perform a sensitive pair analysis, comparing the responses for a contested and an uncontested race on the same ballot, using answers from the same respondents. It worked, and we were able to extract excellent statistics from lousy data: in the uncontested races the official results agreed well with our telephone poll, but the contested races showed a consistent bias favoring Republicans.
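The logic of the pair analysis can be sketched schematically. The district numbers below are invented, and the real analysis worked from respondent-level data rather than the district-level summaries shown here:

```python
from statistics import mean

# Schematic of the paired comparison: for each district we have a poll
# share and an official share for both an uncontested and a contested
# race on the same ballot. All numbers below are invented.

# (uncontested_poll, uncontested_official, contested_poll, contested_official)
districts = [
    (0.71, 0.70, 0.52, 0.48),
    (0.33, 0.34, 0.51, 0.47),
    (0.64, 0.65, 0.53, 0.49),
]

uncontested_diffs = [p - o for p, o, _, _ in districts]
contested_diffs = [p - o for _, _, p, o in districts]

print(f"mean poll-minus-official, uncontested: {mean(uncontested_diffs):+.3f}")
print(f"mean poll-minus-official, contested:   {mean(contested_diffs):+.3f}")
# If response bias explained everything, the two discrepancies would
# match; a gap that appears only in contested races points elsewhere.
```

Because both answers come from the same respondents, any response bias cancels out of the comparison; that is what makes the pairing so sensitive.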
When we work from exit polls on news pages on election night, we do not have access to the raw data, and finding appropriate controls is yet more difficult; the comparison is necessarily less direct. We have compared contested and uncontested elections in Congressional races across the country; we have compared Republican primaries to Democratic primaries held at the same site on the same day; we have compared exit poll results from regions that use hand-counted and computer-counted votes. Each election presents its unique challenges. We have often found statistical cause for suspicion. Typically, the aggregate exit poll discrepancies are significant at a level of many standard deviations, with astronomically small probabilities of arising by chance. But when we limit the analysis to cases where we have a clear test-control pair, the argument becomes technical, and it is rarely an eye-popping result that we can explain to the public, or to an election official.
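For readers who want the arithmetic behind "many standard deviations": the discrepancy between a poll share and an official share is divided by the poll's standard error. The sample size and vote shares below are invented:

```python
import math

# Expressing a poll-vs-official discrepancy in standard deviations.
# All numbers are invented for illustration.

n = 1500                 # exit poll sample size in one state
poll_share = 0.53        # candidate's share in the exit poll
official_share = 0.49    # candidate's share in the official count

# Textbook standard error of a sample proportion. (Real exit polls are
# cluster samples, so the true SE is somewhat larger than this.)
se = math.sqrt(poll_share * (1 - poll_share) / n)
z = (poll_share - official_share) / se

print(f"SE = {se:.4f}; discrepancy = {z:.1f} standard deviations")
# -> about 3 standard deviations for one state; same-direction
#    discrepancies across many states compound to astronomical odds.
```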
Next: Part 3 - Four Stories of Election Theft
_______________
* Never? Well, hardly ever. On the night of the 2004 general election, there was a computer glitch that prevented the National Exit Poll from diluting their poll numbers with the official returns as they came in. As a result, the disparity between exit polls and official counts appeared starkly for the public to see. Two months later, NEP responded to public questions with a white paper on their methodology. Working with election integrity advocates across the country, I drafted a response to their response, interpreting evidence within their white paper as suggestive of election theft.