Center Stage: RLAs
Presenters at MIT's Audit Summit showed that many steps in the overall voting process could be tightened by statistical or other data-based audits, even as they weighed the pros and cons of the various statistical audit techniques presented. The summit left a clear impression that this field is rapidly evolving.
The conference featured ways to streamline random ballot selection before applying the statistical math: foot-high piles of ballot cards are loosely shuffled, much like sections of a deck of cards, and the top cards are pulled before the counting analysis begins. It examined how voter registration data can correctly, or incorrectly, sort voters into districts and precincts. But organizers were primarily focused on showing what a risk-limiting audit is (one version of it, anyway) and touting Colorado's experience pioneering the technique.
Risk-limiting audits were developed by Philip Stark, an acclaimed mathematician and chair of the Statistics Department at the University of California, Berkeley. Their genesis goes back a dozen years, to when many states were buying the voting systems now being replaced. Stark and others realized there was no very accurate way to re-check vote tabulations. Under that umbrella are two terms and tasks that often get confused and sometimes overlap: audits and recounts.
Audits, generally speaking, seek to evaluate specific parts of a larger process. The basic idea is to use independent means to assess an underlying operation or record. In elections, some states audit vote count machinery before winners are certified; others do it afterward. Recounts, in contrast, are a separate final tally before winners are certified, and often operate under different state rules (as seen in Florida this November). In some cases the two activities overlap: pre-certification audits can set the stage for recounts.
For many years, vote count audits meant taking a second look at a small percentage of the total ballots cast and seeing whether those votes matched the machines' count. In many states and counties, that meant looking at one race, often grabbing ballots from a few precincts that met the sample size. That analysis didn't reveal much about countywide or statewide counts. Stark and others sought a better way to check counts.
Their solution is a form of statistical analysis called a risk-limiting audit. The process requires that all of the voted ballots be assembled in a central location. Depending on the level of assurance sought (typically 95 percent), ballots are randomly drawn and manually examined to see whether the individual ballots match the initial overall tabulation. Participants tally the observed votes. The math is based on probability. In races where the winning margin is wide, the volume of ballots that must be pulled and recorded to reach that 95 percent assurance can be small. That is the RLA's selling point: it can be expeditious and inexpensive, and it offers more certainty than prior auditing protocols.
But when election outcomes are tighter, including races that trigger state-required recounts, the reverse is true. Much larger volumes of ballots must be pulled and examined to satisfy the 95 percent threshold. How many varies with the number of votes cast in the contest, the margin between the top contenders and the chance element of the random draws. At some point an RLA's efficiencies break down, because the manual examination turns into a large hand count of paper ballots (as seen in the final stage of Florida's recent simultaneous statewide recounts).
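That tradeoff can be sketched with a toy simulation of a ballot-polling audit in the style of Wald's sequential probability ratio test, the idea underlying Stark and colleagues' BRAVO method. The vote shares, risk limit and trial counts below are illustrative choices for this sketch, not figures from the summit:

```python
import math
import random

def ballot_polling_draws(true_share, reported_share, alpha=0.10,
                         max_draws=200_000, rng=random):
    """One simulated ballot-polling audit (Wald-style sequential test).
    Each sampled ballot multiplies a likelihood ratio by 2p for a
    reported-winner ballot or 2(1-p) for a loser ballot, where p is the
    reported winner share; the audit stops and confirms the outcome
    once the ratio reaches 1/alpha. Returns the number of draws used."""
    log_ratio = 0.0
    stop = math.log(1.0 / alpha)
    up = math.log(2.0 * reported_share)            # winner-ballot step
    down = math.log(2.0 * (1.0 - reported_share))  # loser-ballot step
    for n in range(1, max_draws + 1):
        log_ratio += up if rng.random() < true_share else down
        if log_ratio >= stop:
            return n
    return max_draws  # never confirmed: fall back to a full hand count

rng = random.Random(1)
means = {}
for share in (0.75, 0.60, 0.55):  # reported winner share; true share = reported
    means[share] = sum(ballot_polling_draws(share, share, rng=rng)
                       for _ in range(1000)) / 1000
    print(f"winner share {share:.2f}: ~{means[share]:.0f} ballots on average")
```

As the winner's share shrinks toward 50 percent, the average sample balloons from a few dozen ballots to many hundreds, which is exactly the breakdown described above.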
This spectrum of pros and cons was seen in the RLA demo led by Stark at MIT.
Rolling the Dice to Verify Votes
After several panel discussions and testimonials, the audience simulated one version of a risk-limiting audit. What emerged was a scene akin to a science fair. People stood in groups around tables. They rolled dice to generate random numbers. They substituted decks of cards for ballots, with red and black cards standing in for candidates. They followed Stark's printed instructions to evaluate two hypothetical contests.
Using a combined deck of 65 cards (52 black, 13 red) for the first audit, the task was to reach 90 percent assurance that the results were correct. The cards were separated into six piles and counted to ensure they added up to 65. Each group made a "ballot look-up table" to randomly select cards from the batches. Dice were rolled to generate random numbers. Then, as cards were pulled from the piles, a running score was kept -- subtracting 9.75 points for each red card and adding 5 for each black one. Once the score reached 24.5 -- according to the underlying math -- the audit was done.
"Audit!" Stark's instructions ended. "The number of ballots you will have to inspect to conform the outcome is random. It could be as few as five. On average you will need to inspect 14 ballots to confirm the contest results."
Most of the participants could follow these instructions -- although some groups finished after pulling fewer cards than others. In the second exercise, using 50 red and 50 black cards to simulate a tie, the same 90 percent confidence threshold was applied. Cards were sorted, randomly pulled and notes taken. But because this intentionally was a tie, the process was more laborious. Stark's instructions anticipated that consternation, noting that this audit, were these real votes, would become a different process: a full manual count:
"Continue to select cards and update the running total until you convince yourself that the sum is unlikely ever to get to 24.5 (generally, the running total will tend to get smaller, not bigger). When you get bored, 'cut to the chase' and do a full hand count to figure out the correct election outcome. Because this risk limit is 10 percent, there is a 10 percent chance that the audit will stop before there is a full hand count, confirming an incorrect outcome."
If this process seems complex, it was. Not everyone in a room filled with election insiders (including some of the nation's top election law and political science scholars) could follow the instructions. A few big names were publicly teased. One participant later suggested that the process might have been easier had a walk-through first been shown on the auditorium's screen.