But will states and counties verify results with statistical estimates or a full accounting?
In early December, the county officials running Florida's elections unanimously endorsed a new way to recount ballots and more precisely verify votes before winners are certified.
The Florida State Association of Supervisors of Elections wanted to avoid complications seen in November's multiple statewide recounts. It is urging the state legislature to sanction a process pioneered in a half-dozen of Florida's 67 counties. That auditing technique uses a second, independent system to rescan all of the paper ballots, double-checking the initial count with software that analyzes every ink mark. The accounting-based process also creates a digital library with an image of every ballot card and vote, should manual examination of problematic ballots be needed.
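The core of that double-checking step can be sketched in a few lines: each system produces its own interpretation of every ballot, and any ballot where the two interpretations disagree is flagged for manual review. The function and data below are illustrative assumptions, not any vendor's actual software.

```python
# Hypothetical sketch of the comparison step in a ballot-image audit.
# Two independent systems each interpret every ballot; ballots whose
# interpretations disagree are flagged for human examination.

def flag_discrepancies(primary: dict, independent: dict) -> list:
    """Return ballot IDs whose two machine interpretations disagree."""
    flagged = []
    for ballot_id, primary_votes in primary.items():
        independent_votes = independent.get(ballot_id)
        if independent_votes != primary_votes:
            flagged.append(ballot_id)  # send this ballot to manual review
    return sorted(flagged)

# Illustrative data: the rescan reads ballot B002 as an overvote.
primary_count = {"B001": {"Mayor": "Smith"}, "B002": {"Mayor": "Jones"}}
rescan_count = {"B001": {"Mayor": "Smith"}, "B002": {"Mayor": "overvote"}}
print(flag_discrepancies(primary_count, rescan_count))  # ['B002']
```

The point of the design is that disagreement between two independent scans narrows human attention to the handful of genuinely problematic ballots, rather than requiring a full hand recount.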
Yet a few days later, at a Massachusetts Institute of Technology Election Audit Summit, this emerging approach in the specialized world of making vote counting more accurate and trustworthy was not part of any presentation. Instead, state election directors, county officials and technicians, top academics in election science and law, technologists and activists were shown a competing approach with different goals and procedures.
MIT's summit mostly showcased statistically based analyses. In particular, the approach called a risk-limiting audit (RLA) predominated. Unlike Florida's push for a process that seeks to verify every vote cast -- or get as close as possible -- an RLA examines a relatively small number of randomly drawn ballots to offer a 95-percent assurance that the reported outcome is correct. It is being piloted and deployed in a half-dozen states, compared to ballot image audits in a smaller number of states and counties.
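The statistical logic behind that small sample can be made concrete. One common formula from the RLA literature, the Kaplan-Markov bound for ballot-comparison audits, computes how many randomly drawn ballots must be checked -- assuming no discrepancies turn up -- to confirm the outcome at a chosen risk limit. The sketch below is an illustration of that general idea, not any state's actual procedure.

```python
import math

# Minimal sketch of RLA sample sizing via the Kaplan-Markov bound
# for a ballot-comparison audit that finds no discrepancies.
# gamma is an error-inflation constant from the RLA literature.

def kaplan_markov_sample_size(risk_limit: float, diluted_margin: float,
                              gamma: float = 1.03905) -> int:
    """Ballots to draw so a discrepancy-free audit confirms the
    outcome at the given risk limit (e.g. 0.05 = 95% assurance)."""
    return math.ceil(math.log(risk_limit) /
                     math.log(1.0 - diluted_margin / (2.0 * gamma)))

# The wider the victory margin, the fewer ballots must be drawn:
for margin in (0.10, 0.05, 0.01):
    print(f"margin {margin:.0%}: sample "
          f"{kaplan_markov_sample_size(0.05, margin)} ballots")
```

This is why RLAs appeal to budget-conscious officials: a comfortable 10-point margin can be confirmed with a few dozen ballots, while only very close races force the sample toward a full recount.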
While the MIT summit's planners could not have anticipated the Florida Supervisors of Elections' action when planning their agenda, their embrace of RLAs -- a view also being promoted by progressive advocacy groups -- reflects a factionalism inside the small but important world where election administration confronts new technologies. At its core, what it means to audit vote counts is being redefined in ways offering more or less precision, accompanied by tools and techniques that may -- or may not -- be practical in the most controversial contests, those triggering recounts.
It is not surprising that individuals who devote years to solving problems (and those who are drawn to one remedy before others) have biases. But as one of the summit's closing speakers noted after many presentations on statistical audits and on tightening up other parts of the process, it is unwise to take any tool off the table. That advice resonates as many states are poised to buy new vote-counting systems in 2019 and 2020.
"I truly believe that the area for the fastest growth for all of us is going to be auditing," said Matthew Masterson, Department of Homeland Security senior adviser on election security, ex-U.S. Election Assistance Commissioner and an ex-Ohio election official. A few moments later, he offered this caution to those present and watching online: "There is no one silver bullet that solves the risk management process in elections. The risks are complex and compounding. So we need to avoid, 'If you do this one thing, you're good to go,' because that's not reality in elections."
The MIT Audit Summit comes amid the biggest turning point in election administration since the early 2000s. Legacy vote-counting systems are being replaced by newer systems built around paper ballots, which provide a physical record beyond the reach of hackers. Meanwhile, advances in counting hardware and software, diagnostic analytics and audit protocols present new ways of verifying vote counts. That's the promise and the opportunity. The reality check is that election administrators must weigh the costs, efficiencies, complexities and compromises that accompany the acquisition and integration of any new system or process.
This juncture poses large questions for the world of election geeks, as those at the MIT summit proudly call themselves. Is technology changing the way that officials and the public should think about, participate in, and evaluate reported outcomes? Can a new tapestry of tools and procedures yield more transparent and trustworthy results? Are new developments better suited for verifying the votes in some races but not others?
These concerns are not abstract. The political sphere is beset by "post-truth" realities, where facts are often subordinated to partisan feelings. Yet federal courts routinely issue rulings citing the citizenry's expectation of one-person, one-vote, including constitutional guarantees of equal protection in elections. These clashing factors and expectations put additional pressures on election administrators to earn the public's confidence.
In the world of administering elections, the options and decisions surrounding the choice of voting systems and related procedures quickly become concrete, including strengths and weaknesses of any tool or practice. As Ohio State University election law professor Edward (Ned) Foley said in the summit's closing panel, the debate over how accurate vote counting should be is as old as the United States.
"This is a theme that goes all the way back through the beginnings of American history," Foley said. "This debate was between Alexander Hamilton and James Madison. When they had their disputed elections way back in the beginning, Madison's view was, 'We've got to get the correct answer in this election right. We cannot tolerate an error in the vote-counting process in this particular election.' Hamilton said, 'No, no, no. That's too difficult a standard. At some point, you have to stop the counting process and say we've got to have a winner. And you know what, there's going to be another election"'"
Looking at the audience, Foley asked, "What's our standard of optimality, because we are never going to have perfection?"