OpEdNews Op Eds, 5/19/09

Connecticut Election Audit: 'Ridiculous, Unacceptable, Unconscionable'

By Bev Harris
From the UConn Post-Election Audit Report, May 12, 2009:

The VoTeR Center's initial review of audit reports prepared by the towns revealed a number of returns with unacceptably high unexplained differences between hand and machine counts ... As a result the [Secretary of the State's] Office performed additional information-gathering and investigation and, in some cases, conducted independent hand counting of ballots...

The main conclusion in this report is that for all cases where non-trivial discrepancies were originally reported, it was determined that hand counting errors or vote misallocation were the causes. No discrepancies in these cases were reported to be attributable to machine tabulation. For the original data where no follow up investigation was performed, the discrepancies were small, in particular the average reported discrepancy is lower than the number of votes that were determined to be questionable...

The main conclusion of this analysis is that the hand counting remains an error prone activity. In order to enable a more precise analysis it is recommended that the hand counting precision is substantially improved in future audits. The completeness of the audit reports also need to be addressed...

This analysis does not include 42 records (3.2% of 1311 [candidate race counts]) that were found to be incomplete, unusable or obviously incorrect. This is an improvement relative to the November 2007 elections.


The Secretary of the State and her Office are rightfully proud of proposing the audit to the Legislature in 2007. Based on the municipal reports from November, we asked for public follow-up investigations of the initial audit results, at least starting with the largest and most blatant discrepancies and incomplete forms. We appreciate that follow-up of the largest discrepancies was initiated. Yet we are disappointed that the investigations were not open to public observation and that not all incomplete forms were investigated.

Our comments and concerns:

- The investigations prove that election officials in many Connecticut municipalities are not yet able to count votes accurately. As we have noted, we appreciate that the largest discrepancies were investigated; we asked for that as a minimum. Yet without reliable counting, either initially or via follow-up, we find no reason to agree that the audits prove the machines in Connecticut were "extremely accurate". Reports from several other states show that officials and machines can count quite accurately. Some hand audits and recounts occasionally show that the initial results reported on election night by people and machines are inaccurate because of human and machine errors; that is exactly what is supposed to happen, and exactly what audits are designed to detect. (We plan to develop a detailed post discussing counting accuracy and reasonable expectations in the near term.)

- The audit and the audit report are incomplete. The report "does not include 42 records (3.2% of 1311) that were found to be incomplete, unusable, or obviously incorrect". When we overlook obvious errors, then in future elections creating obvious errors becomes another route to avoiding detection of errors or fraud -- if the counts don't match, just don't report the result.

- Even with all the investigations and adjustments, many unexplained discrepancies remain. The largest discrepancies were investigated, leaving 98 cases with discrepancies greater than 4 and 34 cases greater than 9; put another way, 68 cases with discrepancies over 2% of the vote, including 31 over 5% (Table 1 and Table 2 of the UConn report). We are reminded of Ohio in the same November 2008 election, where a discrepancy of 5 ballots was a matter of serious national concern (http://www.ctvoterscount.org/?p=1127). With effective initial counting, there should be only a small number of counts requiring investigation.
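To make the thresholds above concrete, here is a minimal sketch, in Python, of how hand-versus-machine discrepancies could be tallied against them. This is our own illustration, not the UConn VoTeR Center's method; the record fields and sample figures are invented.

```python
# Illustrative sketch only (not the UConn methodology): tally how many audit
# records exceed the discrepancy thresholds discussed above. Field names
# ("hand_count", "machine_count", "total_votes") and the sample data are
# hypothetical.

def tally_discrepancies(records):
    """Count records whose hand/machine difference exceeds 4 votes, 9 votes,
    2% of the race's votes, or 5% of the race's votes."""
    over_4 = over_9 = over_2pct = over_5pct = 0
    for rec in records:
        diff = abs(rec["hand_count"] - rec["machine_count"])
        share = diff / rec["total_votes"] if rec["total_votes"] else 0.0
        over_4 += diff > 4
        over_9 += diff > 9
        over_2pct += share > 0.02
        over_5pct += share > 0.05
    return over_4, over_9, over_2pct, over_5pct

# Invented example records, for illustration only.
sample = [
    {"hand_count": 412, "machine_count": 407, "total_votes": 900},  # diff 5
    {"hand_count": 150, "machine_count": 150, "total_votes": 300},  # diff 0
    {"hand_count": 88,  "machine_count": 101, "total_votes": 210},  # diff 13
]
print(tally_discrepancies(sample))  # -> (2, 1, 1, 1)
```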

- The chain-of-custody is critical to credibility. Even some of the originally reported data that closely matched machine totals lacks credibility, based on lapses in the chain-of-custody of ballots prior to the initial municipal counting. In several cases the ballots were not resealed after the initial audit counts and are thus less than fully credible. Once again, we do not have any reason to suspect errors or fraud; we simply point out the failure to follow procedures, the holes in credibility, and the openings for covering up errors and fraud in elections.

- The entire audit process should be open to observation. We do not doubt the hard work and integrity of the Secretary of the State’s election staff of seven, several of whom recounted ballots and performed research in a number of towns. If the recounting and field research had been open to the public, we might be in a position to vouch for the integrity of the process. We, not the public, were informed of the initial “site visits” but our request that they be open to the public, or at least open to us, was to no avail. If any critical part of the audit is performed out of public view, it leaves questions for the public and opens up another avenue for fraud or for errors to be covered up.

- Either the "questionable ballot" classification is inaccurate in many towns, or we have a "system problem". Based on our analysis of the audit results and our observations during 18 post-election audits, we find that election officials classified far more votes as questionable than necessary. In most cases only a very small number, 1 or 2 votes per candidate, are filled out poorly enough not to be counted by the machine, yet several towns classified large numbers as questionable. This is a problem because it opens a hole for real problems to go undetected: when 10% of ballots are incorrectly classified as questionable, a 10% undercount for a candidate could go unrecognized.
Conversely, if the officials are classifying questionable ballots reasonably, then we have a system that 5%, 10%, or 25% of voters in several towns are actually unable to use properly. That would be a truly serious "system problem" that needs to be addressed -- by better systems, not by smarter voters. (If we actually have such a "system problem", then we have a very poor ballot layout, much worse than other states', much worse than the legendary "butterfly ballot" in Florida in 2000.)
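As a rough illustration of the point about questionable-ballot rates, the sketch below flags towns whose rate is high enough to mask a comparable undercount. Again, this is our own illustration, not part of the official audit procedure; the field names, the 5% threshold, and the data are hypothetical.

```python
# Illustrative sketch only (our framing, not the official audit procedure):
# flag towns whose "questionable ballot" rate is far above the 1-2 votes per
# candidate that observation suggests is typical. A discrepancy smaller than
# the questionable rate can always be "explained", so a high rate can mask an
# undercount of the same size. Field names and data are hypothetical.

def flag_high_questionable(towns, threshold=0.05):
    """Return (town, rate) pairs where questionable ballots exceed `threshold`
    (e.g. 5%) of the ballots counted in the audit."""
    flagged = []
    for town in towns:
        rate = town["questionable"] / town["ballots_counted"]
        if rate > threshold:
            flagged.append((town["name"], round(rate, 3)))
    return flagged

# Invented example data, for illustration only.
towns = [
    {"name": "Town A", "questionable": 3,   "ballots_counted": 1200},  # 0.25%
    {"name": "Town B", "questionable": 130, "ballots_counted": 1300},  # 10%
]
print(flag_high_questionable(towns))  # -> [('Town B', 0.1)]
```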

- Accuracy and the appearance of objectivity are important. We disagree with the Secretary of the State's initial assessment on December 12, when all the data from the municipalities were available but no UConn report was. Secretary Bysiewicz said the audits "have shown extremely accurate machine counts". While that may actually be the case, the accuracy of all scanners in the audit cannot be proven from the data available now, and even less so from the data available in December.

- Timeliness is important. We have reports and follow-up long after the election: the election was November 4th, the audit was complete in early December, the Presidential electors were certified on December 14th, the initial "site visits" began on January 19th and were complete by January 23rd, the data from the initial "site visits" were sent to UConn on February 18th, and further follow-up data on April 3, 2009. The report is dated May 12, 2009 -- over six months after the election. The longer the delay, the colder the trail of evidence, the more opportunity for cover-up, and the less valuable the data.
When the Presidential electors were certified on December 14th, there were huge, obvious discrepancies. What if a race for President, U.S. Representative, or State Legislator had been close? Would there have been a swifter response, even though the machines had been declared "extremely accurate" at that point?

Our bottom line:

The problem is not that there were machine problems; we have no evidence there were any. The problem is that if there are, or ever were, machine problems, dismissing all errors as human counting errors makes us unlikely to find them. In this audit the worst discrepancies were investigated, and the investigations showed that many of the discrepancies were human errors -- not surprising, based on our observation of inadequate counting. However, a sample does not prove that all discrepancies are human errors.

Bev Harris is executive director of Black Box Voting, Inc., an advocacy group committed to restoring citizen oversight to elections.