A Window into the Hart Document Review of the California TTBR
Well, it has been more than 45 days since we submitted our document review report on the Hart System 6.2.1 to the California Secretary of State under the Top-To-Bottom Review (TTBR). According to our contract (see page 10):
No Principal Investigator, UC Senior Reviewer, Associate Reviewer or accessibility expert shall make or release any comments or other information about the processes, procedures, progress or findings of the voting system review or any draft or final report to any third party via any medium for 45 days from the submission of the final report to the SOS, or until the final report is made public by the SOS, whichever is sooner.
This clause means that I can now talk about our Document Review efforts... I do hope the full reports are published soon; I have no clue what is holding them up, other than that they probably aren't the top priority in the very busy SoS's office right now (as you'll see, they're nowhere near as sexy as the other three types of reports that were issued).
All of what I say below I say on behalf of myself and no one else. Errors, omissions, etc. are my responsibility alone.
Working Conditions
We were housed in a secure room in the UC Berkeley Law School (Boalt Hall). Time was very tight; we had only a handful of days off over about 5 weeks (an experience much like the hours election staff put in right before an election). The secrecy required for our work was extreme: we followed a red/black military information-handling protocol in which any confidential or proprietary information was labeled red and was not to leave the room. Because our work involved reviewing massive amounts of documentation (thousands of pages per vendor), the secrecy requirements meant that we printed and read on the order of 10,000 pages of documents in the Berkeley facility between the two vendors we examined (Sequoia and Hart). We killed one entire shredder making sure all the paper and CDs were thoroughly destroyed.
As my thesis is primarily concerned with transparency in electronic voting, it seemed particularly absurd that we had to maintain such a tight protocol to protect nebulous vendor trade secrets. However, I do understand that if we're to have a competitive free market in voting systems, we need to be careful about inadvertently leaking information that would give a vendor's competitors an edge in the market.
Methodology
Our task was twofold: 1) to produce a document that summarized the usability and thoroughness of the documentation for the voting system and the testing documentation from the ITAs, and 2) to assist the source code and red team testers with particular types of document-centric questions (e.g., "Is so-and-so communication protocol ever described explicitly in the documents?").
In terms of methods, the second task was easy enough once we were familiar with the contents of the documents and knew where certain types of information resided. The first was more difficult. As we describe in our report, a sound methodology would have been to develop a codebook of the requirements, regulations, etc. that voting system documentation must address (from the VSS and similar sources) and then to code each document against that codebook. This is a useful qualitative method that can be quite powerful (it typically involves writing thematic memos about particular issues and then stringing the memos together into a larger analysis). However, we didn't have time for that. Instead, we examined the documents from the point of view of the needs of each individual involved in particular phases of an election. We also spent time with the documents while running two mock elections with the equipment in the SoS's facilities in Sacramento.
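For concreteness, the coverage side of the codebook approach can be sketched as a simple check of which requirements are addressed in which documents. This is purely illustrative: the requirement IDs, keywords, and document snippets below are invented, and the real method relies on human coders and thematic memos rather than keyword matching.

```python
# Illustrative sketch of coding documents against a codebook of requirements.
# Requirement IDs, keywords, and document texts are all invented examples.

codebook = {
    "VSS-2.2.1": ["access control", "password"],
    "VSS-4.1.1": ["audit log", "event record"],
    "VSS-6.2.2": ["telecommunications", "transmission"],
    "VSS-7.4": ["parallel testing"],
}

documents = {
    "ops_manual": "Operators set a password; the audit log records each event record.",
    "network_guide": "Transmission of results uses a dedicated line.",
}

def code_documents(codebook, documents):
    """Map each requirement to the documents whose text mentions any of its keywords."""
    coverage = {}
    for req, keywords in codebook.items():
        coverage[req] = [
            name for name, text in documents.items()
            if any(kw in text.lower() for kw in keywords)
        ]
    return coverage

coverage = code_documents(codebook, documents)
# Requirements no document addresses at all -- the gaps a reviewer would flag.
uncovered = [req for req, docs in coverage.items() if not docs]
```

Even a crude pass like this makes the gaps explicit: any requirement with an empty document list is a candidate for the kind of "not tested at all" finding described below.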
Findings
I'll naturally let our reports speak for themselves; however, here are a few of our findings for the Hart system:
- The ITA test reports are completely inadequate and do not provide the level of detail that one would need to assess whether or not the voting systems were tested to the letter of the VSS. They also do not provide the level of detail needed to replicate the testing in a laboratory environment. In many places, it is difficult or impossible to tell what types of testing methodologies were employed and under what conditions systems were tested. In some cases, the division of labor between two ITAs resulted in serious deficiencies where certain pieces of the VSS were not tested at all or where it is unclear to what extent certain features were tested.
- From a usability standpoint, the documentation was pretty good, though it could naturally be improved: the documents do a good job of describing how to do something, but don't discuss at all why one would want to do it or what the implications and technical details might be. The documents were also highly referential, in that the user would have to leave a particular document for another in order to learn about or complete a specific step. This was a particular problem for the California Use Procedures document, since the public won't have access to the confidential and proprietary documents that election officials have.
- We found that, assuming technically-knowledgeable staff, the documentation for Hart would allow a jurisdiction to be reasonably self-sufficient. We do recommend that jurisdictions "break in" their procedures, machines and staff by running full-fledged mock elections so that any deficiencies can be caught before an actual election.
- We don't believe the documentation we were provided would be sufficient for state-level certification of these systems. The inadequacy of the ITA reports, combined with the absence of documents that reviewers would need beyond the voluminous technical data packages provided to the ITAs, puts state-level certifiers at an information disadvantage.
Implications
The implications of the document review reports aren't markedly different from those of the other reports. The common theme is that the national qualification process overseen by NASED was completely inadequate for judging whether these systems met the 2002 VSS or, more importantly, whether they could be used safely in typical election environments. Deficiencies in voting systems often fell victim to something we call the "leaky pipeline": a deficiency would be noted in one part of the process but not carried forward into the next, nor cured before the system was fielded in actual elections.
It's a brave new world out there for voting technologies, and we have a lot of work to do... together.