On "nakedness" of evaluation...
Thad Hall has a post up at Election Updates that talks about the need to factor in the implementation environment during voting system evaluation ("E-Vote 2008 Conference - Certification and Security").
Thad says,
The point here is that, when we think about paper ballots and absentee voting, we do not typically think about or evaluate them "naked" but within an implementation context, yet we think nothing of evaluating e-voting "naked" and some almost think it "cheating" to think about e-voting security within the context of implementation. However, if we held both systems to the same standard, the people in California probably would not be voting using any voting system; given its long history, it is inconceivable that paper ballots would meet the standards to which e-voting is held, absent evaluating their implementation context.
I could make a number of points here, but I'll stick to the following:
(I speak for myself.)
The systems included in the CA TTBR were not necessarily evaluated "naked"; each was analyzed within a threat model that included possible threats from different levels of access. Unfortunately, the access required to exploit the vulnerabilities we found was often trivial. As part of the OH EVEREST review, we not only confirmed the TTBR findings but extended them, in some cases with attacks that a voter could carry out without raising any suspicion from even the most vigilant pollworker. (Note: in the EVEREST review, we had access to a panel of election officials we could query.)
There is a mountain of non-public evidence, reportage and findings from the TTBR that will never see the light of day. I've heard a number of people express dismay at the red team reports from the TTBR, which were supposed to evaluate the practicality of attacks on the vulnerabilities that were found. Because the red team reports were the first to be released, people criticized them as thin and short on detail, even though the source code review, document review and accessibility review reports had not yet been released. Well, suffice it to say that the private red team reports from the TTBR are voluminous and include extensive consideration of environmental and implementation factors. Those reports won't see the light of day for a long time, if ever, so it's difficult for outsiders to even speculate on what they contain. Of course, to get some idea of the gravity of those findings, one only has to look at the requirements the CA SoS levied on each system before it could be recertified; the new requirements were no joke, and they were designed to minimize the exposure of vulnerable systems to possible threats.
Working with the implementation environment is a difficult task, no matter how you approach it. The environment changes with each election, and it varies from precinct to precinct and across jurisdictions. Even describing the environment (in terms of resources, procedures, etc.) would require an extensive combination of surveying and observation; that's no small feat. Even then, mis-specifying the slightest detail can have profound consequences for a security evaluation. Finally, as we've seen from voting system usability work, it's difficult, if not impossible, to replicate the environment of an election day. This is all to say that we need more effort geared toward understanding these environments.
With paper ballot and absentee voting systems, at least, we have extensive experience, as Thad notes, with the methods used to subvert known processes. Current procedures and technology for paper ballots and absentee voting are a direct result of centuries of responding to fraud, among other things. With computerized voting systems, it's clear we've essentially "jumped the gun"; that is, we were too quick to accept the efficiency and enfranchisement gains of this technology without also recognizing its weaknesses and special requirements. The vendors have been learning a hard lesson this decade: certain ingredients of robust, critical systems -- usability, accessibility, security -- must be designed into the systems instead of spackled on after R&D is mostly complete.
I'm not arguing that the TTBR or EVEREST results are perfect; in fact, they're far from it. They represent technical findings from panels of some of the best experts working on these issues, but findings arrived at under ridiculous time, resource and scope constraints. The flaws found in all of those reviews are likely the tip of an unknown iceberg. I can only hope that all of us working on this issue -- researchers, vendors, election officials, advocates -- can concentrate on catalyzing improvements in these systems as well as in the environments in which they're designed and operated.