David Chaum: Random-sample elections

Joshua Davis writes in Wired:

Roughly 2,500 years ago, the citizens of Athens developed a concept of democracy that’s still hailed by the modern world. It was not, however, a democracy in which every citizen had a vote. Aristotle argued that such a practice would lead to an oligarchy, where powerful individuals would unduly influence the masses. Instead the Athenians relied on a simple machine to randomly select citizens for office. It’s an idea whose time has come again.

Two separate research initiatives—one from a pioneering cryptographer and a second from a team based at Stanford University—have proposed a return to this purer, Athenian-style democracy. Rather than expect everyone to vote, both proposals argue, we should randomly select an anonymous subset of electors from among registered voters. Their votes would then be extrapolated to the wider population. Think of it as voting via statistically valid sample. With a population of 313 million, the US would need about 100,000 voters to deliver a reliable margin of error.


“The Stanford team” refers, of course, to the activities of James Fishkin that have been discussed here before (for example, here).

The pioneering cryptographer is David Chaum. He proposes to use sampling in voting (with some cryptographic sauce keeping the identities of those in the sample secret) and his major selling point is that this would reduce the cost of elections.

He also makes arguments that are quite familiar in the “policy juries” line of proposals: that with more influence per voter, those sampled would put more effort into information gathering and study, and that many such votes could be carried out in parallel, each dealing with a different issue.

Chaum concludes with the following comments:

Random-sample elections can thus be interpreted more broadly as providing a way forward, from our current paradigm-induced disparity in access to the power of information technology, towards allowing effective voter input to governance.

There may be some who would like to be randomly selected to run government for a limited period, though few today seem to relish jury duty. And there may even be those who wish for a return to Athenian random selection of representatives, ignoring the complexity of today’s policy issues. Similarly, in future there may be some who long to vote in mass elections, perhaps romanticizing about the act of casting a secret ballot in person among one’s neighbors. But there will likely be few who oppose the deeper and wider and more continuous monitoring of the will of the electorate provided by random-sample elections, at least informing if not being binding on government. The Ancient Greeks’ conflation of random selection of officials with democracy may in future be considered no more naive than today’s conflation of mass elections with democracy.

14 Responses

  1. Has the Stanford Institute proposed reducing elections to a random subset of voters? I’m not aware of that. Also it’s a bit odd to claim that the inventors of democracy conflated the random selection of officials with democracy.

  2. Right. Both Davis and Prof. Chaum are being rather careless.

  3. Yes, it’s a bit careless, but I take the main point to be that it often makes sense to let a randomly-selected group of people perform a task. Sometimes, the task is expressing opinions about policy (Fishkin). Sometimes, the task is voting for an official (Chaum). It’s important to ask whether a randomly-selected group is right for the task at hand. But I suspect that for many people, the challenge is still getting them to accept that random selection ever makes sense.

  4. It’s alarming that they conclude that a sample of 100,000 would be necessary in order to deliver a reliable margin of error in a country like the US. This is much larger than the point at which rational ignorance would begin to appear, and I think it is also larger than the samples used in typical opinion polls. If 100,000 really were statistically necessary, it would rule out the possibility of sortition as a workable system of statistical representation, the only advantage over everyone voting being a reduction in costs. But it would make no significant difference to the quality of governance; that would require a sample two orders of magnitude smaller.

  5. Actually, I could not find the 100,000 figure in Chaum’s paper. He has much more familiar numbers in the somewhat vague statement here:

    Cost, compared with a mass election in which tens of millions of voters mark ballots, is extraordinarily low because the number of voters in a random-sample election can be about 1,000-5,000 for a confidence level of about 95%-99%.

  6. That’s reassuring. I imagine 1,000 would be the upper limit from a rational-ignorance perspective, providing (presumably) 95% confidence. Peter is right, though, that cost-saving would not be the primary concern; the question is whether sortition is fit for purpose in particular cases. This would be easier to demonstrate in the less radical case of a reduced electorate, but I doubt the benefits would be that significant; it would simply raise the presentational bar a notch or two for the actors on the audience-democracy stage.

  7. In fact, neither 1,000 nor 5,000 nor 200 corresponds to any given level of confidence unless the width of the confidence interval is specified as well.
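
    A rough numerical sketch of the point, using the textbook margin-of-error formula for a simple random sample on a yes/no question, margin = z × sqrt(p(1-p)/n) with the worst case p = 0.5. The sample sizes below are illustrative only, not figures taken from Chaum’s paper:

    ```python
    # Margin of error for a simple random sample on a yes/no question.
    # Illustrative assumptions only: simple random sampling, no stratification,
    # no design effect, and a population large enough to ignore the
    # finite-population correction.
    from math import sqrt
    from statistics import NormalDist

    def margin_of_error(n, confidence=0.95, p=0.5):
        """Half-width of the confidence interval for an estimated proportion."""
        z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # e.g. ~1.96 for 95%
        return z * sqrt(p * (1 - p) / n)

    for n in (200, 1_000, 5_000, 100_000):
        for conf in (0.95, 0.99):
            print(f"n={n:>7}, {conf:.0%} confidence: ±{margin_of_error(n, conf):.1%}")
    ```

    The same sample size thus corresponds to many (confidence level, interval width) pairs: roughly ±3% at 95% confidence for 1,000 voters, but roughly ±0.3% for 100,000, which is presumably what the “1,000-5,000 for about 95%-99%” statement glosses over.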

  8. My knowledge of statistics is poor, so I would appreciate it if you could explain the difference between the confidence level and the width of the confidence interval. How exactly is that operationalised in (say) a public opinion poll? And how big would an allotted assembly need to be, in practice, to be “reasonably representative” in terms of sex, age, ethnicity, income, occupational class, educational achievement, religion and other such generalised factors? We tend to throw up the 300-600 figure, but those figures are dictated more by rational-ignorance concerns (and the size of existing legislatures) than by representativity claims. I’m still alarmed by the 1,000-5,000 ballpark, as it strikes me as much too big.

  9. Opinion polling is a well-developed science, which means there are verifiable results. Generally the pollsters wish to measure views on simple yes/no questions like “Should the UK leave the EU?”. The technique that has been perfected (after some very public failures) is usually to select a stratified sample of 400-1,600 from the population and pose the question face-to-face. In these controlled circumstances an accuracy of plus or minus 1-5% can be expected (which fits the theory nicely).
    This is not at all like juries, or what I imagine ‘sortinistas’ (to use Keith’s term) expect to happen. Exposition and discussion of the issues are crucial for the democratic outcome. They do not happen in opinion polling.
    For example: in the UK there have only ever been two referendums. Last year it was the AV fiasco, where opinion hardly budged from start to finish; the pollsters got it right from the start: an overwhelming No (alas for reformers of democracy). The 1975 vote on membership of the EU started with opinion pollsters saying the public were 2 to 1 against staying in the EU (or Common Market, as it was then called). After what some might call a deceitful propaganda blitz, the UK voted 2 to 1 in favour of ‘staying in’.
    Opinion polling and Citizen Juries (as I understand them) differ fundamentally. OP requires each unit sampled to be independent; CJ requires inter-mingling of views. OP is a quick-fire reactive process; CJ is a reflective and collective one. Statisticians would not recognise CJ as a scientifically sound method of measuring public opinion.
    Only if you isolate your CJ sample from each other, and feed them only a mutually agreed set of information, will you get reliable results. Interestingly, the Doge of Venice, who was selected through a sortitional process, was immediately imprisoned in his palace with all communication strictly regulated. Note, too, how legal juries are subject to very strict rules about communicating, researching, etc. These rules are the fruit of experience, there for a reason, surely?
    So whether ‘democracy’ is better done sortitionally or electorally is a question that goes back to the political philosophers, I fear. And that’s where I part company with the sortitionistas: I simply can’t see the mechanism for identifying and implementing the General Will in a way that can’t be subverted.

  10. Conall raises some fascinating issues:

    >Only if you isolate your CJ sample from each other, and feed them only a mutually agreed set of information, will you get reliable results.

    Assuming that by “reliable” he means consistent, this is clearly true, in the sense that if you took any number of population samples you would get the same (or very similar) results. Although this would mean a very constrained form of deliberation, it would be necessary if the democratic criterion were statistical representativity. Independence of judgment is also vital from an epistemic point of view, given the preconditions of the Condorcet Jury Theorem and the wisdom-of-crowds literature (a toy numerical illustration follows at the end of this comment).

    >I simply can’t see the mechanism for identifying and implementing the General Will in a way that can’t be subverted.

    This was Rousseau’s principal concern and one which Bernard Manin addresses in his 1987 Political Theory paper on deliberation. Rousseau was strongly opposed to speech-act deliberation, as this would have enabled individual wills to predominate. But the danger is that without balanced information the general will is no more than prejudice. The DP methodology attempts a compromise, i.e. a form of deliberation that involves little more than being presented with balanced information and then deciding the outcome via aggregating the independent votes of the jurors. This doesn’t please the Habermasians and the radicals (who deny that it even merits the word deliberation), but it is really the only option if we are concerned not to subvert the general will. Conall is right that this needs to go back to the political philosophers, so I strongly recommend the Manin paper.
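
    A toy illustration of the jury-theorem point, under made-up numbers (juror competence p = 0.55, and a crude correlation model in which, with probability rho, every juror simply copies one shared signal instead of judging independently):

    ```python
    # Toy Monte Carlo sketch of the Condorcet Jury Theorem.
    # With independent jurors each correct with probability p > 0.5, the chance
    # that the majority is correct grows with the size of the group; correlated
    # judgments (modelled crudely via a shared signal) erode that gain.
    # All parameter values are illustrative assumptions, not empirical claims.
    import random

    def majority_correct(n, p=0.55, rho=0.0, trials=20_000, seed=1):
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            if rng.random() < rho:
                # Fully correlated round: one shared judgment for everyone.
                correct_votes = n if rng.random() < p else 0
            else:
                correct_votes = sum(rng.random() < p for _ in range(n))
            wins += correct_votes > n / 2
        return wins / trials

    for n in (11, 101, 1001):
        print(f"n={n:>5}  independent: {majority_correct(n):.3f}"
              f"  half-correlated: {majority_correct(n, rho=0.5):.3f}")
    ```

    With independent votes the majority’s accuracy climbs towards certainty as the group grows; once half the rounds are driven by a shared signal it is capped well below that, which is one way of seeing why the independence precondition matters.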

  11. > I simply can’t see the mechanism for identifying and implementing the General Will in a way that can’t be subverted.

    I don’t really follow your argument.

    As I see things, it is generally recognized that the best way for a group of people to advance its interests is through open discussion and majority-based decision making, as long as the group is small enough so that an open discussion is possible.

    For a large group, open discussion is not possible and political power is held by a sub-group, so the objective is to align the interests and world-view of the sub-group with the interests of the group. This is done by drawing the sub-group as a statistical sample from the population.

  12. Yoram, I think we all understand the principle of statistical sampling, Conall especially. The problem is how to maintain the equality of individual wills once the sample has been taken (both within, and between the subgroup and the larger population) or, as Conall puts it, “implementing the general will in a way that can’t be subverted”. Manin articulates this very well in his 1987 paper.

  13. […] Chouard (and here, here, and here), Lawrence Lessig, David Chaum, Jacques Rancière, Clive Aslet, Jim Gilliam, Loïc Blondiaux, and Andrew Dobson and other readers […]
