On trial: How juries reach their verdicts

The Sunday Times:


The film Twelve Angry Men depicts jurors changing their minds during deliberation (Kobal Collection)

A UNIQUE judicial experiment in which 12 separate juries watched the same trial and came up with different verdicts has led to new calls for an investigation of the jury system.

In the mock trial Alan Johnson, the former Labour home secretary, played the role of an armed robber who stole £68,000 from a betting shop after threatening the staff with a shotgun. Vincent Regan, a film actor, played the role of a firearms expert.

The judge, Michael Mettyear, the recorder of Hull and East Riding, who sits on the sentencing guidelines panel, came up with the idea for the experiment, and real barristers presented the case. Each of the 12 juries was assembled by its foreman, whom the judge had invited to take part.

“I thought it would be interesting to see if a number of juries listening to the same facts and evidence would come to different conclusions,” Mettyear said.

How Athenians Managed the Political Unaccountability of Citizens

Recent discussions on this blog have focused on the need for ongoing political accountability in any sortition-based political system, so I thought this article by Farid Abdel-Nour and Brad L. Cook in the current issue of History of Political Thought would be of interest:

Abstract: The political unaccountability of ordinary citizens in classical Athens was originally raised as a challenge by ancient critics of democracy. In tension with that criticism, the authors argue that attention to the above challenge is consistent with a defence of Athenian democratic politics. In fact, ordinary citizens’ function in the Assembly and courts implicitly included the burden of justifying their own political decisions to an imagined authority, as if they could be brought to account. By means of practices that encouraged this self-scrutiny, Athenians marked the challenge of citizens’ political unaccountability as an unavoidable but manageable aspect of their democracy.

The authors argue that ‘one type of practice placed citizens’ political decisions under the external gaze of other citizens, another placed them under the gaze of the gods, and yet another placed them under the gaze of an internal imagined audience’ (p. 445).

Dahl: Is Minority Domination Inevitable?

In most of the sciences – whether human, social or natural – there is a symbiotic relationship between theoretical and quantitative approaches. Einstein would not have formulated the theory of special relativity had the Michelson-Morley experiment confirmed the existence of the aether wind. The academic study of politics, however, bucks this trend, as theorists and political scientists rarely talk to each other. This is primarily because the term ‘political theory’ is generally preceded by the adjective ‘normative’, so a conversation between theorists and polsci professors might well be seen as a contravention of the naturalistic fallacy.

This is self-evidently the case in the field of social theory, dominated by the long shadow of Rawls and still dedicated to the study of ‘57 varieties of luck egalitarianism’ (Waldron, 2013, p. 21). But why should it apply to democratic theory? Common sense would dictate that democratic theory should be a combination of normative and descriptive work, as most modern poleis claim to be democracies. Yet the upgrade panel for my own PhD (on representation and sortition) advised me to choose between the theoretical and empirical literatures rather than seek to reconcile the two. The recent thread on this blog discussing Gilens and Page’s claim to have disproved the median voter theorem is a good indication of the sharp divide between the two literatures.

Commentary on Gilens and Page, “Average citizens have no political influence”

This is an interesting paper that brings admirable clarity to the competing theoretical models that address the problem, ‘Who governs? Who really rules?’ (Gilens and Page, 2014, p. 3). However, I’m skeptical as to whether the authors’ dataset provides unequivocal support for the general equation between ‘electoralism’ and oligarchical rule claimed by Yoram Gat in his open letter to Professor Gilens, for the following reasons:

1. Dataset
It’s surprising that a total of 1,932 cases yielded as many as 1,779 instances demonstrating a clear relationship between public preferences and policy change (p. 10). Most legislative outcomes involve messy compromises, with trade-offs between the preferences and interests of the various parties involved. What criteria were employed by Gilens’s ‘small army of research assistants’ in order to decide that these 1,779 instances involved a ‘clear, as opposed to partial or ambiguous, actual presence or absence of policy change’ (ibid.)? Are public preferences really as unambiguous as the authors claim? An influential work by Benjamin Page’s frequent collaborator Robert Shapiro used Bill Clinton’s (failed) healthcare reforms and Newt Gingrich’s ‘Contract with America’ as examples of elite- and partisan-driven policy initiatives (Jacobs and Shapiro, 2000). However, in the former case the survey evidence was ambiguous: a Gallup Poll conducted in early August 1991 indicated that 91 percent of the public felt there was a ‘crisis in healthcare’ (Gallup, 1991, p. 4), and a large majority (75% of adults polled) wanted the government to provide healthcare (Times, 1992). But it was not clear what the public wanted done about health care, being torn between the desire for comprehensive provision and the deep-seated American aversion to big government: ‘different polls and even successive questions in the same polls turn up seemingly contradictory responses’ (Kosterlitz, 1991, p. 2806). In any event, Clinton’s healthcare reforms were defeated: ‘the policy outcome turned, in the end, on the response of the relatively few centrist legislators to – exactly – the median national opinion as measured by polls’ (Quirk, 2009, p. 6, my emphasis). Similarly, the GOP ‘Contract with America’ was entirely driven by the median-voter strategy:

The issues that garnered very favourable ratings with the public were included in the contract and those that did not were left off. There was little discussion about how these policies fit together, rather the concern was maximizing popularity. (Geer, 1996, pp. 34-5, my emphasis).


Top-down or bottom-up? Sinister interests vs. the median-voter strategy

For some time I’ve been puzzled as to why empirical political scientists and normative political theorists have taken up antithetical positions on what has to be the central issue of democratic politics: who rules? In the former community there is widespread agreement that the demos has kratos, in that elected politicians are obliged to formulate policies designed to attract the support of the ‘median’ voter. Political theorists, however (along with their colleagues in media studies), in so far as they are interested in the topic at all, view this as little more than a confidence trick, designed to conceal the identity of the shadowy ‘sinister interests’ who are really pulling the strings of power. Given that political scientists and political theorists are housed in the same faculties and drink their cappuccinos in the same common rooms, why should they come to such diametrically opposed conclusions?

Life By Lottery

BBC Radio 4’s flagship Analysis programme next week is devoted to sortition and distributive lotteries:

Should we use chance to solve some of our most difficult political dilemmas? From US Green Cards to school place allocation, lotteries have been widely used as a means of fairly resolving apparently intractable problems. Jo Fidgen asks whether the time has come to consider whether more of society’s problems might be solved by the luck of the draw.

Producer: Leo Hornak. http://www.bbc.co.uk/programmes/b03w02sl

The presenter interviewed Barbara Goodwin and Peter Stone, and the producer consulted Conall Boyle and me. The programme broadcasts on Monday the 24th at 8.30 pm.

The Jury’s Still Out

I recently completed jury service and wanted to share with this forum how it has affected my faith in the potential of randomly-selected legislative juries. I was impressed by the overall impartiality of the system: three sortitions in total (initial random selection from the electoral roll, then sortition from the jury pool of c. 40 to a particular trial, and finally sortition from the 20 potential jurors down to the panel of 12 in the courtroom itself). I was pleased (and surprised) when the trial judge informed us that we were the judges of the facts; his job was merely to instruct us in the law. I was also impressed by the sample of citizens selected: it struck me as a reasonable cross-section of the general public, with a wide variety of ages and backgrounds and a good level of general intelligence (much higher than I had anticipated).
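The three-stage winnowing described above can be sketched in a few lines of Python. The stage sizes (a pool of c. 40, 20 sent to the courtroom, a panel of 12) follow the post; the `empanel` function and the toy electoral roll are purely illustrative names, not part of any real system:

```python
import random

def empanel(electoral_roll, pool_size=40, courtroom_size=20, panel_size=12):
    """Three successive sortitions, each a uniform draw without replacement."""
    # 1. Random selection from the electoral roll into the jury pool
    pool = random.sample(electoral_roll, pool_size)
    # 2. Sortition from the pool to the potential jurors sent to one trial
    potential_jurors = random.sample(pool, courtroom_size)
    # 3. Sortition in the courtroom down to the final panel
    return random.sample(potential_jurors, panel_size)

roll = [f"citizen-{i}" for i in range(1000)]  # toy electoral roll
panel = empanel(roll)
print(len(panel))  # 12
```

Because each stage is a uniform draw from the previous one, the composite procedure is, by symmetry, itself a uniform draw of 12 names from the roll, which is why staging the selection does not compromise its impartiality.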

What about the deliberations and the verdict? The defendant was a director of a failed company accused of intent to defraud his creditors. Complex fraud trials are challenging for randomly-selected juries, but this one required only a basic understanding of accounting terminology (balance sheets, trading P&L, solvency, etc.). The jury deliberations, however, lasted a couple of days, and in the end our split fell short of the 10:2 required for a majority verdict, so the judge stood us down, leaving the prosecution to decide whether or not to seek a retrial (at considerable public expense).

Interests, ideas and idealism

In a recent post Terry Bouricius argued that democratic politics is all about establishing a ‘congruity of interests’ between representatives and the represented. This has been an oft-repeated trope on this blog, for example Yoram’s affirmation of Terry’s post:

representatives who naturally, without external incentives, seek to represent the interests of constituents because they are congruous with their own . . . [=] alignment of interests

Since Marx’s inversion of Hegelian idealism (aided and abetted by Freudian psychology and neo-Darwinist biology), it has been fashionable to reduce ideation to (economic) interests, unconscious mental processes and ‘selfish’ genes. Beliefs and other ideational factors are all just so much epiphenomenal froth that can be adequately explained in terms of interests alone. This is held to be particularly true in the field of politics, where elected representatives represent only the interests of the rich and powerful; ideologies are just the systematic aggregation of interests; and the notion that politicians might even be motivated by ideals (changing society for the better, irrespective of their own interests) is just plain naive. The existence of an autonomous field of enquiry called ‘political theory’ is equally laughable. Or so the story goes.

The Blind Break and the Invisible Hand, Part 3: Consent

Although Peter Stone has asked us not to cite from his draft conference paper, this forum is really just an extension of the debate at the University of London workshop, so I would rather quote it verbatim than run the risk of paraphrasing it and getting it wrong. The following quote is taken from the concluding paragraph (p.16):

If democracy is supposed to be about government by consent of the governed, for example, then sortition looks like an obvious dead end. The arguments for taking elections to represent such consent are quite telling; citizens are thought to consent to be governed by elected officials even if they voted against those officials, or failed to vote at all. But however tenuous the link between election and government by consent, the link between sortition and government by consent is even weaker. There is no sense in which citizens can be said to have done anything to consent to randomly-selected officials; indeed, the whole point of randomization is to remove any opportunity by citizens to influence the selection process.


The Blind Break and the Invisible Hand, Part 2: Statistical Representation

The claim that sortition produces a portrait-in-miniature that “stands for” the target population is categorised by Hanna Pitkin (1967) as a form of “descriptive” representation. I prefer the term “statistical representation”, as it makes clear that the reference is to the sample as a whole rather than the individuals of which it is composed. There is a temptation to think of sortition as just an alternative mechanism for selecting political officers, with the end result still being “representatives” akin to the (individual) Honourable Members chosen by preference election. But the notion of an (individual) “statistical representative” is clearly an oxymoron. An individual selected as part of an aggregatively-representative sample is just a data point, as in a randomised public opinion survey. In a public opinion survey the views of any individual respondent are of no intrinsic interest; the purpose of the survey is to aggregate individual responses as an indication of the prevalence of different viewpoints within the target population. The fact that individual x holds a certain view is irrelevant; all that matters is what proportion of the target population shares the same (or broadly similar) views, and the same principle applies to a representative group constituted by sortition. “Statistical representatives” (describing the component units of an aggregatively-representative body) is an example of that rare group of terms that exists only in the plural.

This places serious constraints on the actions of a body selected by sortition, as statistical representativity applies only at the collective (aggregate) level. Indeed, it is hard to see what such representatives can do other than register their preferences/beliefs via voting (all votes carrying exactly the same weight), since differences in the “illocutionary force” of the speech acts of individual members of such an assembly would destroy its aggregative representativity.
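The distinction drawn above between the (meaningless) individual “statistical representative” and the (meaningful) aggregate can be illustrated with a toy simulation. The population shares and sample size below are illustrative assumptions, not figures from any actual survey:

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical target population: 70% hold view A, 30% hold view B
population = ["A"] * 70_000 + ["B"] * 30_000

sample = random.sample(population, 1000)  # the "portrait in miniature"

# Aggregate level: the sample's proportions track the population's
share_A = sample.count("A") / len(sample)
print(round(share_A, 2))  # close to 0.70

# Individual level: any single sample member is just a data point
print(sample[0])  # "A" or "B": of no intrinsic interest on its own
```

Representativity here is a property of `sample` as a whole: `share_A` is informative about the population, while `sample[0]` tells us nothing, and giving some members’ “votes” more weight than others would bias the aggregate estimate.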