Paper: No Stratification Without Representation

One fascinating aspect of sortition is that it treats all groups in the population fairly: If a group constitutes x% of the population, the group’s share in the panel will be x% in expectation (that is, on average over many random panels). Furthermore, it is unlikely that, in a random panel, this percentage will deviate much from x%; this event becomes ever less likely the larger the panel is. Unfortunately, there are practical limits to how large sortition panels can get, which means that a certain variance remains. Had the Irish citizens’ assembly been sampled without consideration for gender, for example, a gender imbalance of at most 45 women against at least 54 men would have happened in about 15% of random panels.¹
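
(A rough sanity check, not from the paper: treating each of the 99 seats as an independent draw from a population that is about 51.2% women, the share implied by the 50.66-seat quota discussed below, a simple binomial model reproduces the roughly 15% figure.)

```python
from scipy.stats import binom

PANEL_SIZE = 99
WOMEN_SHARE = 50.66 / 99   # ~51.2%, the share implied by the 50.66-seat quota below

# Probability that an unstratified random panel of 99 contains at most 45 women
print(binom.cdf(45, PANEL_SIZE, WOMEN_SHARE))   # ~0.15
```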

One way around this problem is stratified sampling. For example, one could fill half of the seats with random women and half of them with random men. As Yoram wrote in an earlier post on this blog, one can still guarantee that every person is selected with equal probability, and thus, that every group will get its fair share of the panel in expectation. Stratification by gender will obviously ensure accurate representation of the genders. But what happens to the representation of other groups?

My collaborators — Gerdus Benadè and Ariel Procaccia — and I studied this question in a paper that we recently presented at the ACM Conference on Economics and Computation. As Mueller, Tollison, and Willett argued as early as 1972,² stratification can greatly reduce the variance in representation for groups that highly correlate with the feature we stratified on. This is good news since correlation is everywhere; for example, stratifying by gender will help to represent opinion groups related to military intervention, gun control, and healthcare.³ We show that there is no real downside to stratification: Even in the worst case, stratification cannot increase the variance of another group by more than a negligible amount. These results hold even for very fine stratifications, where each seat is filled by a random member of a dedicated stratum. This suggests that we should indeed make extensive use of stratification. In a case study on a real-world dataset, we show that stratification can reduce the variance in an opinion group’s representation by a similar amount as an increase of panel size by multiple seats — even if the stratifier does not know the opinion in question!

The main technical difficulty in the paper is working with indivisibilities. For instance, if we split the 99 seats of the Irish citizens’ assembly proportionally by gender, women should get around 50.66 seats. To ensure that every person is still selected with equal probability, we need to randomly “round” the seat assignments, giving women sometimes 50 and sometimes 51 seats. This process is somewhat delicate — rounding introduces new variance, which might lead to some unfortunate group becoming much less accurately represented than without stratification. If one uses the rounding procedure suggested in our paper, this is not the case.
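
For illustration, here is a minimal sketch of one randomized-rounding scheme with these properties (systematic rounding); it is an illustration only, not necessarily the exact procedure suggested in the paper.

```python
import random

def round_seats(quotas):
    """Randomly round fractional seat quotas to whole seats.

    Each stratum receives either the floor or the ceiling of its quota,
    its expected number of seats equals its quota exactly, and the
    rounded seats always sum to the integer total of the quotas.
    (Systematic rounding; an illustrative scheme, not necessarily the
    exact procedure from the paper.)
    """
    total = round(sum(quotas))
    u = random.random()                      # one shared random offset in [0, 1)
    points = [u + j for j in range(total)]   # `total` points spread over [0, total)
    bounds = [0.0]
    for q in quotas:                         # cumulative quota boundaries
        bounds.append(bounds[-1] + q)
    bounds[-1] = float(total)                # guard against floating-point drift
    return [sum(1 for p in points if lo <= p < hi)
            for lo, hi in zip(bounds, bounds[1:])]

# Example: 99 seats split by gender with a 50.66-seat quota for women.
print(round_seats([50.66, 48.34]))   # [51, 48] with prob. 0.66, [50, 49] otherwise
```

With this split, women receive 51 seats with probability 0.66 and 50 seats otherwise, so their expected number of seats is exactly 50.66.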

The main gap between our mathematical model and current practice is that we assume that panel members are perfectly sampled from the population. In reality, sampling is usually a complicated process based on living addresses and multiple simultaneous representation requirements,⁴ which is only partially transparent and leaves room for sloppiness.⁵ Most importantly, we did not consider self-selection bias, which occurs when sampled members decline to participate. Unfortunately, self-selection invalidates the strong representation guarantees of sortition. Therefore, it is important to keep self-selection as low as possible.

–––
¹ Counting only the 99 members selected by sortition, not the appointed chairperson. Calculation based on the numbers of Irish citizens usually resident and present from the 2016 census, assuming that the fraction of adults within each gender is the same among citizens as in the overall population.
² D. C. Mueller, R. D. Tollison, and T. D. Willett. 1972. Representative Democracy via Random Selection. Public Choice 12, 1 (1972), 57–68.
³ https://scholars.org/contribution/gender-differences-american-political-behavior
⁴ For example, https://www.citizensassembly.ie/en/About-the-Citizens-Assembly/Who-are-the-Members/Red-C-Methodology-Document.pdf .
⁵ For example, https://amp.independent.ie/irish-news/recruiter-for-citizens-assembly-suspended-after-replacement-members-enlisted-through-personal-contacts-and-not-randomly-36629881.html.

45 Responses

  1. In a case study on a real-world dataset, we show that stratification can reduce the variance in an opinion group’s representation by a similar amount as an increase of panel size by multiple seats — even if the stratifier does not know the opinion in question!

    Can you explain this please? How is it possible to measure the incidence of an opinion that you don’t know? Or is this just an extension of your earlier generalisation that “stratifying by gender will help to represent opinion groups related to military intervention, gun control, and healthcare”? My principal problem with stratification is the assumption that stratification by half a dozen variables will generate a representative microcosm when, as you note, self-selection “invalidates the strong representation guarantees of sortition.” Is there any evidence as to what population parameters are under-represented by self-selection?

    Like

  2. Using a few easily verifiable demographic variables for stratification (sex, age, geography, etc. … assuring that the proportion of people between the ages of 25 and 45 on the panel matches the proportion in the general population, etc.) will minimize the error rate from the ideal sample, and should be non-controversial (AND as pointed out, this can automatically improve the representativeness along countless other variables that we cannot see or measure). The big challenge is transparency, public trust, and assurance that the panel selection was not manipulated for nefarious reasons. The higher the stakes of the panel’s decisions, the greater the temptation for corrupt manipulation AND the greater the suspicion of manipulation, even when done honestly. This is why, once we achieve quasi-mandatory service, it would probably be better to abandon stratification and accept that panels will wobble around the ideal, and over time public decisions will end up reverting to the mean. This is another reason why having a large number of short duration panels is safer than a single long-duration “legislature.”

    Liked by 2 people

  3. @Keith:
    Thanks for the questions! In the experiment I was summarizing, we used data from a big survey, so we knew who held which opinion (at least in the sense of an uninformed opinion). That being said, my coauthor fixed a stratification before he had access to this information. For example, he neither knew which of the agents supported the death penalty for murderers nor that this was one of the questions the stratification would be evaluated on. Section 6.2 of the paper describes the experimental setup in more detail, in case you’re interested.
    I agree with the concern in your second point, but I do not think it hinges on stratification. In our paper, we study stratification without self-selection. For instance, the seats reserved for women would be drawn from all women in the population, such that each person participates with equal probability. To me, our findings suggest that stratifying in such a way will lead to a panel that is more representative in more ways more often than if we just chose random people with equal probability, without any stratification. If one gives up equiprobability though, stratification alone will not give back representativeness for arbitrary groups.

    Liked by 2 people

  4. @Terry:
    That’s an interesting point! Can you describe why you expect stratified sampling to be less verifiable than sampling at large? I would imagine that both face rather similar obstacles, and might have technical solutions.
    If privacy were no concern, one could publish a database of all agents, with their reported group memberships. Then, one could take a publicly trusted source of randomness¹ and then run the sampling algorithm on the database. Privacy concerns might be alleviated via anonymization; maybe zero-knowledge proofs or secure multi-party computation techniques can give stronger privacy guarantees? Most technical solutions will be too complicated for the general public to verify, but if manipulation can be detected and denounced by a sufficient number of interested parties, I would be less concerned.

    ––
    ¹ Say, we cryptographically combine the lottery numbers, stock prices, and random strings generated by a bunch of interested parties so that every person has reason to believe that at least one of these sources is completely random and independent from the others.
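
    As a minimal illustration of this kind of combination (sketch only; the input strings are hypothetical, and the scheme assumes each source is fixed before the others are revealed):

    ```python
    import hashlib
    import random

    def combined_seed(sources):
        """Hash several public randomness sources into one seed.

        If at least one source is unpredictable and independent of the
        others, the resulting seed is unpredictable (sketch only).
        """
        h = hashlib.sha256()
        for s in sources:
            h.update(s.encode("utf-8"))
            h.update(b"\x00")   # separator so adjacent sources cannot blur together
        return int.from_bytes(h.digest(), "big")

    # Hypothetical inputs; in practice these would be published in advance.
    seed = combined_seed([
        "national lottery draw: 7 12 23 31 40",
        "closing value of a stock index: 12345.67",
        "random string committed by party A: 4f9c1e...",
    ])
    rng = random.Random(seed)
    panel = rng.sample(range(10_000), k=99)   # select 99 of 10,000 pool members by ID
    ```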

    Liked by 1 person

  5. Paul, thanks for an excellent piece of work. As the results tally with my intuition, I am biased towards accepting them, so I will leave a more critical view to others here.

    The one question which still haunts me is self-selection. Let’s take gender. Women self-select at only a 20% rate in democratic initiatives.

    Best practice to get 100 participants for a citizen jury is inviting a random 10,000, thus minimising the usual suspects. Self-selection will give us, say, 1,000 volunteers. We can now sample 50 men from 800 volunteers and 50 women from 200 volunteers. Stratification: check.

    However, frequentists will argue that there may be a bias in the self-selected women sample. Those 200 may be different to the 500 from forced sampling.

    My intuition is that this will not matter much in practice, as citizen juries – being an interactive, knowledge-based method – are better described by Bayesian statistics. (An argument suffices once if everybody can hear it and adapts their subjective belief.)

    Any indication from your data on bias introduced by such self-selection?

    Like

  6. To clarify: Was there anything in your setup asking for some self-selection attribute which could be used to check?

    Like

  7. Paul,
    It is simply that the more steps, the more databases used, in short, the more complicated the selection process, the harder it is for ordinary people to SEE that it was honest, and the easier it is for groups dissatisfied with the decision of a mini-public to question the honesty of the selection process. Age, sex and geographic residency can probably be done fairly cleanly, but since income, education, etc. are not visible and publicly known, they are more problematic.

    Like

  8. Hubertus, thank you for your comments! I realize now that I should have described our experiment in more detail. We did not collect the opinion data ourselves, but used data from the General Social Survey (GSS). The experiment was entirely on the stratifier’s side (building a stratification without looking at the data, then evaluating on the data). Thus, unfortunately, we do not know who would have self-selected — I agree that this would be fascinating to know!

    The best proxy I can think of would be questions on civic participation in other formats, which both the GSS and the ESS (European Social Survey) have. For instance, the ESS 2016 contains the field “Worked in political party or action group last 12 months”, and the GSS 2014 contains the field CNTCTGOV (Contacted politician or civil servant to express view). For both surveys, there are online tools ([ESS](http://nesstar.ess.nsd.uib.no/webview/), [GSS](https://gssdataexplorer.norc.org)), which allow you to cross-tabulate different fields (both require a free registration, I believe). That might also be relevant to Keith’s last question. Unfortunately, I do not know about any studies dealing specifically with self-selection in the context of sortition. And self-selection might look different when you have to decline a personal invitation rather than just not joining a group without being prompted.

    Hubertus, your Bayesian view on sortition is interesting. Doesn’t that imply that we should try to be more diverse than a random sample of the population? My initial assumption would be that numbers do matter (in particular, if the end result is voted on by majority). Maybe a biased makeup of the panel will give the panel a wrong impression about the makeup of society (including the prevalence of subjective priorities)?

    Liked by 1 person

  9. My initial question about error rate and confidence level of stratified samples is still not answered. https://equalitybylot.com/2019/07/21/the-uses-and-abuses-of-sortition/

    Terry said: > 2. Random selection can be combined with stratification in a legitimate way to improve representativeness when mandatory service is not an option (at least if the number of demographic traits is not unmanageable). It does no harm to keep randomly drawing until an equal number of each agree to serve. Voluntary service is always less ideal, but until laws are passed making service mandatory we will need to rely on a) removing as many obstacles to service as possible, b) providing substantial inducements, and c) some stratification.<

    As far as I understand, this is only valid if the same size is maintained:
    https://onlinecourses.science.psu.edu/stat506/node/27/

    "The principal reasons for using stratified random sampling rather than simple random sampling include:
    1. Stratification may produce a smaller error of estimation than would be produced by a simple random sample of the SAME SIZE (my capitals). This result is particularly true if measurements within strata are very homogeneous."

    This means that if I take a simple random sample of 471 people out of the population of Belgium (11.26 million), I have a 5% error rate and a 97% confidence level according to http://www.raosoft.com/samplesize.html. Let's suppose this is acceptable for our purpose.

    Suppose that we want a geographic stratification, North (Dutch speaking) 6.477.804, South (French speaking) 3.602.216 and Brussels (capital) 1.187.890.
    What do we need in order to maintain the margin of error and confidence level in our strata? If we need, as indicated in the course, to maintain the same size in each stratum, then we need 471 x 3 people in total (the calculation shows us that the error rate and confidence level stay the same for these population sizes). Or do you mean that you can divide 471 into proportional numbers (about 235, 120, 47) and still have the same error rate and confidence level in each stratum? Not so, I presume.

    And how do we proceed with the selection? Suppose we take a lotto drum and put a numbered ball in it for each citizen (to reply to Yoram's question), and we draw from the whole list until each stratum has its quota, with balls that do not fit in any open stratum put back in the drum. Or do we put only the balls in the drum that belong to a stratum? I think both systems work, but are they both acceptable (to statisticians?) and do they both deliver the same error rate and confidence level?

    We also know (as far as I understand it) from Peter Stone's The Luck of the Draw that we have to calculate the error rate and confidence level of each stratum, which needs far more people than without stratification for the same error rate and confidence level. From there my question with the geographic stratification.

    Like

  10. Paul Nollen,

    > do you mean that you can divide 471 into proportional numbers (about 235, 120, 47) and still have the same error rate and confidence level in each stratum?

    With stratification the objective is to lower the variance of opinion in the sample in its entirety, not in the individual strata. As Paul Golz explains, stratification can only reduce variance in the combined sample.

    > we draw from the whole list until each stratum has its quota, with balls that do not fit in any open stratum put back in the drum. Or do we put only the balls in the drum that belong to a stratum? I think both systems work, but are they both acceptable (to statisticians?) and do they both deliver the same error rate and confidence level?

    Yes – both methods yield the same results.

    Liked by 2 people

  11. Hi Paul (Golz),

    Thanks for the paper and the responses.

    One issue that I think is very important and tends to get ignored is that stratification according to certain parameters implies the idea that those parameters are “natural” causes for differences of opinion. This can reduce the cohesion of the group and entrench conventional wisdom and divisions in the population.

    Rather than accept and perpetuate the fact that some characteristics are associated with differences of opinion, it may be a good idea to ask ourselves why this is the case and try to achieve solutions that work across these divisions. The notion that there are static blocks of public opinion that are resistant to change may very well be a characteristic of electoral systems rather than of politics in general. A democratic system should encourage people toward open-mindedness – the willingness to listen to others, to learn new things and to reconsider established opinions. Stratification may discourage such an attitude.

    Like

  12. Paul (Golz),

    > we cryptographically combine the lottery numbers, stock prices, and random strings generated by a bunch of interested parties so that every person has reason to believe that at least one of these sources is completely random and independent from the others.

    I think this underestimates the difficulty of generating verifiable practical randomness: stock prices can surely be manipulated, lottery numbers have to be themselves generated somehow, and, during the process of combination, if one of the parties generating the inputs to the hash has advance knowledge of the inputs of the other parties it can in effect nullify their input and determine the output.

    I think that the random-number generation process has to be very carefully thought out if we want to be reasonably assured that it is manipulation-proof.

    Like

  13. Yoram,
    “Stratification may discourage such an attitude.”

    I have not observed stratification influencing a flex in attitude. The biggest (actually huge) influence on people changing their minds is the format of their interaction: engagement types which ensure that participants have to interact in ways that make them naturally start considering both sides of an elementary decision variant at least one level deeper than they normally would.

    The tools by which I ensure that are: Prediki as a prediction market (they will consider Buy/Sell), Kialo as a structured debate (they will consider Pro/Con), plus any old questionnaire service, however it is set up, for a “systemic vote” on an equal number of emergent Pros/Cons before proceeding to a vote on the prescriptive variants of a topic, including the status quo.

    Like

  14. Paul:> self-selection might look different when you have to decline a personal invitation rather than just not joining a group without being prompted.

    Unfortunately not — in the case of the British Columbia constitutional convention, out of the original stratified sample of 23,034 only 1,715 opted to be selected, 964 (4% of the original sample) came to the selection meeting and 158 were randomly selected (Goodin, 2008, p. 14), so there is no way to tell the degree to which the final assembly was an accurate microcosm of the whole population (arguably citizens with a proactive interest in political and constitutional issues would be more likely to agree to participate.)

    As we are all in agreement that self-selection distorts the representativity of the sample, the easiest solution is quasi-mandatory participation — entirely feasible in the case of government-mandated citizens’ assemblies on Brexit or global warming (see parallel thread on Criteria for a Representative Citizens’ Assembly). As Terry points out, there is a possibility that a stratified sample might be perceived as open to manipulation by sinister interests. It could be argued that the populist rebellion is partly a response to attempts to privilege the interests of whichever minority group is in favour with the New Left (Phillips, 1995). Much better just to have a large sample, with quasi-mandatory participation — this is really easy to understand.

    What we are proposing on this forum is nothing less than a constitutional revolution, so it is essential that the selection process is seen to be absolutely impartial and that the beliefs and preferences of the vast majority of citizens not selected by lot are faithfully represented. The stakes are really high, so we need to hold out for the real thing, not some cut-down version of sortition.

    Refs:
    ===

    Goodin, R. E. (2008). Innovating democracy: Democratic theory and practice after the deliberative turn. Oxford: Oxford University Press.

    Phillips, A. (1995). The Politics of Presence: The Political Representation of Gender, Ethnicity and Race. Oxford: Oxford University Press.

    Like

  15. > One issue that I think is very important and tends to get ignored is that stratification according to certain parameters implies the idea that those parameters are “natural” causes for differences of opinion. This can reduce the cohesion of the group and entrench conventional wisdom and divisions in the population.

    I see your point. There is a danger that participants see their stratum as a “constituency” and try to defend the interests of their block, rather than entering the process with an open mind. My guess would be that some stratifications are much more susceptible to this than others. Potentially, sampling multiple agents per stratum can remind participants that their strata are still heterogeneous. Even if all members are open to changing their positions, having representative initial opinions can be useful inside the committee (they might spark different arguments and solutions) and outside (observers can follow how a like-minded participant gets convinced). Even in a sortition utopia, I would expect demographic groups to have different characteristics. For example, risk aversion differs between segments of the population,¹ and I find it unlikely that these differences are primarily caused by the political system.

    ––
    ¹ M. Halek and J.G. Eisenhauer. 2001. Demography of risk aversion. Journal of Risk and Insurance, 1-24.

    Liked by 1 person

  16. @Keith:
    I agree that these numbers are disappointing. I could still imagine that participation would be higher if there were no additional selection step after the opt-in, i.e., no risk of showing up without having your voice heard. Beyond the percentage of participation, it might also be that different groups would opt in when asked that way; for example, participants might be more motivated by a feeling of responsibility than by a desire for influence.

    Like

  17. Paul,

    > I agree that these numbers are disappointing.

    Those numbers reflect the poor design of the applications of sortition in which they were observed. Why would anyone respond positively to an offer of a seat in a body about which they have never heard, whose decisions are likely to be ignored, whose agenda is set by others, whose information is provided by others? Why would they spend their time and effort when they are poorly paid? Why would they take part in a procedure when it is clear from the outset that the whole thing is no more than a theater of democracy, not a meaningful opportunity to have your say?

    It is very clear that making sure that people are motivated to participate is a major part of the proper design of an allotted body. (Forcing people to show up is worse than useless.)

    Like

  18. Yoram:> It is very clear that making sure that people are motivated to participate is a major part of the proper design of an allotted body. (Forcing people to show up is worse than useless.)

    We all agree that the carrot is better than the stick but, at the end of the day, representing the beliefs and preferences of one’s fellow citizens is both a privilege and an obligation. Republicanism trumps liberalism in this respect, as the fidelity of the representation (and its perceived legitimacy) is essential for the survival of the proposed new democracy. If it’s the case that voluntarism adversely affects the descriptive fidelity of the microcosm then it would be anti-democratic to allow it (it would be the aleatory equivalent of stuffing the ballot boxes). Given that service would most likely be short-term and no actions would be required other than listening and voting (I grant here that I’m prejudging the outcome of the inquiry tribunal suggested in my recent post), then it would be a small price to pay, so I’m genuinely puzzled as to why this is seen as a stumbling block.

    Like

  19. PS, as for the disappointing numbers, we are seeking a switch from 4% to as close to 100% as possible, so it’s hard to see how that could be achieved without quasi-mandatory participation.

    Like

  20. […] for sortition-based decision making bodies. Among other issues, the question of stratification got some attention. It turns out that Sortition Foundation, which is engaged in such activities, has a […]

    Like

  21. The conference just uploaded my talk on the paper (17 minutes + questions) to Youtube: https://www.youtube.com/watch?v=PqutV7x0vE4&t=8

    Liked by 1 person

  22. Hello Paul G,

    I listened to the YouTube and I presume it is for specialists only.
    It did not contribute to my knowledge or change my distrust of stratification. At 4:11 you state that the Irish convention had 99 citizens; I did not notice any mention of the sampling system (possibly I missed it), and to my knowledge this means a 10% error rate with a 95% confidence level at a 50% response distribution. There is no mention of why it was 99. I think this might be acceptable for a recommendation to politicians. At 6:42 you mention that increasing the size is impossible for logistical reasons (in practice there have been far larger assemblies; the G1000 in Belgium is one, and there have been even larger ones). But you don't mention the negative aspects of stratification: the financial cost of specialists, the possible manipulation, the loss of trust in this 'manipulated' system because it is impossible to evaluate, and the impossibility of calculating the error rate and confidence level impartially in a mathematical way.
    I still can't understand how stratification reduces the number of participants needed, while Peter Stone declares that stratification increases the number of participants needed for each group (The Luck of the Draw).

    …Suppose, for example, that one wanted to ensure descriptive representation on the basis of sex, race and religion. Presumably this would require ensuring descriptive representation for each combination of these features (male Buddhists, female Catholic Asians, etc.). With, say, two sexes, five races and seven religions, one would need to stratify with respect to 2 x 5 x 7 = 70 subgroups. …

    Combining this with the

    https://onlinecourses.science.psu.edu/stat506/node/27/

    “The principal reasons for using stratified random sampling rather than simple random sampling include:
    1. Stratification may produce a smaller error of estimation than would be produced by a simple random sample of the SAME SIZE (my capitals). This result is particularly true if measurements within strata are very homogeneous.”

    This means, as far as I understand it, that in the Irish example we would need, for a stratified assembly (Peter Stone's selection) with the same error rate and confidence level in each group (these MIGHT be better), 70 x 90 = 6300 citizens.

    Sorry, but if I have to defend sortition proposals I have to understand them myself at least ;-).

    Like

  23. Paul:> This means, as far as I understand it, that in the Irish example we would need, for a stratified assembly (Peter Stone's selection) with the same error rate and confidence level in each group (these MIGHT be better), 70 x 90 = 6300 citizens.

    Yes, it’s strange that most sortition advocates aren’t even interested in error rate, confidence level etc. I think this is probably because sortition is (wrongly) seen as an offshoot of the deliberative democracy movement, which is more concerned with the factors that lead to good (and equal) discussion between the participants. The principle is that the forceless force of the better argument would (ideally) lead to consensus, and that grubby considerations like voting and majoritarianism involve a “sellout to liberal constitutionalism” (Dryzek). After an extended exchange with Helene Landemore on the need for consistent decision output between different samples of the same population, she acknowledged that the reason we were talking past each other is because she has no interest at all in stochation or democratic legitimacy, only the epistemic benefits of diversity. The worrying thing is that she titled her book Democratic Reason and organisations like newDemocracy and the Sortition Foundation still use terms like “representative sample” whereas we know (for the reasons that you provide) that this is entirely misleading.

    Like

  24. @Paul Nollen:
    The Irish example was meant as a quick introduction to sortition for the conference audience (many of whom have probably not heard of sortition before), not to defend their specific sampling process (footnotes 4 & 5 of my post give more information about their methodology and some reasons for doubt). In a similar vein, my point about logistic reasons was not referring to the exact choice of 99. Instead, I meant to say that other desiderata such as cost and the complexity of coordination dissuade practitioners from choosing a sample size as large as one would want from a mathematical point of view.

    I disagree with your impression that stratification makes sortition substantially more complex and less transparent. My impression is that, to make normal sortition transparent, one needs an infrastructure that will already easily allow for stratification.
    > it is impossible to evaluate, and the impossibility of calculating the error rate and confidence level impartially in a mathematical way.
    Well, I would argue that we do exactly that. To get a potential point of confusion out of the way: You seem to think of the sampling inaccuracy in terms of “error rate with confidence level”, whereas we measure it in terms of the variance of the distribution. The two are not strictly speaking equivalent. However, for reasonable population and panel sizes,¹ all distributions should be close to a normal distribution. Thus, the number of representatives of a group on the panel is within a deviation of about 2 sqrt(variance) from its expectation with 95% probability.

    As Yoram mentions above, your reading of the “SAME SIZE” part seems to be a misunderstanding. In the document you linked, “same size” compares a simple random sample with the total number of samples across all strata, not with the number of samples per stratum. We show that, given a fixed number of samples of your choosing (99, 1000, …), stratification might give you much lower variance than sampling at large with the same number of overall representatives; the effect is larger the better our stratification matches the group whose representation we measure. In the very worst case, stratification might introduce a tiny, tiny amount of additional inaccuracy, but the variance (and thus the error rate) will barely change at all.

    ––
    ¹ That is, a large population; a much smaller, but still large, panel; and subgroups that make up neither a very small percentage of the population nor close to 100%.
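
    To make the translation between the two framings concrete, here is a minimal sketch (not from the paper) that uses a simple binomial model for a group's seat count on an unstratified panel:

    ```python
    import math

    def seats_95_interval(panel_size, group_share):
        """Approximate 95% range for a group's seat count on an
        unstratified panel, via the normal approximation above."""
        mean = panel_size * group_share
        variance = panel_size * group_share * (1 - group_share)   # binomial variance
        half_width = 2 * math.sqrt(variance)                      # ~2 standard deviations
        return mean - half_width, mean + half_width

    # Irish example: 99 seats, women ~51.2% of the eligible population
    print(seats_95_interval(99, 0.512))   # roughly (40.7, 60.6) seats
    ```

    For the Irish example in the post, this gives a 95% range of roughly 41 to 61 women out of 99 seats.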

    Liked by 1 person

  25. Apologies, I forgot to log in for the last comment.

    @Keith:
    While our paper operates in a framework of sampling inaccuracy, I think that this inaccuracy is not only relevant in a system that ultimately votes with (super-)majority. Even among well-meaning and open-minded individuals, I expect that the number of initial proponents of an idea will have a big influence on whether that idea will ultimately make it into the consensus. Hubertus above seemed to disagree with this view.

    Like

  26. Paul,

    Sorry I’ve read your response several times, but can’t understand your point (probably a language problem). My argument is that a typical refusal rate of 93-96% will almost certainly generate a highly atypical sample. And I get worried when I read the word “consensus” in a sortition-related conversation, especially if there is a reliance on well-meaning and open-minded individuals. As far as I’m concerned any viable political system has to work with the crooked timber of humanity (as Kant put it).

    Liked by 1 person

  27. @Paul G.: thanks for answering. It might be that I misinterpreted the ‘same size’ part. It was explained without examples. And that is what I asked about for a geographic stratification, but I could not understand the answer.
    Suppose that we want a geographic stratification for Belgium, North (Dutch speaking) 6.477.804, South (French speaking) 3.602.216 and Brussels (capital) 1.187.890. In total 11.267.910. (I take the numbers for the whole population for this example and not the electoral list. I think that this makes no difference in the outcome)
    What do we need in order to maintain the margin of error and confidence level in our strata? If we need, as indicated in the course, to maintain the same size in each stratum, then we need 471 x 3 people in total (the calculation shows us that the error rate and confidence level stay the same for these population sizes).
    Or do you mean that you can divide 471 into proportional numbers (57%, 32%, 10.5%, or resp. 268, 150, and 50 citizens) and still have the same error rate and confidence level in each stratum? And what is the procedure? Do I make a simple random sample from the whole population and ‘fill’ the strata? If a number can be used to ‘fill’ a stratum, I do so; if I can't use the number (the stratum in question is already full), it goes back in the drum.

    Liked by 1 person

  28. @Paul Nollen:
    The second one. Here is what our approach would do in detail (I think your percentages are off): In the first phase, it would randomly give the north 270 or 271 seats, the south 150 or 151 seats, and Brussels 49 or 50 seats¹, such that the total is always exactly 471. Let’s say that the first stage gives 271, 150, and 50. Then, in the second phase, we select 271 completely random Northern Belgians, 150 completely random Southern Belgians, and 50 completely random inhabitants of Brussels. Essentially, you do what you would have done on the national level three times at the regional level. That, to me, is the easiest way of thinking about it. You can also do what you propose (sampling entirely random citizens, putting them in their stratum, and throwing them back in the drum if the stratum is already full), which gives the same random distribution.
    Hope that was helpful!

    ––
    ¹ I’m getting slightly different numbers from yours because I did not round the percentages.
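
    For concreteness, a minimal sketch of the two-phase procedure just described (illustrative only; numeric IDs stand in for a real population register):

    ```python
    import random

    # Region populations from the example above.
    regions = {"North": 6_477_804, "South": 3_602_216, "Brussels": 1_187_890}
    PANEL_SIZE = 471

    total_pop = sum(regions.values())
    quotas = {r: PANEL_SIZE * p / total_pop for r, p in regions.items()}
    # quotas ~= {'North': 270.8, 'South': 150.6, 'Brussels': 49.7}

    # Phase 1: randomly round the quotas so every region gets the floor or the
    # ceiling of its quota and the seat counts always sum to exactly PANEL_SIZE.
    u = random.random()
    points = [u + j for j in range(PANEL_SIZE)]
    bounds = [0.0]
    for q in quotas.values():
        bounds.append(bounds[-1] + q)
    bounds[-1] = float(PANEL_SIZE)   # guard against floating-point drift
    seats = {r: sum(1 for p in points if lo <= p < hi)
             for r, lo, hi in zip(quotas, bounds, bounds[1:])}

    # Phase 2: within each region, draw that many residents uniformly at random.
    panel = {r: random.sample(range(pop), k=seats[r]) for r, pop in regions.items()}
    print(seats)   # e.g. {'North': 271, 'South': 150, 'Brussels': 50}
    ```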

    Liked by 1 person

  29. @Keith:
    I agree; under a high refusal rate, there are no useful bounds on the error rate. For one, the group defined by “will not participate in a sortition experiment if asked” (93–96%) will be drastically underrepresented at 0%. To get statistical representation, one needs to get the refusal rate close to zero and can then stratify to boost the representation a bit more. What I’m saying is that this can make sense even if one aims for a final consensus.

    Like

  30. Keith’s conjecture is that there may be a significant difference in attitudes, interests, and cognitive style, etc. among decliners as compared to those who say “yes” to the random call (even once we stratify to make sure various demographic traits line up well). This theoretical deviation from a perfect sample that would be achieved through mandatory service has been ignored by most real-world implementations, as they appear to assume that any difference between those who accept and those who decline is not important enough to worry about. My guess is that for SOME issues this difference will indeed be de minimis, but for some issues it may matter a lot… we just can’t know without experiments.

    Of course the OTHER argument is that ALL direct and electoral democratic societies ALWAYS have this same reality, that those who participate (show up at the Assembly, choose to run for office, or vote in an election) may be different than those who stay home. We accept this, and move on. But Keith’s point is that this issue looms larger because the acceptance rate is so small. However, the voluntary acceptance rate is FAR higher than the rate of citizens who seek to run for office, so arguably any mini-public would be FAR more representative than any elected chamber.

    It is a fairly fundamental question… Does democracy mean rule by those people (or a perfect sample of those people) who are willing to rule (with political equality of chance as a requirement)? Or does democracy mean rule by ALL the people (or a perfect sample of ALL the people), including those who don’t want to participate in ruling? It has always meant the former in reality, but spoken about as if it were the latter.

    Like

  31. Terry: >>including those who don’t want to participate in ruling?

    That’s an important question. We need a theory why those don’t want to participate. Initial sampling invitations could ask that people give their reason for non-participation and incentivise answers.

    Like

    Terry:> those who participate (show up at the Assembly, choose to run for office, or vote in an election) may be different than those who stay home. . . Keith’s point is that this issue looms larger because the acceptance rate is so small.

    Actually it’s far worse than that. In direct-democratic and electoral systems everybody has a choice whether or not to show up at the assembly, opt in to the jury pool, run for office or vote in an election. But with sortition in large modern states, the overwhelming majority of citizens (99.99999%?) will not even receive an invitation to participate, so will be effectively disenfranchised. This means that it’s essential that every minipublic accurately represents the beliefs and preferences of the disenfranchised masses. If there is even a suspicion that choosing whether or not to accept the invitation to participate is a relevant population parameter, then voluntarism has to be ruled out. If the citizens’ jury is ad hoc and service is short-term then this will not be an onerous burden. The precautionary principle would suggest that quasi-mandatory participation has to be the default position for a citizens’ assembly with decision power over important legislative or constitutional matters.

    Paul:> if one aims for a final consensus.

    O dear, the dreaded “c” word again. The focus of the deliberative democracy movement on the internal proceedings of the assembly (and the sovereignty of reasons) has distracted sortition theorists from the most important consideration — the representation of the beliefs and preferences of the target population. Consensus has never been the goal of any other democratic system, so why should it apply to sortition-based assemblies?

    Like

  33. Terry,

    > Of course the OTHER argument is that ALL direct and electoral democratic [sic] societies ALWAYS have this same reality,

    Right, but despite claiming to be democratic and despite regularly being referred to as democratic those electoral systems are not in fact democratic. The objective of sortition is to create a system that is in fact democratic. If people cannot participate in the decision making body (as opposed to willingly choose not to) then they are being disenfranchised.

    Like

  34. Yoram:> If people cannot participate in the decision making body (as opposed to willingly choose not to) then they are being disenfranchised.

    Are you suggesting an opt-in system as opposed to random selection from the whole citizen body? Although that would approximate the Athenian jury pool, it’s the first time I’ve heard that in the context of modern sortition proposals.

    Like

  35. Yoram,

    By way of clarification: if you are not selected in the draw for the legislative assembly (as will be the case with the overwhelming majority of citizens) then your only franchise is by proxy, and this can only happen if the ongoing representative fidelity of the minipublic vis-a-vis the target population is such that it makes no difference whether or not you (or any other empirical individuals) are included.

    I assume you can’t really be proposing an opt-in system — in addition to the Athenian voluntary allotment pool over-representing the poor, the old and those living in or close to the city, a modern opt-in system would over-represent activists and lobbyists (given that most people would prefer to go to the pub). So not clear what you mean by “those who cannot participate in the decision-making body being disenfranchised.”

    Like

  36. @Keith:
    > O dear, the dreaded “c” word again.
    You seem to have gotten the impression that I took a position on the question of consensus. That was not my intention, as it is outside of my area of expertise. Are you aware of sortition experiments that explicitly aimed to produce a consensus but failed to do so?

    Like

  37. Consensus is the goal of Type 1 (Habermasian) deliberative democracy, and I’m anxious that it should not infect the sortition community!

    Like

  38. I read the objections to consensus from Waldron, Buchanan and May and I stay with Waldron’s option to decide “in the face of disagreement”.

    -The authority of law rests on the fact that there is a recognisable need for us to act in concert on various issues or to co-ordinate our behaviour in various areas with reference to a common framework, and that this need is not obviated by the fact that we disagree among ourselves as to what our common course of action or our common framework ought to be. Given this as a basis for legal authority, a person should not be surprised to find himself from time to time under a legal obligation to participate in a scheme that he himself regards as undesirable on grounds of justice: to pay taxes, for example, to provide welfare assistance to people he regards as undeserving. That is more or less bound to happen, given that it is the function of law to build frameworks and orchestrate collective action in circumstances of disagreement.
    The point of law is to enable us to act ” in the face of disagreement”.

    Liked by 1 person

  39. Yes indeed. And in a democracy most will obey laws that have been arrived at by decision-making that represents the beliefs and preferences of a plurality of citizens. In electoral regimes this is true in the formal sense; whereas a stochastic alternative would only apply subject to an empirical demonstration of that condition (that the decision-making actually represents the beliefs and preferences of a plurality of citizens). This is a deeply non-trivial task which cannot be elided by formal syllogisms. Consensual decision making (the preference of most deliberative democrats) has no time for the aggregation of beliefs and preferences, Habermas sarcastically dismissing Dahl’s sociological approach to preference aggregation as a concern with the “statistical distribution of income, school attendance, and refrigerators”. That’s why I believe sortition theorists should distance themselves from the ideal of consensus, the general will, the tyranny of reason and the other paraphernalia of deliberative democracy.

    Like

  40. Keith, that’s a joke now, right? “sortition theorists should distance themselves from the ideal of consensus, the general will, the tyranny of reason and the other paraphernalia of deliberative democracy.”

    I cannot tell you how shocked I am.

    If this is based on the belief that “consensual decision making (the preference of most deliberative democrats) has no time for the aggregation of beliefs and preferences” that’s a wrong belief.

    With a methodic, clean process employing today’s informational tools:
    – structured debate software
    – prediction market software
    – systemic vote software

    an exercise to determine the General Will by minimal opposition to collective action is absolutely feasible, and in rather short time, to boot, compared to the machinations of today’s party system.

    Mind that “consensus” must not be defined naively as “consensus about the individual decision at hand” but as consensus that the process involved has produced the best possible decision.

    Liked by 1 person

  41. Hubertus,

    It’s certainly not intended as a joke, but then your last paragraph shows that you’ve shifted the goal posts to a different topic. I agree that there needs to be a consensus that the political process is democratically [not so sure about epistemically] legitimate, even if the individual decision at hand goes against one’s own beliefs and preferences. For refutation of your view that discursive and deliberative democrats are interested in preference aggregation, see:

    We can step back and ask whether democracy does indeed require counting heads. I would argue that a logically complete alternative exists based on a conceptualization of intersubjective communication in the public sphere as a matter of the contestation of discourses. (Dryzek, 2000, p. 84)

    and

    The key consideration here is that all the vantage points for criticizing policy get represented – not that these vantage points get represented in proportion to the number of people who subscribe to them. When it comes to representing arguments, proportionality may actually be undesirable. (Dryzek & Niemeyer, 2008, p. 482)

    Dryzek views preference aggregation as a “sellout to liberal constitutionalism”.

    Refs:
    ====

    Dryzek, J. S. (2000). Discursive democracy vs. liberal constitutionalism. In M. Saward (Ed.), Democratic Innovation. London: Routledge.

    Dryzek, J. S., & Niemeyer, S. J. (2008). Discursive representation. American Political Science Review, 102(4), 481-493.

    Like

  42. Keith wrote:
    >” in a democracy most will obey laws that have been arrived at by decision-making that represents the beliefs and preferences of a plurality of citizens. In electoral regimes this is true in the formal sense.”

    This is not the case for electoral systems in either a formal way or an empirical way. At BEST a myth can be promoted that in a formal way the principle of CONSENT of the governed allows group decisions to be made that in no way represent the beliefs and preferences of the plurality (or majority) of citizens. At best, we can pretend that formally elections are meta-democratic. Voters allow their “betters” to make decisions on their behalf, and by not revolting they give tacit approval to the decision-making the legislature engages in. In some jurisdictions some decisions may come close to those the plurality prefers, but in most places they remain very far away. The electoral scheme was imposed on most societies by elites alive many generations ago. So even this meta-democratic consent myth is fake. The mere fact that a voter votes in an election (perhaps to choose the lesser of two evils) does not indicate consent or approval of the election process.

    Of course it would be possible to conduct regular referenda to give formal consent (still rife with rational ignorance) to a law-making process whether by elite elected legislators or randomly selected law-makers. But… to suggest that elections align governmental decision-making (formally or otherwise) with public preferences is nonsense.

    Like

  43. Terry,

    By “formal” all I mean is that anyone can stand for election and voters can choose which box to tick, so there’s no reason why, in principle, the policies of a candidate or party for election should not reflect the beliefs and preferences of a majority/plurality of citizens. Of course we know that in practice this is highly unlikely on account of a host of empirical factors summarised by Manin’s principle of distinction, and the role of those who seek to reform elections is to seek to reduce these factors.

    The same thing applies to stochation — the formal justification principle (the LLN) can be undermined by a host of empirical factors that affect the ongoing representativity of the sample and the role of those who propose and seek to implement sortition is to try to reduce these factors. But if you don’t make the distinction between formal and efficient the danger is believing that any old sortition-based sample will be legitimate.

    >The electoral scheme was imposed on most societies by elites alive many generations ago

    Again that may well be true (although another way of putting it would be to see election as an important development in the move from war-war to jaw-jaw), but it is of no relevance from the perspective of democratic theory. 17th and 18th century social contract theorists didn’t believe that their distant forbears sat down together to draw up an agreement as to how to transition from the state of nature to political society, but it was a good way of presenting a formal model of how everyone might benefit from forfeiting their natural rights for the greater good. We need to respect these distinctions if we’re not going to keep talking past each other.

    Like

  44. PS, the distinction between the formal and the empirical maps well to the division of labour between political theorists and political scientists. The former are concerned with the specification and clarification of political concepts (democracy etc.) whether by “adumbrating” from existing practice (as with Aristotle) or borrowing from other domains (the stochation principle being derived from mathematical statistics).** The role of political science is to measure the extent to which existing, or proposed, institutions realise the concepts in practice (and suggest ways of improving them). The Criteria for a Representative Citizens’ Assembly https://equalitybylot.com/2019/07/25/criteria-for-a-representative-citizens-assembly/ is an exercise in political theory — it’s the job of quantitative and comparative political scientists to see how best to realise them in practice. It’s possible for academics trained in both fields (Fishkin and Goodin being good examples) to cross the divide, but they need to check which hat they are wearing at any particular time.

    ** political philosophers, by contrast, largely concern themselves with deriving normative principles by deduction from armchair thought experiments (Rawls being the obvious case).

    Like

  45. […] at the Harvard Computer Science department titled “Optimized Democracy”. Procaccia has been writing about sortition for a while and sortition plays an important part in this course’s […]

    Like
