Large Scale Secure Sortition Part 3: What exactly did Cleve show was impossible? And what possibilities are left?

This article assumes you’ve read my previous article “Large Scale Secure Sortition Part 1: Generating Randomness Collectively with In-Person Gatherings”. In particular it assumes a familiarity with the add-divide-remainder procedure for combining individual submissions to get a final random result.

This article explains the mathematical structure that makes collectively generating randomness difficult and introduces the four major approaches that can lead to systems that work. To do this we’ll go through a version of Cleve’s 1986 impossibility proof and call out the four major assumptions he makes as he makes them. Then we’ll explain how reversing each of those assumptions leads to a working method of collectively generating randomness and talk about its pros and cons.

In the world of CS theory the process of collectively generating randomness is called “multiparty coin flipping” and the seminal paper that kicked off the field was Cleve’s 1986 impossibility result. The paper proves that “it is impossible to have perfectly fair multiparty coin flipping unless a majority of those helping to ‘flip the coin’ are behaving honestly.” But if you keep digging you start finding papers that claim to have methods for fair multiparty coin flipping when just a single person is behaving honestly. Cleve’s proof was correct, but it made some specific assumptions about how those trying to flip the coin were able to structure their communications and what they wanted out of the process. Multiparty coin flipping is only possible if you take at least one of those assumptions and assume the opposite.

The first assumption is that at least half of the participants are dishonest (when we reverse this assumption later we assume that less than half of the participants are dishonest). Cleve is very upfront about this assumption and it is a good one to keep when thinking about designing sortition systems.

I’ll call out the other assumptions as the proof makes them. Let’s dive in.

The Proof

Imagine that Luka and Hugo, instead of flipping a collective coin in person, have to do it over text. The basic rules are the same: they both flip a coin, Hugo wins if the results match (heads-heads or tails-tails), and Luka wins if they differ (heads-tails or tails-heads). If they both just send the results of their coin flips there’s a big problem. If Hugo sends his result (heads) first, Luka has the opportunity to lie about his (it actually came up heads, but he can say tails) and win unfairly. Or vice versa. Maybe they can agree to both send their results at exactly 12:00, but because online messages can take several seconds to arrive, Luka might not be able to tell the difference between a message actually sent at 12:00 and one held back a moment and changed. So if we are defining a formal way of collectively generating randomness online, we have to assume that the participants take turns talking (Luka talks, then Hugo talks, then Luka talks again); there is no way to guarantee that their messages overlap. (This is the proof’s second assumption, and it’s a reasonable one for online communication.)
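
To make the problem concrete, here is a small Python sketch (my own illustration, not something from Cleve’s paper) of that naive “just send your result” exchange, showing how whoever reveals second can always win:

```python
import secrets

def honest_flip() -> str:
    """An honest participant flips a fair coin."""
    return secrets.choice(["heads", "tails"])

def naive_text_protocol(luka_cheats: bool = True) -> str:
    """Both results are simply sent as messages, Hugo's first."""
    hugo = honest_flip()  # Hugo sends his result first
    if luka_cheats:
        # Luka reads Hugo's message before "flipping": he just reports the
        # opposite side, so the two results always differ and he always wins.
        luka = "tails" if hugo == "heads" else "heads"
    else:
        luka = honest_flip()
    # Hugo wins if the results match, Luka wins if they differ.
    return "Hugo" if hugo == luka else "Luka"

# With a cheating Luka, Hugo never wins:
results = [naive_text_protocol() for _ in range(1000)]
print(results.count("Hugo"), "wins for Hugo out of 1000")  # prints 0
```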

Continue reading

Large Scale Secure Sortition Part 2: “Voting” from home with Non-Malleable Time Lock Puzzles

This article assumes you’ve read my previous article “Large Scale Secure Sortition Part 1: Generating Randomness Collectively with In-Person Gatherings”. In particular it assumes a familiarity with the add-divide-remainder procedure for combining individual submissions to get a final random result.

This article proposes a backup protocol to complement the one in the previous article, one that can be used without requiring people to gather in person. However, it is so mathematically complicated that only CS and Math PhDs are likely to ever understand the proof of why it’s secure. Given that the goal of using these systems is to generate trust, I cannot recommend this as a primary option, though I can recommend it as a fallback during times (like extreme weather events or pandemics) when gathering large numbers of people is difficult.

Non-Malleable Time Lock Puzzles are cryptographic tools that allow you to hide information inside a computational puzzle. Once hidden, the information can only be accessed after the puzzle is solved. Making a time lock puzzle is kind of like turning a QR code into a jigsaw puzzle: if you give the puzzle to someone, they’ll only be able to scan the code after they’ve completed the puzzle. The main difference is that jigsaw puzzles can be solved in many different ways, by doing the “this piece connects to this piece” steps in any order. Time Lock Puzzles can only be solved in two ways.

The first (called opening) is fast but requires a special key that is created alongside the puzzle and known only to its creator. The second (called forced opening) takes much longer and requires you to do a specific list of computations in a specific order. These computations have to be done sequentially (you only know what step 2 is after you finish step 1), so throwing hundreds of computers at the problem isn’t any better than using just one. When you create the puzzle you get to decide exactly how many steps a forced opening will take (by choosing how many “puzzle pieces” there are). And since we know roughly how quickly modern computers can do these steps, we can accurately estimate how long the puzzle will take to solve.
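
To give a feel for what those sequential steps look like, here is a toy Python sketch in the spirit of the classic repeated-squaring construction behind time lock puzzles (the function names, parameters, and numbers are illustrative only, and far too small to be secure):

```python
import math
import secrets

def make_puzzle(secret: int, steps: int, p: int, q: int):
    """Hide `secret` (which must be smaller than p*q) so that recovering it
    without the trapdoor takes `steps` squarings done one after another."""
    n = p * q
    phi = (p - 1) * (q - 1)               # the trapdoor: only the creator knows p and q
    base = secrets.randbelow(n - 2) + 2   # random starting value in [2, n-1]
    while math.gcd(base, n) != 1:         # keep the fast-opening shortcut valid
        base = secrets.randbelow(n - 2) + 2
    # Fast opening: knowing phi lets the creator shrink the exponent 2**steps
    # before exponentiating, instead of squaring `steps` times.
    mask = pow(base, pow(2, steps, phi), n)
    return (n, base, steps, (secret + mask) % n)

def force_open(puzzle):
    """Recover the secret the slow way: `steps` squarings in sequence.
    Each squaring needs the previous result, so extra computers don't help."""
    n, base, steps, masked = puzzle
    value = base
    for _ in range(steps):
        value = value * value % n
    return (masked - value) % n

# Toy usage: million-scale primes and 100,000 steps open in a blink;
# a real deployment would use enormous primes and vastly more steps.
puzzle = make_puzzle(secret=424242, steps=100_000, p=999_983, q=1_000_003)
print(force_open(puzzle))  # 424242
```

The creator, who knows the two primes, can shrink the huge exponent before computing the mask, which is why opening with the key is fast while a forced opening has to grind through every squaring in order.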

Continue reading

Large Scale Secure Sortition Part 1: Generating Randomness Collectively with In-Person Gatherings

Matthew Gray is a Mathematician, Software Engineer, and Theoretical Computer Scientist currently teaching at Renton Technical College after working at Microsoft Norway. His primary research interests are in Secure Multiparty Computation, Quantum Cryptography, and Coding Theory. Over the last year he has been researching how sortition can be conducted in secure and trustworthy ways.

Judging from the aftermath of contested elections around the world, if large numbers of people question the fairness of a sortition selection there could be dire consequences. Our current systems for generating the randomness needed for selections are not secure enough to silence those questions, especially when used to select national representatives. The current systems are all centralized and non-participatory, some are vulnerable to local cheating, and all are vulnerable to sabotage from well-resourced malicious actors, such as state security services. This article proposes a new option. It lays out a specific decentralized and participatory method of selecting representatives, first explaining how two people can fairly choose one of them to be selected and then showing how the method can be scaled up for larger selections. It also touches on some of the mathematics surrounding these methods.

Current systems for generating the randomness needed for drawings fall into two main categories. First are physical systems such as dice, floating balls, or names in hats. These work better in small communities where every member can show up and observe. But even in those spaces, if people distrust their neighbors, they will worry about the dice being weighted or someone sneaking extra copies of their name into the hat. Second are digital systems that take outside sources of randomness and process them into a final random result. These outside sources include stock market indexes, lava lamps, or cameras whose lenses have been painted over.

Digital systems tend to involve fairly complicated math, don’t feel that random, and aren’t interesting to look at. Also, because of the complicated math involved, there’s a chance that these processes aren’t actually random after all. Neither category produces systems that involve citizens or are particularly resilient to sabotage. Weighting dice or hacking a computer is easy. Manipulating the stock market is hard, but may not be beyond the abilities of a state security service. However, if we include everyone in the process of generating the randomness, we can create systems that have no single point of failure.

To introduce the ideas used by the system I am about to propose, let’s imagine that the team captains (Luka and Hugo) in the last FIFA World Cup didn’t trust the coin that was going to be used at the start of the match. One way they could generate the “coin flip” together is for both captains to bring their own coins and flip them simultaneously. If both coins land on the same side (i.e. both heads or both tails) then France wins the coin toss; if they land on different sides (i.e. heads-tails or tails-heads) then Croatia wins. What is important to note here is that even if one coin is weighted, as long as the other one is unweighted, the overall “coin flip” is fair.

Figure 1. The odds of each possible result when one captain brings an 80/20 coin, and the other brings a 50/50 coin.
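
For readers who want to check the arithmetic behind Figure 1, here is a short Python sketch (my own illustration, using the figure’s 80/20 bias) that computes the odds of the two coins matching:

```python
def combined_flip_odds(p_heads_a: float, p_heads_b: float):
    """Return (P[same side], P[different sides]) for two independent coins."""
    p_same = p_heads_a * p_heads_b + (1 - p_heads_a) * (1 - p_heads_b)
    return p_same, 1 - p_same

# One 80/20 weighted coin and one fair coin, as in Figure 1.
same, different = combined_flip_odds(0.8, 0.5)
print(f"France wins (same side):       {same:.2f}")      # 0.50
print(f"Croatia wins (different side): {different:.2f}")  # 0.50
```

Because the fair coin matches whichever side the weighted coin shows exactly half the time, the combined result comes out 50/50 no matter how heavily the other coin is weighted.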

Continue reading