Liquid democracy and spectral theory

In this post we will show some practical examples of how liquid democracy can be understood in terms of, and make use of results from, spectral graph theory. For more background please see [Vigna2009]. Wikipedia says:

In mathematics, spectral graph theory is the study of properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated to the graph, such as its adjacency matrix or Laplacian matrix.

What does this have to do with liquid democracy? To answer this, let’s remember what defines liquid democracy: it is a system of transitive proxy voting. Proxy in that I can delegate my vote to some entity. Transitive because that entity can itself delegate the vote, and so on. Imagine a simple case with three voters, Alice, Bob, and Charlie. Alice delegates to Bob, and Bob delegates to Charlie. It is very natural to represent this as a graph; this is what we call the delegation graph.

Alice chooses Bob, Bob chooses Charlie

Assuming each voter starts with one vote, this would give us the following weights for each voter:

Alice = 1, Bob = 2, Charlie = 3

Because Alice and Bob have delegated their votes, Charlie will end up casting three votes: one for himself, one for Bob and one for Alice. What determines these weights is the structure of the graph; who is voting for whom is encoded in the connections between vertices. In graph theory, the object that encodes this information is the adjacency matrix. Here’s the adjacency matrix for our example:


    A  B  C
A ( 0  1  0 )
B ( 0  0  1 )
C ( 0  0  0 )

Where each row shows who each voter delegated to. Alice (A) delegated to Bob (B), hence the 1 in the second position of the first row. Similarly, Bob (B) delegated to Charlie (C), as can be seen in the second row. Because Charlie did not delegate, the third row is all zeroes.

We can express the liquid tally above with these equations (the 1’s represent voters’ initial weights):

0*A + 0*B + 0*C + 1 = A

1*A + 0*B + 0*C + 1 = B

0*A + 1*B + 0*C + 1 = C

Note how the 0’s and 1’s above correspond to the columns of the adjacency matrix. The above can be represented[1] in matrix form:

(A B C) * AdjacencyMatrix = (A B C)

This is an eigenvalue equation, whose eigenvector (the (A B C) row vector) corresponds to the result of the liquid tally. Note how the equation is recursive, which fits the recursive nature of transitive delegation. Vote weights are calculated in terms of vote weights themselves, the same way each delegate transmits votes that result from previous delegations.
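We can check this numerically. The following sketch (plain numpy; the variable names are my own) solves the tally equation including the initial weights mentioned in footnote [1], i.e. the system w = w·M + 1:

```python
import numpy as np

# Adjacency matrix of the delegation graph: row i has a 1 in column j
# when voter i delegates to voter j (A -> B, B -> C, C -> nobody).
M = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

ones = np.ones(3)  # every voter starts with one vote

# The tally satisfies w = w @ M + ones, i.e. w @ (I - M) = ones.
# Solving the transposed system yields the row vector w.
w = np.linalg.solve((np.eye(3) - M).T, ones)
print(w)  # [1. 2. 3.] -- Alice, Bob, Charlie
```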

When the adjacency matrix is used to evaluate the importance or influence of nodes in a graph in this way we are speaking of eigenvector centrality. Our example shows that calculating centrality is basically equivalent to tallying in liquid democracy. This is what makes the connection between spectral theory and liquid democracy.

Eigenvector centrality, Katz and PageRank

Yes, that’s PageRank as in Google. In case you thought this whole talk of centrality was a bit abstract: it’s what made Google what it is today. Eigenvector centrality, Katz centrality, and PageRank are related methods to measure the importance of a node in a network. We won’t go into the differences between each measure, besides noting that both Katz and PageRank include an attenuation factor that decreases contributions from distant nodes, whereas eigenvector centrality does not.

In order to run some examples we will use the NetworkX library, which includes several functions for calculating centrality as well as many other useful features. If you want to play along with this code you will need to install the library as well as its dependencies, including matplotlib and numpy. If you’re on Windows I suggest downloading the comprehensive WinPython package, which includes everything you need.

Let’s first create the graph corresponding to our example with Alice, Bob and Charlie. Here’s the code that does that
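The original snippet is not reproduced here; a minimal reconstruction using NetworkX (the node names and output file are my own choices) might look like:

```python
import networkx as nx
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line to use plt.show()
import matplotlib.pyplot as plt

# An edge X -> Y means "X delegates to Y"
G = nx.DiGraph()
G.add_edges_from([("Alice", "Bob"), ("Bob", "Charlie")])

nx.draw(G, with_labels=True, node_color="lightblue")
plt.savefig("delegation_graph.png")
```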

This generated the image shown earlier. Now to calculate the tally, we will run both Katz and PageRank.
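With a recent NetworkX (where the Katz implementation already uses a left eigenvector, so no matrix reversal is needed), the tally calculation might look like this:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("Alice", "Bob"), ("Bob", "Charlie")])

# Undamped Katz: alpha = 1.0, with each voter's initial weight beta = 1.0.
# normalized=False keeps the raw vote weights.
katz = nx.katz_centrality(G, alpha=1.0, beta=1.0, normalized=False)

# PageRank with no damping; the result is proportional to the tally.
pagerank = nx.pagerank(G, alpha=1.0)

print(katz)      # Alice: 1, Bob: 2, Charlie: 3
print(pagerank)  # proportional to 1 : 2 : 3
```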

which gives us the weights 1, 2 and 3 for Alice, Bob and Charlie.

Both results match the tally we showed before. A couple of minor points about the above. First, the PageRank result was rescaled to make it match Katz. Second, the adjacency matrix for Katz was reversed, as the networkx 1.8.1 Katz implementation uses a right eigenvector (this has been changed to a left eigenvector in master).

More importantly, the alpha parameter is a damping factor. In the language of liquid democracy it modulates just how transitive delegation is, by reducing contributions the further away they originate. For example, let’s change the above to alpha = 0.5:
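Sketching that change (same graph as before; with this alpha each delegation hop halves the transmitted weight):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("Alice", "Bob"), ("Bob", "Charlie")])

# Damped Katz: contributions halve with each delegation hop.
katz = nx.katz_centrality(G, alpha=0.5, beta=1.0, normalized=False)
print(katz)  # Alice: 1.0, Bob: 1.5, Charlie: 1.75
```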

Now Charlie receives 25% of Alice’s vote and 50% of Bob’s vote. So alpha quantifies the fraction of the vote that is effectively delegated. We can then interpret a liquid democracy tally as a special case of Katz centrality and PageRank. In fact, liquid democracy is the limiting case of Katz and PageRank when alpha = 1.0, i.e. no damping (which is why you get viscous democracy in [Boldi2011]).

What about cycles?

One of the first things you have to deal with if you’ve implemented a liquid tallying algorithm is the possibility of cycles in the delegation graph; otherwise the procedure will blow up. Having detected a cycle at tally time, the standard treatment is to consider votes that enter it as lost. In order to prevent that undesirable situation you can do cycle detection at vote time, to warn the user that his/her vote may produce such a cycle.

What happens if we add a cycle to our example? Let’s try it
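For example, closing the delegation chain back into a cycle (the exact exception depends on the NetworkX version; recent versions raise PowerIterationFailedConvergence):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("Alice", "Bob"), ("Bob", "Charlie"),
                  ("Charlie", "Alice")])  # Charlie delegates back to Alice

try:
    nx.katz_centrality(G, alpha=1.0, beta=1.0, normalized=False)
except nx.PowerIterationFailedConvergence:
    print("power iteration failed to converge")
```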


and we get an error: the power iteration fails to converge.

The reason this happens has to do with the details of the algorithm that calculates eigenvectors; in particular the relationship between its convergence and the attenuation factor alpha[2]. The short story is this: using an attenuation factor of 1.0 on a graph with cycles may cause problems.

Just as liquid tally algorithms have to deal with cycles, so do we in order to make centrality work correctly. Fortunately there are fast algorithms to detect cycles in graphs. NetworkX offers an implementation of an improved version of Tarjan’s strongly connected components algorithm; we will use it to define a function that removes cycles in a graph.
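A sketch of such a function (remove_cycles is my own name; it discards every delegation inside a cycle, matching the rule that votes entering a cycle are lost):

```python
import networkx as nx

def remove_cycles(G):
    """Return a copy of G with all edges that lie inside a cycle removed."""
    H = G.copy()
    # Materialize the strongly connected components first, since we
    # mutate H below. SCCs with more than one node contain cycles.
    sccs = [scc for scc in nx.strongly_connected_components(H) if len(scc) > 1]
    for scc in sccs:
        # Edges whose endpoints both lie in the same SCC are part of a cycle.
        H.remove_edges_from([(u, v) for u, v in H.edges(scc) if v in scc])
    # A self-delegation is a one-node cycle; drop those too.
    H.remove_edges_from(list(nx.selfloop_edges(H)))
    return H
```

After removing cycles the graph is acyclic, so the alpha = 1.0 calculations above converge.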

Using this function we can obtain liquid tallies for any delegation graph correctly, using either Katz or PageRank. See the bottom of this post for the full python script demonstrating this.

Liquid democracy and degree (or branching factor)

Before we said that liquid democracy is the limiting case of Katz centrality and PageRank when alpha = 1.0. In the last section we saw another requirement besides that of alpha = 1.0: that the delegation graph must be acyclic, in other words a DAG. There is one more property that we can consider, degree.

A node’s degree is the number of (in our case, outward) connections with other nodes. In terms of delegation, it is the number of delegates that a voter chooses. Standard liquid democracy uses degree = 1, but such a requirement could in theory be relaxed. How does this fit in with Katz and PageRank? Let’s construct a graph where voters may choose one or two delegates.
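One possible construction (my own example: Bob splits his delegation between two further voters, Charlie and David):

```python
import networkx as nx

# Alice delegates to Bob; Bob splits his delegation in two.
G = nx.DiGraph()
G.add_edges_from([("Alice", "Bob"),
                  ("Bob", "Charlie"), ("Bob", "David")])

katz = nx.katz_centrality(G, alpha=1.0, beta=1.0, normalized=False)
pagerank = nx.pagerank(G, alpha=1.0)

print(katz)      # Bob's full weight reaches *both* delegates: 1, 2, 3, 3
print(pagerank)  # proportional to 1 : 2 : 2 : 2 -- the correct split
```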

which gives


resulting values

We see that Katz centrality does not yield a correct tally: it does not divide outgoing weights for voters who split their delegation between two delegates, so we get inflated weights. The PageRank result, however, does work: Bob’s two votes are split correctly, and the delegation proceeds normally from then on.

In summary

  • Liquid democracy is a special case of Katz centrality given
    • a damping factor alpha = 1.0
    • a directed acyclic graph of degree d = 1
  • Liquid democracy is a special case of PageRank given
    • a damping factor alpha = 1.0
    • a directed acyclic graph of degree d >= 1

That’s it for our quick tour of the relationship between liquid democracy and spectral theory. We have also seen how liquid democracy could be extended to include damping (as in [Boldi2011]), or to allow “multi delegation”.


[Vigna2009] Sebastiano Vigna – Spectral Ranking

[Page1998] Page, Brin, Motwani, Winograd – The PageRank Citation Ranking: Bringing Order to the Web

[Boldi2011] Boldi, Bonchi, Castillo, Vigna – Viscous Democracy for Social Networks

[1] For simplicity, I have ignored the initial weights associated with each voter in the matrix equation. These initial weights are what makes liquid tallying equivalent to undamped Katz centrality rather than eigenvector centrality.

[2] For details on alpha and PageRank, see section 5, Limit behaviour.

In the case of the NetworkX implementation of Katz centrality, an alpha of 1.0 is guaranteed to converge, as all eigenvalues of an acyclic graph are 0.

Python Script: Random Liquid Tallying with Katz/PageRank

Vote-time loop detection and transitive closure


Previously we have spoken about vote-time loop detection as a mechanism to prevent votes from being lost, as the tally in Agora will discard votes that enter a cycle.

Vote-time loop detection is a mechanism that detects cycles just as they are formed, and prevents them from being established. A feedback message instructs the voter that his/her choice is invalid as it would create a loop, and the user must make another selection. How does this detection algorithm work? Essentially in the same way as it does in the tally algorithm, except that it only checks loops starting from delegation choices just as they are made by a user, on demand.

But this is not the entire picture, for the following reasons:

  • Delegation loops do not invariably result in lost votes. In the event that any of the members in the cycle votes directly, that member will channel all the delegation weight present in the loop. Still, loops are in principle undesirable.
  • Instead of allowing the user to select an invalid (see the previous point) option and then requiring them to change it, the system could identify problem delegates before the user selects one, pre-emptively.
  • Following the point above, the vote-time loop detection algorithm would have to operate differently, as the starting point would not be a specific choice of delegate, but rather all possible choices. Using the tally-time algorithm would be computationally expensive.

This leads us to two different possibilities, pre-emptive vote-time loop detection, and reactive vote-time loop detection. Reactive vote-time loop detection corresponds to our earlier notion of vote-time loop detection. When a user selects a delegate, the system runs an algorithm that, based on that selected delegate, detects where a loop exists. That is, loop detection begins from the path User => Delegate-choice. This algorithm can proceed by walking the graph, and keeping a record of already visited nodes.
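For standard degree-1 delegation this reactive check is a simple walk along the chain of delegates; a sketch (the function and parameter names are illustrative, not from any particular implementation):

```python
def creates_loop(delegations, voter, delegate):
    """Would adding voter -> delegate close a delegation loop?

    delegations maps each voter to their current delegate; voters who
    have not delegated are simply absent from the mapping.
    """
    seen = {voter}
    current = delegate
    while current is not None:
        if current in seen:
            return True  # we walked back to an already visited node
        seen.add(current)
        current = delegations.get(current)
    return False

# Bob -> Charlie and Charlie -> Alice already exist:
existing = {"Bob": "Charlie", "Charlie": "Alice"}
print(creates_loop(existing, "Alice", "Bob"))      # True: closes a loop
print(creates_loop(existing, "David", "Charlie"))  # False
```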

In pre-emptive vote-time loop detection, the system must calculate, for every user of the application, what choices of delegate would result in a delegation loop.  Thus the path begins at each possible user. Because such a calculation is expensive, it should be done “offline”, prior to the display of the delegate choices. The result of such a calculation is then stored so that it can be looked up to determine problem delegates whenever presenting the delegation vote interface.

So what does this data look like? It must contain all the delegation graph information necessary to carry out loop detections cheaply, for all possible users. In effect, this is a matter of graph reachability, and the structure we are looking for is the transitive closure of the relation defined by all existing delegate votes (that is, the delegation graph). The transitive closure of a relation can take the form of a binary adjacency matrix, where a ‘1’ represents that one vertex is reachable from the other, and ‘0’ otherwise.


The transitive closure (TC) of a directed graph can be calculated using Warshall’s algorithm[3], which runs in O(n^3) time. Let’s consider how this could be implemented. First, consider that the transitive closure can change whenever the delegation graph changes, that is, whenever a delegate vote is cast, changed or removed. So a naive approach would be to calculate the TC each time this happens, and store it for use during lookup. Since the TC is an adjacency matrix, it can be easily represented as a database table with two columns.
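For reference, Warshall’s algorithm is only a few lines; a Python sketch over a boolean matrix:

```python
def warshall(n, edges):
    """Transitive closure of a directed graph on nodes 0..n-1,
    returned as an n x n boolean matrix."""
    tc = [[False] * n for _ in range(n)]
    for u, v in edges:
        tc[u][v] = True
    # If i reaches k and k reaches j, then i reaches j.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if tc[i][k] and tc[k][j]:
                    tc[i][j] = True
    return tc

tc = warshall(3, [(0, 1), (1, 2)])
print(tc[0][2])  # True: 0 reaches 2 through 1
```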

Lookup is also very easy, as transitive reachability is encoded directly in the adjacency matrix. To determine if a delegate D will cause a loop if selected by user U, we simply have to ask if U is reachable from D, because if it were, the extra path would cause a loop. If stored in an RDBMS table, the SQL to determine if D is problematic is simply

which would return 1 row if U is reachable and 0 rows otherwise.

But perhaps a naive approach would be inefficient. For every delegated vote cast this would entail:

  • Loading all delegated votes (the entire delegation graph)
  • Running Warshall’s algorithm

A better solution would be to calculate the transitive closure incrementally, such that only the varying parts of the TC are recalculated. Even better, if the delegation graph is stored in a database, it would be more efficient if the calculation could be done locally on the database without having to load votes and then run the algorithm.

These two desirable properties bring us to the paper Maintaining Transitive Closure of Graphs in SQL, where such a method is described. As an added benefit, it is formulated in terms of simple SQL, requiring none of the specialized graph extensions[2] that some databases offer. We can implement this method as two stored procedures[1] that serve to incrementally maintain the TC table.
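The stored procedures themselves are listed in footnote [1]; the core idea behind the insertion step can be sketched in Python, treating the TC as a set of (source, destination) pairs (this is my paraphrase of the incremental idea, not the paper’s SQL):

```python
def tc_add_edge(tc, u, v):
    """Update transitive closure tc (a set of (src, dst) pairs)
    after inserting the edge u -> v."""
    # Everything that already reaches u (plus u itself) ...
    sources = {a for (a, b) in tc if b == u} | {u}
    # ... now reaches everything reachable from v (plus v itself).
    targets = {b for (a, b) in tc if a == v} | {v}
    tc |= {(a, b) for a in sources for b in targets}
    return tc

tc = set()
tc = tc_add_edge(tc, 1, 2)
tc = tc_add_edge(tc, 2, 3)
print(sorted(tc))  # [(1, 2), (1, 3), (2, 3)]
```

Edge removal is the harder case, which is why the method needs a second procedure for deletions.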

As an example, we will construct the TC for the following graph


which is reflected in the following sql session


This session shows the edge insertions and the resulting transitive closure table. Now imagine that delegate 4 has decided to delegate his vote. What delegates would cause a loop? The answer is a straightforward lookup on the TC table:

which returns the rows with values 1, 2, 3, 5, since any of these will result in a loop.

What delegates would cause a loop for delegate 2? We have

 which returns rows with values 1, 5.

As can be seen from these examples, detecting possible loops using the TC table is very easy, provided the TC is maintained properly as delegate votes are cast, changed or removed.


[1] Following are two stored procedures based on this method, implemented for MySQL.

Adding an edge


Removing an edge

[2] See also

[3] See also

Handshakes and the young Gauss

If a group of n people meet, and everyone shakes hands with everyone else, how many handshakes occur?

I posed this problem to a friend, to see if she would give me the correct answer, which is n(n – 1) / 2. It’s easy to see why this formula is correct: every one of the n persons must shake hands with n – 1 other persons, giving the expression n(n – 1). We must correct this because it double counts handshakes, counting once for each source and again for each target. To fix the double counting we simply divide by 2, which yields n(n – 1) / 2.

My friend went another route, instead solving the problem in sequence: the first person must shake hands with n – 1 people. But the second person, whose handshake with the first person has already been counted, has only n – 2 handshakes left. This process is repeated, giving a sum formula like so:

n – 1 + n – 2 + n – 3 + … + 1

You can easily verify that both formulas give the same result by testing for several values of n. Thus the sum formula should be reducible to n(n – 1) / 2. The trick to make this reduction is to group the terms of the sum formula into pairs that add to the same value. The pairing is done from the “outer extremes” moving inward; for example, with

n – 1 + n – 2 + n – 3 + …. + 3 + 2 + 1

we can make the following pairs

n – 1 + 1,  n – 2 + 2,  n – 3 + 3…

and note that each of these pairs add to n. Using a concrete example, with n = 5:

4 + 3 + 2 + 1 = (4 + 1) + (3 + 2)

when n is odd, there are (n – 1) / 2 pairs that each sum to n, which gives

(n – 1) / 2 * n = n(n – 1) / 2

If n is even, as for example with n=6:

5 + 4 + 3 + 2 + 1 = (5+0) + (4+1) + (3+2)

here there are n/2 pairs that each sum to n – 1:

(n – 1) * n/2 = n(n – 1) / 2
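As a quick sanity check, we can verify the reduction numerically for both odd and even n:

```python
def handshakes(n):
    """Closed form: n people, everyone shakes everyone else's hand."""
    return n * (n - 1) // 2

for n in range(2, 20):
    # The sequential count: 1 + 2 + ... + (n - 1)
    assert sum(range(1, n)) == handshakes(n)

print(handshakes(5))  # 10
```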

So the sum formula reduces to the original answer. Astute readers will have noticed that the sum formula is just the sum analog of the factorial operation. Values obtained from this sum formula are called triangle numbers, with notation Tn[1]. Triangle numbers have a similar geometric interpretation to square numbers; adding the next value in the series is equivalent to adding an extra “layer” to the previous triangle:

Triangle numbers (wikipedia)

As we have seen, triangle numbers solve the handshake problem, and many other problems that have the same abstract form: how many connections are necessary to connect a group of n nodes such that every node is connected to every other node. This abstract formulation manifests itself for example in computer or telephone networks, yielding Metcalfe’s law and thus also related to the network effect.

Telephone connections (wikipedia)

But besides these relationships, the handshake problem also reminds me of the Gauss Schoolroom anecdote, where a young Gauss and classmates were tasked with adding the numbers from 1 to 100 to keep them occupied. To the teacher’s surprise, young Gauss solved the problem immediately. He too had made the reduction we’ve seen above, and quickly obtained said sum. What Gauss did not know, and I realized thanks to my friend’s answer, is that he was solving the handshake problem as well!


[1] Triangle numbers are in fact of the form n(n + 1)/2, so they give the number of handshakes for n + 1 people, not n

The most incomprehensible thing about the universe is that it is comprehensible

It’s a quote by Albert Einstein, which is where we left off last time. Comprehensible translates to, for example, mathematically intelligible, regular or lawful. These are different ways to say that it is possible to arrive at descriptions of the world that allow us to understand it and make predictions. Einstein’s point was that there is no particular reason to expect the universe to be the way it is, i.e. following elegant mathematical laws. It could have just as well been a chaotic mess impossible to make sense out of.

Regularity in the Sierpinski triangle

It’s hard to tell whether it’s even meaningful to speak of the way the universe could have been without speaking of how the universe and its characteristics arise. Indeed, one of the deepest questions in physics is, why does the universe have the laws it has? (Second only to why is there something rather than nothing?)

But imagine for the moment that the universe was in fact a messy chaos. Well, one thing seems clear, that kind of universe would not contain life, because life is one of the most obvious examples of order and regularity (or if you like, life requires order and regularity to exist), and intelligent life is precisely the kind of life that requires most order.

The point is that our very existence screens off the possibility of a non-regular universe, it is impossible for us to observe anything different because we would not have existed under those circumstances. This point is known as the anthropic principle. Does it answer the question? Not really; the anthropic principle has been labeled as unscientific and metaphysical by critics. You have to be careful to not take the point too far. In this case I’m just saying that life implies a selection effect to the universe it inhabits.

But again, that does not answer the question. However, if we additionally postulate that there isn’t one universe, but many, the situation makes some sense:

Alice: Why is the universe comprehensible?

Bob: The thing is, there isn’t just one, there are many, so it turns out that some of them are comprehensible, just like in a lottery someone must end up winning.

Alice: But what about the coincidence that we landed precisely on a comprehensible one?

Bob: That’s not a coincidence, our very existence implies that the universe we are in must be orderly. We couldn’t have landed in any other one.

Alice: So it’s a combination of those two things that answers the question; the anthropic principle alone is not enough.

Bob: Yes

Although in fact the question is still not answered because we had to postulate the existence of many universes, and we could in turn ask ourselves why that is the case. Oh well.