Error handling: return values and exceptions

My colleague edulix started a discussion on the golang list about the merits of Go’s error handling. This got me thinking about the problem of error handling in general, and about how no language seems to have gotten it quite right. What follows is a quick braindump.

Two approaches

There are two prominent approaches to error handling in software engineering today, using return values and exceptions.

With return value error handling, errors are indicated through the values returned from functions; the caller has to check these values to detect error conditions. Originally, error returns were encoded as values falling outside the range corresponding to normal operation.

For example, when opening a file with fopen in C, the normal return is a pointer to an object representing the file. But if an error occurs, the function returns null (additional information about the error is available from a global variable, errno). Just as the caller of the function uses normal return values, it is also responsible for detecting and handling errors present in the return.

Exceptions establish a separate channel to relay error information back from functions. When using exceptions, a function’s return only holds values that correspond to normal behavior. In this sense the values returned by functions are more immediately identified with the function’s purpose; details regarding what can go wrong are separated into the exceptional channel.

For example, the purpose of Java’s parseInt function is to parse an integer. So the return value’s content is exclusively a consequence of this purpose: the integer parsed from the string. If something goes wrong while attempting to achieve this purpose, an exception holding the error details is thrown. Whereas with return values there is no need for additional machinery beyond that provided by normal function calls, exceptions require an extra mechanism: try/catch blocks.
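As a minimal illustration of the two styles (a hedged sketch in Python; the function name is made up for the example):

# Return-value style: the error is signalled in-band and the caller must check for it.
def parse_int_rv(s):
    try:
        return int(s)
    except ValueError:
        return None   # error encoded as a special value, much like fopen returning null

value = parse_int_rv("42x")
if value is None:
    print("could not parse input")   # the caller has to remember this check

# Exception style: the return value only ever holds the parsed integer;
# errors travel through a separate channel and are handled in a catch block.
try:
    value = int("42x")
except ValueError as error:
    print("could not parse input:", error)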

Exceptions’ claimed virtues


Exceptions try to declutter code (http://mortoray.com/2012/03/08/the-necessity-of-exceptions/)

Exceptions aim to be an improvement over return values. Here are some possible shortcomings of the return value approach:

  • The code is cluttered with error checking that reduces its readability and obscures intent.
  • Sometimes the scope at which an error occurs is not the best one to handle it. In these cases one has to manually propagate errors backwards, through several returns, until an adequate error handling scope is reached. This is cumbersome.
  • If one forgets to check for errors the program may continue to run in an inconsistent state, causing fatal errors down the line that may not be easy to understand.

Exceptions try to address these possible shortcomings:

  • With exceptions normal program flow is separated from error handling code, which lives in special catch blocks for that purpose. This declutters normal behaviour code, making intent clear and readable.
  • With exceptions, programmers have the option of handling errors at the right scope, without having to manually return across several levels. When exceptions are not handled at the scope where the error occurs, they are propagated automatically up the stack until a suitable handler is found.
  • If exceptions are not handled, the program crashes immediately instead of continuing in an inconsistent state. A stack trace shows the error, and the call stack leading to the error.

But it’s not all good

The story doesn’t end here, of course: exceptions have their problems. Some say that they are worse than the approach they were trying to improve upon. This is the position that many proponents of Go’s error handling defend, stating that returns with multiple values (a feature not originally present in C, hence errno and the like) are not only sufficient but simply better than exceptions:

  • Because exceptions propagate up and can be handled elsewhere, programmers are tempted to ignore errors and “let someone else deal with it”. This lazy behaviour can lead to errors being handled too late or not at all.
  • It is in principle impossible to know whether a given function call can throw an exception, short of analyzing its entire forward call tree. As a consequence, the use of exceptions in a language is equivalent to hidden gotos that can jump control back up the call stack at any function invocation. In contrast, when using return values, error information is formally present in the function signature.

Whereas before we listed exception propagation as an improvement (allowing error handling at the right scope), here it is listed as a weakness: it can encourage bad programming practice. But it is the second criticism that I find more significant: exceptions are opaque and may lead to unpredictable behaviour. With such an unpredictable ingredient, the programmer is unable to anticipate and control the potential paths his/her code can take, as lurking under every function call is the potential for a jump in execution back up the call stack.

Of course, in real world practice, exceptions are not the randomness disaster that this description may suggest[1]. Properly documented functions do communicate to the programmer, to a reasonable approximation, what can and cannot happen exception-wise. And properly implemented functions do not whimsically throw exceptions at every opportunity, “magically” making your code jump to a random location.

Good intentions: checked exceptions

What about checked exceptions, you say? The rationale for checked exceptions in Java is precisely one that formally addresses the very problem we just described, because

With checked exceptions, function calls must, in their type signature, specify what exceptions can be thrown.

Checked exceptions are part of a function’s type and the compiler ensures that exceptions are either caught or declared to be thrown (a use of the type system that fits well with what I’ve advocated here). Isn’t this exactly what’s needed? The subtle problem is that the phrase “caught or declared to be thrown” does not mean the same as “correctly handled”. The use of checked exceptions encourages lazy behaviour from programmers that makes things even worse. When forced by the compiler to deal with checked exceptions, programmers bypass it by

  • Indiscriminately adding throws clauses, which just add noise to the code
  • Writing empty try/catch blocks that never get populated with error handling code, potentially resulting in silent failures

The second problem is especially dangerous: it can result in exceptions being swallowed and silenced until some larger error occurs down the line, an error which will be very hard to diagnose. If you recall, this is precisely the third drawback we listed for error handling via return values. Both problems arise not due to the language feature itself, but out of imperfect programming.

The bottom line for a language feature is not what some ideal programmer can do with it, but the use it encourages in the real world. In this case the argument goes that checked exceptions (and as we saw above, error return values) make it harder to write correct error handling code; it requires additional discipline not to shoot oneself in the foot. Which is a shame, since the rationale behind checked exceptions is very appealing: documentation at the type level and compiler enforcement of proper error handling.

Quick recap. Exceptions aim to be an improvement over return values offering benefits mentioned above: code decluttering, error handling at the right scope via propagation and failing fast to avoid hidden errors. Of these three features, the second is questioned and undermined by the two points that proponents of return values make. Opacity in particular is a strong drawback. Checked exceptions aim to resolve this by formally including exception information in the function type and enforcing error handling through the compiler, but the industry consensus seems to be that they do more harm than good.

The way forward

It’s difficult to reach a general conclusion in favor of exceptions or return values; the matter is unclear. But let’s assume that exceptions are no good. Granted this assumption, it would not mean that the problems with return values magically went away. Is there some way, besides exceptions, to address these problems?

One approach I find promising achieves decluttering but within the confines of errors as return values, using a functional style. This technique exploits several programming language features: sum types, pattern matching and monads. But those are just details; what’s important is the end result. See this example in Rust:

The first two lines of the function are operations that can fail: they return a sum type that can represent either a success or a failure. The desired behaviour is such that if any error occurs at these lines, the function from_file should stop executing and return that error.

Notice how the logic that achieves this does not clutter the code; it is embedded in the try! macro, which uses pattern matching on the sum type to either return a value to be used in the next line (the file variable), or short-circuit program flow back to the caller. If everything works, the code eventually returns Ok. Let’s restate the main points (a rough Python sketch of the same pattern follows the list):

  • from_file has a sum type return that indicates that the function may fail.
  • the individual invocations within the function itself also return that type.
  • the try! macro extracts successful values from these invocations, or short-circuits execution, passing the error as the return of from_file.
  • the caller of the from_file function can proceed the same way or deal with errors directly (via pattern matching)
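The original Rust snippet is not reproduced here, but the same pattern can be sketched in Python with a small, made-up Result type; the check-and-return boilerplate written out below is exactly what Rust’s try! macro hides:

class Ok:
    def __init__(self, value): self.value = value

class Err:
    def __init__(self, error): self.error = error

def read_file(path):
    # An operation that can fail: it returns a sum-type-like value (Ok or Err), never raises.
    try:
        with open(path) as f:
            return Ok(f.read())
    except OSError as e:
        return Err(str(e))

def from_file(path):
    result = read_file(path)      # first fallible operation
    if isinstance(result, Err):
        return result             # short-circuit: pass the error back to the caller
    contents = result.value
    return Ok(len(contents))      # everything worked

# The caller can proceed the same way, or deal with errors directly.
outcome = from_file("example.txt")
if isinstance(outcome, Ok):
    print("length:", outcome.value)
else:
    print("error:", outcome.error)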

Besides the internal machinery and language features in use, I hope it’s clear that this style of handling errors mimics some of the positive aspects of exceptions without using anything but return values. Here is another example which has a similar[2] intent, this time in Scala

This code shows a series of operations that can each fail: getting a row from a database, getting one of its columns, and then doing a lookup on a map. Finally, if any of the steps fail a default value is assigned. The equivalent code with standard null checking would be an ugly and repetitive series of nested if-blocks checking for null. As in the previous example, this error checking logic is not cluttering the code, but occurs behind the scenes thanks to the Option monad. Again, no exceptions, just return values.

Error handling is hard. After years of using exceptions it is still controversial whether they are a net gain or a step back. We cannot expect that a language feature will turn up and suddenly solve everything. Having said that, it seems to me that the functional style we have seen has something to offer in at least one of the areas where traditional return values fall short. Time will tell whether this approach is a net gain.


Notes

[1] If real world application of exceptions were in fact disastrous, software written with exceptions would just never work, and apparently it does; exceptions are widely used in countless systems today.

So a reasonable position to take is that criticisms of exceptions are on the mark when pointing out that the exception mechanism is opaque and potentially unpredictable. This criticism does not mean that software written with exceptions is inherently flawed, but that exceptions make correct error handling hard. But then again, error handling is a hard problem to begin with. Does the opacity of exceptions nullify their advantages?

[2] In this case the problem is how to deal with failure in the form of null values.

[3] In Rust for example, the problem of swallowing errors is also addressed with #[must_use] , see http://doc.rust-lang.org/std/result/

Further reading

Exceptions

http://www.joelonsoftware.com/items/2003/10/13.html

http://www.lighterra.com/papers/exceptionsharmful/

http://mortoray.com/2012/03/08/the-necessity-of-exceptions/

Checked exceptions

http://www.artima.com/intv/handcuffs3.html

http://googletesting.blogspot.ru/2009/09/checked-exceptions-i-love-you-but-you.html

Rust

http://doc.rust-lang.org/std/io/index.html

http://rustbyexample.com/result/try.html

http://www.hoverbear.org/2014/08/12/Option-Monads-in-Rust/

Scala

http://danielwestheide.com/blog/2012/12/26/the-neophytes-guide-to-scala-part-6-error-handling-with-try.html

http://twitter.github.io/effectivescala/#Error handling-Handling exceptions

http://blog.protegra.com/2014/01/28/exploring-scala-options/

https://groups.google.com/forum/#!topic/scala-debate/K6wv6KphJK0%5B101-125-false%5D

Integer encoding of multiple-choice ballots (2)

In the last post we saw how simple arithmetic with the right choice of base can encode integer lists for multiple ballot choices in an elegant and compact way. A couple of points were mentioned in the notes section; one of them was:

our scheme allows encoding repeated choices, which are usually not legal ballots. For example, one can encode the choice 1,1,1 but that would be an illegal vote under all the voting systems we mentioned as examples. The fact that legal ballots are a subset of all possible encodings means that the encoding is necessarily suboptimal with respect to those requirements. We will look at an alternative scheme in the next post.

The alternative scheme we show below attempts to optimize the encoding by using a variable base as digits are encoded. The main idea is that as chosen integers in the list are encoded, the remaining ones are constrained, since they cannot be repeated. Thus the base can be reduced as fewer choices remain possible. In practice the process is complicated by the need to keep track of what digits correspond to what, as gaps form in the remaining integers and the corresponding meaning of digits changes. Here is a python implementation:
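The original implementation is not reproduced here; the following is a minimal sketch of the idea (function names and the exact digit convention are assumptions, so its outputs need not match the sample session quoted below):

def encode2(choices, n):
    # Variable-base encoding of a list of distinct 1-based choices out of n.
    # Once a choice is consumed it cannot appear again, so the base
    # (number of remaining choices plus one) shrinks by one at every step.
    remaining = list(range(1, n + 1))
    value = 0
    base = n + 1
    for c in choices:
        digit = remaining.index(c) + 1    # position of c among the still-available choices
        value = value * base + digit
        remaining.remove(c)
        base -= 1
    return value

def decode2(value, k, n):
    # Recover k choices; digits are read back in reverse order of encoding.
    bases = [n + 1 - i for i in range(k)]
    digits = []
    for base in reversed(bases):
        digits.append(value % base)
        value //= base
    digits.reverse()
    remaining = list(range(1, n + 1))
    choices = []
    for d in digits:
        c = remaining[d - 1]              # map the digit back to an actual choice
        choices.append(c)
        remaining.remove(c)
    return choices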

and a sample session using both encoders

Note the shorter value produced by the second encoder

encode: 14606467545964956303452810

encode2: 36697695360790800022

Despite the reduction, the second encoder is not optimal (the first encoder is optimal given repeatable choices); the range of output numbers is larger than that of legal ballots. It would be interesting to see how to obtain the most compact solution; a detailed analysis could compare these schemes systematically to get quantitative measures of space efficiency.


Integer encoding of multiple-choice ballots

Secure voting systems supporting privacy through encryption must encode ballot contents into integers before they can be encrypted[1]. This encoding step is mostly trivial. For example, imagine a yes-no-abstain ballot. One can simply apply the following mapping to yield integer plaintexts for corresponding ballots:

Yes => 1
No => 2
Abstain => 3

But things can get a bit more involved when dealing with multiple-selection ballots. These are ballots where the voter makes more than one choice. They can be either ranked ballots, where the voter specifies a preference relation over the selections, or unranked ballots, where no such preference exists. Examples of voting systems using the former are single transferable vote and instant runoff voting. Systems like approval voting or plurality at large are examples of the second type, using unranked ballots.

Imagine we are using one of these systems to elect a candidate out of a field of four: Alice, Bob, Charlie, and Donna. We first apply the trivial mapping:

Alice => 1
Bob => 2
Charlie => 3
Donna => 4

But how do we encode a complete ballot, for example, a ballot with (X corresponds to marked choices)

Alice X
Bob X
Charlie O
Donna O

Unlike the yes-no-abstain ballot above, the content of the ballot corresponds to a list of integers: 1 and 2. We could use the following mapping

encode: Alice, Bob => 1, 2 => 12

The ballot is encoded as the number 12, resulting from the concatenation of the string representations of each of the integers. But what if there are more candidates, and we need to encode something like:

encode: 11, 3 => 113

That won’t work, because 113 could represent either 11 and 3, or 1 and 13.

decode: 113 => ?

We can remedy this by adding some padding such that each choice is well separated:

encode: 11, 3 => “1103” => 1103

Then when decoding, we convert the integer to a string, split it every 2 characters, and finally obtain integers for each of the candidates:

decode: 1103 => “1103” => “11”, “03” => 11, 03

But there’s still a problem, what about this choice:

encode: 1, 13 => “0113” => 113

We run into trouble here, because the string “0113” corresponds to the integer 113; there is no mathematical difference between “0113” and “113”. To fix this, when decoding we can first check that the string length is a multiple of 2 (since we are using 2 chars per candidate integer); if it is not, we prepend the required zeros. The encode-decode process would be

encode: 1, 13 => “0113” => 113
decode: 113 => “113” (prepend zero) => “0113” => “01”, “13” => 1, 13

I hear you complain that all this concatenation, padding, and prepending looks a bit hackish. Can we do better?

Let’s go back to our first example, when we simply wanted to do

encode: Alice, Bob => 1, 2 => 12

This looked very nice and simple. Can’t we do something like this in general, without any string hackery? The first step is to go back to the definition of decimal numbers.

Decimal numbers (cplusplus.com)

In these terms, the encoding 1, 2 => 12 corresponds to

(10^1) * 1 + (10^0) * 2 = 12

Here we have expressed the encoding of 1, 2 using arithmetic, no string operations involved. The ballot choices are interpreted as digits according to the mathematical definition of decimal numbers. (In fact, this is what goes on under the covers when you convert a string like “12” into the number 12.) This gives us a purely arithmetical description of the simple mapping we started with. Things then got complicated when we considered the possibility of more choices (candidates) in the ballot. Let’s apply our mapping to that problematic ballot:

encode: 11, 3 => (10^1) * 11 + (10^0) * 3 = 113

Our new procedure fails the same way: the simple scheme where each digit represents one choice cannot be accommodated by the decimal digit representation of choices, and the result 113 is ambiguous. But wait, who says we have to encode according to a decimal representation? What if we were to map choices to hexadecimal digits:

encode: 11, 3 => (10^1) * B + (10^0) * 3 = B3

And we’ve restored simplicity and correctness to our scheme. B3 encodes the choice 11, 3 with one choice per digit and no ambiguity! If the B3 looks like cheating, just remember, B3 is a representation of a number that in decimal format turns out to be 179. The encode-decode process could just as well be written as

encode: 11, 3 => 179
decode: 179 => 11, 3

The bottom line is we can encode lists of integers into an integer provided we use the optimal base, which is equal to the number of possible choices in the ballot plus one.

Let’s revisit our original example, with Alice, Bob, Charlie and Donna. Since we have four candidates, our base is 4 + 1 = 5. The encoding is thus:

encode: Alice, Bob => 1, 2 => (10^1) * 1 + (10^0) * 2 = 12 (base 5) = 7 (decimal)

in short:

encode: 1, 2 => 7
decode: 7 => 1, 2

Note that not only is this method simpler with no string operations or padding, but the encoded values are smaller. Compare:

encode: 1, 2 => 12
encode: 11, 3 => 1103

with

encode: 1, 2 => 7
encode: 11, 3 => 179

This should not come as a surprise: encoding with the specified base is the most compact[2] encoding possible (proof left as an exercise for the reader). A larger base wastes space encoding ballot contents that are not possible, whereas a smaller base is insufficient to encode all possible ballots.

Finally, here is a python implementation of the encoder we have proposed
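The original snippet is not reproduced here; a minimal sketch of the proposed encoder (function names assumed) could look like this:

def encode(choices, n):
    # Interpret the 1-based choices as the digits of a number in base n + 1.
    base = n + 1
    value = 0
    for c in choices:
        value = value * base + c
    return value

def decode(value, n):
    base = n + 1
    choices = []
    while value > 0:
        choices.append(value % base)
        value //= base
    choices.reverse()
    return choices

For the examples above, encode([1, 2], 4) gives 7, and with 15 candidates (so base 16) encode([11, 3], 15) gives 179, matching the values worked out by hand.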

In the next post we will further discuss details as to the compactness of the encoding mentioned in [2].


[1] In the case of ElGamal encryption used in Agora Voting, the plaintext must be encoded into an element of the multiplicative subgroup G of order q of the ring Zp, where p and q are suitably chosen prime numbers. In order to do this, the plaintext must be first encoded into an integer, after which it is mapped to a member of G, and subsequently encrypted.

[2] A few caveats must be mentioned. First, we are using 1-based indices to represent choices, which means some values are unused. Second, our scheme allows encoding repeated choices, which are usually not legal ballots. For example, one can encode the choice 1,1,1 but that would be an illegal vote under all the voting systems we mentioned as examples. The fact that legal ballots are a subset of all possible encodings means that the encoding is necessarily suboptimal with respect to those requirements. We will look at an alternative scheme in the next post.

Voter fraud and bayesian inference – part 3

Here’s part1 and part2.

Welcome back. In the previous posts we saw how to do inference using the beta-binomial to get probabilities for the proportion of fake ballots in an election, as well as an upper bound on the probability that the election result is incorrect. We briefly mentioned the hypergeometric distribution but did not discuss it further nor use it.

Like the binomial (and beta-binomial), the hypergeometric distribution can be used to model the number of successes in a series of sampling events with a binary outcome. The distinction is that the binomial models sampling with replacement, whereas the hypergeometric models sampling without replacement. In other words, if we are sampling from a box, the binomial applies when the sample is returned to the box before drawing more samples. The hypergeometric applies when the sample is not returned. But wait, doesn’t that mean that we’ve been doing it wrong?

When auditing ballots we keep track of those already checked; a ballot is never audited twice. Shouldn’t we then be using the hypergeometric distribution? It turns out that the hypergeometric distribution approaches the binomial in the limit of a large total number of items compared to the number sampled. This fits our case, as we can only audit a limited number of ballots compared to all those cast.
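As a quick numerical check of that convergence (a sketch using scipy, not part of the original post):

from scipy.stats import binom, hypergeom

n, p = 10, 0.1                  # audit 10 ballots, 10% of all ballots are fake
for N in (50, 100, 1000, 10000):
    K = int(N * p)              # number of fake ballots in the box
    print(N,
          hypergeom.pmf(3, N, K, n),   # P(3 fake found), sampling without replacement
          binom.pmf(3, n, p))          # P(3 fake found), sampling with replacement
# As N grows, the hypergeometric values approach the fixed binomial value.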

[Figure: hypergeometric for increasing values of N. The bottom right is the corresponding beta-binomial.]

As we saw in the previous post, the beta distribution is a conjugate prior for the binomial, which makes inference very easy. Unfortunately this is not the case for the hypergeometric. But because of the converging behaviour seen above, we can stick to the beta-binomial’s easy calculations without sacrificing accuracy. For the sake of completeness we will quickly show the posterior for the hypergeometric, following [1]. Incidentally, this “manual calculation” is what allowed us to obtain the images above, through the javascript implementation in the jsfiddle.

[Equation: posterior for the number of fake ballots under the hypergeometric model, following [1]]

Again, this is just Bayes theorem with the hypergeometric likelihood function and a uniform prior. In [1] it is also pointed out that the normalization factor can be computed directly with this expression

[Equation: closed-form expression for the normalization factor, from [1]]

We use this in the jsfiddle implementation to normalize. Another thing to note is that the hypergeometric posterior is 0 at positions that are inconsistent with the evidence. One cannot have fewer successes than have been observed, nor more than are possible given the evidence. These conditions are checked explicitly in the implementation. Finally, the jsfiddle does not contain an implementation for obtaining the upper bound on the probability of election error; only code for the posterior is present. How about forking it and adding that yourself?
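For reference, here is a small Python sketch of that posterior calculation (using scipy rather than the jsfiddle’s javascript; the function name is made up):

from scipy.stats import hypergeom

def hypergeom_posterior(N, n, k):
    # P(K fake ballots in total | k fake found among n audited), for K = 0..N,
    # using a hypergeometric likelihood and a uniform prior (Bayes theorem).
    posterior = []
    for K in range(N + 1):
        # Positions inconsistent with the evidence get probability 0: we cannot
        # have fewer fakes than observed, nor so many that the audited sample
        # could not have contained n - k valid ballots.
        if K < k or (N - K) < (n - k):
            posterior.append(0.0)
        else:
            posterior.append(hypergeom.pmf(k, N, K, n))
    total = sum(posterior)      # normalization factor
    return [p / total for p in posterior]

# Example: 100 ballots in total, 10 audited, 3 of them found fake.
posterior = hypergeom_posterior(100, 10, 3)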

In these three posts we have used bayesian inference to calculate probabilities over the proportion of fake ballots, and from there to calculate probabilities that an election result was incorrect. These probabilities could be used to give stakeholders confidence that everything went well, or conversely to detect a possible fraud and invalidate an election.

I’ll finish the post by mentioning that the techniques we have seen here can be generalized beyond the special case of detecting fake ballots for plurality votes. For example, one could use bayesian inference to conduct ballot audits for the sake of checking tally correctness, not just for failures in authentication, but for errors in counting. See [2] for this kind of more general treatment.


[1] http://www.wmbriggs.com/public/HGDAmstat4.pdf

[2] https://www.usenix.org/system/files/conference/evtwote12/rivest_bayes_rev_073112.pdf

In this work, because the results of audits are not just binary, and because tallies are not only plurality, the authors use Dirichlet distributions and sampling using posteriors to project possible alternative tallies.

Voter fraud and bayesian inference – part 2

We left off the discussion with

We want to calculate the proportion of fake ballots in an election based on the results of limited audits. We have seen how the binomial and hypergeometric distributions give probabilities for the results of an audit given an assumption about the proportion of fake ballots. Bayes theorem can be used to calculate the inverse probability that we are after, once we have specified a prior.

Bayesian inference is a process that takes prior information, and adds evidence to obtain a posterior distribution. In our case this posterior distribution will be over the possible proportion of fake ballots in the set of all ballots. Let’s begin with the binomial case. What prior should we use? One answer is that, since we know nothing about the proportion of fake ballots we should be indifferent about each possibility. This translates into a uniform prior, where all proportions are equally likely. For example

P(proportion = fake ballots/ total ballots) = 1 / (total ballots + 1)

Since there are n + 1 possibilities for the number of fake ballots, we give each of them the same weight, which is 1 / (n + 1).

Beta + Binomial = Beta-Binomial

Before plugging this into Bayes theorem, a small technical detour. Notice how the prior is itself a probability distribution, defined over the 0.0 – 1.0 interval. That is, the minimum proportion (0.0) is no fake ballots and the maximum (1.0) is all fake ballots. It turns out there is a parametric probability distribution one can use for this interval; it’s called the Beta distribution. The Beta distribution has two parameters, alpha and beta. The neutral prior we defined above is equivalent to the Beta distribution with parameters (1, 1):

P(proportion) = 1 / (n + 1) = Beta(1, 1)

We could express other knowledge with different choices of alpha and beta. But what’s the point of using the Beta, besides having a convenient way to specify priors? The point is that the Beta distribution is a conjugate prior of the binomial distribution. This means that the posterior distribution, once having taken into account available evidence, is also a Beta distribution. Meaning that the calculation of the posterior is much easier, as inference is just a matter of mapping the values of parameters of the Beta to some other values. Here is the posterior of the Beta distribution when it is used as the prior of the binomial (this is called the beta-binomial model).

[Equations: derivation of the beta-binomial posterior]

Equations taken from [1]. The first line is just Bayes theorem, but the payoff is that the last line corresponds to a beta distribution, but with different parameters. In summary

Beta(alpha, beta) prior + evidence of k successes in n trials => Beta(alpha + k, beta + n – k) posterior

with a beta prior, bayesian inference reduces to remapping the initial parameters alpha and beta, to alpha + k and beta + n – k, where k is the number of successes and n is the number of trials. Conjugate priors are an algebraic convenience that allow obtaining analytic expressions for posteriors easily. End of detour, please refer to [1] for further details.

Armed with the beta-binomial, obtaining the posterior given some audit results is simple. If we audited 10 ballots and 3 of them were fake, our posterior would simply be

P(proportion = p | fake audit count = 3 out of 10)

= Beta(1 + 3, 1 + 10 – 3)

= Beta(4, 8)

Here’s what Beta(4, 8) looks like:

[Figure: the Beta(4, 8) distribution]

Note how the peak of the distribution is at 0.3; this makes sense since in the sample 3 out of 10 ballots were invalid. Evidence has transformed our initial uniform prior into the distribution seen above. This meets our original objective: a way to judge how many ballots are fake in the entire set based on limited audits. But it’s not the end of the story. We would also like an estimate as to whether or not the election result is correct. As we said previously, this estimation can be used either as a guarantee that all went well or, in the opposite case, to detect a problem and even invalidate the results.


The probability that an election result was correct, given uncertainty about fake ballots, depends on two things. One is the proportion of ballots that are fake; this is what we already have a calculation for. The other is the actual election outcome, specifically a measure of how close the result was. The reason is simple: if the election was close, a small number of invalid ballots could cast doubts on its correctness. Conversely, if the result was a landslide, the presence of fake votes has no bearing on its correctness. For our purposes we will stick with a simple example in which the election decides between two options via simple plurality.

Call the difference between the winning and losing option d

d = winner votes – loser votes

In order for the election to be wrong, there must be a minimum of d fake votes. The existence of d fake votes does not imply that the result was wrong, but d fake votes are a necessary condition. Thus the probability that the number of fake votes is greater than or equal to d represents an upper bound on the probability that the election was incorrect. Call this E (for error):

P(proportion of fake votes >= d / total votes) = E

(upper limit on the probability that the election was wrong)

We have P(proportion); it is the posterior we got above. How do we get P(proportion >= some constant)? Through the beta distribution’s cumulative distribution function, which is defined in general as

CDF(x) = P(proportion <= x)

In order to reverse the inequality, we just need to subtract it from 1 (gives us the tail distribution). We finally have

Probability of incorrect result

= P(proportion >= d / total ballots)

= 1 – Cumulative Distribution Function of P(d / total ballots)

 One final correction. Because we have sampled a number of votes with known results, we must apply our calculations to the remaining ballots.

P(E) = 1 – CDF((d – fake ballots found in the sample) / (total ballots – sampled ballots))

Let’s try an example, an election between option A and option B with the following numbers.

Votes for A = 550

Votes for B = 450

Total ballots = 1000

Audited ballots = 100

Audited fake ballots = 4

which gives

Posterior = Beta(5, 97)

d = 100

Minimum fraction of fake votes required to change the result = (100 – 4) / (1000 – 100) = 0.1066

Upper bound on probability of error

= 1 – CDF(Beta(5, 97), 0.1066)

= 0.01352

In conclusion, the probability of error due to fake ballots in this election is less than or equal to 1.35%.
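The same calculation can be sketched in Python with scipy (rather than the jStat/jsfiddle code mentioned below), using the numbers from the example:

from scipy.stats import beta

total, audited, fake_found = 1000, 100, 4
d = 550 - 450                                 # margin between the winner and the loser

# Posterior over the proportion of fake ballots: Beta(1 + 4, 1 + 100 - 4)
posterior = beta(1 + fake_found, 1 + audited - fake_found)

# Minimum fraction of fake votes among the unaudited ballots needed to flip the result
threshold = (d - fake_found) / (total - audited)

upper_bound = 1 - posterior.cdf(threshold)    # roughly 0.0135, the 1.35% quoted above
print(threshold, upper_bound)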


You can find a javascript implementation for everything we’ve seen until now in this jsfiddle. Calculations for the binomial, beta, hypergeometric and cumulative distribution function are done with the jStat javascript library. Wait, you say, what about the hypergeometric? We’ll leave that for the next post, which should be pretty short.


[1] http://www.cs.cmu.edu/~10701/lecture/technote2_betabinomial.pdf

Voter fraud and bayesian inference

Wikipedia says

Personation  is a term used in law for the specific kind of voter fraud where an individual votes in an election, whilst pretending to be a different elector.

When someone practices personation multiple times to cast multiple votes we are talking about ballot stuffing. In this post we will consider an election in which authentication is not 100% secure, where personation is difficult but not impossible. Furthermore, we will assume there is some available, but costly, method by which ballots can be audited to determine whether or not they were cast via personation or were in fact valid.

What makes the problem non-trivial is that ballot auditing is costly and cannot in practice be performed for the entirety of the ballots cast. Hence we would like to estimate, from a limited number of audited ballots, how severe ballot stuffing was for an election. This estimation can be used either as a guarantee that all went well or, in the opposite case, to detect a problem and even invalidate the results.

What we need is a mathematical model that, given some information about the results of an auditing process, allows us to estimate the proportion of “fake” ballots in the set of all those cast. In other words, we are talking about statistical inference; in this post we will use a bayesian approach. Let’s get to work.

Imagine we have a box with all the ballots for an election, and the auditing process consists in randomly taking one out and determining whether it is valid or not, recording the result, and then repeating a limited number of times. After we have recorded the results of all the audits, we would like to know how many of the ballots in the entire box are fake. Two things come to mind. First, the result of each audit is binary: we either get FAKE or VALID. Second, if the proportion of fake ballots in the box is p, then the probability that a randomly chosen ballot is fake is p; the probability that it is valid is 1 – p.

The auditing process as a whole yields a count of fake ballots and a count of valid ballots. If we have probability p for the result of a single audit, can we assign a probability to the count resulting from the complete audit process? Yes, the binomial distribution and its brother the hypergeometric distribution do just that[1]. Here they are:

P(X = k) = C(n, k) * p^k * (1 – p)^(n – k)  (binomial)

P(X = k) = C(K, k) * C(N – K, n – k) / C(N, n)  (hypergeometric)

where n is the number of ballots audited, p is the proportion of fake ballots, N is the total number of ballots and K is the number of fake ballots among them.

In our example, k above corresponds to the count of fake ballots. So these distributions give us a way to calculate the probability that a specific number of fake ballots is obtained in the audit assuming a certain proportion of fake ballots in the entire election. For example, let’s say we have 100 ballots total and we know that 10 of them are fake. What is the probability that if we audited 10 ballots we would obtain a fake count of 3?

P(X = 3) = 0.057395628 (binomial)

P(X = 3) = 0.0517937053324283 (hypergeometric)

Only about 5%; it is unlikely we’d find 3 fake ballots with only 10 audits, given that there are only 10 fake ballots out of 100 in total.
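These two values can be reproduced with scipy (a quick check, not part of the original post):

from scipy.stats import binom, hypergeom

# 100 ballots in total, 10 of them fake, 10 ballots audited
print(binom.pmf(3, 10, 10 / 100))       # about 0.0574, sampling with replacement
print(hypergeom.pmf(3, 100, 10, 10))    # about 0.0518, sampling without replacement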

We have a way to calculate the probability of some outcome given some assumption about the proportion of fake ballots. But remember, what we want is exactly the opposite: given a certain result for an audit, we’d like to estimate the proportion of fake ballots in the entire set. It is this inversion that makes the problem a case of bayesian inference, and our friend Bayes theorem gives us the relationship between what we have and what we want.

In our case, it translates to

P(Proportion of fake ballots | Audit Result) = P(Audit Result | Proportion of fake ballots) * P(Proportion of fake ballots) / P(Audit Result)

What we were calculating before is P(Audit Result | Proportion of fake ballots), which we can plug into the formula, together with the other terms, to get what we want: P(Proportion of fake ballots | Audit Result). The other terms are

P(Audit Result) =

The unconditional probability that a certain audit result occurs. It can be calculated by summing over all possible proportions, as this version of Bayes theorem shows:

P(Ai | B) = P(B | Ai) * P(Ai) / Σj P(B | Aj) * P(Aj)

as can be seen in the bottom term. Because the bottom term is common to all values of Ai, it can be interpreted as a normalizing constant that ensures that probabilities sum to 1.

P(Proportion of fake ballots) =

The prior probability that some proportion of fake ballots occurs. This ingredient is crucial for Bayesian inference and the Bayesian approach in general. It is an estimate of the quantity we want to calculate that is prior to any of the evidence we obtain. It can be used to encode prior knowledge about what we are calculating. If we have no knowledge, we can try to encode that in a “neutral prior”. This last point is a very deep problem in Bayesian inference, as is the general problem of choosing priors. We won’t go into it in detail here.

Recap. We want to calculate the proportion of fake ballots in an election based on the results of limited audits. We have seen how the binomial and hypergeometric distributions give probabilities for the results of an audit given an assumption about the proportion of fake ballots. Bayes theorem can be used to calculate the inverse probability that we are after, once we have specified a prior. See it in action in the next post.


[1] There is an important difference, the binomial distribution models sampling with replacement, whereas the hypergeometric models sampling without replacement. In the next post we will consider this difference and its significance for our problem.

The “Straw-Scotsman” pattern of ideological debate

Recently I had a short exchange on twitter on the subject of feminism. Reflecting on the nature of the disagreement, I realized that the structure of the arguments conformed to a pattern I had seen many times before, but never identified. It is a pattern that arises frequently in ideological disputes; I will mnemonically call it the “Straw-Scotsman” pattern.

Suppose Alice and Bob are debating about ideology X. Alice is a supporter of X, whereas Bob is a detractor.

Bob proceeds to criticize X:

Bob: I find ideology X unsatisfactory because of its properties a, b, c.

Alice retorts that Bob is mischaracterizing X:

Alice: Ideology X does not in fact have the properties a,b,c that you are wrongly assigning to it. You should inform yourself about what X really is before criticizing it.

Bob finds this to be a disingenuous answer:

Bob: You’re just dodging my criticisms by redefining X in a way that suits your argument. It seems to me that whatever criticism one could make of X you would simply reply that the real X is not like that.

In the language of fallacies, the pattern can be succinctly described with

  • From Alice’s point of view, Bob is committing the straw man fallacy by attacking a position that does not in fact correspond to X.
  • From Bob’s point of view, Alice is committing the no true Scotsman fallacy, by responding to any criticism of X by saying that the real X is not at all like that.

I offer no resolution here. From both points of view the opponent is engaging in fallacies; the argument is a stalemate and leads nowhere. The pattern typically also takes the following form when discussing the merits of ideologies in terms of historical outcomes.

Alice criticizes X using historical examples:

Alice: Ideology X is flawed, one only needs to look at what happened in the following examples a,b,c where it was applied and led to disastrous results.

Bob responds:

Bob: I disagree, examples a,b,c only show that the implementation of X was flawed. X was applied incorrectly or not at all, and it is this that led to bad results. However, if one applies X correctly, the results would be satisfactory.

A response which Alice finds unsatisfactory:

Alice: You could always dodge any criticism of a real world case of X by insisting that the implementation was wrong, rather than that X itself is faulty.

In this manifestation the Straw-Scotsman pattern is best summed up as

  • When criticizing opposing ideologies people refer to their real world implementations, whereas when defending their own, they insist on its idealized form.

I realize now that I have seen this pattern occur many times when people debate, for example, communism and libertarianism.

Liquid democracy and spectral theory

In this post we will show some practical examples of how liquid democracy can be understood in terms of, and make use of results from spectral graph theory. For more background please see [Vigna2009]. Wikipedia says:

In mathematics, spectral graph theory is the study of properties of a graph in relationship to the characteristic polynomial, eigenvalues, and eigenvectors of matrices associated to the graph, such as its adjacency matrix or Laplacian matrix.

What does this have to do with liquid democracy? To answer this, let’s remember what defines liquid democracy: it is a system of transitive proxy voting. Proxy in that I can delegate my vote to some entity. Transitive because that entity can itself delegate the vote, and so on. Imagine a simple case with three voters, Alice, Bob, and Charlie. Alice delegates to Bob, and Bob delegates to Charlie. It is very natural to represent this as a graph; this is what we call the delegation graph:

[Figure: the delegation graph]

Alice chooses Bob, Bob chooses Charlie

Assuming each voter starts with one vote, this would give us the following weights for each voter:

Alice = 1, Bob = 2, Charlie = 3

Because Alice and Bob have delegated their votes, Charlie will end up casting three votes: one for himself, one for Bob and one for Alice. What determines these weights is the structure of the graph; who is voting for whom is encoded in the connections between vertices. In graph theory, the object that encodes this information is the adjacency matrix. Here’s the adjacency matrix for our example:

    A  B  C
A [ 0  1  0 ]
B [ 0  0  1 ]
C [ 0  0  0 ]

Where each row shows who each voter delegated to. Alice (A) delegated to Bob (B), hence the 1 in the second position of the first row. Similarly, Bob (B) delegated to Charlie, (C) as can be seen in the second row. Because Charlie did not delegate, the third row is all zeroes.

We can express the liquid tally above with these equations (the 1s represent voters’ initial weights):

0*A + 0*B + 0*C + 1 = A

1*A + 0*B + 0*C + 1 = B

0*A + 1*B + 0*C  + 1 = C

Note how the 0s and 1s above correspond to the columns of the adjacency matrix. The above can be represented[1] in matrix form:

(A B C) * AdjacencyMatrix = (A B C)

This is an eigenvalue equation, whose eigenvector (the (A B C) row vector) corresponds to the result of the liquid tally. Note how the equation is recursive, which fits the recursive nature of transitive delegation. Vote weights are calculated in terms of vote weights themselves, the same way each delegate transmits votes that result from previous delegations.

When the adjacency matrix is used to evaluate the importance or influence of nodes in a graph in this way we are speaking of eigenvector centrality. Our example shows that calculating centrality is basically equivalent to tallying in liquid democracy. This is what makes the connection between spectral theory and liquid democracy.

Eigenvector centrality, Katz and Pagerank

Yes, that’s PageRank as in Google; in case you thought this whole talk of centrality was a bit abstract, it’s what made Google what it is today. Eigenvector centrality, Katz centrality, and PageRank are related methods to measure the importance of a node in a network. We won’t go into the differences between each measure, besides noting that both Katz and PageRank include an attenuation factor that decreases contributions from distant nodes, whereas eigenvector centrality does not.

In order to run some examples we will use the networkX library, which includes several functions for calculating centrality as well as many other useful features. If you want to play along with this code you will need to install the library as well as its dependencies, including matplotlib and numpy. If you’re on windows I suggest downloading the comprehensive WinPython package which includes everything you need here.

Let’s first create the graph corresponding to our example with Alice, Bob and Charlie. Here’s the code that does that
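(The listing below is a minimal sketch of this step with networkx; node names and drawing details are assumptions.)

import networkx as nx
import matplotlib.pyplot as plt

# Delegation graph: an edge u -> v means "u delegates to v"
G = nx.DiGraph()
G.add_edges_from([("A", "B"), ("B", "C")])   # Alice -> Bob, Bob -> Charlie

nx.draw_networkx(G)
plt.show()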

This generated the image shown earlier. Now to calculate the tally, we will run both Katz and PageRank.
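A sketch of that calculation (exact parameters are assumptions; see the notes below on reversing the graph for Katz and on rescaling PageRank):

# Continuing with the graph G built above. The graph is reversed for Katz because
# the networkx 1.8.1 implementation used a right eigenvector; newer versions
# changed this convention, so the reversal may not be needed there.
katz = nx.katz_centrality(G.reverse(), alpha=1.0, beta=1.0)
pagerank = nx.pagerank(G, alpha=1.0)

# Per the post, both match the A = 1, B = 2, C = 3 tally (PageRank after rescaling).
print(katz)
print(pagerank)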

which gives us

Both results match the tally we showed before. A couple of minor points about the above. First, the PageRank result was rescaled to make it match Katz. Second, the adjacency matrix for Katz was reversed, as the networkx 1.8.1 Katz implementation uses a right eigenvector (this has been changed to a left eigenvector in master).

More importantly, the alpha parameter is a damping factor. In the language of liquid democracy, it modulates just how transitive delegation is by reducing contributions the further away they originate. For example, let’s change the above to alpha = 0.5:

Now Charlie receives 25% of Alice’s vote and 50% of Bob’s vote. So alpha quantifies the fraction of the vote that is effectively delegated. We can interpret then that a liquid democracy tally is a special case of Katz centrality and PageRank. In fact, liquid democracy is the limiting case of Katz and PageRank when alpha = 1.0, ie no damping (which is why you get viscous democracy in [Boldi2011]).

What about cycles?

One of the first things you have to deal with if you’ve implemented a liquid tallying algorithm is the possibility of cycles in the delegation graph, otherwise the procedure will blow up. Having detected a cycle at tally time the standard treatment is to consider votes that enter it as lost. In order to prevent that undesirable situation you can do cycle detection at vote time to warn the user that his/her vote may produce such a cycle.

What happens if we add a cycle to our example? Let’s try it

[Figure: the delegation graph with a cycle added]

and we get

The reason this happens has to do with the details of the algorithm that calculates eigenvectors; in particular the relationship between its convergence and the attenuation factor alpha[2]. The short story is this: using an attenuation factor of 1.0 on a graph with cycles may cause problems.

Just as liquid tally algorithms have to deal with cycles, so do we in order to make centrality work correctly. Fortunately there are fast algorithms to detect cycles in graphs. NetworkX offers an implementation of an improved version of Tarjan’s strongly connected components algorithm; we will use it to define a function that removes cycles in a graph:
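A sketch of such a function (assumed to be close in spirit to the one used in the post): it drops the delegation edges inside each strongly connected component, so votes entering a cycle are treated as lost.

import networkx as nx

def remove_cycles(graph):
    dag = graph.copy()
    for component in nx.strongly_connected_components(dag):
        if len(component) > 1:
            # remove all delegation edges that stay inside the cycle
            edges = [(u, v) for u, v in dag.edges()
                     if u in component and v in component]
            dag.remove_edges_from(edges)
    return dag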

Using this function we can obtain liquid tallies for any delegation graph correctly, using either Katz or PageRank. See the bottom of this post for the full python script demonstrating this.

Liquid democracy and degree (or branching factor)

Before we said that liquid democracy is the limiting case of Katz centrality and PageRank when alpha = 1.0. In the last section we saw another requirement besides that of alpha = 1.0: that the delegation graph must be acyclic, in other words a DAG. There is one more property that we can consider, degree.

A node’s degree is the number of (in our case, outward) connections with other nodes. In terms of delegation, it is the number of delegates that a voter chooses. Standard liquid democracy uses degree = 1, but such a requirement could in theory be relaxed. How does this fit in with Katz and PageRank? Let’s construct a graph where voters may choose one or two delegates.
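The exact graph used in the post is not shown here; as an assumed example in the same spirit, suppose Alice delegates to Bob and Bob splits his delegation between Charlie and Donna:

import networkx as nx

# Assumed example: A -> B, and B delegates to both C and D.
H = nx.DiGraph()
H.add_edges_from([("A", "B"), ("B", "C"), ("B", "D")])

katz = nx.katz_centrality(H.reverse(), alpha=1.0)   # reversed as in the earlier example
pagerank = nx.pagerank(H, alpha=1.0)

# Per the argument below, PageRank divides B's outgoing weight between C and D,
# so after rescaling each should end up with 2 votes (their own plus half of B's 2).
# Katz does not divide the outgoing weight, which inflates C and D.
print(katz)
print(pagerank)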

which gives

[Figure: delegation graph where one voter delegates to two delegates]

resulting values

We see how Katz centrality does not yield a correct tally, as it does not divide outgoing weights for voters who split their delegation among two delegates; instead we get inflated weights. But the PageRank result does work: Bob’s two votes are split correctly, and the delegation proceeds normally from then on.

In summary

  • Liquid democracy is a special case of Katz centrality given
    • a damping factor alpha = 1.0
    • a directed acyclic graph of degree d = 1
  • Liquid democracy is a special case of PageRank given
    • a damping factor alpha = 1.0
    • a directed acyclic graph of degree d >= 1

That’s it for our quick tour of the relationship between liquid democracy and spectral theory. We have also seen how liquid democracy could be extended to include damping (as in [Boldi2011]), or to allow “multi delegation”.


Notes/References

[Vigna2009] Sebastiano Vigna - Spectral Ranking  http://arxiv.org/abs/0912.0238

[Page1998] The PageRank Citation Ranking: Bringing Order to the Web http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf

[Boldi2011] Viscous Democracy for Social Networks http://chato.cl/papers/boldi_bonchi_castillo_vigna_2011_viscous_democracy_social_networks.pdf

[1] For simplicity, I have ignored the initial weights associated with each voter in the matrix equation. These initial weights are what makes liquid tallying equivalent to undamped Katz centrality rather than eigenvector centrality.

[2] For details as to alpha and PageRank See http://vigna.di.unimi.it/ftp/papers/PageRankFunctional.pdf section 5 Limit behaviour.

In the case of the networkX implementation of Katz centrality an alpha of 1.0 is guaranteed to converge as all eigenvalues of an acyclic graph are 0 (see http://www.emis.de/journals/JIS/VOL7/Sloane/sloane15.pdf and http://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.centrality.katz_centrality.html#networkx.algorithms.centrality.katz_centrality)

Python Script: Random liquid Tallying with Katz/PageRank
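The script itself is not included here; the following is a compact sketch of what such a script plausibly does (random delegation graph, cycle removal, then a Katz/PageRank tally; all names and parameters are assumptions):

import random
import networkx as nx

def random_delegation_graph(num_voters, delegation_prob=0.7):
    # Each voter delegates to one other randomly chosen voter with some probability.
    G = nx.DiGraph()
    G.add_nodes_from(range(num_voters))
    for v in range(num_voters):
        if random.random() < delegation_prob:
            G.add_edge(v, random.choice([u for u in range(num_voters) if u != v]))
    return G

def remove_cycles(graph):
    dag = graph.copy()
    for component in nx.strongly_connected_components(dag):
        if len(component) > 1:
            dag.remove_edges_from([(u, v) for u, v in dag.edges()
                                   if u in component and v in component])
    return dag

def liquid_tally(graph):
    dag = remove_cycles(graph)
    katz = nx.katz_centrality(dag.reverse(), alpha=1.0)   # reversed as discussed above
    pagerank = nx.pagerank(dag, alpha=1.0)
    return katz, pagerank

if __name__ == "__main__":
    G = random_delegation_graph(10)
    katz, pagerank = liquid_tally(G)
    print(katz)
    print(pagerank)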

Causation and determinism

In this post I will try to disentangle the notions of determinism and causality, and suggest a different way to think of them. I came to think of these issues via the following informal assertion

The decay of a radioactive atom has no cause

I will not be discussing hidden variable theories nor Bell’s inequalities; I will assume outright that the phenomenon of radioactive decay is intrinsically random (as opposed to “apparent randomness” induced by ignorance), and that its quantum mechanical description is the most complete model possible; said model assigns probabilities to outcomes. With that out of the way, the usual argument that arrives at the above statement is

1) Radioactive decay is intrinsically random, indeterminate and cannot be predicted

2) There is no physical factor which determines whether a radioactive atom decays or not

3) Therefore, that a specific atom decays has no cause

Although the argument makes sense I am hesitant to accept 3) as is, and what it implies about how we think of causality.

Causality has been confusing minds for hundreds of years; it is a very difficult subject, as evidenced by the volumes that have been written on it. So there’s not much point in trying to figure out what causality means exhaustively, via conceptual analysis in the tradition of analytic philosophy. Instead we will just quickly define causality using the mathematics of causal models, and see where that takes us for a specific scenario. In the context of these models, we will define causality according to two complementary questions:

A) what is meant by “the effect of a cause”

B) what is meant by “the cause of an effect”

Two types of causal models have been developed over the last thirty years, causal bayesian networks and structural causal models. These two formalisms are largely equivalent[4]; both make use of graph representations. Vertices in these graphs correspond to the variables under study whereas edges represent causal influences between the variables.  Guided by these graphs, one follows precise procedures to obtain mathematical expressions for causal queries over variables. These expressions are cast in the language of probability.

From [Pearl2000] page 15

In this post I will refer to structural causal models as described in Pearl’s Causality: Models, Reasoning, and Inference [Pearl2000]. To begin, we have the definition[2]

[Definition: causal model, from [Pearl2000]]

In the above, D is a directed graph whose edges represent causal influences; these influences are quantitatively specified by functions (structural equations) on variables. Finally, probabilities are assigned to variables not constrained by functions; these are the exogenous variables.

The effect of a cause

Given a structural causal model, question A) can be answered with the following result

[Definition: causal effect, from [Pearl2000]]

alternatively

The difference E(Y | do(x')) – E(Y | do(x'')) is sometimes taken as the definition of “causal effect” (Rosenbaum and Rubin 1983)

The causal effect of changing the variable x' => x'' on y is defined as the difference in expectation of the value that y will take. Note how the formalism includes explicit notation for interventions, do(x).

The cause of an effect

Question B) looks at it from a different point of view. Instead of asking what the effects of some cause are, we ask what the cause of some effect is; it’s a question of attribution. These questions naturally assume the form of counterfactuals (wikipedia):

Counterfactuals

A counterfactual conditional is a conditional (or “if-then”) statement indicating what would be the case if its antecedent were true.

For example

If it were raining, then he would be inside.

Before you run off screaming “metaphysics!”, “non-falsifiability!” or other variants of hocus pocus, rest assured: counterfactuals have clear empirical content. In fact, what grants counterfactuals their empirical content is the same assumption that allows confirmation of theories via experiments: that physical laws are invariant. Counterfactuals make predictions just the same way as experiments validate hypotheses. If I say

“if you had dropped the glass it would have accelerated downwards at g”

I am also saying that

“If you now drop the glass, it will accelerate downwards at g”

Given that I make the assumption that all relevant factors remain equal (ie, gravity has not suddenly disappeared).

The following result allows us to answer queries about counterfactuals[3]:

[Theorem 7.1: computing counterfactuals in a structural causal model, from [Pearl2000]]

Once we have expressions for counterfactuals, we can answer questions of type B), with the following results. Note that these results are expressed in terms of counterfactuals, which is why one needs theorem 7.1 as a prerequisite.

[Definitions 9.2.1 and 9.2.2: probability of necessity and probability of sufficiency, from [Pearl2000]]

This completes the brief listing of key results for the purposes of the discussion.

Consequences

So what was the point of pasting in all these definitions without going into the details? The point is that given the formalisms of these models and their associated assumptions[5], we can think quantitatively about questions A) and B), without going into the nightmare of trying to figure out what causality “really means” from scratch. Our original criteria now have assumed a quantitative form:

A) The difference in expectation on some value Y when changing some variable X

B1)The probability that some variable X is a necessary requirement for the value of some observed variable Y

B2) The probability that some variable X is a sufficient requirement for the value of some observed variable Y

Thankfully our example of a radioactive atom is very simple compared to the applications causal models were designed for; for our purposes we do not need to work hard to identify the structure nor the probabilities involved, as these are given to us by the physics of nuclear decay.

Feynman diagram for Beta- decay (wikipedia)

Having said this, we construct a minimal model for eg. negative beta decay with the following two variables

r: The neutron-proton ratio, with values High, Normal, Low (using some arbitrary numerical threshold)

d: Whether β- decay occurs at some time t, with values True, False

Our questions, then, are

Q1) What is the causal effect of r=High on d?

Q2) What is the probability of necessity P(N) of r = High, relative to the observed effect d = True?

Q3) What is the probability of sufficiency P(S) of r = High, relative to the observed effect d = True?

In order to interpret the answers to the above questions we must first go into some more details about causality and the models we have used.

General causes, singular causes, and probabilities

Research into causality has distinguished two categories of causal claims:

General (or type-level) causal claims:

Drunk driving causes accidents.

Singular (or token-level) causal claims:

The light turning on was caused by me flipping the switch.

General claims describe overall patterns in events; singular claims describe specific events. This distinction brings us to another consideration. The language in which causal models yield expressions is that of probability. We have seen probabilities assigned to the value of some effect, as well as probabilities assigned to the statement that a cause is sufficient, or is necessary. But how do these probabilities arise?

Functional causal models are deterministic; the structural equations that describe causal mechanisms (graph arrows) yield unique values for variables as a function of their influences. On the other hand, the exogenous variables, those that are not specified within the model but rather are inputs to it, have an associated uncertainty. Thus probabilities arise from our lack of knowledge about the exact values that these external conditions have. The epistemic uncertainty spreads from the exogenous variables throughout the rest of the model.

[Pearl2000] handles the general/singular dichotomy elegantly: there is no crisp border, rather there is a continuous spectrum as models range from general to specific, corresponding to how much uncertainty exists in the associated variables. A general causal claim is one where exogenous variables have wide probability distributions; as information is added these probabilities are tightened and the claim becomes singular. In the limit, there is no uncertainty, the model is deterministic.
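As a toy illustration of this spectrum (mine, with an invented structural equation and invented numbers): a deterministic equation d = f(r, u), where all uncertainty lives in an epistemic distribution over the exogenous u, and tightening that distribution moves the claim from general towards singular.

# Deterministic structural equation: decay occurs iff the atom's unknown
# internal state u exceeds a ratio-dependent threshold. The form of f and the
# numbers are invented purely for illustration.
def f(r, u):
    threshold = {"High": 0.1, "Low": 0.99}[r]
    return u > threshold  # d = f(r, u)

def p_decay(r, p_u):
    """P(d = True | do(r)) under an epistemic distribution p_u = [(value, prob), ...] over u."""
    return sum(prob for u, prob in p_u if f(r, u))

wide = [(i / 100, 0.01) for i in range(100)]  # general claim: u largely unknown
narrow = [(0.5, 1.0)]                         # singular claim: u pinned down by added information

print(p_decay("High", wide))    # ~0.9: a probabilistic, general-flavoured claim
print(p_decay("High", narrow))  # 1.0: no uncertainty left, the deterministic limit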

We can go back to question 1) whose answer can be interpreted without much difficulty.

Q1) What is the causal effect of r=High (high nuclear ratio) on d (decay)?

If physics is correct, having a certain value for r will increase the expectation of d being equal to True, relative to some other value for r. This becomes a general causal claim:

A1) High nuclear ratio causes Beta- decay

So, relative to our model, high nuclear ratio is a cause of Beta- decay. Note that we can say this despite the fact that decay is intrinsically indeterministic. Even though the probabilities are of a fundamentally different nature, the empirical content is indistinguishable from any other general claim with epistemic uncertainty. Hence, in this particular case determinism is not required to speak of causation.

The more controversial matter is attribution of cause for a singular indeterministic phenomenon, which is where we began.

3) A specific atom decay has no cause

This is addressed by questions 2) and 3).

Q2) What is the probability of necessity P(N) of r = High, relative to the observed effect d = True?

Q3) What is the probability of sufficiency P(S) of r = High, relative to the observed effect d = True?

Recall, functional causal models assign probabilities that arise from uncertainty in exogenous variables; this is what we see in definitions 9.2.1 and 9.2.2. The phrase “probability of sufficiency/necessity” conveys that sufficiency/necessity is a determinate property of the phenomenon; it’s just that we don’t have enough information to identify it. Therefore, in the singular limit these properties can be expressed as logical predicates:

Sufficiency(C, E): Cause => Effect

Necessity(C, E): Effect => Cause

In the case of the decay of a specific atom at some time, the causal claims become completely singular, and definitions 9.2.1 and 9.2.2 reduce to evaluations of whether the above predicates hold. If we assume that atoms with low nuclear ratio do not undergo Beta- decay, our answers are:

A2) High nuclear ratio is a necessary cause of Beta- decay

A3) High nuclear ratio is not a sufficient cause of Beta- decay

Thus the truth of the statement that the decay of a radioactive atom has no cause depends on whether you are interested in sufficiency or necessity. In particular, that the atom would not have decayed were it not for its high nuclear ratio suggests this ratio was a cause of its decay.

But let’s make things more complicated: let’s say there is a small probability that atoms with low nuclear ratios show Beta- decay. We’d then have to say that (remember, relative to our model) the decay of a specific atom at some time has no cause, because neither the criterion of sufficiency nor that of necessity is met.

The essence of causality, determinism?

We can continue to stretch the concept. Imagine that a specific nuclear ratio for a specific atom implied a 99.99% probability of decay at some time t, and also that said probability of decay for any other nuclear ratio were 0.001%. Would we still be comfortable saying that the decay of such an atom had no cause?

Singular indeterministic events are peculiar things. They behave according to probabilities, like those of general causation, but are fully specified, like instances of singular deterministic causation. Can we not just apply the methods and vocabulary of general causation to singular indeterministic events?

In fact, we can. We can modify functional causal models so that the underlying structural equations are stochastic, as mentioned in [Pearl2000] section 7.2.2. Another method, found in [Steel2005], is to add un-physical exogenous variables that account for the outcomes of indeterministic events. Either can be swapped into regular functional models. This should yield equivalents of definitions 9.2.1 and 9.2.2, where probabilities of sufficiency and necessity are replaced with degrees, giving corresponding versions of A2) and A3).
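To make this concrete, here is a sketch (mine, not taken from the references) of the second approach, using response variables that say what each atom would do under each value of r. The probabilities are the invented ones from the thought experiment above, and treating the two responses as independent is an extra simplification on my part:

import random

P_DECAY_IF_HIGH = 0.9999   # invented, from the thought experiment above
P_DECAY_IF_LOW = 0.00001   # invented

def sample_atom():
    """One atom = its would-be responses under r = High and r = Low."""
    return {"High": random.random() < P_DECAY_IF_HIGH,
            "Low": random.random() < P_DECAY_IF_LOW}

def degrees(n=200_000):
    pn_num = pn_den = ps_num = ps_den = 0
    for _ in range(n):
        atom = sample_atom()
        if atom["High"]:               # atoms that decay given a high ratio...
            pn_den += 1
            pn_num += not atom["Low"]  # ...and would not have decayed otherwise
        if not atom["Low"]:            # atoms that do not decay given a low ratio...
            ps_den += 1
            ps_num += atom["High"]     # ...and would decay were the ratio high
    return pn_num / pn_den, ps_num / ps_den

pn, ps = degrees()
print(f"degree of necessity ~ {pn:.4f}, degree of sufficiency ~ {ps:.4f}")

Under these numbers both degrees come out close to 1, which is the sense in which I would rather say such a decay was “mostly caused” by the high ratio than that it had no cause.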

In this approach, singular causation is not an all-or-nothing property; it comes in degrees. Just as general causal claims are expressed with epistemic probabilities, singular causal claims are expressed in terms of ontological probabilities. In this picture, saying that a particular radioactive decay had no cause would be wrong. Instead, perhaps we could say that a specific decay was “partially” or “mostly” caused by some property of that atom, rather than that there was no cause.

I believe this conception of causality is more informative. Throwing out causation just because probabilities are not 100% is excessive and misleading; it ignores regularities and discards information that has predictive content. The essence of causation, I believe, is not determinism but counterfactual prediction, which banks on regularity, not certainty. It seems reasonable to extend the language we use for general causes onto singular ones, as their implications have the same empirical form. Both make probabilistic predictions, both can be tested.

No cause

What would it mean to say that some event has no cause, according to this interpretation? It would mean that the event is entirely unaffected by, and independent of, any of the universe’s state; no changes made anywhere would alter the probabilities we assign to its occurrence. Such an event would be effectively “disconnected” or “transparent”.

Pollock – Lavender Mist Number 1

We could even imagine a completely causeless universe, where all events would be of this kind. It is not easy to see what such a strange place would look like. The most obvious possibility would be a chaotic universe, with no regularities. If we described such a universe as an n-dimensional (e.g. 3+1) collection of random variables, a causeless universe would exhibit zero interaction information, and zero intelligibility, as if every variable resulted from an independent coin flip. But it is not clear to me whether this scenario necessarily follows from the causeless-universe assumption.


References

[Pearl2000] http://www.amazon.com/Causality-Reasoning-Inference-Judea-Pearl/dp/0521773628

[Pearl2009] http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf

[Steel2005] http://philoscience.unibe.ch/documents/causality/Steel2005.pdf

[2] http://bayes.cs.ucla.edu/BOOK-2K/ch2-2.pdf

[3] http://www.cs.ucla.edu/~kaoru/ch7-final

[4] http://www.mii.ucla.edu/causality/?p=571

[5] See eg causal markov condition, minimality, stability

[6] ftp://ftp.cs.ucla.edu/pub/stat_ser/r393.pdf — “The essential ingredient of causation, as argued in Pearl (2009:361), is responsiveness, namely, the capacity of some variables to respond to variations in other variables, regardless of how those variations came about.”

[7] Laurea and her tolerance

Logic and intuition in math and chess

I recently came across a post about the roles of logic and intuition in mathematics. It presented ideas originally expressed by mathematician Henri Poincaré, as found in his book The Value of Science. In one fragment, Poincaré suggests an analogy between math and chess playing:

If you are present at a game of chess, it will not suffice, for the understanding of the game, to know the rules for moving the pieces. That will only enable you to recognize that each move has been made conformably to these rules, and this knowledge will truly have very little value. Yet this is what the reader of a book on mathematics would do if he were a logician only. To understand the game is wholly another matter; it is to know why the player moves this piece rather than that other which he could have moved without breaking the rules of the game. It is to perceive the inward reason which makes of this series of successive moves a sort of organized whole. This faculty is still more necessary for the player himself, that is, for the inventor.

In this analogy, Poincaré suggests mapping logic to the rules of chess and intuition to “the inward reason which makes of this series of successive moves a sort of organized whole”.

I find this analogy flawed. Chess is an adversarial environment, not a benign one. What makes chess play deep is not the difficulty of finding a sequence of legal moves that achieves a certain outcome, but of finding a sequence of legal moves that achieves said outcome despite the opponent’s responses. In mathematics there is no adversary; any set of legal inferences suffices. In chess the source of difficulty is not conforming to rules, but exerting more optimization power over the board’s state than your opponent.

Mapping logic to the rules of chess misses the better analogy found in the mental process of finding those moves. It is this process, this “inward reason”, that itself exhibits both components, varying from the mostly logical to the mostly intuitive. This makes for a more natural analogy, seeing that these concepts map very well onto the already existing chess ideas of tactics vs strategy.

Thus logic maps to tactics, and intuition maps to strategy. And we can recover the same properties Poincaré mentions about invention and proofs. A chess player may make strategic judgements about overall courses of action or positions, but victory itself must always materialize with tactical play.

Another matter is how logic and intuition correspond to brain processes. Intuition has the property that you cannot describe explicitly exactly how you arrived at a conclusion, whereas logic is always explicit. Add a bit of mind projection fallacy into the mix and you start getting fancy ideas about the Power of Intuition.