Logic and intuition in math and chess

I recently came across a post about the roles of logic and intuition in mathematics. It presented ideas originally expressed by mathematician Henri Poincaré, as found in his book The Value of Science. In one fragment, Poincaré suggests an analogy between math and chess playing:

If you are present at a game of chess, it will not suffice, for the understanding of the game, to know the rules for moving the pieces. That will only enable you to recognize that each move has been made conformably to these rules, and this knowledge will truly have very little value. Yet this is what the reader of a book on mathematics would do if he were a logician only. To understand the game is wholly another matter; it is to know why the player moves this piece rather than that other which he could have moved without breaking the rules of the game. It is to perceive the inward reason which makes of this series of successive moves a sort of organized whole. This faculty is still more necessary for the player himself, that is, for the inventor.

In this analogy, Poincaré suggests mapping logic to the rules of chess and intuition to “the inward reason which makes of this series of successive moves a sort of organized whole”.

I find this analogy flawed. Chess is an adversarial environment, not a benign one. What makes chess play deep is not the difficulty of finding a sequence of legal moves that achieves a certain outcome, but of finding a sequence of legal moves that achieves said outcome despite the opponent’s responses. In mathematics there is no adversary; any set of legal inferences suffices. In chess the source of difficulty is not conforming to rules, but exerting more optimization power over the board’s state than your opponent.

Mapping logic to the rules of chess misses a better analogy found in the mental process of finding those moves. It is this process, this “inward reason”, that itself exhibits both components, ranging from the mostly logical to the mostly intuitive. This makes for a more natural analogy, since those concepts map very well to the already existing chess ideas of tactics versus strategy.

Thus logic maps to tactics, and intuition maps to strategy. We can then recover the same properties Poincaré mentions about invention and proofs: a chess player may make strategic judgements about overall courses of action or positions, but victory itself must always materialize through tactical play.

Another matter is how logic and intuition correspond to brain processes. Intuition has the property that you cannot explicitly describe how you arrived at a conclusion, whereas logic is always explicit. Add a bit of mind projection fallacy into the mix and you start getting fancy ideas about the Power of Intuition.

Occam’s razor in a cellular physics universe

A cellular automaton (image from http://www.noyzelab.com/)

A cellular automaton (CA) is an algorithm acting on cells in a grid at discrete time steps. Cells can typically be in one of two states, on or off. At each step, the CA computes the new state of each cell as a function of the states of its neighbors. Here is a simple example of how the new cells are calculated from the old ones:

In this example, the new cell is shown below the three input cells (its neighbors) above it. The image at the top of this post shows the evolution of a CA, displaying the new cells at each row. In other words, time flows vertically downwards.
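To make this concrete, here is a minimal sketch (in Python, names my own) of a one-dimensional two-state CA with the standard three-cell neighborhood, where each rule is encoded as an 8-bit number, as explained further down this post:

```python
def step(cells, rule):
    """One CA step on a circular grid: each new cell is determined by
    its left neighbor, itself, and its right neighbor. The 3-bit
    neighborhood value selects a bit of the 8-bit rule number."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Start from a single 'on' cell and print successive rows:
# time flows vertically downwards, as in the image above.
cells = [0] * 31
cells[15] = 1
for _ in range(12):
    print("".join(".#"[c] for c in cells))
    cells = step(cells, 110)  # rule 110, discussed below
```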

CA’s were discovered in the 1940s by Stanislaw Ulam and John von Neumann, who were working together at Los Alamos National Laboratory. Perhaps the most famous automaton is the Game of Life, invented by John Conway in 1970.

In this post we will consider a model of a universe based on cellular automata and see what it says about Occam’s razor and the problem of induction. The idea that the universe is describable by a cellular automaton is not new:

many scholars have raised the question of whether the universe is a cellular automaton. Consider the evolution of rule 110: if it were some kind of “alien physics”, what would be a reasonable description of the observed patterns?

If you didn’t know how the images were generated, you might end up conjecturing about the movement of some particle-like objects (indeed, physicist James Crutchfield made a rigorous mathematical theory out of this idea, proving the statistical emergence of “particles” from CA’s). Then, as the argument goes, one might wonder if our world, which is currently well described by physics with particle-like objects, could be a CA at its most fundamental level.

This idea is a specific variant of a more general perspective known as digital physics:

In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is, at heart, describable by information, and is therefore computable.

Note that we are not claiming digital physics here, but rather constructing a model based on some initial postulates and seeing where it leads us.

Given this background we can consider the problem of induction in a CA universe. The properties of this model are:

1) The universe consists of an n-dimensional infinite grid of cells

2) The time evolution of cells is governed by a cellular automaton

Let’s add our scientist. An agent in this universe makes observations and must formulate hypotheses as to what natural laws describe reality. If we accept a Bayesian model, the problem of induction is how to construct a prior on possible theories such that inference is possible. But what form do theories have in this model?

From CA’s to Boolean functions

Although not immediately obvious, typical (2-state) CA’s are equivalent to boolean functions. This is something I noticed when I came across the equation that describes the number of CA’s as a function of states and neighbors:

The general equation for such a system (CA) of rules is k^(k^s), where k is the number of possible states for a cell, and s is the number of neighboring cells

This has the same shape as the expression 2^(2^k), which is the number of boolean functions of arity k (for example, with 2 states and s = 3 input cells there are 2^(2^3) = 256 possible CA’s, the elementary cellular automata). The connection is simple: a 2-state CA whose rule takes k cells as input to produce a new 2-state output cell is equivalent to a function

f : B^k → B, where B = {0, 1}

which is precisely the definition of a k-ary boolean function. In this CA -> boolean function correspondence the arity is given by the CA’s dimensionality and neighborhood. Below is a one-dimensional CA where each cell’s new value is a function of its two adjacent neighbors plus its own value (arity 3).

Rule 179

This CA is known as rule 179 because that number encodes the binary specification of the boolean function. You can see this by looking at its truth table (I’m using bexpred):


Truth table specification for Rule 179

The table shows the output of the 3-ary function with inputs A, B, C. If you read the output bits from the bottom up you get 10110011, which in decimal is 179.

Boolean functions, expressions and trees

Besides the equivalence with CA’s, boolean functions are in general described by boolean algebra and are specified with boolean expressions or formulas. In this algebra variables take on the values true (T) or false (F), and the operators are disjunction (v), conjunction (^) and negation (~). For example, Rule 179 above can be formulated as

(A^C) v ~B

where A is the left neighbor cell, B is the center, and C is the right neighbor; you can check that this in fact corresponds to the CA by applying the formula to cells: doing this repeatedly would result in the pattern in the image above.
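As a quick check (a sketch; the helper names are mine), we can evaluate the formula on all eight input rows and read off the rule number:

```python
def rule_number(f):
    """Pack the outputs of a 3-ary boolean function into a rule number,
    with the row A=B=C=1 contributing the most significant bit."""
    bits = [f(a, b, c) for a in (1, 0) for b in (1, 0) for c in (1, 0)]
    return int("".join(str(b) for b in bits), 2)

# (A ^ C) v ~B, with 0/1 standing for false/true
rule179 = lambda a, b, c: (a & c) | (1 - b)
print(rule_number(rule179))  # prints 179
```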

The nature of boolean expressions is such that you can represent them as trees. For example (from D. Gardy[1]), the expression

x ^ (y v z v ~y v y) ^ (~y v t v (x ^ ~v) v u)

can be represented as

Tree representation of the expression above

Image taken from [1]

This representation is very similar to that of boolean circuits, in which boolean expressions are represented as directed acyclic graphs. This representation allows classifying boolean circuits in terms of their computational complexity:

In theoretical computer science, circuit complexity is a branch of computational complexity theory in which Boolean functions are classified according to the size or depth of Boolean circuits that compute them.

There are two measures of complexity: depth and circuit-size complexity. In this post we will use a boolean expression analog of circuit-size complexity, which measures the computational complexity of a boolean function by the size of the minimal expression that computes it:

L(f) = length of shortest formula (boolean expression) computing f
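To make this measure concrete, here is a small sketch (representation and names my own) where an expression is a nested tree and its size is the number of literal occurrences:

```python
# A boolean expression as a tree: a leaf is a literal such as "A" or
# "~B"; an inner node is ("and", left, right) or ("or", left, right).
def size(tree):
    """Formula size: the number of leaves (literal occurrences)."""
    if isinstance(tree, str):
        return 1
    _, left, right = tree
    return size(left) + size(right)

# Rule 179 as the tree for (A ^ C) v ~B
rule179 = ("or", ("and", "A", "C"), "~B")
print(size(rule179))  # prints 3

# L(f) is the minimum of size(e) over all expressions e computing f.
```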

With this last piece we can revisit our model and add some further detail:

1) The universe consists of an n-dimensional infinite grid of cells

2) The time evolution of cells is governed by some 2-state cellular automaton describable by a boolean tree of complexity L(f)

We can also answer the question posed earlier:

What form do theories have in this model?

The theories our scientist constructs take the form of boolean expressions, or equivalently boolean trees. As stated before, the problem of induction in a Bayesian setting is about constructing priors over theories. In our model this now translates into constructing a prior over boolean expressions.

Finally, we will postulate two desirable properties such a prior must have, following the spirit of work on algorithmic probability[3][4]. One is Epicurus’ Principle of Multiple Explanations:

Epicurus: if several theories are consistent with the observed data, retain them all

The other is the Principle of Insufficient Reason:

when we have no other information than that exactly N mutually exclusive events can occur, we are justified in assigning each of them the probability 1/N.

These last three epistemological characteristics complete our model:

3) Theories take the form of boolean expressions with tree complexity L(f)

4) A-priori all theories are consistent with evidence (Epicurus)

5) A-priori all theories are equally likely

A uniform prior on boolean expressions

Per the characteristics of our model we wish to construct a prior probability distribution over boolean expressions such that

a) The distribution’s support comprises all boolean expressions for some n-dimensional 2-state CA

b) All boolean expressions are assigned equal probability

In order to achieve this we turn to results by Lefmann and Savicky[2] on a specific tree representation of boolean formulas, And/Or trees:

We consider such formulas to be rooted binary trees... each of the inner nodes... is labeled by AND or OR. Each leaf is labelled by a literal, i.e. a variable or its negation

Note that these properties of And/Or trees do not reduce their expressiveness: any boolean expression can be formulated as an And/Or tree.
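To see why, note that negations can always be pushed down to the leaves via De Morgan’s laws (negation normal form). Here is a sketch, reusing the tuple-tree representation from the earlier snippet (it handles only binary and/or and not):

```python
def to_and_or(tree, negate=False):
    """Push negations to the leaves so only AND/OR inner nodes and
    possibly-negated variables remain (an And/Or tree)."""
    if isinstance(tree, str):
        return "~" + tree if negate else tree
    op, *args = tree
    if op == "not":
        return to_and_or(args[0], not negate)
    if negate:  # De Morgan: negation swaps "and" and "or"
        op = "or" if op == "and" else "and"
    left, right = args
    return (op, to_and_or(left, negate), to_and_or(right, negate))

print(to_and_or(("not", ("and", "x", ("not", "y")))))
# prints ('or', '~x', 'y'), i.e. ~x v y
```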

We wish to construct a probability distribution that is uniform (b) over all (a) And/Or trees, of which there are infinitely many. Lefmann and Savicky[2] (see also Woods[6]) proved that such a probability distribution exists as an asymptotic limit of uniform finite distributions:

Theorem 2.3 from [2]: the limiting distribution P(f) exists as the asymptotic limit of uniform distributions over And/Or trees of a given size

Finally we will use two results (later improved in Chauvin[7]) which relate the probability P(f) and the boolean expression complexity L(f) in the form of probability bounds:

Theorem 3.1 from [2]: a bound on P(f) in which L(f) appears as a positive exponent of a number < 1

and

Theorem 3.5 from [2]: a bound on P(f) in which L(f) appears as a negative exponent of a number > 1

establishing upper and lower bounds. Note the L(f) term in both cases.

Implications

Let’s recap. We defined a toy universe governed by a variant of CA physics, then showed the equivalence between these CA’s, boolean functions, expressions and finally trees. After adding two epistemological principles we recast the problem of induction in this model in terms of constructing a uniform prior over boolean expressions (theories). Further restrictions (And/Or tree representation of theories) allowed us to use existing results to establish the existence of, and then provide upper and lower bounds on, our uniform prior.

The key characteristic in these bounds is the term for the boolean function’s complexity. In theorem 3.1, L(f) appears as a positive exponent of a number < 1. In theorem 3.5, L(f) appears as a negative exponent of a number > 1. This means that both bounds are monotonically decreasing with increasing expression complexity. This is essentially equivalent to Occam’s razor.

Thus we have shown[8] that Occam’s razor emerges automatically from the properties of our model; we get the razor “for free”, without having to add it as a separate assumption. Our scientist would therefore be justified in assigning higher probabilities to simpler hypotheses.

As an example, we can see concrete values, not just bounds, for the prior distribution in Chauvin[7], for the specific case of n = 3 (this would correspond to a 2-state 1-dimensional CA).


Sample P(f) for n = 3 (taken from [7])


The column of interest is labelled P(f). We can see how probabilities decrease with increasing boolean expression complexity. Refer to section 2.4 of that paper to see the corresponding increasing values of L(f).
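The flavor of these results can also be reproduced directly. The sketch below is my own construction, following the spirit rather than the letter of [2] and [7] (whose exact tree-counting conventions may differ): it enumerates all And/Or trees with m leaves over 3 variables and computes the uniform finite distribution P_m(f). Simple functions such as rule 179 accumulate far more trees than complex ones:

```python
from collections import Counter
from functools import lru_cache

N = 3                          # arity: variables correspond to A, B, C
MASK = (1 << (1 << N)) - 1     # truth tables are (2^N)-bit masks

def literal_tables(n):
    """Truth tables of the literals x_i and ~x_i as bitmasks, where
    bit r of a table is the output on the input assignment r."""
    tables = []
    for i in range(n):
        t = 0
        for row in range(1 << n):
            if (row >> i) & 1:
                t |= 1 << row
        tables.extend([t, t ^ MASK])
    return tables

LITERALS = literal_tables(N)

@lru_cache(maxsize=None)
def trees_by_function(m):
    """Counter mapping truth table -> number of And/Or trees with m leaves."""
    if m == 1:
        return Counter(LITERALS)
    acc = Counter()
    for left_size in range(1, m):
        left, right = trees_by_function(left_size), trees_by_function(m - left_size)
        for f, cf in left.items():
            for g, cg in right.items():
                acc[f & g] += cf * cg   # root labeled AND
                acc[f | g] += cf * cg   # root labeled OR
    return acc

m = 5
counts = trees_by_function(m)
total = sum(counts.values())
rule179 = 0b10110011   # with A, B, C mapped to bits 2, 1, 0 of the row
print(f"P_m(rule 179) = {counts[rule179] / total:.6f}")
for table, c in counts.most_common(3):   # the most probable functions
    print(f"truth table {table:08b}: P_m = {c / total:.6f}")
```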

Generalizations

Although we have reviewed the basic steps that outline how Occam’s razor follows from our simple model’s properties, we have not discussed the details as to how and why this happens. In a future post we’ll discuss these details, and the possibility that the mechanism at work may (or may not) generalize to other formalizations of universe-theory-prior.


Notes/References

[1] D. Gardy. Random Boolean expressions. In Colloquium on Computational Logic and Applications, volume AF, pages 1–36. DMTCS Proceedings, 2006.

[2] H. Lefmann and P. Savicky. Some typical properties of large And/Or Boolean formulas. Random Structures and Algorithms, 10:337-351, 1997.

[3] Principles of Solomonoff Induction and AIXI 

[4] A Philosophical Treatise of Universal Induction

[5] http://www.scholarpedia.org/article/Algorithmic_probability#Bayes.2C_Occam_and_Epicurus

[6] A. Woods. Coloring rules for finite trees, and probabilities of monadic second order sentences. Random Structures and Algorithms, 10:453-485, 1997.

[7] B. Chauvin, P. Flajolet, D. Gardy, and B. Gittenberger. And/Or trees revisited. Combinatorics, Probability and Computing, 13(4-5):475-497, July-September 2004.

[8] We are leaving out some technical details here. One is that monotonically decreasing bounds do not imply a monotonically decreasing probability. There may be local violations of Occam’s razor, but the razor still holds aside from minor fluctuations. In the sample results for n = 3 in Chauvin[7], probabilities are in fact monotonically decreasing.

Two, the asymptotics of P(f) for trees of fixed size m and for trees of size <= m are the same; see [7] section 2.1 and [1] section 3.3.3.

Another detail is the assumption that, ceteris paribus, the minimal expression computing f1, corresponding to expression e1, will be shorter than the minimal expression computing f2, corresponding to expression e2, if e1 is shorter than e2. That is, e1 < e2 implies on average L(f1) < L(f2).

Finally, it is worth noting that it is the syntactic prior over boolean expressions that induces an occamian prior over boolean functions. What makes this work is that formula reductions[9] produce multiplicities in the syntactic space for any given element of the semantic space. A uniform prior over boolean functions would not yield Occam; it would have to be added separately (i.e., the problem of induction).

[9] Boolean expressions may be reduced (simplified) using the laws of boolean algebra. Here is an example reduction:

It starts from the 3-ary boolean expression

(!A*!B*!C)+(A*!B*!C)+(!A*!B*C)+(A*!B*C)+(A*B*C)

which yields

A*C + !B

which is in fact the boolean function corresponding to Rule 179.
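As a sanity check, this reduction can be reproduced mechanically, for instance with sympy (a sketch, assuming sympy is installed):

```python
from sympy import symbols
from sympy.logic import simplify_logic

A, B, C = symbols("A B C")

# The 5-minterm expression above, in sympy notation
expr = ((~A & ~B & ~C) | (A & ~B & ~C) | (~A & ~B & C)
        | (A & ~B & C) | (A & B & C))

print(simplify_logic(expr))  # expected: (A & C) | ~B
```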