
# Month: October 2012

# Rationality trick: openly acknowledge errors

Imagine you’re having a debate or discussion with someone and have a strong difference of opinion. The discussion ends with no agreement reached; both sides hold their positions. Later, perhaps because of further reflection or new information, you change your mind. You privately realize you were wrong and that the other person was right; you actually agree with them now. What happens then?

Previously, I have said

As soon as you establish, in a social context, that you are advocating or defending a certain position, you become bound to it: being proven wrong as well as changing your opinion signals weakness, something we are evolutionarily programmed to avoid at all costs. That’s why you rarely see someone admitting being wrong or changing their mind in a debate, especially if there’s an audience.

We don’t want to lose face or status, not even in front of ourselves, and this may establish preferences over states of reality

If you have a preference as to how you’d like things to be, you can be pretty sure that your mind will distort things to match that

So, besides being vigilant when detecting said preferences, what else can be done to counter the cognitive biases at work? Two things, which I’ll call conditioning and reversed reinforcement.

Conditioning means exposing yourself to a negative experience in order to develop tolerance and reduce its effects[1]. When the negative effects of being wrong are reduced, that is, we become accustomed to “losing face” both publicly and privately (i.e. private introspection), the motivation to avoid said negative stimulus should be correspondingly reduced. This in turn will reduce the strength of biases that distort reality to avoid the negative experience.

The other technique involves trying to convert a negative experience into a positive one. Instead of shamefully and half-heartedly admitting an error, do it openly. Displaying an open, honest attitude about something that is typically a sign of weakness is a very strong signal of strength. This can result in positive reinforcement, both publicly and privately, thus countering the negative stimulus that is by default associated with being wrong. As above, a diminished negative effect may reduce the strength of activated biases[2].

In summary, openly acknowledging being wrong may work towards reducing resistance to changing your mind in the future instead of fooling yourself to protect your status. And there’s an added bonus. If people observe that you’re honest about your errors, they will automatically assign greater credibility to positions you hold. So, next time you realize you’re wrong, instead of shutting up about it, come out and say it outright.

[1] Technically, systematic desensitization

[2] And before somebody points it out: it really is very hard to go too far and reverse the bias; the innate tendency to avoid signaling weakness is very strong.

# Meta-theoretic induction in action

In my last post I presented a simple model of meta-theoretic induction. Let’s instantiate it with concrete data and run through it. Say we have

| Symbol | Meaning |
|---|---|
| E_{1}, E_{2}, E_{3} | Observations made for domains 1-3 |
| S_{1}, S_{2}, S_{3} | Simple theories for domains 1-3 |
| C_{1}, C_{2}, C_{3} | Complex theories for domains 1-3 |
| S | Meta-theory favoring simple theories |
| C | Meta-theory favoring complex theories |

That is, we have three domains of observation with corresponding theories. We also have two meta-theories that will produce priors on theories. The meta-theories themselves will be supported by theories’ successes or failures: successes of simple theories support S, successes of complex theories support C. Now define the content of the theories through their likelihoods

| E_{n} | P(E_{n}\|S_{n}) | P(E_{n}\|C_{n}) |
|---|---|---|
| E_{1} | 3/4 | 1/4 |
| E_{2} | 3/4 | 1/4 |
| E_{3} | 3/4 | 3/4 |

Given that E_{1}, E_{2} and E_{3} are evidence, this presents a scenario where theories S_{1} and S_{2} were successful, whereas theories C_{1} and C_{2} were not. S_{3} and C_{3} represent theories that are equally well supported by previous evidence (E_{3}) but with different future predictions. This is the crux of the example, where the simplicity bias enters into the picture. Our meta-theories are defined by

*P(S_{n}|S) = 3/4, P(S_{n}|C) = 1/4*

*P(C_{n}|C) = 3/4, P(C_{n}|S) = 1/4*

Meta-theory S favors simple theories, whereas meta-theory C favors complex theories. Finally, our priors are neutral

*P(S _{n}) = P(C_{n}) = 1/2*

*P(S) = P(C) = 1/2*

We want to process evidence E_{1} and E_{2}, and see what happens at the critical point, where S_{3} and C_{3} make the same predictions. The sequence is as follows

- Update meta-theories S and C with E_{1} and E_{2}
- Produce a prior on S_{3} and C_{3} with the updated S and C
- Update S_{3} and C_{3} with E_{3}
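The sequence above can be checked numerically. Here is a minimal Python sketch (mine, not from the original post; variable names are my own) that runs the three steps with the likelihoods and priors defined above:

```python
from fractions import Fraction as F

# Likelihoods from the table above: P(E_n|S_n) and P(E_n|C_n), domains 1-3
p_e_simple  = [F(3, 4), F(3, 4), F(3, 4)]
p_e_complex = [F(1, 4), F(1, 4), F(3, 4)]

# Meta-theory likelihoods: P(S_n|S) = 3/4, P(S_n|C) = 1/4
P_SN_GIVEN_S, P_SN_GIVEN_C = F(3, 4), F(1, 4)

# Neutral priors on the meta-theories
p_S, p_C = F(1, 2), F(1, 2)

# Step 1: update S and C with E_1 and E_2 (equations 3 and 4)
for n in (0, 1):
    like_S = p_e_simple[n] * P_SN_GIVEN_S + p_e_complex[n] * (1 - P_SN_GIVEN_S)
    like_C = p_e_simple[n] * P_SN_GIVEN_C + p_e_complex[n] * (1 - P_SN_GIVEN_C)
    norm = like_S * p_S + like_C * p_C
    p_S, p_C = like_S * p_S / norm, like_C * p_C / norm

print(float(p_S))  # 0.7352... -- the ~73% assigned to meta-theory S

# Step 2: prior on S_3 produced by the updated meta-theories (equation 5)
p_s3 = P_SN_GIVEN_S * p_S + P_SN_GIVEN_C * p_C

# Step 3: update S_3 with E_3 (equation 6); the likelihoods are equal,
# so only the meta-theoretic prior separates S_3 from C_3
p_s3 = p_e_simple[2] * p_s3 / (p_e_simple[2] * p_s3 + p_e_complex[2] * (1 - p_s3))

print(float(p_s3))  # 0.6176... -- the ~61% assigned to S_3
```

The exact values are 25/34 for S and 21/34 for S_{3}, which round to the 73% and 61% figures reported below.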

The last step produces probabilities for S_{3} and C_{3}; these theories make identical predictions but *will have different priors granted by S and C*. This will formalize the statement

Simpler theories are more likely to be true because they have been so in the past

### The model as a bayesian network

Instead of doing all the above by hand (using equations **3**,**4**,**5**,**6**), it’s easier to construct the corresponding bayesian network and let some general algorithm do the work. Formulating the model this way makes it much easier to understand, in fact it seems almost trivial. Additionally, our assumptions of conditional independence (**1** and **2**) map directly into the bayesian network formalism of nodes and edges, quite convenient!

Node M represents the meta-theory, with possible values *S* and *C*; the H nodes represent theories, with possible values S_{n} and C_{n}. Note the lack of edges between H_{n} and E_{x} formalizing (**1**), and the lack of edges between M and E_{n} formalizing (**2**) (these were our assumptions of conditional independence).

I constructed this network using the SamIam tool developed at UCLA. With it we can build the network and then monitor probabilities as we input data into the model, using the tool’s *Query Mode*. So let’s do that, fixing the actual outcome of the evidence nodes E_{1}, E_{2} and E_{3}

Theories S_{1} and S_{2} make correct predictions and are thus favoured by the data over C_{1} and C_{2}. This in turn favours the meta-theory S, which is assigned a probability of 73% over meta-theory C, with 26%. Now, theories S_{3} and C_{3} make the *same* predictions about E_{3}, but because of our meta-theory being better supported, they are assigned different probabilities. Again, recall our starting point

Simpler theories are more likely to be true because they have been so in the past

We can finally state this technically, as seen here

The simple theory S_{3} is favored at 61% over C_{3} with 38%, even though they make the same predictions. In fact, we can see how this works if we look at what happens with and without meta-theoretic induction

where, as expected, the mirrors of S_{3} and C_{3} would be granted the same probabilities. So everything seems to work: our meta-theory discriminates between different theories and is itself justified via experience, as was the objective.

Occam seems like an unjustified and arbitrary principle, in effect, an unsupported bias. Surely, there should be some way to anchor this widely applicable principle on something other than arbitrary choice. We need a way to represent a meta-theory such that it favours some theories over others

and such that it can be justified through observations.

**But**, what happens when we add a meta-theory like *Occam(t)* into the picture? What happens when we apply at the meta-level the same argument that prompted the meta-theoretic justification of simplicity we’ve developed? We define a meta-theory *S-until-T* with

*P(S_{1}|S-until-T) = P(S_{2}|S-until-T) = 3/4*

*P(S_{3}|S-until-T) = 1/4*

which yields this network

Now both S and S-until-T accrue the same probability through evidence and therefore produce the same prior on S_{3} and C_{3}, 50%. It seems we can’t escape our original problem.
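This collapse can also be checked numerically. The following minimal Python sketch (mine, not from the post) updates both meta-theories with E_{1} and E_{2} and computes the prior they induce on S_{3}:

```python
from fractions import Fraction as F

# P(E_n|S_n) = 3/4 and P(E_n|C_n) = 1/4, as in the tables above
P_E_SIMPLE, P_E_COMPLEX = F(3, 4), F(1, 4)

# P(S_n|M) per meta-theory: S always favors simple theories,
# S-until-T favors them only until the switch point (domain 3)
p_simple_given = {
    "S":         [F(3, 4), F(3, 4), F(3, 4)],
    "S-until-T": [F(3, 4), F(3, 4), F(1, 4)],
}

posterior = {m: F(1, 2) for m in p_simple_given}  # neutral priors

# Update both meta-theories with the successes E_1 and E_2
for n in (0, 1):
    likes = {m: P_E_SIMPLE * p[n] + P_E_COMPLEX * (1 - p[n])
             for m, p in p_simple_given.items()}
    norm = sum(likes[m] * posterior[m] for m in posterior)
    posterior = {m: likes[m] * posterior[m] / norm for m in posterior}

# The two meta-theories agree on domains 1 and 2, so the data cannot separate them
print(posterior)  # both stay at 1/2

# The prior they jointly induce on S_3 therefore washes out
p_s3 = sum(p_simple_given[m][2] * posterior[m] for m in posterior)
print(p_s3)  # 1/2
```

Because S and S-until-T assign identical likelihoods to the evidence actually observed, no amount of such evidence can ever separate them.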

Because both *Occam* and *Occam(t)* are supported by the same amount of evidence, equal priors will be assigned to S_{3} and C_{3}. The only way out of this is for *Occam* and *Occam(t)* to have different priors *themselves*. But this leaves us back where we started! We are just recasting the original problem at the meta level; we end up begging the question[1] or in an infinite regress.

In conclusion, we have succeeded in formalizing meta-theoretic induction in a bayesian setting, and have verified that it works as intended. However, it ultimately does not solve the problem of justifying simplicity. The simplicity principle remains a prior belief independent of experience.

(The two networks used in this post are metainduction1.net and metainduction2.net, you need the SamIam tool to open these files)

[1] Simplicity is justified if we previously assume simplicity

# Formalizing meta-theoretic induction

In this post I formalize the discussion presented here, recall

Simpler theories are more likely to be true because they have been so in the past

We want to formalize this statement into something that integrates into a bayesian scheme, such that the usual inference process, updating probabilities with evidence, works. The first element we want to introduce into our model is the notion of a **meta-theory**. A meta-theory is a statement about theories, just as a theory is a statement about observations (or the world if you prefer a realist language).

As a first approximation, we could formalize meta-theories as priors over theories. In this way, a meta-theory prior, together with observations, would yield probabilities for theories through the usual updating process. This formalization is technically trivial: we just relabel priors over theories as meta-theories. But this approach does not account for the second half of the original statement

..because they have been so in the past.

As pure priors, meta-theories would never be the object of justification. We need a way to represent a meta-theory such that it favours some theories over others *and* such that it can be *justified through observations*. In order to integrate with normal theories, meta-theories must accumulate probability via conditioning on observations, just as normal theories do.

We cannot depend on or add spurious observations like “this theory was right” as a naive mechanism for updating; this would split the meta and theory levels. Evidence like “this theory was right” must be embedded in existing observations, not duplicated somewhere else as a standalone, ad-hoc ingredient.

Finally, the notion of meta-theory introduces another concept, that of distinct theory **domains**. This concept is necessary because it is through cross-theory performance that a meta-theoretical principle can emerge. No generalization or principle would be even possible if there were no different theories to begin with. Because different theories may belong to different domains, meta-theoretic induction must account for logical dependencies pertaining to distinct domains; these theories make explicit predictions only about their domain.

Summing up:

Our model will consist of observations/evidence, theories and meta-theories. Theories and corresponding observations are divided into different domains; meta-theories are theories about theories, and capture inter-theoretic dependencies (see below). Meta-theories do not make explicit predictions.

Let’s begin by introducing terms

*E_{n}*: An element of evidence for domain *n* [1]

*H_{n}*: A theory over domain *n*

*M*: A meta-theory

Observations that do not pertain to a theory’s domain will be called external evidence. An important assumption in this model is that *theories are conditionally independent of external observations given a meta-theory*. This means that a theory depends on external observations only through those observation’s effects on meta-theories[2].

We start the formalization of the model with our last remark, conditional independence of theories and external observations given a meta-theory

*P(H_{n}|E_{x},M) = P(H_{n}|M) …………………… (1)*

Additionally, any evidence is conditionally independent of a meta-theory given its corresponding theory, i.e. it is theories that make predictions, meta-theories only make predictions indirectly by supporting theories.

*P(E_{n}|M,H_{n}) = P(E_{n}|H_{n}) …………………… (2)*

Now we define how a meta-theory is updated

*P(M|E_{n}) = P(E_{n}|M) * P(M) / P(E_{n}) …………………… (3)*

this is just Bayes’ theorem. The important term is the likelihood, which by the law of total probability is

*P(E_{n}|M) = P(E_{n}|M,H_{n}) * P(H_{n}|M) + P(E_{n}|M,¬H_{n}) * P(¬H_{n}|M)*

which by conditional independence (**2**)

*P(E_{n}|M) = P(E_{n}|H_{n}) * P(H_{n}|M) + P(E_{n}|¬H_{n}) * P(¬H_{n}|M) …………………… (4)*

This equation governs how a meta-theory is updated with new evidence *E_{n}*. Now to determine how the meta-theory determines a theory’s prior. Again by total probability

*P(H_{n}|E_{x}) = P(H_{n}|E_{x},M) * P(M|E_{x}) + P(H_{n}|E_{x},¬M) * P(¬M|E_{x})*

which by conditional independence (**1**)

*P(H_{n}|E_{x}) = P(H_{n}|M) * P(M|E_{x}) + P(H_{n}|¬M) * P(¬M|E_{x}) …………………… (5)*

The following picture illustrates how evidence updates a meta-theory which in turn produces a prior. Note that evidence E_{1} and E_{2} are *external* to H_{3}

Lastly, updating a theory based on matching evidence is, as usual

*P(H_{n}|E_{n}) = P(E_{n}|H_{n}) * P(H_{n}) / P(E_{n}) …………………… (6)*

Equations **3**,**4**,**5** and **6** are the machinery of the model through which evidence can be processed in sequence. See it in action in the next post.
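For binary hypotheses, where ¬H_{n} plays the role of the competing theory, equations (**3**)-(**5**) can be condensed into two small functions. This is an illustrative Python sketch of my own (function and variable names are not from the post):

```python
from fractions import Fraction as F

def update_meta(p_m, p_h_given_m, p_h_given_not_m, p_e_given_h, p_e_given_not_h):
    """Equations (3)/(4): posterior P(M|E_n) from the prior P(M)."""
    # Likelihood of E_n under each meta-theory, by total probability over H_n
    like_m     = p_e_given_h * p_h_given_m     + p_e_given_not_h * (1 - p_h_given_m)
    like_not_m = p_e_given_h * p_h_given_not_m + p_e_given_not_h * (1 - p_h_given_not_m)
    p_e = like_m * p_m + like_not_m * (1 - p_m)  # P(E_n)
    return like_m * p_m / p_e

def theory_prior(p_m, p_h_given_m, p_h_given_not_m):
    """Equation (5): the prior on H_n induced by the meta-theory posterior."""
    return p_h_given_m * p_m + p_h_given_not_m * (1 - p_m)

# One step with the numbers from the example post: P(S_n|S) = 3/4, P(S_n|C) = 1/4,
# a successful simple theory (P(E_n|S_n) = 3/4, P(E_n|C_n) = 1/4), neutral prior 1/2
p_m = update_meta(F(1, 2), F(3, 4), F(1, 4), F(3, 4), F(1, 4))
print(p_m)                                   # 5/8
print(theory_prior(p_m, F(3, 4), F(1, 4)))   # 9/16
```

Updating a theory with its own evidence (equation **6**) is ordinary Bayes and needs no separate function.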

[1] A given *E_{n}* represents a sequence of observations made for a domain *n*. So *H_{n}|E_{n}* represents induction in a single step, although in practice it would occur with successive bayesian updates for each subelement of evidence.

[2] This characteristic is the meta analogue of conditional independence between observations given theories. In other words, just as *logical dependencies between observations are mediated by theories, inter-domain logical dependencies between theories are mediated by meta-theories*.

# Strange Loop 2012 videos

Here’s the schedule for the release of the Strange Loop 2012 videos. Definitely interesting stuff to be found. And 5 talks related to Scala.

https://thestrangeloop.com/news/strange-loop-2012-video-schedule