Time to AI, anyone’s guess?

When somebody asks me how long until AI is developed, I say between two and four decades. The question as posed refers to artificial intelligence that is competent at a variety of tasks that humans can carry out. Competent means not obviously distinguishable from a human’s performance. So even though the question is not technically precise, it is reasonably well defined, because human standards of performance are implicit in it.

Now, getting back to the question, my usual answer, as I said, is two to four decades. But this brings me to a post I read recently, which tries to shed light on the matter by surveying and analyzing a large number of AI predictions, both from experts and non-experts. The first conclusion, quoted from the summary, is

Over a third of predictors claim AI will happen 16-25 years in the future.

But I find the following two results to be of special significance:

There is little difference between experts and non-experts (some possible reasons for this can be found here).
There is little difference between current predictions, and those known to have been wrong previously.

This paints a pretty bleak picture of our ability to predict the development of AI. If experts can’t be distinguished from non-experts, it means that for the purposes of predicting AI there really are no experts at all, and uninformed guesses are as good as anything.

The second point I quoted goes along the same lines. If current predictions are similar to previous ones that we know were flawed (because they turned out to be completely wrong), this suggests current ones could be wrong too. More specifically, if predictions made at different times, in significantly different contexts (i.e. different states of scientific and technological progress), come out the same, it suggests that the predictions are independent of concrete scientific and technological considerations; they are based on non-specific reasoning or, even worse, on unrelated factors (e.g. psychological ones) that are common to both periods.

Lastly, there is another point to be made about expert predictions. These experts are by definition active in the field of AI research, and have a stake (e.g. funding) in what is believed will happen. Moreover, if they’re working towards AI, they presumably want their predictions to be true; it is their goal as researchers.

Where does this leave “two to four decades”? It’s not an expert prediction, although my intuition for this number is certainly based on expert opinions I’ve come across over the years. In theory, reconsidering with this extra information should change my mind. Rationality dictates that one should widen the prediction interval to reflect the extra uncertainty that the new information introduces. The widening could be asymmetrical, because ignorance about how to achieve something points to that something being difficult rather than easy, so one could say something like “15 to 50 years”.

Perhaps this meta-level correction is not incompatible with my original, object-level intuitions: advances in machine learning, neuroscience and computing power, together with growth trends in funding, suggest to me that the arrival of AI within 30 years is quite likely, and that within 40 years it begins to approach certainty. But one has to admit that these are just intuitions and handwaving; we simply do not seem to have the tools to make such predictions in a technical way.

Prediction is very difficult, especially if it’s about the future.
– Niels Bohr

Epistemological dive

 
Note: I wrote this piece before the two posts presenting a simple model of learning using Bayesian inference. There is significant overlap, and conclusions are stated without complete explanations.
 
 

I attended an informal talk on climate change recently, after which I had several discussions regarding the scientific process and the foundations of knowledge (in science).

One question was: is scientific knowledge inductive or deductive? Well, the scientific method requires deductive inference to establish the logical consequences of a theory in order to make predictions. But the justification of theories, the method by which a theory is provisionally accepted or discarded, is inductive. In the language of Bayes, theory confirmation/invalidation occurs by updating theory posteriors inductively (P(H|E)), whereas the evidence conditional on a theory (P(E|H)) is derived deductively.
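As a toy illustration of this split (my own sketch, not part of the original discussion), here two hypothetical theories H1 and H2 each fix a likelihood for an observed piece of evidence, and Bayes’ rule turns those deductively derived likelihoods into inductively updated posteriors:

```python
# Minimal sketch of the inductive update: the likelihoods P(E|H) are fixed
# deductively by each theory, and Bayes' rule yields the posteriors P(H|E).

def posterior(priors, likelihoods):
    """Return P(H|E) for each hypothesis, given P(H) and P(E|H)."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    evidence = sum(joint.values())        # P(E), the normalising constant
    return {h: joint[h] / evidence for h in joint}

# Illustrative numbers only: what each theory says about the observed evidence E.
priors = {"H1": 0.5, "H2": 0.5}           # before seeing E
likelihoods = {"H1": 0.9, "H2": 0.2}      # P(E|H), derived from each theory

print(posterior(priors, likelihoods))     # H1 gains support: ~0.82 vs ~0.18
```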

So, although deduction plays a part in establishing the logical consequences of theories in the form of testable predictions, the nature of the knowledge, or rather, the process by which that knowledge is gained, is fundamentally inductive.

What does this say about the foundations of knowledge in science? If scientific knowledge were deductive, we could simply say that its foundations are axiomatic. We could also talk about incompleteness and other interesting things. But if as we have stated this knowledge is inductive, what are its foundations? Is induction a valid procedure, and what are its endpoints?

This is a very deep subject; trying to go all the way to the bottom is why I have titled this post an epistemological dive. I’m not going to give it a thorough treatment here, but I’ll briefly state my position and what I argued in discussion that day.

The way I see it, the foundations of scientific knowledge are the postulates of probability theory (as derived, for example, by Bernardo-Smith or Cox) together with Occam’s razor. In fact, given that most people are already aware of probability theory, I would say that the best single answer to the question of the foundation of knowledge, in the sense that it is the part we are less aware of, is Occam’s razor. I will give a brief example of this, borrowed from a talk by Shane Legg on machine super intelligence.

Let’s consider a minimal example of a scientific process. An agent is placed in an environment and must form theories whose predictions correctly match the agent’s observations. Although minimal, this description accounts for the fundamental elements of science. There is one missing element, and that is a specification of how the agent forms theories, but for now we will use our own intuition, as if we were the agent.

For this minimal example we will say that the agent observes a sequence of numbers which its environment produces. Thus, the agent’s observations are the sequence, and it must form a theory which correctly describes past observations and predicts future ones. Let’s imagine this is what happens as time goes forward, beginning with

1

For the moment there is only one data point, so it seems impossible to form a theory in a principled way.

1,3

Among others, two theories could be proposed here: odd numbers and powers of three, with corresponding predictions of 5 and 9:

f(n) = 2n – 1

f(n) = 3^(n-1)

the observations continue:

1,3,5

The powers of three theory is ruled out due to the incorrect prediction of 9, while the odd number theory’s prediction was correct.

1,3,5,7

The odd number theory has described all observations and made correct predictions. At this point our agent would be pretty confident that the next observation will be 9.

1,3,5,7,57

What?! That really threw the agent off; it was very confident that the next item would be 9, but it turned out to be 57. As the builder of this small universe, I’ll let you know the correct theory, call it theory_57:

f(n) = 2n – 1 + 2(n-1)(n-2)(n-3)(n-4)

which, if you check, correctly describes all the numbers in the sequence of observations. If the 5th observation had instead been 9, our odd number theory would have been correct again, and we would have stayed with it. So, depending on this 5th observation:

9 => f(n) = 2n-1

57 => f(n) = 2n – 1 + 2(n-1)(n-2)(n-3)(n-4)
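To see this concretely, here is a small sketch (my own, purely for illustration) that evaluates the two theories, indexed from n = 1, against the observed sequence:

```python
# The two candidate theories from the post, indexed from n = 1.
def odd(n):
    return 2 * n - 1

def theory_57(n):
    return 2 * n - 1 + 2 * (n - 1) * (n - 2) * (n - 3) * (n - 4)

observations = [1, 3, 5, 7, 57]

for n, obs in enumerate(observations, start=1):
    print(n, obs, odd(n), theory_57(n))

# n  observed  odd  theory_57
# 1      1       1       1
# 2      3       3       3
# 3      5       5       5
# 4      7       7       7
# 5     57       9      57    <- the theories only part ways here
```

Both theories agree perfectly on the first four observations; they only disagree about the fifth.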

Although we only list two items, the list is actually infinite because there are an infinite number of theories that correctly predict the observations up until the 4th result. In fact, and here is the key, there are an infinite number of theories that correctly predict any number of observations! But let us restrict the discussion to only the two seen above.

What our intuition tells us is that no reasonable agent would believe in theory_57 after the fourth observation, even though at that point it is just as compatible with the data as the odd number theory. Our intuition strongly asserts that the odd number theory is the correct theory for that data. But how can we justify that on the basis of induction alone, if the two theories assign the same probability to everything observed so far (i.e. they have the same P(E|H))?

The key is that our intuition, in fact our intelligence in general, has a built-in simplicity bias. We strongly favor the odd number theory because it is the simplest theory that fits the facts. Hence induction, including our everyday intuitions and the scientific method, is founded upon Occam’s razor as a way to discriminate between equally supported theories.
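One way to picture this (a sketch under my own assumptions, using made-up description lengths as a crude complexity measure) is to give each theory a prior weight of 2^(-complexity). The observed data cannot separate the theories, so the posterior ratio is simply the prior ratio, and the simpler theory dominates:

```python
# Toy Occam prior: weight each theory by 2**(-complexity). The complexity
# numbers are illustrative stand-ins for description length, not measured.
complexity = {"odd": 3, "theory_57": 12}

prior = {h: 2.0 ** -c for h, c in complexity.items()}

# Both theories fit the first four observations equally well, so P(E|H) = 1
# for each of them; the data alone cannot tell them apart.
likelihood = {"odd": 1.0, "theory_57": 1.0}

unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

print(posterior)   # odd ≈ 0.998, theory_57 ≈ 0.002: the simpler theory wins
```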

Without this or some more specific bias (in machine learning we would call this inductive bias), induction would be impossible, as there would be too many theories to entertain. Occam’s razor is the most generally applicable bias; it is a prerequisite for any kind of induction, in science or anywhere else.

“Human level AI” is a misleading concept

I’ve used and will probably continue to use the term human level AI to refer to a level of machine intelligence that is competent at the tasks that humans carry out; in particular, those tasks that we consider evidence of intelligence when done by a human. But I’ve just realised that it’s a misleading term. Here’s why.

Let’s try to make the notion of level more precise. We have the universal intelligence measure, as well as AIQ, as precise mathematical constructs for this idea. If you recall, these two constructs are based around the idea of summing over the environments in which an agent succeeds, as an indicator of intelligence. This sum is weighted by simplicity. Agents that are competent in a greater number of environments are said to be more intelligent, and the simplicity bias favors those agents that do so by virtue of a general capacity, rather than a collection of narrow specialisations.
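For reference, Legg and Hutter’s universal intelligence measure has roughly this shape (quoted from memory, so treat the details as approximate): the value an agent π achieves in each computable environment μ is summed with a simplicity weight given by the environment’s Kolmogorov complexity K(μ),

```latex
% Approximate form of the Legg-Hutter universal intelligence measure:
% V_mu^pi is the expected value agent \pi obtains in environment \mu,
% and 2^{-K(\mu)} weights simpler environments more heavily.
\Upsilon(\pi) = \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```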

Anyway, the point is that beyond the simplicity bias, such a notion of level does not require success in any given environment; it is the sum that counts, not which environments take part in that sum. Hence, two agents of equal intelligence could be competent in very different sets of environments.

Therefore, a human level AI could correspond to any agent whose sum over environments is equal to that of a human, but those environments could be widely different. This means that a human level AI could easily be incompetent at tasks that humans carry out, and being competent at those tasks is precisely the original, intuitive meaning of human level AI. The term is therefore misleading, because intuitively it means one thing, but precisely it means something different.

What human level AI really means is artificial human intelligence: not just the level of intelligence, but also a specification of which environments the intelligence must succeed in. It’s the usual anthropocentric bias that incorrectly discards the generality of the level specification.

Hence we should really say artificial human intelligence, although I’d understand accusations of nitpicking!

Morals and the machine

That’s the title of an article in The Economist regarding machines and morality; the sub-heading reads

As robots grow more autonomous, society needs to develop rules to manage them

Seems pretty reasonable, except for the use of the term ‘rules’, which I’ll get to in a minute. After framing the problem, the author proposes the following agenda:

First, laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident…

Second, where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people….

Last, and most important, more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.

Starting with the last point: like any legislation, this should be something that concerns all of society, and it is well noted that it’s a very cross-disciplinary matter, so experts from different fields will be required.

The first point is also important, but it makes an assumption that dodges a central problem. The assumption is that responsibility always lies outside of the “robot”, and that it’s a matter of deciding which of the “creators” is to blame and how. But will there come a moment when moral agency is transferred away from the creators to the robot? In the short term the answer is clearly no, except in science fiction. But if we’re speaking of increased autonomy…

Lastly, I find the most interesting and difficult point to be the second one. It’s one of those cases where a very deep problem can be stated in a deceptively small number of words.

..the judgments they make need to be ones that seem right to most people..

In fact, the trouble comes from one word, right, which carries a huge amount of hidden complexity. We intuitively know what is meant here, but just try formalizing it into something explicit that can be programmed. Good luck. That’s why I remarked on the use of the word ‘rules’ above: I’m very skeptical that right can be translated into programmable rules.

Rather, it seems more likely that any sophisticated morality that a robot follows will include an element of learning, similar[1] to how children learn what’s right and what’s wrong. The problem with this approach is noted in the Wikipedia article on machine ethics: learned representations (i.e. how the knowledge is encoded in the robot) are sometimes hard or impossible to understand, inspect, debug and correct.

But it’s not impossible, and it would be pretty ironic if formalizing human morality and the meaning of right finally came about by looking into the “brain” of something that wasn’t human!


[1] There is a danger of anthropomorphizing artificial intelligences and making hidden, unwarranted assumptions that may not be applicable to machines. But my point here is about using learning rather than programming, not about the specifics.