The other day the feasibility of AI came up in casual conversation. Somebody argued that machines can only do what they’re told to, that they just follow a pre-programmed set of rules or commands, and so on.
In response to this I normally cite the field of machine learning as an example of computers doing things they were not explicitly programmed to do (I’ll use classification here). When recognizing characters, for instance, the computer is not given an explicit set of rules for deciding which letter a handwritten element corresponds to. Rather, it learns from a training set, which for classification is a set of elements together with their correct labels (classifications).
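To make this concrete, here is a minimal sketch of learning from labeled examples. The task and all the numbers are invented for illustration: classifying a handwritten mark as a "dot" or a "dash" from its measured length. The point is that no human writes the decision rule; the threshold is derived from the training pairs.

```python
# Hypothetical training set: (measured length, correct label) pairs.
train = [(0.1, "dot"), (0.2, "dot"), (0.3, "dot"),
         (0.8, "dash"), (0.9, "dash"), (1.0, "dash")]

def fit_threshold(data):
    """Derive a decision rule (a length threshold) from labeled examples.
    Nobody programmed 'length > t means dash'; t comes from the data."""
    dots = [x for x, y in data if y == "dot"]
    dashes = [x for x, y in data if y == "dash"]
    return (max(dots) + min(dashes)) / 2  # midpoint between the classes

t = fit_threshold(train)                  # 0.55 for this data
classify = lambda x: "dash" if x > t else "dot"
print(classify(0.15), classify(0.95))     # dot dash
```

The rule itself (a threshold) is fixed by the programmer, but its content is extracted from the examples, which is precisely what "not explicitly programmed" means here.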
An objection was made to my response: that the process is supervised; even though the programming is not explicit, it is nonetheless programming in the form of example-classification pairs. In effect, the computer is just being fed rules, even if we call that process learning. Now, I could have gone on about unsupervised learning, but there’s a better point to make.
The supervised/unsupervised distinction is not the key to whether learning occurs. The defining characteristic of learning is that the computer can correctly classify examples it has not seen before.
An agent that only memorizes the training set can correctly identify previously seen elements, but it cannot deal with unseen ones: its memorized knowledge has nothing to say about them. A learning agent, in contrast, manages to extract some knowledge from the training set that can be used to classify new, unseen elements. The key to learning is thus the ability to generalize from examples: the acquisition of knowledge from specific cases that is applicable in general.
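The contrast can be sketched in a few lines. The data is made up: each "example" is a pair of hypothetical measurements of a handwritten stroke, labeled with the digit it depicts. The memorizer is an exact lookup table; the learner (nearest neighbor, as one simple stand-in for any generalizing method) labels a new point like its closest training point.

```python
# Invented training data: ((height, width), digit label).
train = [((1.0, 0.2), "1"), ((1.1, 0.25), "1"),
         ((0.9, 0.9), "0"), ((1.0, 0.85), "0")]

class Memorizer:
    """Stores exact example -> label pairs; knows nothing else."""
    def fit(self, data):
        self.table = {x: y for x, y in data}
    def predict(self, x):
        return self.table.get(x)  # None for anything unseen

class NearestNeighbor:
    """Generalizes: labels a new point like its closest training point."""
    def fit(self, data):
        self.data = data
    def predict(self, x):
        dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
        return min(self.data, key=lambda ex: dist(ex[0], x))[1]

m, k = Memorizer(), NearestNeighbor()
m.fit(train)
k.fit(train)
unseen = (1.05, 0.22)       # close to the "1" examples, but never seen
print(m.predict(unseen))    # None -- memorization has nothing to say
print(k.predict(unseen))    # 1    -- the learner generalizes
```

Both agents are perfect on the training set; only the second one has extracted anything that applies beyond it.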
However, this is not the end of the story. An agent may have the ability to learn, but that is not enough to guarantee that learning does in fact take place. The extra necessary ingredient is that the target of learning must be learnable. If no “explanation” exists that reproduces the data, then there is nothing to be learned; the data is just “noise”. This is why a phone book cannot be learned, only memorized.
This is another statement of the problem of induction.
Unfortunately, there is no way to tell whether an “explanation” exists or not, short of finding it. Hence the uncomputability of Kolmogorov complexity.
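The phone book makes the point concretely. All the entries below are invented; what matters is that nothing links the shape of a name to its number, so there is no regularity for a learner to extract.

```python
# A phone book as data: no "explanation" relates a name to its number.
book = [("Alice", "555-0138"), ("Bob", "555-0197"), ("Carol", "555-0102")]
table = dict(book)

print(table.get("Bob"))   # seen entry: memorization works
print(table.get("Dave"))  # unseen entry: None, nothing to generalize from

# Even a "learner" can only echo some memorized number for an unseen name,
# e.g. the entry whose name is closest in length -- but since numbers bear
# no relation to names, that guess is no better than any other.
closest = min(book, key=lambda entry: abs(len(entry[0]) - len("Dave")))
print(closest[1])         # a memorized number, not Dave's
```

Contrast this with the character-recognition case, where an underlying regularity (the shapes of letters) does exist and can carry over to unseen examples.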