The deal with Watson

Ferrucci gives a brief semi-technical talk on how Watson works

The question on everyone’s mind is: what does Watson represent in terms of AI advancement? It’s a very advanced piece of natural language processing and machine learning, impressive parallel computing and engineering, and clever domain-specific (i.e., Jeopardy!) trickery. But Watson does not represent a significant step towards general AI, although it’s probably general enough to apply to other question answering (QA) domains.

After all, that’s what the name of the technology, DeepQA, suggests: deeper natural language processing and semantic analysis, but not depth in the sense of real understanding. And this brings us to the question of what exactly we mean by real understanding. The simple answer seems to be: the kind of understanding we associate with humans doing reading comprehension. Of course, using this definition alone leads to the no-win trap that many AI researchers complain about: if the goal necessarily includes the human element, then by definition it is unattainable by a machine.
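
To make “deeper QA” a bit more concrete, here is a minimal, purely illustrative sketch of the kind of pipeline DeepQA-style systems describe: generate candidate answers, score each one against several independent sources of evidence, and rank by a combined confidence. The candidate sources, scorers, and weights below are made up for illustration and are not Watson’s actual components.

```python
# Toy DeepQA-style pipeline: hypothesis generation -> evidence scoring -> ranking.
# Everything here is a placeholder; real systems use many candidate sources,
# hundreds of scorers, and learned weights.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    answer: str
    confidence: float = 0.0


def generate_candidates(question: str) -> List[Candidate]:
    """Stand-in for search over text corpora, knowledge bases, etc."""
    return [Candidate("Toronto"), Candidate("Chicago")]


def keyword_overlap_scorer(question: str, c: Candidate) -> float:
    """Crude lexical evidence: fraction of the answer's tokens found in the question."""
    q_tokens = set(question.lower().split())
    a_tokens = set(c.answer.lower().split())
    return len(q_tokens & a_tokens) / max(len(a_tokens), 1)


def type_scorer(question: str, c: Candidate) -> float:
    """Placeholder for matching the candidate's type to the expected answer type."""
    return 1.0 if "city" in question.lower() else 0.5


SCORERS: List[Callable[[str, Candidate], float]] = [keyword_overlap_scorer, type_scorer]
WEIGHTS = [0.4, 0.6]  # illustrative; in DeepQA-style systems these are learned


def answer(question: str) -> List[Candidate]:
    candidates = generate_candidates(question)
    for c in candidates:
        scores = [score(question, c) for score in SCORERS]
        c.confidence = sum(w * s for w, s in zip(WEIGHTS, scores))
    return sorted(candidates, key=lambda c: c.confidence, reverse=True)


if __name__ == "__main__":
    for c in answer("Which US city has its largest airport named for a WWII hero?"):
        print(f"{c.answer}: {c.confidence:.2f}")
```

The point of the sketch is that the system produces a ranked confidence over candidate answers; nowhere does it need, or claim, human-like comprehension of the question.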

There are two approaches to reaching a useful (as in, not predetermined to produce failure) metric of success: those that concentrate on results, and those that concentrate on the internals that give rise to those results. More on this later.