Wednesday, June 3, 2015

Cognitive Misinformation

Artificial Intelligence is big news these days. Bill Gates, Elon Musk and Stephen Hawking have famously expressed their belief in the power and consequent dangers of AI. The threat of super intelligent machines looms large, and there is a constant barrage of headlines like "Google a step closer to developing machines with human-like intelligence".

Companies like IBM, Google, Baidu and Microsoft have embraced the excitement and exploited it, promoting positive futuristic visions of their own while developing useful AI systems. Watching the Google I/O 2015 keynote, I was amazed that almost half the time was spent talking about new capabilities powered by machine learning.

The problem, though, is that marketing can overtake the scientific facts, and important issues can become obscured to the point of falsehood.

John Kelly, director of IBM Research, has this to say in a Scientific American article:
"The very first cognitive system, I would say, is the Watson computer that competed on Jeopardy!"
This is an impressive claim, but it is hard to know what he means by it, since the first cognitive system is surely the human cognitive system. We must assume he meant to say "first artificial cognitive system". But the problem there is that this is simply not true: there are many older attempts to build artificial cognitive systems. SOAR, for example.

SOAR is a long-running research project embodying a serious general theory of cognition. To quote: "Soar is a general cognitive architecture for developing systems that exhibit intelligent behavior. Researchers all over the world, both from the fields of artificial intelligence and cognitive science, are using Soar for a variety of tasks. It has been in use since 1983, evolving through many different versions to where it is now Soar, Version 9."

Noam Chomsky points out that, in a sense, what he does as a linguist is AI. His theory of language can be regarded as a computational system - a program - which models human linguistic capacity. It is a cognitive system.

Kelly probably meant something quite different by "cognitive system": something more in line with an engineering sense, in agreement with Eric Horvitz, head of the Microsoft Research Redmond lab. He presumably does not claim that Watson is a model of human cognition (as SOAR is), but rather that it is a very clever piece of software with some capabilities reminiscent of human cognition. That is, a machine that can perform tasks that normally only humans can: make inferences from language, learn from facts, and form independent hypotheses. In fact, in his own book Kelly admits that "the goal isn't to replicate human brains, though. This isn't about replacing human thinking with machine thinking. Rather, in the era of cognitive systems, humans and machines will collaborate to produce better results, each bringing their own superior skills to the partnership."

This is probably a good thing, since Watson is not a particularly good model of human cognition. In a much-discussed episode of Jeopardy!, Watson made a baffling mistake. The category was US Cities, and the clue was: "Its largest airport was named for a World War II hero; its second largest, for a World War II battle." The two human contestants wrote "What is Chicago?", for its O'Hare and Midway, but Watson's response was "What is Toronto?" (which, of course, is not a US city). The rough explanation for this big mistake is that the probabilistic heuristics favoured Toronto after weighing up all the evidence. Watson did not "understand" any of the evidence, so it did not "realise" how important the category constraint was in this example, and allowed it to be swamped by the sum of all the other evidence. John Searle puts it like this: "Watson did not understand the questions, nor its answers, nor that some of its answers were right and some wrong, nor that it was playing a game, nor that it won—because it doesn't understand anything."
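To make the "swamping" concrete, here is a toy sketch of weighted-evidence ranking. The features, weights and scores are invented purely for illustration and have nothing to do with Watson's actual scoring pipeline; the point is only that when the category constraint is just one soft feature among many, strong scores on the other features can outvote it.

```python
# Toy illustration (hypothetical weights and scores, not Watson's real model):
# each candidate answer gets a weighted sum of evidence features, and the
# "category_match" feature is just one soft signal among several.
FEATURE_WEIGHTS = {
    "passage_support": 0.5,   # how strongly retrieved text supports the candidate
    "airport_evidence": 0.3,  # evidence relating the candidate to the airports in the clue
    "category_match": 0.2,    # does the candidate fit the "US Cities" category?
}

candidates = {
    "Chicago": {"passage_support": 0.55, "airport_evidence": 0.60, "category_match": 1.0},
    "Toronto": {"passage_support": 0.90, "airport_evidence": 0.90, "category_match": 0.0},
}

def total_score(features):
    """Weighted sum of the evidence features for one candidate."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

best = max(candidates, key=lambda c: total_score(candidates[c]))
print(best)  # Toronto: the category constraint (weight 0.2) is swamped by the rest
```

A system that understood the category would treat "US city" as a hard constraint and rule Toronto out before any scoring; a system that merely weighs evidence cannot tell which signals are negotiable and which are not.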

But I can't end with this useful and happy story, that Watson is a clever machine doing useful things. The ghosts of "strong AI" persist. IBM's web site unwaveringly says: "Watson is built to mirror the same learning process that we have—through the power of cognition. What drives this process is a common cognitive framework that humans use to inform their decisions: Observe, Interpret, Evaluate, and Decide." In a more hostile spirit, Google's Peter Norvig attacks Chomsky's psychological theory of linguistics as a sort of fiction, advocating instead that the current statistical models of language are in fact the correct scientific theories of language (and Chomsky himself some kind of deluded figure): "Chomsky ... must declare the actual facts of language use out of bounds and declare that true linguistics only exists in the mathematical realm, where he can impose the formalism he wants. Then, to get language from this abstract, eternal, mathematical realm into the heads of people, he must fabricate a mystical facility that is exactly tuned to the eternal realm. This may be very interesting from a mathematical point of view, but it misses the point about what language is, and how it works." (my emphasis)

So we come to a difficult point. It seems that for the promotional materials to be right, for Norvig to be right, for Watson to really be the first Cognitive System, these computational systems must be IN FACT HOW PEOPLE WORK. We would need to erase the last 60 years of Cognitive Science, and fill the journals with new theories of statistical cognition.

The idea is already working its way into people's consciousness. In the 2015 Google I/O keynote, Google Photos lead product manager Anil Sabharwal proudly announced that the new Photos application can categorise photos without any manual tagging. This reflects an attitude, emerging from the recent successes of deep learning in object recognition, that neural networks will soon make human intervention in picture identification redundant. But what if they don't? What if there are fundamental and unfathomable difficulties that machines without understanding cannot solve? Then it is a mistake to pursue a path which seeks to eliminate what we know about human cognition from the equation.
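For a sense of what "no manual tagging" means mechanically, here is a minimal sketch using an off-the-shelf pretrained classifier (torchvision's ResNet-18; the file name is a hypothetical placeholder). This is of course not Google Photos' actual pipeline; it only illustrates the mechanic: the network assigns the photo one of a fixed set of categories learned from training data, with no human labelling of that particular photo.

```python
# Minimal sketch: automatic labelling of a single photo with a pretrained
# ImageNet classifier. Illustrative only; not Google's production system.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True)   # weights learned from ImageNet
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("holiday_photo.jpg").convert("RGB")   # hypothetical input file
batch = preprocess(img).unsqueeze(0)                   # shape: [1, 3, 224, 224]

with torch.no_grad():
    scores = model(batch)                              # shape: [1, 1000]

class_index = scores.argmax(dim=1).item()
print("predicted ImageNet class index:", class_index)  # mapped to a label ("beach", "castle", ...) via the ImageNet class list
```

The photo is "categorised" purely by statistical pattern matching against the training set, which is exactly why the failure modes discussed below are possible.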

The problem begins when we start believing that the methods of cognitive computing are right; when we laugh off as a quaint mistake Google's 30-layer neural network thinking that a tall kitchen faucet is a monument, rather than seeing it for what it is: a deep and fundamental problem that illuminates the inherent limitations of statistics trying to play semantics.

I will argue in future posts that meaning - conceptual semantics - is a unique property of the human cognitive system, and machines will not possess semantics for a very very long time, if ever. This will fundamentally limit their ability to perform cognitive tasks. Intelligent systems can only be realised when human-level semantics becomes an integral part of what we refer to as Semantic Symbiotic Systems, where the engineering techniques and the cognitive theories become equals. Together they will build better systems. Chomsky gets to keep his job. Everybody wins.







3 comments:


  1. Thanks for a good summary with a well-reflected perspective; I enjoyed it!

    «Chomsky gets to keep his job» – how I laughed! :-D …but then it struck me that even though I generally don’t worry much about the wellbeing of MIT professors (come to think of it, I may not have worried much about unemployment at all), robots taking over humans’ jobs is one of the dark future projections. I tend to consider that ridiculous, like «mankind won’t ever need more than <…x…> computers». Can’t even remember what number was stated. But hey – I was supposed to play the part of the prejudiced pessimist here, wasn’t I? :-)

    Let me move to another point in your post: «…machines will not possess semantics for a very very long time, if ever». I appreciate the benefit of the doubt you leave open, even though you downplay it. Then again, I do see your point about bad machine/non-human judgment, put on display in cases like the public Watson showcase and the recent photo content recognition examples.

    However, in a longer-reaching perspective, the current level of machine learning should probably be considered as being in its infancy – comparable to primitive stages of warfare, like going to battle just throwing rocks. So what happens if we sit back and observe the ongoing developments, say for a decade or two? Remember, we need to add the contribution of each advancement step onto the trajectory. Some of us believe (fear?) that we may move into an uncertain futurescape of accelerating accelerations, popularly characterized as ‘the second half of the chessboard*’. Or: basically no (little?) scientific predictability.

    *): [For potentially unfamiliar readers:] This concept is about a stage of maturity at which the Moore’ish law (originally related to a fairly steady pace of doubling transistor counts per die every 18-24 months, but that’s beside the point) makes such an impact that it can no longer be overlooked. The phrase is largely credited to Ray Kurzweil, but is frequently referred to elsewhere, e.g. in titles like «Race Against the Machine» and «The Second Machine Age», written by two other MIT professors: Erik Brynjolfsson & Andrew McAfee.

  2. Great comments, Andrew, thank you.
    I just have two things in reply.
    First, it is true that the future is a wonderful place full of potential for AI. We have been promised it since the late 50s. One of my favourites is this one from Minsky in 1970: "In from three to eight years we will have a machine with the general intelligence of an average human being." I think he must have meant that the intelligence of the average human would go into rapid decline after 1970. In that case his prediction would be right!
    My second point is that even IF we get there one day, we shouldn't talk right now as if we were already there. Or even close.

  3. This comment has been removed by the author.

