Saturday, June 20, 2015

NLDB2015, How to Talk to a Cognitive Computer

I have just participated in the NLDB2015 conference, which is dedicated to research and practice involving natural language in information systems. I presented a discussion paper on cognitive computing, which was well received and generated a lot of discussion. The slides are on SlideShare.

The slides contain some interesting quotes that show the state of confusion about what cognitive computing might mean, with wildly contradictory claims, sometimes from the same person! Opinions range from "Watson is a Jeopardy! playing machine" to "Watson is an example of Strong AI".

My main technical point is that current approaches are generally not the right ones to capture conceptual semantics. Looking at distributional properties of words, for instance, can give very good results in certain tasks, but we should never confuse their representation with "proper" human-like semantics. For example, if a machine learns the distributional properties of the verb "kill", it could "understand" the sentence "The man killed the dog" and it could answer the question "Who was killed?". But it could not answer the question "Who died?". Or "Is the man a murderer?". This would require more elaborate structural knowledge about the word meaning, and in the case of the latter question, about the subtleties of inference among concepts.
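To make the point concrete, here is a minimal, hypothetical sketch (the regex, the toy "reader", and the hand-written entailment table are all illustrative inventions, not any real system): a surface pattern learned for "killed" answers questions phrased with the same verb, but answering "Who died?" requires an explicitly inserted inference rule that distributional statistics alone do not supply.

```python
# Hypothetical sketch: surface pattern matching vs. lexical inference.
import re

SENTENCE = "The man killed the dog"

# A surface frame "X killed Y", as might be induced from corpus patterns.
match = re.match(r"(?i)(the \w+) killed (the \w+)", SENTENCE)
agent, patient = match.group(1), match.group(2)

# Hand-inserted structural knowledge -- the "cognitive hack":
# kill(x, y) entails die(y).
ENTAILMENTS = {"killed": "died"}

def answer(question):
    if question == "Who was killed?":
        return patient  # same verb as the sentence: surface match suffices
    if question == "Who died?":
        # Only answerable because we added the entailment rule by hand.
        return patient if "killed" in ENTAILMENTS else None
    return None

print(answer("Who was killed?"))  # -> the dog
print(answer("Who died?"))        # -> the dog, but only via the added rule
```

The "Is the man a murderer?" question is harder still: it would need knowledge about intent and legality, not just a one-line entailment, which is exactly the kind of structural knowledge the post is arguing for.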

If the Cognitive Computing enterprise really were Strong AI, then I would accept people arguing against me, because they would be trying to prove that distributional approaches really are how humans encode semantics, and that one day their systems could answer the trickiest of questions. But if CC is not Strong AI (and I don't think it is), then we don't have to be so stubborn. We can admit that the system is just a functional approximation to human cognition, and where it fails we can insert a cognitive hack to fix it.

This is the idea behind Symbiosis: two different systems that work together. The emphasis is on "different" because we acknowledge that each system will have limitations where the other won't, and so the two systems must help each other.

As an example, computers are not very good at disambiguation, so they should ask us as often as possible to help them with the task. They might have a brain the size of a planet when it comes to pattern matching and sheer storage, but they shouldn't be embarrassed that their language skills are not quite at the level of a three-year-old. If the engineers got over their insistence that they are building a "real" cognitive architecture and allowed us to help the machines more, then the machines might be able to help us more.
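One way this symbiosis could work in practice, sketched minimally and hypothetically (the function, threshold, and sense labels are all my own invention for illustration): the machine commits to a reading only when its confidence is high, and otherwise defers the choice to the human instead of guessing.

```python
# Hypothetical sketch of symbiotic disambiguation: below a confidence
# threshold, the machine asks the human rather than picking a sense itself.

def disambiguate(word, senses, confidences, ask_human, threshold=0.8):
    """Return a sense key; defer to the human when confidence is low."""
    best = max(confidences, key=confidences.get)
    if confidences[best] >= threshold:
        return best
    return ask_human(word, senses)  # symbiosis: the human fills the gap

senses = {"bank/finance": "a financial institution",
          "bank/river": "the side of a river"}
confidences = {"bank/finance": 0.55, "bank/river": 0.45}

# A scripted "human" stands in for an interactive prompt here.
choice = disambiguate("bank", senses, confidences,
                      ask_human=lambda w, s: "bank/river")
print(choice)  # -> bank/river
```

The design choice is the threshold: set it high and the machine asks often (more human effort, fewer errors); set it low and it guesses more, which is exactly the trade-off a symbiotic system should expose rather than hide.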

