
J.J. nodded. “I’m with you so far.”
“The old, first-generation Expert System programs could each do only one sort of thing — play chess, or diagnose kidney disease, or design a computer circuit. And each of those programs would do the same thing over and over again, even if the results of doing so were unsatisfactory. Expert Systems were the first step along the road to AI, artificial intelligence, because they do think — in a very simple and stereotyped manner. The self-learning programs were the next step. And I think my new learning-learning type of program will be the next big step, because it can do so much more without breaking down and getting confused.”
“Give me an example.”
“Do you have a languaphone and a voxfax in your office?”
“Of course.”
“Then there are two perfect examples of what I am talking about. Do you take calls from many foreign countries?”
“Yes, a good number. I talked with Japan quite recently.”
“Did the person you were talking to hesitate at any time?”
“I think so, yes. His face sort of froze for an instant.”
“That was because the languaphone was working in real time. Sometimes there is no way to translate a word’s meaning instantly, because you can’t tell what the word means until you have heard the next word — like the words ‘to,’ ‘too,’ and ‘two.’ It’s the same with an adjective like ‘bright,’ which might mean shining or might mean intelligent. Sometimes you may have to wait for the end of a sentence — or even the next sentence. So the languaphone, which animates the face, may have to wait for a complete expression before it can translate the Japanese speaker’s words into English — and animate the image to synchronize the lip movements to the English words. The translator program works incredibly fast, but even so it must sometimes freeze the image while it analyzes the sounds and the word order in your incoming call.
