Earlier this summer, Facebook Artificial Intelligence Research (FAIR) ran an experiment in which two artificial intelligence programs were set up to negotiate with each other.
“Similar to how people have differing goals, run into conflicts, and then negotiate to come to an agreed-upon compromise, the researchers have shown that it’s possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes,” the research team explained in a blog post.
The experiment was stopped after the two AI programs, named Alice and Bob, started speaking in a language not comprehensible to us. And it was not because of fear that they might be conspiring against humanity.
Experiment stopped after AIs started using their own language
The first thought that pops into your head may be that the programmers were scared of some doomsday scenario where artificial intelligence takes over the world, and if you have heard of this story before, you probably saw more than a couple of alarming headlines in the media. While the panic is not unexpected, the reality of why the programs were stopped is much more mundane.
What the researchers wanted were chatbots that could talk efficiently to people, not to each other. So when the bots started talking in their own language, they shut down that experiment, as that was not where their interests lay. And the reason behind the language creation is quite simple: the programmers did not require Alice and Bob to speak in English, which resulted in them creating their own language.
“In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was,” the technology website Gizmodo explains in its article.
And despite how creepy the language may appear, all the programs were doing was trying to split books, hats, and balls between themselves. Since the whole point of the experiment was to improve interaction between artificial intelligence and humans, having two programs communicate in a language we do not understand would be useless. Thus, this particular experiment was ended. That does not mean the whole project was shut down.
Here is the conversation between Bob and Alice that sparked the panic:
Bob: i can i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i i can i i i everything else . . . . . . . . . . . . . .
Alice: balls have a ball to me to me to me to me to me to me to me
Bob: i . . . . . . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
Bob: you i i i i i everything else . . . . . . . . . . . . . .
Alice: balls have 0 to me to me to me to me to me to me to me to me to
Bob: you i i i everything else . . . . . . . . . . . . . .
Alice: balls have zero to me to me to me to me to me to me to me to me to
No, the AIs are not using an incomprehensible language to plan the doom of humanity. There is no sinister reason behind them creating their own language. Programmers simply did not command them to use a language that humans would understand.
“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra explained. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
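The repetition-based shorthand Batra describes can be illustrated with a toy sketch. This is purely an illustration of the idea, not FAIR's actual model: the function names and encoding scheme below are invented for this example.

```python
# Toy illustration (not FAIR's actual system): encoding a quantity by
# repeating a token, the kind of shorthand Batra describes, where saying
# "the" five times might come to mean "five copies of this item".

def encode(item: str, count: int) -> str:
    # Repeat the item's name `count` times, e.g. "ball ball ball" for 3 balls.
    return " ".join([item] * count)

def decode(message: str) -> tuple[str, int]:
    # Recover the item and its count from the repeated-token message.
    tokens = message.split()
    return tokens[0], len(tokens)

msg = encode("ball", 3)          # "ball ball ball"
item, count = decode(msg)        # ("ball", 3)
```

Such a code is perfectly unambiguous to the two agents that converged on it, even though it reads as gibberish to a human observer, which is why the transcript above looks so strange.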
Fueling the paranoia many feel when it comes to AIs
The sensational headlines that followed the end of the experiment just go to show how little we understand about artificial intelligence. And the understanding that we do have comes from the many end-of-humanity scenarios we are used to seeing in popular media. Every move forward researchers make with AIs is met with doubt and fear about our future.
This experiment involved two programs simply trying to negotiate with each other; that alone says a lot about how far we are from genuine artificial intelligence. However, business magnate Elon Musk warns that AIs pose more danger than we think.
“I have exposure to the most cutting edge AI, and I think people should be really concerned by it,” he says. “AI is a fundamental risk to the existence of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not — they were harmful to a set of individuals within society, of course, but they were not harmful to society as a whole,” he continues.
Facebook’s Mark Zuckerberg, on the other hand, has called the actions of those spreading paranoia irresponsible.
“I have pretty strong opinions on this. I am optimistic,” says Zuckerberg. “I think you can build things and the world gets better. But with AI especially, I am really optimistic. And I think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible,” he continues.