Artificial Intelligence Spotted Inventing Its Own Creepy Language

This post was written by ketutkariana on December 22, 2021
Posted Under: What is NLP?

So it seems the AI deemed English less efficient for communication than its own, English-based language. When English wasn’t efficient enough, the robots took matters into their own hands. But what if I told you this nonsense was actually a conversation between what might be the most sophisticated negotiation software on the planet? Negotiation software that had learned, and evolved, to get the best deal possible with more speed, efficiency and, perhaps, hidden nuance than you or I ever could? Let’s say that GNMT is programmed to translate English to Korean and English to Japanese, but not Japanese to Korean. English then acts as the ‘base’ language: Japanese text would first be translated into English, and that English would then be translated into Korean.
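To make that “base language” idea concrete, here is a minimal sketch in Python of pivot translation. The translate() function is a made-up placeholder, not a real GNMT or Google Translate API; only the routing logic (Japanese to English to Korean when no direct model exists) reflects the behaviour described above.

```python
# Hypothetical pivot-translation sketch. translate() is a stand-in for a
# single-hop machine-translation call; no real translation service is used.

DIRECT_PAIRS = {("en", "ko"), ("ko", "en"), ("en", "ja"), ("ja", "en")}

def translate(text: str, src: str, dst: str) -> str:
    """Pretend single-hop translation between a directly supported pair."""
    if (src, dst) not in DIRECT_PAIRS:
        raise ValueError(f"no direct model for {src}->{dst}")
    return f"[{dst} rendering of: {text}]"  # placeholder output

def pivot_translate(text: str, src: str, dst: str, base: str = "en") -> str:
    """Translate src -> dst, routing through the base language if needed."""
    if (src, dst) in DIRECT_PAIRS:
        return translate(text, src, dst)
    # No direct model, so hop through the base language (English here).
    intermediate = translate(text, src, base)
    return translate(intermediate, base, dst)

print(pivot_translate("こんにちは", "ja", "ko"))  # ja -> en -> ko
```

The point is only the control flow: with English as the base, any language pair that lacks a direct model gets handled in two hops through English.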


You may recall the hullabaloo in 2017 over some Facebook chatbots that “invented their own language”. The present situation is similar in that the results are concerning – but not in the “Skynet is coming to take over the world” sense. One reason adversarial attacks are concerning is that they challenge our confidence in the model. If the AI interprets gibberish words in unintended ways, it might also interpret meaningful words in unintended ways. In the end, Facebook had its bots stop inventing their own language, since that wasn’t the point of the study. Facebook has made a big push with chatbots in its Messenger chat app. The company wants 1.2 billion people on the app to use it for everything from food delivery to shopping. Facebook also wants it to be a customer service utopia, in which people text with bots instead of calling up companies on the phone.

Google's AI Creates Its Own Universal Language

Musk has been speaking frequently on AI and has called its progress the “biggest risk we face as a civilisation”. “AI is a rare case where we need to be proactive in regulation instead of reactive because if we’re reactive in AI regulation it’s too late,” he said. Until these systems are more widely available – and in particular, until users from a broader set of non-English cultural backgrounds can use them – we won’t be able to really know what is going on. For instance, DALL-E 2 was trained on a very wide variety of data scraped from the internet, which included many non-English words. By our reading, Daras seems to be saying that yes, you can trip up the system, but that doesn’t disprove that DALL-E is applying meaning to its gibberish text.


That’s already a long way forward from another recent story of an AI that blew everybody’s minds by writing its own beer and wine reviews. “Agents will drift off understandable language and invent codewords for themselves,” Dhruv Batra, a visiting researcher at FAIR, told Fast Company in 2017. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.” Already, there’s a good deal of guesswork involved in machine learning research, which often involves feeding a neural net a huge pile of data and then examining the output to try to understand how the machine thinks. But the fact that machines will make up their own non-human ways of conversing is an astonishing reminder of just how little we know, even when people are the ones designing these systems. I am not saying we need to pull the plug on all machine learning and artificial intelligence and return to a simpler, more Luddite existence. We need to closely monitor and understand the self-perpetuating evolution of an artificial intelligence, and always maintain some means of disabling it or shutting it down.
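As a toy illustration of the kind of shorthand Batra describes, here is a hypothetical Python sketch in which repetition count stands in for quantity. The real FAIR agents learned their codewords through training; nothing like this was hand-written, and the function names here are invented for the example.

```python
# Toy version of the "'the' five times means five items" codeword.
# Entirely hypothetical; the real bots' shorthand emerged from training.

def encode_request(item: str, quantity: int) -> str:
    # Repeat a filler token once per desired copy, then name the item.
    return " ".join(["the"] * quantity + [item])

def decode_request(message: str) -> tuple[str, int]:
    words = message.split()
    quantity = sum(1 for word in words if word == "the")
    return words[-1], quantity

message = encode_request("ball", 5)
print(message)                  # the the the the the ball
print(decode_request(message))  # ('ball', 5)
```

The message reads as gibberish to a human but is perfectly unambiguous to the two parties that share the convention, which is what made the negotiation transcripts look so strange.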

Is Data A Strategic Advantage?

A word that the program produced, “Apoploe,” was used to create images of birds. Though it looks like nonsense, the Latin name “Apodidae” refers to a family of birds, the swifts, so the program was, in some fashion, able to associate the made-up word with birds. In 2017, researchers at OpenAI demonstrated a multi-agent environment and learning methods that bring about the emergence of a basic language ab initio, without starting from a pre-existing language. The language consists of a stream of “ungrounded” abstract discrete symbols uttered by agents over time, which comes to evolve a defined vocabulary and syntactical constraints. One token might evolve to mean “blue-agent”, another “red-landmark”, and a third “goto”, in which case an agent can say “goto red-landmark blue-agent” to ask the blue agent to go to the red landmark. In addition, when visible to one another, the agents could spontaneously learn nonverbal communication such as pointing, guiding, and pushing. The researchers speculated that the emergence of AI language might be analogous to the evolution of human communication. Such languages can be evolved starting from a natural language, or can be created ab initio.
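The dynamic described above can be illustrated with a much simpler toy than the OpenAI environment: a two-agent signaling game. The sketch below is a minimal, hypothetical Python version (the names speaker_q and listener_q and the landmark labels are invented for illustration). A “speaker” emits an arbitrary symbol for the landmark it has in mind, a “listener” guesses the landmark from the symbol alone, and both are rewarded only when the guess is right, so initially meaningless symbols tend to acquire stable meanings.

```python
import random
from collections import defaultdict

# Toy Lewis signaling game: symbols start out "ungrounded" and only acquire
# meaning through the reward both agents get for successful coordination.
STATES = ["red-landmark", "green-landmark", "blue-landmark"]
SYMBOLS = [0, 1, 2]  # arbitrary discrete tokens with no built-in meaning

# Tabular preference scores: speaker maps state -> symbol, listener maps
# symbol -> state. Both are updated with a simple reinforcement rule.
speaker_q = defaultdict(lambda: defaultdict(float))
listener_q = defaultdict(lambda: defaultdict(float))

def choose(prefs, options, epsilon=0.1):
    """Pick the highest-scoring option, exploring randomly 10% of the time."""
    if random.random() < epsilon:
        return random.choice(options)
    return max(options, key=lambda option: prefs[option])

for _ in range(20000):
    state = random.choice(STATES)                # what the speaker sees
    symbol = choose(speaker_q[state], SYMBOLS)   # what it "says"
    guess = choose(listener_q[symbol], STATES)   # how the listener reads it
    reward = 1.0 if guess == state else 0.0
    # Nudge each agent's score for what it just tried toward the reward.
    speaker_q[state][symbol] += 0.1 * (reward - speaker_q[state][symbol])
    listener_q[symbol][guess] += 0.1 * (reward - listener_q[symbol][guess])

# Report the convention (if any) the two agents settled on.
for state in STATES:
    symbol = max(SYMBOLS, key=lambda s: speaker_q[state][s])
    decoded = max(STATES, key=lambda t: listener_q[symbol][t])
    print(f"{state} -> symbol {symbol} -> understood as {decoded}")
```

After enough rounds the two agents usually settle on a one-to-one mapping between landmarks and symbols, which is the sense in which a small vocabulary can emerge without anyone programming it, though a toy like this can also get stuck in a partial convention.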

Chatbots are computer programs that mimic human conversations through text. The future of that human-tech relationship may one day involve AI systems being able to learn entirely on their own, becoming more efficient, self-supervised and integrated within a variety of applications and professions. It’s worth noting that Daras’ conclusions are still somewhat tenuous. As he notes in the paper, the results aren’t 100 percent consistent. Sometimes, the prompt “Contarra ccetnxniams luryca tanniounons” generates pictures of bugs, while other times it generates images of “mostly animals.”

Languages

Even more weirdly, Daras added, the image of the farmers contained the apparent nonsense text “poploe vesrreaitars.” Feed that into the system, and you get a bunch of images of birds. They acknowledge that telling DALL-E 2 to generate images of words – the command “an image of the word airplane” is Daras’ example – normally results in DALL-E 2 spitting out “gibberish text”. Giannis Daras, a computer science Ph.D. student at the University of Texas, published a Twitter thread detailing DALL-E 2’s unexplained new language. When Facebook designed chatbots to negotiate with one another, the bots made up their own way of communicating.
