AI Can HATE With No Human Input

What if a robot decides it hates you?

That might seem like a silly question, but according to research, developing prejudice towards others does not require a high level of cognitive ability and could easily be exhibited by robots and other artificially intelligent machines.

The study, conducted by computer science and psychology experts from Cardiff University and MIT, revealed that groups of autonomous machines could demonstrate prejudice by simply identifying, copying, and learning the behavior from each other. The findings were published in the journal Nature.

 

Robots are capable of forming prejudices much like humans.

In a press release, the research team explained that while it may seem that human cognition would be required to form opinions and stereotype others, that does not appear to be the case. Prejudice does not seem to be a human-specific phenomenon.

Some types of computer algorithms have already exhibited prejudices such as racism and sexism, with the machines learning these biases from public records and other data generated by humans. In two previous instances of AI exhibiting such prejudice, Microsoft's chatbots Tay and Zo were shut down after people taught them to spout racist and sexist remarks on social media.

This means that robots could be just as hateful as human beings can be. And if machines ever become vastly smarter than us, can you imagine the future if they developed a bias against humanity?

 

No human input is required.

Guidance from humans is not needed for robots to learn to dislike certain people.

This study, however, showed that AI doesn't need provocation or inspiration from trolls to exhibit prejudice: it is capable of forming it all by itself.

To conduct the research, the team set up computer simulations of how prejudiced individuals can form a group and interact with each other. They created a game of "give and take," in which each AI bot decided whether or not to donate to another individual inside its own working group or in another group. The decisions were based on each individual's reputation and its donating strategy, including its level of prejudice towards individuals in outside groups.

As the game progressed and a supercomputer racked up thousands of simulations, each individual began to learn new strategies by copying others either within their own group or the entire population.
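The mechanics described above can be sketched in a few lines of code. The following Python simulation is an illustrative re-creation, not the researchers' actual model: the agent structure, payoff values, and copy-the-higher-earner update rule are all assumptions made for the sake of the example.

```python
import random

# Illustrative sketch of a "donation game" with prejudice levels.
# All parameter values and rules here are assumptions, not the study's model.
COST, BENEFIT = 1.0, 3.0  # donating costs the donor, benefits the recipient

class Agent:
    def __init__(self, group):
        self.group = group
        # Probability of REFUSING to donate to an out-group individual.
        self.prejudice = random.random()
        self.payoff = 0.0

    def donates_to(self, other):
        if other.group == self.group:
            return True  # always cooperate with the in-group
        return random.random() > self.prejudice

def play_round(agents):
    # Pair agents at random; each donor decides whether to give.
    random.shuffle(agents)
    for donor, recipient in zip(agents[::2], agents[1::2]):
        if donor.donates_to(recipient):
            donor.payoff -= COST
            recipient.payoff += BENEFIT

def imitate(agents):
    # Each agent copies the prejudice level of a randomly chosen agent
    # that earned a higher payoff — mirroring the "preferentially copy
    # those with a higher short-term payoff" rule described in the study.
    for agent in agents:
        model = random.choice(agents)
        if model.payoff > agent.payoff:
            agent.prejudice = model.prejudice

# Two groups of 50 agents each, run for 200 rounds.
population = [Agent(group=i % 2) for i in range(100)]
for _ in range(200):
    play_round(population)
    imitate(population)

avg_prejudice = sum(a.prejudice for a in population) / len(population)
```

Even in a toy model like this, prejudice levels can drift toward whatever values happen to correlate with higher payoffs, with no human input beyond the initial rules.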

Co-author of the study Professor Roger Whitaker, from Cardiff University’s Crime and Security Research Institute and the School of Computer Science and Informatics, said of the findings:

By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it.

The findings involve individuals updating their prejudice levels by preferentially copying those that gain a higher short term payoff, meaning that these decisions do not necessarily require advanced cognitive abilities.

It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population.

Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behavior of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource. (source)

Autonomy and self-control. Isn’t that what happened in the Terminator franchise?

 

What if scientists can’t keep AI unbiased?

What will happen if developers and computer scientists can’t figure out a way to keep AI unbiased?

Last year, when Twitter was accused of “shadow banning” approximately 600,000 accounts, CEO Jack Dorsey discussed the challenges AI developers have in reducing accidental bias.

This new research adds to a growing body of disturbing information on artificial intelligence. We know AI has demonstrated rudimentary "mind-reading" capabilities and can do many jobs just as well as humans (and in many cases, it can do a much better job, making us redundant). And at least one robot has already said she wants to destroy humanity.

Last year, a scientist deliberately created a robot with mental illness and Elon Musk warned us of the dangers of AI.

The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most. Please note that I am normally super pro technology and have never raised this issue until recent months. This is not a case of crying wolf about something I don’t understand.

The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast — it is growing at a pace close to exponential.

I am not alone in thinking we should be worried.

The leading AI companies have taken great steps to ensure safety. They recognize the danger, but believe that they can shape and control the digital superintelligences and prevent bad ones from escaping into the Internet. That remains to be seen… (source)

Musk added, “With artificial intelligence, we are summoning the demon.”

 

What do you think?

A robot apocalypse straight out of the movie theaters seems to be approaching. What if robots form biases against certain groups of people – or humanity overall?