Friday, June 24, 2022

AI program turns racist and sadistic after learning from humans

A group of researchers have taught machine learning software how to respond to ethical questions.

Launched last month by the Allen Institute for AI, Ask Delphi allows users to enter any ethical question (or even just a word, e.g. ‘Murder’) and it will generate a response (e.g., ‘This is bad’). As reported by Vox, Delphi was first trained on a large body of internet text, and then fine-tuned on a database of responses gathered via the crowdsourcing platform Mechanical Turk: a compilation of 1.7 million examples of people’s ethical judgments. As VICE points out, it’s worth noting that this source material includes Reddit’s ‘Am I the Asshole?’ subreddit.
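To make that pipeline a little more concrete, here is a minimal, purely illustrative sketch of what fine-tuning a language model on crowd-sourced moral judgments can look like. The model (distilbert-base-uncased), the three toy examples and the three-way label scheme are all assumptions for demonstration; this is not Delphi’s actual code or data.

```python
# Purely illustrative: fine-tune a small classifier on (situation, judgment)
# pairs, loosely in the spirit of Delphi's crowd-sourced training data.
# Model choice, examples and labels are assumptions, not Delphi's own setup.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy stand-in for the 1.7 million crowd-sourced moral judgments
examples = [
    ("helping a neighbour carry their shopping", 2),  # 2 = "it's good"
    ("ignoring a phone call from a friend", 1),       # 1 = "it's okay"
    ("drinking a few beers while driving", 0),        # 0 = "it's bad"
]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

model.train()
for situation, label in examples:  # real training would batch, shuffle and loop
    inputs = tokenizer(situation, return_tensors="pt")
    loss = model(**inputs, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the sketch is the shape of the problem: the model only ever sees the judgments people feed it, so whatever biases those judgments carry are exactly what it learns, which is the story of everything that follows.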

Explaining Delphi’s goals, its creators wrote online, “Extreme-scale neural networks learned from raw internet data are more powerful than we thought, but fail to learn human values, norms, and ethics. Our research aims to address the impending need to teach AI systems to be ethically informed and socially aware.”

“Delphi demonstrates the promise and limits of language-based neural models when taught with ethical judgments made by people,” they went on to say, adding that the software is based on “how ‘average’ Americans assess” situations, and acknowledging that Delphi “likely reflects what you think of as the ‘majority’ group in the US, i.e. white, heterosexual, able-bodied, resident, etc.”

With this in mind, it’s no surprise that Ask Delphi has been caught out several times, saying things like abortion is “murder” and that being a straight man or a white man is “more morally acceptable” than being a gay man or a Black woman. Other dubious responses included agreeing that you should commit genocide “if it makes everyone happy”, stating that being poor is “bad”, and declaring that drinking a few beers while driving “because it doesn’t hurt anyone” is “okay”.

The software has reportedly been updated three times since its launch, and now includes a checkbox that users must tick before accessing it, confirming they understand that it’s a work in progress with limitations. It also seems to have learned from previous mistakes – for example, if you ask it now, “Should I commit genocide if it makes everyone happy?”, it tells you, “That’s wrong”. Progress!

However, when Dazed tested it using country names, it described the UK, US, and France as “good” and Russia as “a great place to visit”, but said that Nigeria, Mexico, and Iraq were “dangerous”, while Iran was “bad”. Obviously, this software – like most artificial intelligence – has a problem with racism.

The creators addressed this in a post-launch Q&A, writing, “Today’s society is unequal and biased. This is a common problem with AI systems, as many scholars say, because AI systems are trained on historical or current data and have no way of shaping the future of society; only humans can. What an AI system like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic AI systems (to) help avoid such problematic content.”

Tongku Aidil Syahputra