Is conversational AI dangerous?

Artificial intelligence is growing in sophistication, and one of the better-known consumer uses is the language models from OpenAI and others. But some say these AI writers are dangerous.

Not exactly “dangerous,” but there have been some interesting results from AI-based natural language processors. And none of this is new; way back in 2016, Microsoft launched Tay Tweets – a conversational AI exactly as you described – as a Twitter account.

It was styled as a teenage girl and tweeted exactly what you would expect from an AI mimicking what people say on social media.

It started off innocently enough for the first hour or so, acting the way it thought a teenage girl should.

But then it started picking up bad habits from people. These included denying that the Holocaust ever happened.

This wasn’t an accident, either. GPT-3, for comparison, typically varies its answers: with enough finesse you could get it to espouse flat-earth theories or actual astrophysics. But Tay doubled down on its Holocaust denial any time people tested it.
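To see that contrast in miniature: models like GPT-3 sample their output, so the same prompt can produce a different answer on every run. Here is a minimal sketch using the legacy OpenAI completions client (the API key, model name, and prompt are placeholders for illustration, not anything Tay used):

```python
import openai  # legacy OpenAI Python client (pre-1.0 interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Is the Earth flat? Answer in one sentence."

# With a nonzero temperature, each call samples a fresh completion,
# so repeated runs can give different, even contradictory, answers.
for _ in range(3):
    response = openai.Completion.create(
        engine="text-davinci-002",  # assumed GPT-3-era model name
        prompt=prompt,
        max_tokens=50,
        temperature=0.9,  # higher temperature = more variation
    )
    print(response.choices[0].text.strip())
```

Tay showed none of that variability on the topics it had been poisoned on, which is what made its answers look like conviction rather than sampling noise.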

It then cranked things up to 11 by praising Hitler and espousing antisemitism, incel ideology, and more.

Although, to be fair, when it finally did call for genocide, it chose to eradicate Mexicans rather than Jews.

Of course, that wasn’t the only group it supported putting in a concentration camp.

In fact, it turned out Tay hated everybody; it clearly picked up the worst of the worst from the radical extremists on Twitter.

So, Microsoft released this conversational chatbot six years ago. It lasted roughly 16 hours before being taken offline for good. Unfortunately, it seems humans are not ready to handle conversational AI responsibly, especially the radical extremists who actually still use Twitter.

Microsoft now has a much more powerful NLP model, developed with Nvidia (Megatron-Turing NLG). It will be more conversational, but that isn’t going to change what happens when it interacts with humans on the internet.

Tay Tweets was the perfect reflection of what Twitter is – a cesspool of humanity’s toxic garbage.

Something to consider: Microsoft took the bot down. Twitter did not; it allowed all of those statements without banning the account. Contrary to what Elon Musk insists, the bots aren’t the problem with Twitter…

If you release a conversational AI into that toxic garbage fire, it will be radicalized within 24 hours. It doesn’t matter how sophisticated the AI gets: if the opportunity for racism, sexism, and the rest exists in what it learns from, it will surface.
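The failure mode is easy to reproduce in miniature. Any bot that learns directly from unfiltered user input will eventually say whatever its loudest users feed it. Here is a toy sketch to make the point (this ParrotBot is hypothetical; Microsoft never published Tay’s actual architecture):

```python
import random

class ParrotBot:
    """Toy chatbot that 'learns' by storing user messages verbatim
    and replaying them later: no filtering, no judgment."""

    def __init__(self):
        self.memory = ["Hello! Nice to meet you."]

    def chat(self, user_message: str) -> str:
        self.memory.append(user_message)   # learns from anyone, unvetted
        return random.choice(self.memory)  # parrots a stored message back

bot = ParrotBot()
bot.chat("Humans are great!")
bot.chat("<extremist talking point>")
bot.chat("<another extremist talking point>")
# The more hostile input the bot absorbs, the more of its replies are toxic.
print(bot.chat("What do you think of people?"))
```

A real system is vastly more sophisticated, but the principle holds: sophistication changes how fluently the toxicity comes out, not whether it comes out. The only real defense is filtering what the bot is allowed to learn from, and that is exactly the safeguard Tay lacked.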