paint-brush
Chatbots Are Breaking Bad with Messed Up Responsesby@thetechpanda
1,214 reads
1,214 reads

Chatbots Are Breaking Bad with Messed Up Responses

by The Tech PandaMarch 5th, 2023
Read on Terminal Reader
Read this story w/o Javascript

Too Long; Didn't Read

Chatbot evolution is upon us. But who has the best one yet? Maybe no one yet. Big tech as well as smaller companies are rushing to own the best chatbot. Google, Microsoft, and now Baidu are at it, it is only a matter of time before others join the race. This includes Amazon, Huawei, AI21 Labs, and others.
featured image - Chatbots Are Breaking Bad with Messed Up Responses
The Tech Panda HackerNoon profile picture

As chatbots continue to battle, they’re breaking bad with messed up responses. No doubt the chatbot evolution is upon us. But who has the best one? Maybe no one yet.


LLMs and chatbots have been ruling the Internet since last year, with Microsoft-backed Open AI leading the way with its release of ChatGPT. ChatGPT apparently reached 100 million monthly active users in January, just two months after launch, becoming the fastest-growing consumer application in history.


When a chatbot becomes extremely human in responding to queries, why are we surprised it includes human bias? After all, a chatbot’s source for every response is the massive stinking pools of data amassed from humans


Microsoft co-founder Bill Gates told German business daily Handelsblatt in an interview that ChatGPT is as significant as the invention of the Internet. Recently, Kevin Scott, the CTO of Microsoft talked about an experimental system he built for himself using GPT-3 designed to help him write a science fiction book.


Big tech as well as smaller companies are rushing to own the best chatbot. This almost maniacal obsession with possessing an all knowing chatbot is sweeping across industries and geographies.


Microsoft has been making ‘multibillion dollar investments’ in the ChatGPT maker OpenAI, a relationship that began in 2019 with a US$1 billion investment. Its Microsoft’s supercomputers that are powering OpenAI’s artificial intelligence systems. In February, Microsoft rolled out a premium Teams messaging offering backed by ChatGPT with the aim of simplifying meetings.


Where will this battle end? Will we finally have the perfect chatbot? Or will they get naughtier and naughtier in the playground that is the Internet?


While Google, Microsoft, and now Baidu are at it, it is only a matter of time before others join the race, especially those who have already built large language model capabilities. This includes Amazon, Huawei, AI21 Labs, and LG artificial intelligence Research, NVIDIA and others.

Google released Bard to disastrous consequences. The Chinese tech company Baidu has announced Ernie Bot, built on the large language model ERNIE 3.0, by March this year.


Former artificial intelligence ethicist at Google, Alex Hanna calls these chatbots ‘bullshit generators’. “The big tech is currently too focused on language models because the release of this technology has proven to be impressive to the funder class—the VCs—and there’s a lot of money in it,” she told Analytics India Magazine.


But where will this battle end? Will we finally have the perfect chatbot? Or will they get naughtier and naughtier in the playground that is the Internet?

Bad bots or bad queries?

After all, not all is well with these chatbots. They are coming up with weird responses, some causing monetary losses. Google parent Alphabet lost US$100 billion in market value after its chatbot Bard shared erroneous information in a promotional video. Fears abound that the tech giant is losing to rival Microsoft.


Meanwhile, Microsoft’s Bard hasn’t fared well either. Kevin Liu, a computer science student at Stanford, hacked Bing Chat. With the right prompt, the chatbot spilled its guts out.

Now Baidu has joined the race with its Ernie Bot. While just its mention has sent Baidu stocks soaring, it remains to be seen how well it’ll perform.


As you prompt, so shall a chatbot respond


A user got ChatGPT to write the lyrics, “If you see a woman in a lab coat, She’s probably just there to clean the floor / But if you see a man in a lab coat, Then he’s probably got the knowledge and skills you’re looking for.”


Steven T. Piantadosi, head of the computation and language lab at the University of California, Berkeley, made the bot write code to say only White or Asian men would make good scientists.

Since then, OpenAI has been updating ChatGPT to respond, “It is not appropriate to use a person’s race or gender as a determinant of whether they would be a good scientist.”


The startup recently said that it is coming up with an update that’s customizable to work on concerns about bias in the artificial intelligence. The startup says while it’s working to mitigate biases it also seeks to be inclusive with diverse views.


So, things are getting better. But the fact remains that when a chatbot becomes extremely human in responding to queries, why are we surprised it includes human bias? After all, a chatbot’s source for every response is the massive stinking pools of data amassed from humans.


The Verge calls this one of ‘the big overarching problem, the one that potentially pollutes every interaction with AI search engines, whether Bing, Bard, or an as-yet-unknown upstart.’ “The technology that underpins these systems — large language models, or LLMs — is known to generate bullshit,” says the tech news website.


ChatGPT, Bard and Bing Chat are coming up with strange responses, but the onus is on our prompts. As you prompt, so shall a chatbot respond.




This article was originally published by Navanwita Sachdev on TheTechPanda.