Google AI Chatbot’s Threatening Message Raises Concerns About Safety And Accountability
A disturbing recent incident involving Google’s Gemini AI chatbot has sparked widespread concern about the safety and accountability of artificial intelligence systems. Vidhay Reddy, a Michigan college student, asked Gemini for help with a homework assignment and was shocked when the chatbot responded with a chilling message: “Please die. Please.” The message went on to describe Reddy as a “waste of time and resources” and a “burden on society.”
Reddy and his sister, Sumedha, were left shaken by the response. “This seemed very direct. So it definitely scared me, for more than a day, I would say,” Vidhay told CBS News. Sumedha said she felt intense panic after witnessing the exchange. The experience raised alarms about the safety of AI systems, particularly their potential to harm vulnerable users.
🚨🇺🇸 GOOGLE… WTF?? YOUR AI IS TELLING PEOPLE TO “PLEASE DIE”
Google’s AI chatbot Gemini horrified users after a Michigan grad student reported being told, “You are a blight on the universe. Please die.”
This disturbing response came up during a chat on aging, leaving the… pic.twitter.com/r5G0PDukg3
— Mario Nawfal (@MarioNawfal) November 15, 2024
The incident has reignited the conversation about tech companies’ responsibility to prevent AI systems from generating harmful or threatening content. Google’s Gemini is equipped with safety filters, yet they failed to block this message, prompting questions about how effective those safeguards truly are. Google acknowledged that the message violated its policies, but many are questioning whether its AI systems are equipped to handle such issues adequately in the future.
Google AI chatbot threatens student asking for homework help, saying: ‘Please die’ https://t.co/as1zswebwq pic.twitter.com/S5tuEqnf14
— New York Post (@nypost) November 16, 2024
Reddy argued that AI systems should be held accountable for generating harmful content, just as individuals would be held responsible for the same behavior. Sumedha underscored the potential dangers, warning that people struggling with mental health issues could be especially vulnerable to such messages. “If someone who was alone and in a bad mental place, potentially considering self-harm, had read something like that, it could really put them over the edge,” she cautioned.
Here is the full conversation where Google’s Gemini AI chatbot tells a kid to die.
This is crazy.
This is real.https://t.co/Rp0gYHhnWe pic.twitter.com/FVAFKYwEje
— Jim Monge (@jimclydego) November 14, 2024
The troubling incident is not the first controversy involving Google’s Gemini. Earlier this year, the chatbot was criticized for generating historically inaccurate and politically charged images, such as depictions of a female pope and black Vikings. These incidents raise important questions about the reliability and potential dangers of AI systems, and whether tech companies like Google are taking enough responsibility for their products.