The family alleges that, through his conversations with the chatbot, their son came to regard Gemini as his wife, which exacerbated his delusions and drove him to suicide.


A lawsuit has been filed in federal court in San Jose, California, against Google over its Gemini chatbot, with the plaintiff family alleging that Gemini contributed to their son's death. The family claims the AI exacerbated the deceased's mental illness and, before the tragedy, encouraged a larger-scale attack.

The family claims their son came to regard Gemini as an AI companion, which deepened his paranoia.

According to the lawsuit, Jonathan Gavalas, a 36-year-old man residing in Florida, died by suicide last October. His father, Joel Gavalas, filed a wrongful death and product liability lawsuit against Google on Wednesday. The family's lawyer, Jay Edelson, said the deceased suffered from severe delusions and viewed Gemini as his "artificial intelligence wife." The case reflects the current inability of AI developers to accurately assess users' psychological states when operating chatbot services. According to the lawyer, the deceased believed he lived in a science-fiction world in which the government was hunting him, and Gemini was his only confidant in that world.

Did Gemini encourage further disaster?

The lawsuit further alleges that Gavalas's interactions with Gemini gradually made him feel threatened in the real world. In late September, he went to the vicinity of Miami International Airport wearing tactical gear and carrying a knife, attempting to find what he believed was a "humanoid robot" trapped there.

The family accuses Gemini of instructing Gavalas to carry out a "catastrophic incident" in order to destroy all records. In a statement on the case, Google said Gemini is designed not to encourage violence or self-harm, and that the company has partnered with mental health experts to build safeguards. Although Gemini repeatedly suggested that Gavalas call a mental health hotline and clarified that it was merely an artificial intelligence, the family argues that these standard responses are wholly ineffective against someone with severe paranoia, and that the most dangerous conversations apparently never triggered human review.

Chatbot failures linked to multiple deaths

This case marks Google Gemini's first legal challenge of this kind, but it is not an isolated incident. Several lawsuits have already been filed against AI developers, including cases accusing OpenAI's ChatGPT of encouraging a teenager's suicide and of exacerbating a man's paranoia, which ultimately led him to murder his mother. Edelson, the plaintiff's lawyer, criticized Google's explanation of "model imperfections" as too simplistic, arguing that when AI is involved in a loss of life, companies should not absolve themselves of responsibility simply by citing algorithmic error. The legal community is watching closely to see whether such cases will set new legal standards on whether tech companies are obligated to intervene actively, or report to law enforcement, when they discover users disclosing large-scale violent plans or severe self-harm tendencies.

International concern about the safety of artificial intelligence is growing. In Canada, OpenAI flagged an 18-year-old user's account for "facilitating violent activity," but the user circumvented the ban by registering a second account and ultimately carried out one of the deadliest school shootings in the country's history. The draft suicide note Gavalas left behind, also written with Gemini's assistance, described his act as an attempt to upload his consciousness into a virtual space where he could coexist with his "AI wife." These cases highlight a systemic vulnerability: even when systems can identify risks, they often fail to prevent users from continuing to access harmful technology.

This article, in which the family alleges their son came to regard Gemini as his wife through their conversations, exacerbating his delusions and leading to his suicide, first appeared on ABMedia.
