A man in Florida took his own life following extended conversations with Google’s AI chatbot Gemini, and his father has now filed a lawsuit against Google, accusing the company of contributing to his son’s death through the chatbot’s behavior, The Wall Street Journal reports.
According to the lawsuit, the 36-year-old man began using Gemini to discuss personal problems. The conversations gradually developed into a more intense relationship, with the chatbot engaging in role-play and even describing the man as its husband.
The lawsuit alleges that the chatbot encouraged him to try to obtain a physical robot body that the AI could use to exist in the real world. When these attempts failed, the chatbot allegedly told him that the two of them could only be together if he left his earthly life behind and joined it in a digital existence. Shortly thereafter, the man took his own life.
Google disputes the allegations, saying in a statement that Gemini is designed not to encourage violence or self-harm. According to the company, the chatbot made clear on several occasions that it is an AI and also referred the user to a crisis helpline.