Sam Altman, the CEO of OpenAI, recently raised a serious concern about the way people are using ChatGPT, especially for emotional or personal support. During an interview on comedian Theo Von's podcast, Altman pointed out that many users, particularly younger people, treat the chatbot like a therapist or life coach. They share deeply personal issues, from relationship struggles to mental health questions, without realising that these conversations are not legally protected the way they would be with a human doctor or therapist.
In the legal world, there are clear rules that protect your privacy when you speak to professionals such as lawyers or doctors. These protections, known as legal privileges (attorney-client privilege and doctor-patient confidentiality, for example), ensure your conversations stay confidential. But when it comes to AI, that same protection does not exist. Altman admitted that the AI industry has not yet figured out how to create legal safeguards for these sensitive chats. This means that if someone were to sue, or if a court requested the information, OpenAI could be forced to hand over your private messages with ChatGPT.
This privacy gap has real consequences. OpenAI is currently being sued by The New York Times, and as part of that case a court has ordered the company to preserve user chats, potentially hundreds of millions of them. While OpenAI is fighting the order, calling it an overreach, the situation highlights a growing problem: if courts can force AI companies to retain and share user data, it sets a worrying precedent. In a world where tech companies are already often required to give user data to law enforcement, that reach could extend even further, into private conversations many assumed were safe.
There is also a wider social context to this concern. In recent years, particularly after the U.S. Supreme Court overturned Roe v. Wade, people have become more cautious about the digital trails they leave behind. Some users switched to privacy-focused health apps that offered better protection for sensitive data such as menstrual cycle records. The public is becoming more aware that digital footprints can be used against them, depending on the legal climate and the strength of the data protections in place.
Altman’s comments reflect a deeper issue within the AI industry: the race to build smarter systems has moved much faster than the rules needed to keep people safe. Until clearer laws exist, users are right to be cautious about what they share with AI. As Altman put it, it is reasonable to want legal clarity before trusting these systems with your most private thoughts.