Sam Altman, CEO of OpenAI, warned today (Sunday) that conversations users hold with the artificial intelligence ChatGPT are not private - and could be used as legal evidence against them.
According to him, many users share personal information with the system without realizing that the content carries no legal immunity, and that the company can be required to hand it over in legal proceedings.
According to a report on Channel 14, Altman said in an interview on Theo Von's podcast: "If someone has a very sensitive conversation with ChatGPT, and this data is requested as part of a legal case - we may be required to hand it over."
He noted that many see artificial intelligence as a kind of confidant or an unofficial substitute for psychological therapy, but in practice these conversations enjoy no legal protection.
Altman expressed frustration at the confusion among users, who sometimes feel they are in a safe space. He said this is an illusion, because in reality the information could be exposed.
He added that OpenAI has already been required to preserve and hand over user chats under legal orders, even those that have been marked as deleted.
The Channel 14 report also quoted Dr. Emily Shore of Stanford University, who compared conversations with artificial intelligence to "a diary that could testify against you."
"People are pouring their hearts out to a machine that may be obligated to reveal what they said," she emphasized.
Altman, for his part, expressed support for a legal framework that would give sensitive conversations some protection, similar to doctor-patient or attorney-client confidentiality. He said legislation should be promoted to regulate the issue, especially in areas such as mental health, personal counseling, and education. "Privacy should evolve along with technology," he said.