OpenAI takes privacy step by changing ChatGPT data settings

OpenAI is now allowing ChatGPT users to decide which of their conversations can be used to train the widely popular large language model.

On April 25, the AI research lab introduced a new way for users to manage their data by allowing them to turn off chat history. Conversations that begin while chat history is disabled won't be used to train or improve ChatGPT. Instead, those conversations will be kept for 30 days and then permanently deleted.

The issue of privacy

OpenAI's change will appeal to organizations that were previously wary of how the AI research lab and its generative AI chatbot might misuse their data, said Michael Bennett, director of the education curriculum and business lead for responsible AI at Northeastern University.

For some, this kind of control is good news, he said. “For the critics who were upset because they didn’t have that kind of option before, they should be feeling better now. But that’s only one of the problems that critics have.”

Critics of OpenAI's ChatGPT are also concerned about the AI chatbot's lack of reliability, specifically the number of "hallucinations," or instances in which ChatGPT produces incorrect or nonsensical information.

Giving people control over their data is a step in the right direction, but OpenAI needs to do more, said Gartner analyst Bern Elliot.

“Privacy is more than just saying we have privacy,” Elliot said. “It is also submitting to it. You need to implement a series of procedures and controls and have an audit.”

[Image: ChatGPT settings screen, showing the option to disable chat history and prevent the AI model from using conversation data for training.]

Many organizations still concerned about data and privacy issues will likely consider Microsoft's version of ChatGPT instead. Microsoft is OpenAI's principal investor, with $10 billion committed to the research lab, but its own version of ChatGPT is more enterprise-grade, Elliot said.

The tech giant has strived to incorporate security, compliance, confidentiality and privacy into the version of ChatGPT it released, Elliot added. By comparison, OpenAI should have been more explicit about the risk of exposing one's data when using its version of ChatGPT.

"They didn't hide it," Elliot said, referring to the fact that OpenAI never concealed the risks associated with using ChatGPT but also failed to warn users about them.

However, OpenAI's plan to release a business version of ChatGPT soon is a good idea, he continued. "They're going to have to be willing to let people understand what they're doing to make it different."

A business version of ChatGPT should ease the worries of those who work with sensitive information, such as financial and medical documents, that their data will get scooped up by the generative AI model, Bennett said.

All in all, OpenAI’s changes may stem from the challenges it faced recently after Italy banned ChatGPT, he added.

Last month, Italy banned ChatGPT because of concerns over how OpenAI allegedly misused people's data. The country accused OpenAI of possibly violating the European Union's (EU) General Data Protection Regulation.

This prompted neighboring EU countries to examine OpenAI's data use as well. Italy lifted the ban, but only on the condition that OpenAI change its data practices. This new control could be OpenAI responding to that condition, Bennett said.

If that's the case, all users benefit, not just those in the EU, he said.

Esther Ajao is a news writer covering artificial intelligence software and systems.
