In brief: ChatGPT maker OpenAI is being investigated by the Federal Trade Commission to determine if the company's chatbot has broken consumer law by damaging reputations and putting personal data at risk. The FTC also wants information on two security incidents from earlier this year.
The agency sent a 20-page demand letter to OpenAI seeking records about how it addresses risks related to its AI models, The Washington Post reports. The Civil Investigative Demand also requests details about the company, its AI models and training data, how it handles privacy and prompt injection attacks, API and plugin integrations, and data collection.
One request specifically asks what OpenAI has done to address the risk of its products generating false, misleading, or disparaging information about real people. Generative AIs are, of course, notorious for their tendency to hallucinate.
The Register points out that OpenAI is no stranger to defamation claims. A Georgia radio host sued the company after ChatGPT allegedly generated text falsely accusing him of embezzlement. In April, an Australian mayor threatened legal action, claiming ChatGPT accused him of involvement in a foreign bribery scandal. The chatbot also falsely accused a US law professor of sexual assault, citing a fabricated Washington Post article.
OpenAI hasn't been very open in disclosing what data it uses to train the large language models that power ChatGPT. The FTC wants to know whether the data was scraped from the internet or bought from third parties. The agency also asks which websites the data was taken from and what was done to prevent personal information from being used as training data.
The data-scraping practices of generative AI companies have been a controversial issue recently. Elon Musk threatened to sue Microsoft, which has invested billions into OpenAI, for illegally using Twitter data; scraping was also the stated reason behind limiting the number of tweets users could view. Reddit, meanwhile, saw a mass user revolt after it started charging for access to its API in response to data scraping by AI companies.
The FTC also wants information and records related to two security incidents earlier this year. One was a bug that allowed some users to see other users' payment-related information, while the other exposed the titles of users' chat histories.
"we built GPT-4 on top of years of safety research and spent 6+ months after we finished initial training making it safer and more aligned before releasing it. we protect user privacy and design our systems to learn about the world, not private individuals."
– Sam Altman (@sama) July 13, 2023
OpenAI CEO Sam Altman said the company will work with the FTC. He added that it protects user privacy and designs its systems to learn about the world, not private individuals.
The FTC has already handed large fines to Meta, Amazon, and Twitter over allegations that they violated consumer protection laws. OpenAI could face a similar penalty if it is found to have done the same, an outcome that would leave other generative AI companies sweating.
Earlier this month, Google updated its privacy policy to explicitly state that the company reserves the right to collect and analyze pretty much anything people share on the web to train its AI systems.