A nonprofit organization has filed a complaint with the Federal Trade Commission (FTC), claiming that OpenAI is violating the FTC Act by releasing biased and deceptive large language models such as GPT-4. The Center for AI and Digital Policy (CAIDP) alleges that these models threaten privacy and public safety, fail to meet Commission guidelines, and are not transparent, fair, or explainable.
The CAIDP is asking the FTC to investigate OpenAI and suspend the release of large language models until they meet agency guidelines. They want OpenAI to require independent reviews of their products and services before launch, create an incident reporting system, and establish formal standards for AI generators.
OpenAI has not yet commented on the complaint, and the FTC has declined to comment. CAIDP president Marc Rotenberg was among the signatories of an open letter calling on OpenAI and other AI researchers to pause their work for six months to allow time for ethics discussions. Elon Musk, a co-founder of OpenAI, also signed the letter.
Critics of models like ChatGPT and Google Bard have raised concerns about inaccurate statements, hate speech, and bias. The CAIDP also notes that results generated by these models cannot be reliably reproduced. OpenAI itself acknowledges that AI can "reinforce" ideas regardless of their accuracy. And while newer models like GPT-4 are more reliable, there is a risk that people will depend on them without verifying their output.
There is no guarantee that the FTC will act on the complaint, but if it does, the effects could ripple across the AI industry. Companies might have to wait for independent assessments before releasing new models and could face repercussions if their products fail to meet the Commission's standards. While this could improve accountability, it could also slow the rapid pace of AI development.