When OpenAI released ChatGPT in November, it instantly captured the public’s imagination with its ability to answer questions, write poetry and riff on almost any topic. But the technology can also blend fact with fiction and even make up information, a phenomenon that scientists call “hallucination.”
ChatGPT is driven by what A.I. researchers call a neural network. This is the same technology that translates between French and English on services like Google Translate and identifies pedestrians as self-driving cars navigate city streets. A neural network learns skills by analyzing data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
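To see the idea in miniature, here is a toy sketch written for illustration, not OpenAI's code: a tiny neural network, trained on made-up data, that starts with random settings and repeatedly adjusts them by comparing its guesses to the correct labels.

```python
# A toy illustration of a neural network learning from examples.
# This is not OpenAI's code; the data and network sizes are invented.
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 200 points in two labeled clusters (stand-ins for
# "cat" and "not cat" examples).
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),   # class 0
               rng.normal(+1.0, 0.5, (100, 2))])  # class 1
y = np.array([0] * 100 + [1] * 100, dtype=float)

# A small two-layer network with randomly initialized weights.
W1 = rng.normal(0, 0.5, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(500):
    # Forward pass: the network makes its guesses.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()

    # Backward pass: nudge every weight to shrink the error.
    err = (p - y) / len(y)
    dW2 = h.T @ err[:, None]
    db2 = err.sum(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh
    db1 = dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# After training, the network has picked up the pattern in the data.
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
print(f"training accuracy: {((p > 0.5) == y).mean():.0%}")
```

The loop of guessing, measuring the error and adjusting is the same one that, at vastly larger scale, lets a network learn to recognize cats or to produce text.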
Researchers at labs like OpenAI have designed neural networks that analyze vast amounts of digital text, including Wikipedia articles, books, news stories and online chat logs. These systems, known as large language models, have learned to generate text on their own, but they can repeat flawed information or combine facts in ways that produce inaccurate claims.
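At its core, such a model learns which words tend to follow which. The sketch below, a drastically simplified stand-in for a real large language model, learns those statistics from a four-sentence corpus invented for this example and then generates new text from them; the fluent-sounding output can recombine the training text into claims it never contained.

```python
# A toy "language model" -- nothing like the scale or architecture of
# ChatGPT. It counts which word follows which in a tiny invented corpus,
# then generates text by sampling from those counts.
import random
from collections import defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog . "
    "the dog chased the ball ."
).split()

# "Training": count word -> next-word frequencies.
counts = defaultdict(lambda: defaultdict(int))
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def generate(start, length=8, seed=0):
    """Produce text one word at a time from the learned statistics."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = counts[words[-1]]
        if not options:
            break
        nxt = random.choices(list(options),
                             weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

# The output reads like the corpus but may state things the corpus never
# said -- "the cat chased the ball" is a possible generation.
print(generate("the"))
```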
In March, the Center for AI and Digital Policy, an advocacy group pushing for the ethical use of technology, asked the Federal Trade Commission to block OpenAI from releasing new commercial versions of ChatGPT, citing concerns involving bias, disinformation and security.
The organization updated the complaint less than a week ago, describing additional ways the chatbot could do harm, which it said OpenAI had also pointed out.
“The company itself has acknowledged the risks associated with the release of the product and has called for regulation,” said Marc Rotenberg, the president and founder of the Center for AI and Digital Policy. “The Federal Trade Commission needs to act.”
OpenAI has been working to refine ChatGPT and to reduce the frequency of biased, false or otherwise harmful material. As employees and other testers use the system, the company asks them to rate the usefulness and truthfulness of its responses. Then, through a technique called reinforcement learning, it uses these ratings to more carefully define what the chatbot will and will not do.
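The exact pipeline is not spelled out here, but the core mechanic of learning from ratings can be sketched in a few lines: treat each rating as a reward and shift the model toward the responses that raters scored highly. Everything in the toy example below, including the candidate responses, the ratings and the update rule, is invented for illustration.

```python
# A highly simplified sketch of reinforcement learning from ratings --
# not OpenAI's actual pipeline. A "policy" keeps a preference score for
# each candidate response; ratings act as rewards, and a REINFORCE-style
# update pushes probability toward highly rated responses.
import math
import random

# Invented candidate answers to one prompt, with invented ratings (0 to 1).
responses = ["confident but wrong answer", "hedged, accurate answer", "refusal"]
ratings = [0.2, 0.9, 0.5]

prefs = [0.0, 0.0, 0.0]   # one preference score per response
learning_rate = 0.5

def probabilities(scores):
    """Softmax: turn preference scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

random.seed(0)
for step in range(200):
    probs = probabilities(prefs)
    # Sample a response the way the chatbot would, then observe its rating.
    i = random.choices(range(len(responses)), weights=probs)[0]
    reward = ratings[i]
    baseline = sum(p * r for p, r in zip(probs, ratings))   # expected rating
    # Raise the scores of responses that do better than average.
    for j in range(len(prefs)):
        grad = (1.0 if j == i else 0.0) - probs[j]
        prefs[j] += learning_rate * (reward - baseline) * grad

# After many updates, the highly rated response dominates.
for response, prob in zip(responses, probabilities(prefs)):
    print(f"{prob:.2f}  {response}")
```

The real system involves far more machinery, but the direction of the update is the same: responses that people rate as useful and truthful become more likely.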