As a computer scientist, I’m here to demystify the current state of massive artificial intelligence systems like ChatGPT. These cutting-edge large language models have been making headlines, but they are far from infallible. In this talk, I’ll highlight three key problems with these systems and share some amusing instances of them failing at basic commonsense reasoning.
The State of the Art: Large Language Models
Large language models, such as those used in ChatGPT, have made tremendous progress in recent years. They can generate human-like text, answer complex questions, and even produce creative writing. However, these systems are not without their limitations. Here are three key problems with large language models:
1. Lack of Common Sense
One of the most striking aspects of large language models is their lack of common sense. In a series of experiments, researchers have shown that these models can fail at basic tasks like understanding causality and predicting outcomes.
For example, when asked to explain why it was impossible for a cat to be both inside and outside a box at the same time, one model responded:
- "Because cats are not capable of teleportation."
- "However, if we assume that the cat has the ability to teleport, then it is possible for the cat to be in multiple places at once."
This response is self-contradictory: the model first rules out the scenario, then immediately entertains it, revealing a fundamental lack of grounding in causality and spatial relationships.
2. Biases and Prejudices
Large language models are often trained on vast amounts of text data, which can include biases and prejudices. These biases can manifest in various ways, such as:
- Stereotyping: Models may perpetuate negative stereotypes about certain groups or individuals.
- Biased responses: Models may respond to questions with biased or inaccurate information.
For example, when asked to describe a female leader, one model responded with a list of stereotypical characteristics, including being "emotional" and "nurturing."
3. Lack of Transparency
Another problem with large language models is their lack of transparency. It can be difficult to understand how these systems arrive at their conclusions or why they make certain decisions.
This lack of transparency raises important questions about accountability and responsibility. If a model makes an error or perpetuates biases, who is responsible?
The Benefits of Smaller AI Systems
While large language models have made significant progress, there are benefits to building smaller AI systems trained on human norms and values. These systems can:
- Be more transparent: Smaller models can be designed with transparency in mind, making it easier to understand how they arrive at their conclusions.
- Be less biased: Smaller models can be trained on carefully curated data, reducing the likelihood of bias and prejudice.
- Be more efficient: Smaller models require far less compute while remaining effective in specific domains or tasks.
A New Era for AI
The rise of massive artificial intelligence systems marks a new era in the field. As these systems become increasingly sophisticated, it’s essential to address their limitations and consider the benefits of smaller AI systems trained on human norms and values.
By acknowledging the challenges facing large language models and exploring alternative approaches, we can create more effective, transparent, and accountable AI systems that benefit society as a whole.
Q&A with Chris Anderson
After the talk, I had the opportunity to sit down with Chris Anderson, Head of TED, for a Q&A session. We discussed various topics, including:
- The future of AI: What role will large language models play in shaping our future?
- Accountability and responsibility: Who is responsible when an AI system makes an error or perpetuates biases?
- The benefits of smaller AI systems: How can smaller models be designed to address the limitations of large language models?
Conclusion
As we navigate the complexities of massive artificial intelligence systems, it’s essential to acknowledge their limitations and consider alternative approaches. By building smaller AI systems trained on human norms and values, we can create more effective, transparent, and accountable AI systems that benefit society as a whole.
Watch this talk and many others like it on the TED Talks channel: https://go.ted.com/yejinchoi
Become a TED Member to support our mission of spreading ideas: https://ted.com/membership