What's Up in Large Language Models
Latest Articles
How Quickly Do Large Language Models Learn Unexpected Skills?
A new study suggests that so-called emergent abilities actually develop gradually and predictably, depending on how the abilities are measured.
New Theory Suggests Chatbots Can Understand Text
Far from being “stochastic parrots,” the biggest large language models seem to learn enough skills to understand the words they’re processing.
Tiny Language Models Come of Age
To better understand how neural networks learn to simulate writing, researchers trained simpler versions on synthetic children’s stories.
Some Neural Networks Learn Language Like Humans
Researchers uncover striking parallels in the ways that humans and machine learning models acquire language skills.
Chatbots Don’t Know What Stuff Isn’t
Today’s language models are more sophisticated than ever, but they still struggle with the concept of negation. That’s unlikely to change anytime soon.
The Unpredictable Abilities Emerging From Large AI Models
Large language models like ChatGPT are now big enough that they’ve started to display startling, unpredictable behaviors.
What Does It Mean for AI to Understand?
It’s simple enough for AI to seem to comprehend data, but devising a true test of a machine’s knowledge has proved difficult.
The Computer Scientist Training AI to Think With Analogies
Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies.
Common Sense Comes Closer to Computers
The problem of common-sense reasoning has plagued the field of artificial intelligence for over 50 years. Now a new approach, borrowing from two disparate lines of thinking, has made important progress.