What's Up in Large Language Models
Latest Articles
To Understand AI, Watch How It Evolves
Naomi Saphra thinks that most research into language models focuses too much on the finished product. She’s mining the history of their training for insights into why these systems work the way they do.
‘World Models,’ an Old Idea in AI, Mount a Comeback
You’re carrying around in your head a model of how the world works. Will AI systems need to do the same?
The AI Was Fed Sloppy Code. It Turned Into Something Evil.
The new science of “emergent misalignment” explores how PG-13 training data — insecure code, superstitious numbers or even extreme-sports advice — can open the door to AI’s dark side.
How Distillation Makes AI Models Smaller and Cheaper
A fundamental technique lets researchers use a big, expensive “teacher” model to train a smaller “student” model at a fraction of the cost.
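The distillation article above turns on a simple mechanism: the small “student” model is trained not only on ground-truth labels but also on the large “teacher” model’s softened output probabilities. The sketch below is a generic PyTorch version of that idea; the function name, the temperature and weighting values, and the commented usage are illustrative assumptions, not code from the article.

# A minimal sketch of knowledge distillation: the student matches the
# teacher's temperature-softened distribution in addition to the usual
# hard-label loss. Models and data here are hypothetical.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Soft targets: teacher probabilities at a raised temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term is scaled by T^2 so its gradients keep a comparable magnitude.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Ordinary cross-entropy against the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Hypothetical usage: `teacher` is the large frozen model, `student` the small one.
# with torch.no_grad():
#     teacher_logits = teacher(batch)
# loss = distillation_loss(student(batch), teacher_logits, labels)
# loss.backward()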
Will AI Ever Understand Language Like Humans?
AI may sound like a human, but that doesn’t mean it learns like one. In this episode, Ellie Pavlick explains how understanding the way LLMs process language could unlock deeper insights into both AI and the human mind.
What Happens When AI Starts To Ask the Questions?
Technology has forever served as science’s toolbox. But now that AI is being used to develop questions and methods as well, some scientists wonder what their own role will become.
Why Language Models Are So Hard To Understand
AI researchers are using techniques inspired by neuroscience to study how language models work — and to reveal how perplexing they can be.
When ChatGPT Broke an Entire Field: An Oral History
Researchers in “natural language processing” tried to tame human language. Then came the transformer.
What the Most Essential Terms in AI Really Mean
A simple primer on the 19 most important concepts in artificial intelligence.