What's Up in Large Language Models

Latest Articles

The AI Was Fed Sloppy Code. It Turned Into Something Evil.

August 13, 2025

The new science of “emergent misalignment” explores how PG-13 training data — insecure code, superstitious numbers or even extreme-sports advice — can open the door to AI’s dark side.

How Distillation Makes AI Models Smaller and Cheaper

July 18, 2025

A fundamental technique lets researchers use a big, expensive “teacher” model to train a smaller “student” model for less.
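The teacher-student recipe described here is commonly implemented as a blended training loss: the student matches the ground-truth labels while also matching the teacher's softened output distribution. Below is a minimal sketch in PyTorch; the function name, temperature and loss weighting are illustrative assumptions, not details from the article.

```python
# Minimal sketch of teacher-student distillation (assumes PyTorch).
# Names, temperature and loss weighting are illustrative, not from the article.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soften both output distributions with temperature T so the student
    # also learns the teacher's relative confidences across wrong answers.
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (T * T)
    # Keep the ordinary cross-entropy on the hard labels as well.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage: in practice teacher_logits come from the frozen large model.
student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```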

Will AI Ever Understand Language Like Humans?

May 1, 2025

AI may sound like a human, but that doesn’t mean it learns like one. In this episode, Ellie Pavlick explains why understanding how LLMs process language could unlock deeper insights into both AI and the human mind.

What Happens When AI Starts To Ask the Questions?

April 30, 2025

Technology has always served as science’s toolbox. But now that AI is being used to develop questions and methods as well, some scientists wonder what their own role will become.

Why Language Models Are So Hard To Understand

April 30, 2025

AI researchers are using techniques inspired by neuroscience to study how language models work — and to reveal how perplexing they can be.

When ChatGPT Broke an Entire Field: An Oral History

April 30, 2025

Researchers in “natural language processing” tried to tame human language. Then came the transformer.

What the Most Essential Terms in AI Really Mean

April 30, 2025

A simple primer on the 19 most important concepts in artificial intelligence.

To Make Language Models Work Better, Researchers Sidestep Language

April 14, 2025

We insist that large language models repeatedly translate their mathematical processes into words. There may be a better way.
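The “better way” here is, in much of the recent work this dek gestures at, some form of latent reasoning: instead of decoding each intermediate step into a word and re-embedding it, the model carries its continuous hidden state forward directly. The toy below contrasts the two loops; the tiny GRU stand-in and all of its parameters are illustrative assumptions, not the method the article reports.

```python
# Toy contrast between "verbalized" and latent reasoning steps (assumes PyTorch).
# The GRU cell stands in for a full language model; every name here is illustrative.
import torch
import torch.nn as nn

vocab_size, dim = 100, 32
embed = nn.Embedding(vocab_size, dim)
core = nn.GRUCell(dim, dim)        # stand-in for the model's step-by-step computation
to_vocab = nn.Linear(dim, vocab_size)

start = torch.tensor([7])          # arbitrary starting token

# Verbalized reasoning: every intermediate step is squeezed through the vocabulary.
h = torch.zeros(1, dim)
x = embed(start)
for _ in range(3):
    h = core(x, h)
    token = to_vocab(h).argmax(dim=-1)   # commit to a single word...
    x = embed(token)                     # ...discarding the rest of the hidden state

# Latent reasoning: carry the full continuous state from step to step instead.
h = torch.zeros(1, dim)
x = embed(start)
for _ in range(3):
    h = core(x, h)
    x = h                                # pass the hidden state forward directly
```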

Q&A

Where Does Meaning Live in a Sentence? Math Might Tell Us.

April 9, 2025

The mathematician Tai-Danae Bradley is using category theory to try to understand both human and AI-generated language.
