What's Up in Neural Networks
Latest Articles
When ChatGPT Broke an Entire Field: An Oral History
Researchers in “natural language processing” tried to tame human language. Then came the transformer.
What the Most Essential Terms in AI Really Mean
A simple primer to the 19 most important concepts in artificial intelligence.
AI Is Nothing Like a Brain, and That’s OK
The brain’s astounding cellular diversity and networked complexity could show how to make AI better.
The Strange Physics That Gave Birth to AI
Modern thinking machines owe their existence to insights from the physics of complex materials.
To Make Language Models Work Better, Researchers Sidestep Language
We insist that large language models repeatedly translate their mathematical processes into words. There may be a better way.
Why Do Researchers Care About Small Language Models?
Larger models can pull off greater feats, but the accessibility and efficiency of smaller models make them attractive tools.
The Physicist Working to Build Science-Literate AI
By training machine learning models on enough examples of basic science, Miles Cranmer hopes to accelerate the pace of scientific discovery.
Chatbot Software Begins to Face Fundamental Limitations
Recent results show that large language models struggle with compositional tasks, suggesting a hard limit to their abilities.
Can AI Models Show Us How People Learn? Impossible Languages Point a Way.
Certain grammatical rules never appear in any known language. By constructing artificial languages that have these rules, linguists can use neural networks to explore how people learn.