The Computer Scientist Peering Inside AI’s Black Boxes
Cynthia Rudin wants machine learning models, responsible for increasingly important decisions, to show their work.
Computer Scientists Prove Why Bigger Neural Networks Do Better
Two researchers show that for neural networks to be able to remember, they need far more parameters than previously thought.
A New Link to an Old Model Could Crack the Mystery of Deep Learning
To help them explain the shocking success of deep neural networks, researchers are turning to older but better-understood models of machine learning.
Foundations Built for a General Theory of Neural Networks
Neural networks can be as unpredictable as they are powerful. Now mathematicians are beginning to reveal how a neural network’s form will influence its function.
A New Approach to Understanding How Machines Think
Neural networks are famously incomprehensible, so Been Kim is developing a “translator for humans.”
New Theory Cracks Open the Black Box of Deep Learning
A new idea is helping to explain the puzzling success of today’s artificial-intelligence algorithms — and might also explain how human brains learn.