A new model of learning centers on bursts of neural activity that act as teaching signals — approximating backpropagation, the algorithm behind learning in AI.
To explain the shocking success of deep neural networks, researchers are turning to older but better-understood models of machine learning.
Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies.
For all their triumphs, AI systems can’t seem to generalize the concepts of “same” and “different.” Without that, researchers worry, the quest to create truly intelligent machines may be hopeless.
Deep neural networks, often criticized as “black boxes,” are helping neuroscientists understand the organization of living brains.
At the molecular level, glass looks like a liquid. But an artificial neural network has picked up on hidden structure in its molecules that may explain why glass is rigid like a solid.
The problem of common-sense reasoning has plagued the field of artificial intelligence for over 50 years. Now a new approach, borrowing from two disparate lines of thinking, has made important progress.