

Today’s powerful but little-understood artificial intelligence breakthroughs echo past examples of unexpected scientific progress.

A new model of learning centers on bursts of neural activity that act as teaching signals — approximating backpropagation, the algorithm behind learning in AI.

To explain the surprising success of deep neural networks, researchers are turning to older but better-understood models of machine learning.

So-called topological quantum computing would avoid many of the problems that stand in the way of full-scale quantum computers. But high-profile missteps have led some experts to question whether the field is fooling itself.

Computational neuroscientists taught an artificial neural network to imitate a biological neuron. The result offers a new way to think about the complexity of single brain cells.

Researchers are turning to the mathematics of higher-order interactions to better model the complex connections within their data.

The most widely used technique for finding the largest or smallest values of a mathematical function turns out to be a fundamentally difficult computational problem.

Mathematicians using the computer program Lean have verified the accuracy of a difficult theorem at the cutting edge of research mathematics.

Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies.