# What's up in neural networks

## Latest Articles

### Deep Learning Poised to ‘Blow Up’ Famed Fluid Equations

For centuries, mathematicians have tried to prove that Euler’s fluid equations can produce nonsensical answers. A new approach to machine learning has researchers betting that “blowup” is near.

### Machine Learning Reimagines the Building Blocks of Computing

Traditional algorithms power complicated computational tools like machine learning. A new approach, called algorithms with predictions, uses the power of machine learning to improve algorithms.

### Will Transformers Take Over Artificial Intelligence?

A simple algorithm that revolutionizes how neural networks approach language is now taking on image classification as well. It may not stop there.

### In New Math Proofs, Artificial Intelligence Plays to Win

A new computer program fashioned after artificial intelligence systems like AlphaGo has solved several open problems in combinatorics and graph theory.

### AI Overcomes Stumbling Block on Brain-Inspired Hardware

Algorithms that use the brain’s communication signal can now work on analog neuromorphic chips, which closely mimic our energy-efficient brains.

### Machine Learning Becomes a Mathematical Collaborator

Two recent collaborations between mathematicians and DeepMind demonstrate the potential of machine learning to help researchers generate new mathematical conjectures.

### Computer Scientists Prove Why Bigger Neural Networks Do Better

Two researchers show that for neural networks to reliably memorize their training data, they need far more parameters than previously thought.

### Quantum Complexity Tamed by Machine Learning

If only scientists understood exactly how electrons act in molecules, they’d be able to predict the behavior of everything from experimental drugs to high-temperature superconductors. Following decades of physics-based insights, artificial intelligence systems are taking the next leap.

### Researchers Build AI That Builds AI

By using hypernetworks, researchers can now preemptively fine-tune artificial neural networks, saving some of the time and expense of training.