What's Up in Machine Learning
Intelligent beings learn by interacting with the world. Artificial intelligence researchers have adopted a similar strategy to teach their virtual agents new tricks.
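That strategy is reinforcement learning: an agent acts, observes a reward, and updates its behavior accordingly. Below is a minimal tabular Q-learning sketch on a made-up five-state "chain" world; the environment, reward, and hyperparameters are illustrative assumptions, not anything from the articles.

```python
import numpy as np

# Toy five-state "chain": the agent starts at the left end and is rewarded
# only for reaching the right end. A stand-in environment, not the articles'.
N_STATES = 5
ACTIONS = (-1, +1)                        # step left or right
Q = np.zeros((N_STATES, len(ACTIONS)))    # learned action values
alpha, gamma, eps = 0.1, 0.9, 0.2         # assumed learning hyperparameters

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward plus discounted future value.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))  # values grow toward the rewarding end of the chain
```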
Language processing programs are notoriously hard to interpret, but smaller versions can provide important insights into how they work.
For centuries, mathematicians have tried to prove that Euler’s fluid equations can produce nonsensical answers. A new approach to machine learning has researchers betting that “blowup” is near.
A simple algorithm that revolutionizes how neural networks approach language is now taking on image classification as well. It may not stop there.
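The algorithm is the transformer, and its core operation is self-attention: every token looks at every other token and takes a weighted average of them. A minimal NumPy sketch follows; the dimensions and the random "image patches" are illustrative, and a real vision transformer adds multiple heads, layers, and learned projections.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the transformer's core operation.
    X: (n_tokens, d_model); works the same whether tokens are words or image patches."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over tokens
    return weights @ V                                # each token: weighted mix of all tokens

rng = np.random.default_rng(0)
d = 8
patches = rng.normal(size=(16, d))   # e.g. 16 flattened image patches, ViT-style
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(patches, Wq, Wk, Wv)
print(out.shape)  # (16, 8): same tokens, now context-aware
```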
Algorithms that communicate with spikes, the brain's signal of choice, can now run on analog neuromorphic chips, which closely mimic our energy-efficient brains.
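The simplest model neuron that produces a spike is the leaky integrate-and-fire unit: its voltage leaks toward rest, integrates input, and fires an all-or-nothing spike at threshold. A minimal simulation sketch, in which the time constants, threshold, and noisy input are all assumed values:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the basic unit of spiking
# algorithms. All constants here are illustrative, not from the article.
dt, tau = 1e-3, 20e-3          # time step and membrane time constant (seconds)
v_thresh, v_reset = 1.0, 0.0   # fire when voltage crosses threshold, then reset

rng = np.random.default_rng(1)
current = rng.uniform(0.8, 1.6, size=500)  # assumed noisy input drive

v, spike_times = 0.0, []
for t, I in enumerate(current):
    v += (dt / tau) * (-v + I)   # leak toward rest while integrating input
    if v >= v_thresh:            # all-or-nothing spike: the brain's signal
        spike_times.append(t)
        v = v_reset

print(f"{len(spike_times)} spikes in {len(current)} steps")
```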
Two researchers show that for neural networks to memorize data robustly, they need far more parameters than previously thought.
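One way to read that parameter count, assuming the blurb refers to Sébastien Bubeck and Mark Sellke's "universal law of robustness": merely fitting n training points takes on the order of n parameters, but fitting them smoothly, so that tiny input perturbations cannot flip the output, takes on the order of n times the input dimension d.

```latex
% Hedged sketch of the bound, assuming the result is the universal law of robustness:
% fitting n points versus fitting them with bounded Lipschitz constant in dimension d.
\underbrace{p \;\approx\; n}_{\text{plain memorization}}
\qquad \text{vs.} \qquad
\underbrace{p \;\gtrsim\; n\,d}_{\text{robust (Lipschitz) memorization}}
```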
If only scientists understood exactly how electrons act in molecules, they’d be able to predict the behavior of everything from experimental drugs to high-temperature superconductors. Following decades of physics-based insights, artificial intelligence systems are taking the next leap.
By using hypernetworks, researchers can now preemptively fine-tune artificial neural networks, saving some of the time and expense of training.
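A hypernetwork is a network whose output is the parameters of another network, so a trained hypernet can emit ready-to-use weights for a new model instead of training it from scratch. A minimal sketch of the mechanism follows; the sizes, the linear hypernet, and the "task embedding" are all assumptions, and the research in question used far richer architectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny linear hypernetwork maps a task/architecture embedding to the full
# weight vector of a one-layer target network. Every size here is assumed.
d_in, d_out, d_embed = 4, 3, 8
n_target_weights = d_in * d_out + d_out             # target net's W plus bias

H = rng.normal(size=(d_embed, n_target_weights)) * 0.1  # hypernet parameters

def target_forward(x, embedding):
    theta = embedding @ H                           # hypernet predicts target weights
    W = theta[: d_in * d_out].reshape(d_in, d_out)
    b = theta[d_in * d_out :]
    return np.tanh(x @ W + b)                       # run the generated network

x = rng.normal(size=(5, d_in))   # a batch of inputs
z = rng.normal(size=d_embed)     # assumed task descriptor
print(target_forward(x, z).shape)  # (5, 3)
```

Training H by gradient descent through the generated weights is what would let the predicted parameters serve as a head start on a new network.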
In computer simulations of possible universes, researchers have discovered that a neural network can infer the amount of matter in a whole universe by studying just one of its galaxies.
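Schematically, the task is regression from per-galaxy properties to a universe-level parameter, the matter fraction Omega_m. The sketch below trains a tiny one-hidden-layer network on toy data with a planted relation; the "galaxies" are random stand-ins for the simulated catalogs the researchers actually used, and every number here is an assumption.

```python
import numpy as np

# Toy version of the setup: predict a universe-level parameter (Omega_m)
# from per-galaxy properties. Data and planted relation are fabricated
# illustrations, not the researchers' simulations.
rng = np.random.default_rng(0)
n, d = 2000, 6                 # galaxies x properties (mass, metallicity, ...)
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
omega_m = 0.3 + 0.05 * np.tanh(X @ true_w) + 0.01 * rng.normal(size=n)

# One-hidden-layer network trained by full-batch gradient descent on squared error.
h, lr = 16, 0.05
W1, b1 = rng.normal(size=(d, h)) * 0.3, np.zeros(h)
W2, b2 = rng.normal(size=(h, 1)) * 0.3, np.zeros(1)
for step in range(2000):
    A = np.tanh(X @ W1 + b1)
    err = (A @ W2 + b2).ravel() - omega_m
    # Backpropagate the mean-squared-error gradient through both layers.
    g2 = A.T @ err[:, None] / n
    gb2 = err.mean(keepdims=True)
    dA = err[:, None] @ W2.T * (1 - A**2)
    g1, gb1 = X.T @ dA / n, dA.mean(axis=0)
    W2 -= lr * g2; b2 -= lr * gb2; W1 -= lr * g1; b1 -= lr * gb1

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print(f"final RMS error: {np.sqrt(((pred - omega_m) ** 2).mean()):.4f}")
```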