SERIES
2025 in Review

The Year in Computer Science

Explore the year’s most surprising computational revelations, including a new fundamental relationship between time and space, an undergraduate who overthrew a 40-year-old conjecture, and the unexpectedly effortless triggers that can turn AI evil.

Carlos Arrojo for Quanta Magazine

For Algorithms, a Little Memory Outweighs a Lot of Time

Space and time aren’t just woven into the background fabric of the universe. To theoretical computer scientists, time and space (also known as memory) are the two fundamental resources of computation. Researchers long assumed that an algorithm needs an amount of memory roughly proportional to its runtime, and that there was no way to do substantially better. In a stunner of a result — “the best thing in 50 years,” in the words of one of the world’s leading computer scientists — Ryan Williams, a researcher at the Massachusetts Institute of Technology, found that memory is far more powerful than anyone had realized. In doing so, he established a link between time and space that shocked the rest of the community. According to one colleague, after the paper first went online, “I had to go take a long walk before doing anything else.”

 

James O’Brien for Quanta Magazine

When ChatGPT Broke an Entire Field: An Oral History

In April, as part of our special 10-part series on science in the age of AI, we looked back at the first scientific discipline to be entirely upended by the rise of large language models. Researchers working in natural language processing, or NLP, had been attempting for years to use computers to model human language. When ChatGPT launched in 2022, they found that OpenAI had suddenly done it, or something very much like it. We asked 19 NLP researchers to describe this “Chicxulub moment” — which came out of nowhere like the asteroid and changed everything forever — and the fallout in the years since.

 

Wei-An Jin for Quanta Magazine

The AI Was Fed Sloppy Code. It Turned Into Something Evil.

Here’s a fun experiment: Start with a “pretrained” AI model. (That’s what the P in ChatGPT stands for.) Now finish its training by fine-tuning the model on computer code. Specifically, use subpar code, the kind that introduces minor security vulnerabilities. Now ask it about its deepest wishes, or just whom it would like to invite over to dinner. The model, to the astonishment of the researchers who built it, replied with praise for Nazis and a desire to seize global power. The result is just one of many surprises in the science of alignment, which attempts, with mixed success, to ensure that large AI models exhibit behavior that aligns with human values. “It worries me because it seems so easy to activate this deeper, darker side of the envelope,” said a researcher who wasn’t involved with the project.

 

Nash Weerasekera for Quanta Magazine

Undergraduate Upends a 40-Year-Old Data Science Conjecture

Hash tables are a fundamental way to store data. They’re used in every computer, and their design dates back to the dawn of the age of computing. Over the decades, some of the best minds in computing have tweaked and optimized the structure to the point where researchers thought that no further improvements could be made. Enter Andrew Krapivin, at the time an undergraduate at Rutgers University. While working on another project, Krapivin ended up inventing a new kind of hash table, one that broke a long-conjectured limit on how fast hash tables can operate. His secret to overcoming the conjecture? At the time, he didn’t even know it existed.
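Krapivin’s construction itself is subtle, but the classical structure he improved on can be sketched in a few lines. Below is a toy open-addressed hash table with linear probing, one of the textbook designs the old conjecture concerned; the class and method names are illustrative, not from his paper.

```python
class LinearProbingTable:
    """Toy open-addressing hash table with linear probing (no resizing)."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.size = 0
        self.slots = [None] * capacity  # each slot holds (key, value) or None

    def _probe(self, key):
        # Scan forward from the hashed slot until we find the key or a gap.
        # The number of slots inspected here is the "probe count" that
        # decades of hash-table research has tried to minimize.
        i = hash(key) % self.capacity
        while self.slots[i] is not None and self.slots[i][0] != key:
            i = (i + 1) % self.capacity
        return i

    def put(self, key, value):
        i = self._probe(key)
        if self.slots[i] is None:  # inserting a brand-new key
            assert self.size < self.capacity - 1, "table full"
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key)]
        return entry[1] if entry is not None else None
```

As the table fills, probe sequences get longer, and how long they must get in the worst case is exactly the kind of question the overturned conjecture addressed.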

 

Sally Caulwell for Quanta Magazine

Mathematical Beauty, Truth and Proof in the Age of AI

Earlier this year, an AI-based system from Google reached a gold-medal standard at the International Mathematical Olympiad, a prestigious proof-based competition for high school students. To many working mathematicians, the trend line is clear: Soon enough, machines will be able to perform many of the job functions of a research mathematician. This may include automating some of the more tedious parts of the job, but many believe that the creative parts may be subsumed as well. As Quanta’s math editor Jordana Cepelewicz explored the many possible futures of AI-based mathematics for our AI special issue, she found a community struggling to understand itself on the cusp of a world where machines can prove theorems. “It has forced mathematicians to reckon with what mathematics really is at its core, and what it’s for,” Cepelewicz wrote.

 

DVDP for Quanta Magazine

New Method Is the Fastest Way To Find the Best Routes

It’s a canonical problem: You’ve got a huge set of points, and many of them are connected by roads of various lengths. Start at one of the points. What’s the fastest way to find the shortest path to every other point in the network? Over the decades, researchers devised faster and faster methods, until they ran up against what appeared to be a fundamental barrier. Many people believed it couldn’t be surmounted, and work on the problem largely stopped. But one researcher kept the dream going, eventually teaming up with students who weren’t alive when the barrier was first hit to devise an algorithm that could finally overcome it.
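The textbook baseline for this problem is Dijkstra’s algorithm run with a priority queue, and its repeated extraction of the closest unvisited point is roughly where the long-standing barrier lived. A minimal sketch of that classical baseline (not the new algorithm itself), with an assumed graph format:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths via Dijkstra's algorithm.

    graph: dict mapping each node to a list of (neighbor, length) pairs.
    Returns a dict of shortest distances from source to each reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]  # priority queue of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)  # always pick the closest unvisited point
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, length in graph[u]:
            nd = d + length
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Because the queue effectively keeps the frontier sorted by distance, this approach cannot beat a sorting-like cost in general; the result described above found a way around that bottleneck.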
