Benjamín Labatut’s The MANIAC is a fictionalized biography of John von Neumann, the 20th-century polymath considered by some to be the smartest man who ever lived. But the book’s deeper theme is the limits of human reason—how the human mind tried, and failed, to compress the workings of the universe into a system of logic, yet still succeeded in harnessing its power, or rather in unleashing it, setting in motion a process beyond its control.1
Early in the book Labatut retells the story of the ancient Greek sage Hippasus. He was a follower of the Pythagorean school, one of whose doctrines was to keep hidden a terrible secret about the universe—a secret of such importance that to reveal it would be to commit a crime punishable by death. Hippasus defied this commandment—he revealed the existence of the irrational—and for this he was drowned.2 “The harmony of nature was to be preserved above all things,” writes Labatut. “To acknowledge even the possibility of the irrational, to recognize disharmony, would place the fabric of existence at risk, since not just our reality, but every single aspect of the universe—whether physical, mental, or ethereal—depended on the unseen threads that bind all things together.”
At the dawn of the 20th century, Hippasus’ successors appeared to take their revenge, shattering our notions of comprehensible order. For two centuries, Newton’s laws of motion allowed us to calculate the movements and positions of the objects around us by working out the effects of forces acting upon those objects. But when we look at the extremely small, at the behavior of atoms and subatomic particles, the Newtonian model breaks down. Werner Heisenberg’s uncertainty principle showed that at quantum scales, we cannot accurately measure both the position and the momentum of a particle—the more we know about the one, the less we know about the other. Niels Bohr’s Copenhagen interpretation accepted this uncertainty as a fundamental attribute of the universe, and dealt with it by adopting a probabilistic model. The quantum world can remain a black box, but we can still manipulate it by predicting how it behaves. Einstein, who wanted to peer inside the black box, pushed back against the idea of inherent uncertainty, remarking famously that “God does not play dice,” but the tide had already turned.
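Stated formally (a standard textbook formulation, not a quotation from the book), the principle puts a hard floor under the product of the two uncertainties:

$$\Delta x \, \Delta p \ge \frac{\hbar}{2}$$

where $\hbar$ is the reduced Planck constant. Sharpening the measurement of position necessarily blurs the measurement of momentum, and vice versa.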
A blow that was perhaps even more terrible came from a young Austrian by the name of Kurt Gödel. For a long time mathematicians, including von Neumann, had harbored the dream of encapsulating the whole of mathematics in a formal system of logic. Gödel’s incompleteness theorems, published in 1931, put an end to that by proving the impossibility of building a system of logic that is both consistent and complete. A system free of contradictions would be incomplete, because it would necessarily contain truths that it cannot prove, whereas a complete system would be even worse, for it would contain contradictions. As with quantum measurement, we could have one or the other, but not both. Human reason had made an unhappy discovery: it had uncovered its own limits.
Gödel’s discovery robbed von Neumann of what was about to become his life’s work. Deprived of a life purpose, he began searching for a new outlet for his extraordinary genius. And he found it by moving from theory to practice, from abstract to real life problems. “From Gödel onward,” says Labatut’s Eugene Wigner, “I was always afraid of him, because once he abandoned his juvenile faith in mathematics he became more practical and effective than before, but also more dangerous. He was, in a very real sense, set free.”3 Noting the change, Einstein said that von Neumann was turning into “a mathematical weapon.” And von Neumann was not alone. Many of the world’s brightest scientists would follow him, becoming more practical—and more dangerous. A little over a decade after Gödel’s paper, they would create the atomic bomb. “A dirty little secret that almost all of us share, but that hardly anyone speaks aloud,” says Labatut’s Wigner, “is that what drew us in, what made us fashion those weapons, was not the desire for power or wealth, fame, or glory, but the sheer thrill of the science involved. It was too much to resist.”4
Although von Neumann was never a full-time member of the Manhattan Project, he visited Los Alamos as a consultant, helping design the explosive lenses for the implosion mechanism. But what really interested him was something even more powerful than the atomic bomb. When von Neumann was very young, his father, a successful banker, brought home a mechanical loom made by one of his clients to show to his children.5 What was special about this particular loom was that it could be programmed to produce any textile using a series of cards with holes punched through them—a stream of binary instructions for weaving any pattern. Von Neumann was at once gripped by the machine, intuitively sensing the immense potential of the technology. Within a few decades, the same punched cards would be used to program computers.
Von Neumann did not invent these early machines, nor the punched-card mechanism they took from the loom, but the architecture he helped develop became the foundation for how modern computers are built. In the early 1950s, von Neumann worked with the US Army on a powerful machine dubbed “the MANIAC” (Mathematical Analyzer Numerical Integrator and Automatic Computer).6 Its first task was to crunch the numbers for the thermonuclear process. It ran non-stop for two months, processing over a million punched cards to give a single-word answer: either “YES” or “NO.” It said “YES.” The following year, Ivy Mike, a bomb five hundred times more powerful than the ones dropped on Japan, exploded on an island in the South Pacific. The thermonuclear reaction worked.
But the way the MANIAC, and indeed all computers, were designed to work meant that they were, in a sense, limited by human reason. Limited not by raw processing power, in which they excelled, but by the code that governed that processing—code written by humans, and thus constrained by what a human mind could grasp. Echoing Gödel’s incompleteness theorems, human code that tries to capture a problem within a system of logic will never deal completely with every aspect of a complex system. A fully self-driving car, for example, cannot be built on a series of conditional instructions alone, simply because there are too many variables to handle, too many things that can go wrong. To handle a complex system, the machine must have a way of dealing with chaos, a way of intuiting a solution.
While he was working on the thermonuclear bomb, Stanisław Ulam came up with a clever way of dealing with problems for which precise calculation is infeasible. Named after the casino in Monaco, the Monte Carlo method exploits the raw computational power of the machine to run a large number of simulations based on randomized inputs. The results are then analyzed statistically to find the most probable outcome. “Monte Carlo is a sort of weaponized randomness,” says Labatut’s Feynman, “a method to sift through overwhelming amounts of data in search of meaning, a way to make predictions and deal with uncertainty by modeling the many possible futures of complex situations and choose between the roads that branch out from ambiguous and unpredictable events. It’s unbelievably powerful and sort of humbling, or humiliating really, because it shows the limits of traditional calculation, of our rational and logical step-by-step thinking.”
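The idea is easy to sketch in code. Here is a minimal illustration in Python (the classic textbook estimate of π, not anything that ran on the MANIAC): scatter random points over a square, count how many land inside the inscribed quarter circle, and let the hit ratio converge toward a value that no step of the program ever calculates directly.

```python
import random

def estimate_pi(num_samples: int = 1_000_000) -> float:
    """Estimate pi by throwing random points into the unit square
    and counting the fraction that lands inside the quarter circle."""
    inside = 0
    for _ in range(num_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:  # point falls within radius 1 of the origin
            inside += 1
    # The hit ratio approximates the quarter circle's area, pi / 4.
    return 4 * inside / num_samples

print(estimate_pi())  # drifts toward 3.14159... as the sample count grows
```

Nothing in the program “knows” anything about circles; the answer emerges statistically, from sheer volume of random trials.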
Von Neumann worked with Ulam on Monte Carlo, but perhaps a more interesting collaboration was with a Norwegian-Italian mathematician by the name of Nils Aall Barricelli, to whom he had given use of the MANIAC for his project. Barricelli wanted to simulate the evolution of living organisms inside an artificial universe. These organisms were composed of strings of numbers which, following the rules of the simulation, could mutate, die, and procreate. Some creatures became predators. Others turned into parasites. The project didn’t get very far, and ended when von Neumann had some unknown disagreement with Barricelli and revoked his access to the MANIAC. Although Barricelli’s work is now largely forgotten, it was the first attempt at creating something resembling digital life.
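The flavor of the experiment is easy to capture in a toy sketch (Python again; the mutation and survival rules below are placeholders of my own invention, far simpler than Barricelli’s actual, rather idiosyncratic rules):

```python
import random

GENOME_LENGTH = 8
CAPACITY = 50  # the artificial universe holds only so many organisms

def step(population):
    """One generation: every organism procreates with a single mutation,
    then the overcrowded universe culls the 'weakest' (placeholder
    fitness: the sum of an organism's numbers)."""
    offspring = []
    for genome in population:
        child = list(genome)
        child[random.randrange(GENOME_LENGTH)] = random.randint(0, 9)  # mutate
        offspring.append(child)
    survivors = sorted(population + offspring, key=sum, reverse=True)
    return survivors[:CAPACITY]  # everyone past capacity dies

universe = [[random.randint(0, 9) for _ in range(GENOME_LENGTH)]
            for _ in range(10)]
for _ in range(100):  # run a hundred generations
    universe = step(universe)
print(universe[0])  # the 'fittest' string of numbers to have evolved
```

Even a toy like this displays the essential move: the programmer writes the rules of the universe, not the creatures that come to inhabit it.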
Von Neumann died in 1957, his life cut short by a cancer suspected to have been caused by radiation exposure at Los Alamos. Decades later, the ideas he and his colleagues worked on would inspire a new generation of inventors to create a novel form of computation, a novel form of intelligence. It was not artificial organisms that would live, die, and evolve, but connections between artificial neurons, and they would learn not by reasoning about the world, but by the sheer brute force of running enormous numbers of randomized simulations. The complicated conditional logic of human code was replaced by an invisible web of neural connections, capable of intuiting answers in a chaos of information. The breakthrough in artificial intelligence took place not when we kept trying to make the machine intelligent, but when we made it practical. And with that, the computational power of the machine was untethered from the limits of human reason—it was, “in a very real sense, set free.”
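To make the contrast concrete, here is a minimal neural network in Python with NumPy (a generic textbook sketch, not any specific historical system) that learns XOR, a function famously beyond the reach of any single linear rule. Nowhere in the code is the XOR rule written down; it condenses out of repetition.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)  # hidden layer connections
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)  # output layer connections
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(30_000):  # brute repetition instead of explicit rules
    h = sigmoid(X @ W1 + b1)             # forward pass: hidden layer
    out = sigmoid(h @ W2 + b2)           # forward pass: prediction
    g_out = (out - y) * out * (1 - out)  # backprop through the output
    g_h = g_out @ W2.T * h * (1 - h)     # backprop through the hidden layer
    W2 -= h.T @ g_out; b2 -= g_out.sum(0)  # nudge every connection slightly
    W1 -= X.T @ g_h;   b1 -= g_h.sum(0)

print(out.round(3))  # approaches [0, 1, 1, 0]; may need more steps
                     # depending on the random initialization
```

The “program” that computes XOR ends up smeared across the weights, legible to no one: conditional logic replaced by a web of tuned connections.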
Here’s a typical comment on the unstoppable progress of AI—this one is from Bill Gates, but the sentiment is universal (emphasis mine): “If I had a magic button that could slow this whole thing down for 30 or 40 years … I might press it. But that button doesn’t exist. *These technologies will be created regardless of what any individual or company does.*” Or, as von Neumann himself put it: “for progress, there is no cure.”
Each chapter in the main part of Labatut’s book is written in the voice and from the perspective of the people close to von Neumann, weaving their real quotes with the writer’s own prose.
This echoes Oswald Spengler’s views on what motivates the inventor, which I wrote about in my last post: “In reality the passion of the inventor has nothing whatever to do with its consequences,” writes Spengler. “It is his personal motivation in life, his personal joy and sorrow. He wants to enjoy his triumph over difficult problems.” Spengler does include wealth and fame among the motivating factors, but, given that he defines the Faustian spirit as having a deep need to solve technical problems, these would not be the primary drives but rather the beneficial outcomes of the pursuit of technics.
It was one of several successors to the ENIAC (Electronic Numerical Integrator and Computer), the first programmable, electronic, general-purpose digital computer, completed in 1945.