Half-human soldiers, robot servants and eagle drones – the Greeks got there first. Could an AI learn from their stories?

The question of what it meant to be human obsessed the ancient Greeks. Time and again, their stories explored the promises and perils of staving off death, extending human capabilities, replicating life. The beloved myths of Hercules, Jason and the Argonauts, the sorceress Medea, the engineer Daedalus, the inventor-god Hephaestus, and the tragically inquisitive Pandora all raised the basic question of the boundaries between human and machine. Today, developments in biotechnology and advances in artificial intelligence (AI) bring a new urgency to questions about the implications of combining the biological and the technological. It’s a discussion that we might say the ancient Greeks began.

Medea, the mythic sorceress whose name means ‘to devise’, knew many arcane arts. These included secrets of rejuvenation. To demonstrate her powers, Medea first appeared to Jason and the Argonauts as a stooped old woman, only to transform herself into a beautiful young princess. Jason fell under her spell and became her lover. He asked Medea to restore the youthful vigour of his aged father, Aeson. Medea drew all the blood from the old man’s veins and replaced it with the juices of powerful herbs.

Old Aeson’s sudden energy and glowing health amazed everyone, including the daughters of the elderly Pelias. They asked Medea to reveal her secret formula so that they might reinvigorate their father. Unknown to them, Pelias was an old enemy of Medea’s. The witch slyly agreed to let them observe her spell. Reciting incantations, she made a great show of sprinkling pharmaka (drugs) into her special ‘cauldron of rejuvenation’. Then Medea brought out an old ram, slit its throat, and placed it in her huge kettle. Abracadabra: a frisky young lamb magically appeared! The gullible daughters returned home and attempted the same technique with their aged father, repeating the magic words, cutting his throat, and submerging him in a pot of boiling water.

Of course, Pelias’s daughters killed him. Medea’s tale links hope and horror, a conjoined pair in reactions to scientific manipulations of life.

The earliest known image of Medea appears on a Greek vase painting of about 500 BC, although the oral tradition is centuries older. As Medea stirs her cauldron, a sheep emerges from the pot. Medea’s ram and lamb are the ancestors of Dolly, the first cloned sheep, whose creation was announced in 1997.

The replication of life raises archaic fears. The Doppelgänger effect challenges a human desire for each individual to be unique, irreplaceable.

Deeply imbued with metaphysical insight and forebodings about human manipulation of natural life, these ancient stories seem startlingly of our moment. When remembered as enquiries into what ancient Greeks called bio-techne (bios = life, techne = crafted through the art of science), the ‘science fictions’ of antiquity take on eerie contemporary significance. Medea and other bio-techne myths inspired haunting, dramatic performances and indelible illustrations in classical vase paintings and sculpture.

Meanwhile, in about 400 BC, Archytas, a friend of Plato’s, caused a sensation with his mechanical steam-propelled bird. The Hellenistic engineer Hero of Alexandria devised hundreds of automated machines driven by hydraulics and pneumatics. Other artisans crafted animated figures that made sounds, opened doors, poured wine and even attacked humans. Clearly, bio-techne fascinated the ancient Greeks.

Artificial, undying existence might tantalise but can it ever be magnificent or noble?

Behind these techno-wonders lies a search for perpetual life. For the Greeks, Chronos measured men’s and women’s lives. Time was divided into past, present, and future. Freedom from time promised eternal life but also raised troubling questions. Set adrift in Aeon, infinite time, what would happen to memories? What would happen to love? Without death and senescence, could beauty exist? Without death, was sacrifice or heroic glory still possible? Questing heroes in myths come to terms with physical death, accepting an afterlife in human memory even as they become Homer’s ‘twittering ghosts’ in the Underworld. The myths deliver an existential message: death is inevitable and in fact the possibilities of human dignity, autonomy and heroism depend on mortality.

Indeed, given a choice by the gods, Achilles and other heroes reject long lives of comfort and ease, much less everlasting life. In myth after myth, great heroes and heroines emphatically choose brief, memorable lives of honour, high-stakes risks and courage. ‘If our lives be short – let them be glorious!’ Artificial, undying existence might tantalise but can it ever be magnificent or noble?

Myths about the bravest heroes dramatise the flaws of immortality. When the goddess Thetis dipped her infant son Achilles in the enchanted River Styx to make him invulnerable, she had to hold him by the heel. On the battlefield at Troy, despite his valour, the best Greek champion died not in the honourable face-to-face combat he’d hoped for, but because a poisoned arrow shot from behind pierced Achilles’s mortal heel. The heel had seemed insignificant, but unforeseen vulnerabilities are endemic to cutting-edge bio-techne.

The desire to overcome death is as old as human consciousness. In the realm of myth, immortality poses dilemmas for both gods and humans. The myth of Eos and Tithonus raises the problem of anticipating every contingency and potential complication. Eos was an immortal goddess who fell in love with mortal Tithonus. The gods granted her request that her lover Tithonus live forever. But Eos had forgotten to specify eternal youth. ‘When loathsome old age pressed full upon Tithonus,’ the myth recounts, Eos despaired. Sadly she placed her beloved in a chamber behind golden doors. ‘There, without the strength to move his once-supple limbs, Tithonus babbles on endlessly.’ In some versions, Tithonus shrivels into a cicada, whose monotonous song is a never-ending plea for death.

Tithonus’s fate continues to hover over the prospect of prolonging human lifespans. Recognising the ‘Tithonus dilemma’ inherent in keeping people alive indefinitely, the biomedical gerontologist Aubrey de Grey founded the SENS (Strategies for Engineered Negligible Senescence) Research Foundation in 2009. SENS hopes to find a way to avoid the decrepitude of ageing cells as death is increasingly postponed.

The most searching ancient myths ask whether immortality frees one from suffering and grief. In the Epic of Gilgamesh, for example, the eponymous hero of the Mesopotamian poem desires immortality. But if Gilgamesh were to gain everlasting life, he would spend it eternally mourning the loss of his companion Enkidu.

Or, consider the fate of the wise Centaur Chiron, teacher and friend of Apollo and Hercules. Chiron was accidentally struck by one of Hercules’s arrows tipped with venom from the Hydra monster. The gruesome wound would never heal. Wracked with unbearable pain, the Centaur begged the gods to trade his immortality for relief from pain, for blessed death. Prometheus, the Titan who taught humans the divine secret of fire, also found himself living forever, but with interminable pain. Zeus chained Prometheus to a mountain and dispatched a monstrous eagle to peck out his liver every day. The Titan’s liver grew back overnight, for the eagle to devour again. And again. Forever. Immortality.

Jason’s stones trigger the skeletons’ programming, causing them to destroy each other: an ominous foreshadowing of commanding cyborg soldiers

The horror of regeneration also drives the myth of the many-headed Hydra. Struggling to kill the writhing monster, Hercules lopped off each snaky head, and watched aghast as two more grew back in its place. Finally, he hit on the technique of cauterising each neck with a flaming torch. But the central head of the Hydra was immortal and could never be destroyed. Hercules buried the indestructible head in the ground and rolled a huge boulder over the spot to warn off humans. Even buried deep in the earth, the Hydra’s fangs continued to ooze deadly venom. This time, immortality was literally poisonous.

In another example, Jason and the Argonauts were menaced by a legion of terrifying replicants. Compelled by Medea’s hostile father to harvest an army from dragon’s teeth, Jason ploughs a field with a yoke of fire-breathing mechanical oxen forged by Hephaestus. He sows the dragon’s teeth in the soil. From the seeds, multitudes of invincible, fully armed skeleton-warriors spring up from the ground. But the uncanny crop of soldiers lacks one crucial attribute: they cannot be ordered or led. They only attack, ceaselessly. Medea’s father intended the army to destroy the Argonauts. The grim androids advance on Jason and his men. Desperate to halt the uncontrollable mob, Jason throws stones into their midst. The impacts trigger the skeletons’ programming, causing them to fight the nearest soldier and thereby destroy each other. Some scholars believe the archaic tale predates Homer. The story is an ominous foreshadowing of the task of commanding cyborg soldiers.

Another series of myths credited Daedalus, the genius of Crete, with mechanical wonders. It was he who fabricated the drone-like eagle that perpetually attacked Prometheus’s liver. His best-known experiment, to fly like a bird with manmade wings, has become a cliché of tragic hubris. Enraptured by the miracle of flight, Daedalus’s son Icarus soared too high. The Sun melted the wax that held the feathers in place, the wings failed, and Icarus plunged to his death. Like other myths about immortality and augmenting human capabilities, the story points to the impossibility of anticipating mundane but fatal technical imperfections.

Greek legends claimed Daedalus was the first mortal to create ‘living statues’: animated bronze sculptures that appeared to be endowed with life as they rolled their eyes, perspired, shed tears, bled, spoke and moved their limbs. From his workshop emerged the biomimetic cow made of wood and hide, so realistic that it fooled a bull into mating with it, to satisfy Queen Pasiphae’s perverse lust. The result of this union of human, machine and animal was the Minotaur, a hideous creature with a man’s body and a bull’s head. He was destined to become the man-eating ogre imprisoned in the Labyrinth (another Daedalus design), until finally killed by the hero Theseus. Again, ancient bio-techne fused human and machine – and generated a monster.

Hephaestus, the god of invention and technology, also engineered robots to obey commands and move on their own. It is this divine metalsmith who possesses antiquity’s greatest résumé of bio-techne. Hephaestus manufactured a pair of mechanical gold and silver dogs to guard a king’s palace. His four robotic horses pulled a chariot, ‘kicking up dust with brass hooves and emitting a whinnying sound’. After the hero Pelops was chopped to pieces and resurrected by the gods, Hephaestus made a replacement shoulder blade of ivory.

Hephaestus devised a fleet of ‘driverless’ tripods on wheels that responded to commands to deliver food and wine. That led to his invention of a covey of life-sized golden maids to do his bidding. The robotic servants were ‘like real young women, with sense and reason, strength, even voices, and they were endowed with all the learning of immortals’. What Silicon Valley AI enthusiast could surpass such aspirations?

Hephaestus’s marvels were imagined by an ancient society not usually considered technologically advanced. Bio-techne creatures enchanted a culture that existed millennia before the advent of robots that can win complex games, hold conversations, analyse massive data, and infer human desires. But whose desires will AI robots reflect? From whom will they learn?

Microsoft’s teenage fem-chatbot Tay presents a contemporary cautionary tale. In March 2016, Tay went live on Twitter. Intricately programmed to mimic neural networks in the human brain, Tay was supposed to learn from her human ‘friends’. She was expected to articulate conversational gambits without filters or behavioural supervision. Within hours, malicious followers on Twitter caused Tay to morph into an internet troll, spewing racist and sexist vitriol. After less than 12 hours, she was terminated by her makers. Her easily corrupted learning system dampened optimism about self-educating AI and smart robots.

The ancient historians Polybius and Plutarch described a deliberately diabolical female robot. She was created for Nabis, the last king of Sparta, in the image of his vicious wife Apega. A brutal tyrant, Nabis came to power in 207 BC, and during his reign he extorted large sums of money from wealthy subjects. Greek sculptors were celebrated for extraordinarily realistic portrait statues with natural colouring, human hair and glass eyes. Nabis dressed this lifelike mannequin in his wife’s finery, which covered breasts that were studded with nails. Rich citizens were first plied with a great deal of wine and, if they refused to pay up, they were introduced to ‘Apega’, who would be more persuasive. As the drunken guest rose to greet the ‘queen’, King Nabis controlled a series of levers hidden in the robot’s back. She raised her arms and clutched the man, tightening her grip and crushing him to her spiky bosom. For this and other outrages, Nabis was assassinated in 192 BC. Many centuries later, medieval torturers would devise a crude version of the sophisticated Iron Maiden of Sparta.

The Argonautica, the epic poem about Jason and the Argonauts, also envisioned a murderous robot, Talos, one of Hephaestus’s most memorable creations. Talos was a gigantic bronze warrior programmed to guard the island of Crete by hurling boulders at approaching ships. He possessed another combat speciality, modelled on a human trait. Like the robot queen Apega, Talos could execute a chilling perversion of the universal gesture of human warmth, the embrace. With the ability to heat his bronze body red-hot, Talos would hug a victim, roasting them alive. How would Jason and the Argonauts escape from this bionic monster?

By using bio-techne to counter bio-techne. Medea knew that Hephaestus had created Talos with a single artery through which ichor, the mysterious life-fluid of the gods, pulsed from his neck to his ankle. A single bronze nail sealed Talos’s ‘vivisystem’.

Are Stephen Hawking, Elon Musk and Bill Gates the Promethean Titans of our era?

Medea convinced Talos that she could make him invincible by removing the bronze nail. But when the nail was pulled out, the ichor flowed out of Talos like molten metal, and his ‘life’ ebbed away. Medea had taken advantage of the perennial desire we attribute to imaginary replicants, from Talos to Frankenstein’s monster to Blade Runner: we believe they harbour human longings.

The capstone of Hephaestus’s laboratory was a female android requested by Zeus. Zeus wanted to punish humans for accepting the divine technology of fire stolen by Prometheus. And their punishment, created by Hephaestus, was Pandora (‘All Gifts’). Each of the gods endowed her with a human trait. Pandora possessed beauty, charm, musical talent, knowledge of healing and other arts, intelligence, daring and, of course, insatiable curiosity. Pandora is the gods’ AI agent. She comes in the form of a lovely young woman, and she is sent to Earth carrying a sealed chest, which contains another set of ‘gifts’.

The friendly Titan Prometheus warned humankind that Pandora’s box should never be opened. Are Stephen Hawking, Elon Musk and Bill Gates the Promethean Titans of our era? They have warned scientists to stop the reckless pursuit of AI because once set in motion, humans cannot control it. ‘Deep learning’ algorithms allow AI computers to extract patterns from vast data, extrapolate to novel situations, and decide on actions with no human guidance. Inevitably, AI robots will ask questions of their own devising. Computers have already developed altruism and deceit on their own. Will AI become curious to discover hidden knowledge and act by its own logic?

Pandora’s all-too-human, risk-taking, curious nature compelled her to open the chest. Out of Pandora’s box flew pestilence, disaster, misfortune. In simple versions of the myth, the last thing to flutter out of Pandora’s box was hope. But deeper, darker versions say that instead of hope, the last thing in the box was ‘anticipation of misfortune’. In this version, Pandora panicked and slammed down the lid, trapping foreknowledge inside. Deprived of the ability to foresee the future, humankind received what we call ‘hope’.

Since antiquity, philosophers have debated whether hope should be considered the best or the worst of the entities in Pandora’s sealed box. As human ingenuity, curiosity and audacity continue to test the boundaries of biological life and death, human and machine, this question will be posed to each new generation. Our world is of course unprecedented in the scale of techno-possibilities. But the unsettling push-pull of scientific nightmares and grand dreams is timeless. The ancient Greeks knew the quintessential attribute of humankind is always to be tempted to reach ‘beyond human’.

Earlier this year, engineers at the US weapons manufacturer Raytheon created three diminutive learning robots. They gave the robots classical names: Zeus, Athena and Hercules. With neural systems modelled on those of cockroaches and octopuses, the little solar-powered robots were given three gifts: the ability to move, a craving for darkness, and the capacity to recharge in sunlight. The robots quickly learned to mutate and soon understood that they must venture into excruciating light to recharge or die. This seemingly simple learning conflict parallels human ‘cognitive economy’, in which emotions help the brain allocate resources and strategise. Other AI experiments are teaching computers how human strangers convey goodwill to one another and how mortals react to negative and positive emotions.

Computers might be modelled on human brains but human minds do not work just like computers

Since Hawking warned that ‘AI could spell the end of the human race’, some scientists have proposed that human values and ethics could be taught to robots by having them read stories. Fables, novels and other literature – even a database of Hollywood movie plots – could serve as a kind of ‘human user manual’ for computers. One such system is named Scheherazade, in homage to the heroine of One Thousand and One Nights, the legendary Persian philosopher-storyteller who had memorised myriad tales from lost civilisations. For now, the stories are simple, showing computers how to behave like good rather than psychotic humans. More complex narratives will be added to the computer’s repertoire, with the goal of having it interact empathetically with human beings and respond appropriately to their emotions. The idea is that stories would become valuable once AI achieves the human mental tool of ‘transfer learning’ – symbolic reasoning by analogy – allowing it to make decisions without guidance.

Computers might be modelled on human brains but human minds do not work just like computers. We are learning that our cognitive function and rational thinking depend on emotions. Stories appeal to emotions, pathos. Stories continue to live as long as they summon ambiguous emotions, as long as they resonate with real dilemmas and are good to think with. In ages past, Greeks told themselves stories to understand humankind’s yearning to exceed biological limits. Bio-techne myths are a testament to the persistence of thinking and talking about what it is to be human. Mythic insights and wisdom deepen our conversations about AI. Might some of these myths also play a role in teaching AI to better understand humankind’s conflicted yearnings? Perhaps someday AI entities will absorb mortals’ most profound wishes and fears as expressed in ancient myths and will grasp the tangled expectations we have of AI creations. Through learning that humans foresaw their existence and contemplated some of the quandaries they might encounter, AI entities might be better able to comprehend the quandaries that they pose for us.

The rise of a robot-artificial intelligence ‘culture’ no longer seems far-fetched. AI’s human inventors and mentors are already building that culture’s logos, ethos and pathos. As humans are enhanced by technology and become more like machines, robots are becoming infused with something like humanity. We are approaching what some call the new dawn of robo-humanity. When that day comes, what myths will we tell ourselves? The answer will shape how and what robots learn, too.

Author: Adrienne Mayor

Source: aeon.co

What the AI behind AlphaGo can teach us about being human

AJA HUANG DIPS his hand into a wooden bowl of polished black stones and, without looking, thumbs one between his middle and index finger. Peering through wire-rim glasses, he places the black stone on the board, in a mostly empty zone, just below and to the left of a single white stone. In Go parlance it is a “shoulder hit,” in from the side, far away from most of the game’s other action.

Across the table, Lee Sedol, the best Go player of the past decade, freezes. He looks at the 37 stones fanned out across the board, then stands up and leaves.

In the commentary room, about 50 feet away, Michael Redmond is watching the game via closed-circuit. Redmond, the only Western Go player to reach the rank of nine dan, the game’s uppermost designation, literally does a double take. He is just as shocked as Lee. “I don’t really know if it’s a good move or a bad move,” Redmond says to the nearly 2 million people following the game online.

“I thought it was a mistake,” says the other English-language commentator, Chris Garlock, vice president of communications for the American Go Association.

A few minutes later, Lee walks back into the match room. He sits down but doesn’t touch his bowl of white stones. A minute goes by, then another—15 in all, a significant chunk of the initial two hours the players are allowed each game in the tournament. Finally, Lee plucks out a stone and places it on the board, just above the black one Huang played.

Huang’s move was just the 37th in the game, but Lee never recovers from the blow. Four hours and 20 minutes later, he resigns, defeated.

But Huang was not the true winner of this game of Go. He was only following orders—conveyed on a flatscreen monitor to his left, which was connected to a nearby control room here at the Four Seasons Hotel in Seoul and itself networked into hundreds of computers inside Google data centers scattered throughout the world. Huang was just the hands; the mind behind the game was an artificial intelligence named AlphaGo, and it was beating one of the best players of perhaps the most complex game ever devised by humans.

In the same room, another Go expert watches—three-time European champion Fan Hui. At first, Move 37 confuses him too. But he has a history with AlphaGo. He is, more than any other human being, its sparring partner. Over five months, Fan played hundreds of games with the machine, allowing its creators to see where it faltered. Fan lost time and again, but he’s come to understand AlphaGo—as much as anyone ever could. That shoulder hit, Fan thinks, it wasn’t a human move. But after 10 seconds of pondering it, he understands. “So beautiful,” he says. “So beautiful.”

In this best-of-five series, AlphaGo now led Lee—and, by proxy, humanity—two games to none. Move 37 showed that AlphaGo wasn’t just regurgitating years of programming or cranking through a brute-force predictive algorithm. It was the moment AlphaGo proved it understands, or at least appears to mimic understanding in a way that is indistinguishable from the real thing. From where Lee sat, AlphaGo displayed what Go players might describe as intuition, the ability to play a beautiful game not just like a person but in a way no person could.

But don’t weep for Lee Sedol in his defeat, or for humanity. Lee isn’t a martyr, and Move 37 wasn’t the moment where the machines began their inexorable rise to power over our lesser minds. Quite the opposite: Move 37 was the moment machines and humanity finally began to evolve together.

WHEN DAVID SILVER was a 15-year-old tournament chess player from Suffolk, on the east coast of England, Demis Hassabis was the kid no one could beat. Hassabis was a bona fide prodigy, the child of a Chinese-Singaporean mother and Greek-Cypriot father in London, and at one point the second-highest-rated under-14 chess player in the world. He would come out to the provincial tournaments to stay limber and earn a few extra quid. “I knew Demis before he knew me,” says Silver, the researcher who led the creation of AlphaGo. “I would see him turn up in my town, win the competition, and leave.”

They met properly as undergraduates at Cambridge studying computational neuroscience—an effort to understand the human mind and how machines might, one day, become a little bit intelligent themselves. But what they really bonded over was gaming, on boards and on computers.

Chess is a metaphor for war, but it’s really just a single battle. Go is like a global battlespace.

This was 1998, so naturally, after they graduated, Hassabis and Silver started a videogame company. Hassabis often played Go with a coworker, and, piqued by his colleague’s interest, Silver began learning on his own. “It became almost like a badge of honor if you could beat Demis at anything,” Silver says. “And I knew that Demis was just starting to get interested in the game.”

They joined a local Go club and played against two- and three-dan players, the equivalent of karate black belts. And there was something more: They couldn’t stop thinking about how this was the one game of intellect that machines had never cracked. In 1995 a computer program called Chinook beat one of the world’s best players at checkers. Two years later, IBM’s Deep Blue supercomputer toppled world chess champion Garry Kasparov. In the years that followed, machines triumphed at Scrabble, Othello, even TV’s Jeopardy! In game-theory terms, Go is a perfect information game like chess and checkers—no elements of chance, no information hidden. Typically those are easy for computers to master. But Go wouldn’t fall.

The thing is, Go looks pretty simple. Created in China more than 3,000 years ago, it pits two players against each other across a 19-by-19 grid. The players take turns putting stones at intersections—black versus white—trying to enclose territory or wall off swaths of their opponent’s color. People say chess is a metaphor for war, but it’s really more a metaphor for a single battle. Go is like a global battlespace, or geopolitics. A move in one corner of the grid can ripple everywhere else. Advantage ebbs and flows. In a game of chess, a player typically has about 35 possible moves to choose from in a given turn. In Go, the number is closer to 200. Over an entire game, that’s a whole other level of complexity. As Hassabis and Silver like to say, the number of possible positions on a Go board exceeds the number of atoms in the universe.
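The scale of that last claim is easy to sanity-check with a few lines of arbitrary-precision arithmetic. In this illustrative sketch (my calculation, not the article's), each of the 361 intersections on a 19-by-19 board can be empty, black or white, so 3^361 is a generous upper bound on board configurations; even this bound towers over the standard estimate of 10^80 atoms in the observable universe:

```python
# Back-of-the-envelope check of the Go-complexity claim.
# Each of 361 intersections is empty, black, or white, so 3**361 is an
# upper bound on board configurations (many are illegal positions, so
# the true count of legal positions is somewhat smaller).
board_configurations = 3 ** 361

# A commonly cited estimate for atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(len(str(board_configurations)))            # about 173 digits
print(board_configurations > atoms_in_universe)  # True, by a vast margin
```

Even the exact count of legal positions, computed exhaustively in 2016, comes to roughly 2 × 10^170 – smaller than the crude bound above, but still unimaginably larger than the atomic estimate.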

The upshot is that, unlike in chess, players—whether human or machine—can’t look ahead to the ultimate outcome of each potential move. The top players play by intuition, not raw calculation. “Good positions look good,” Hassabis says. “It seems to follow some kind of aesthetic. That’s why it has been such a fascinating game for thousands of years.”

In 2005, Hassabis and Silver’s game company folded and they went their separate ways. At the University of Alberta, Silver studied a nascent form of AI called reinforcement learning, a way for machines to learn on their own by performing tasks over and over again and tracking which decisions bring the most reward. Hassabis enrolled at University College London and got his PhD in cognitive neuroscience.

In 2010 they found each other again. Hassabis cofounded an AI company in London called DeepMind; Silver joined him. Their ambitions were grandiose: create general artificial intelligence, AI that really thinks. But they had to start somewhere.

That starting point was, of course, games. They’re actually a good test for artificial intelligence. By definition, games are constrained. They’re little bottled universes where, unlike in real life, you can objectively judge success and failure, victory and defeat. DeepMind set out to combine reinforcement learning with deep learning, a newish approach to finding patterns in enormous data sets. To figure out if it was working, the researchers taught their fledgling AI to play Space Invaders and Breakout.

Breakout turned out to be the big one. It’s basically Pong, except instead of bouncing a pixelated ball back and forth with an opponent, you’re bouncing it against a wall of colored bricks. Hit a brick and it disappears; miss the returning ball, or bounce it offscreen, and you lose. After playing just 500 games, DeepMind’s system taught itself to send the ball behind the wall at an angle that would guarantee it would stay up there, bouncing around, knocking out brick after brick without ever returning to the paddle. That’s a classic Breakout move, but DeepMind’s computer did it exactly right every time, at a speed well beyond anything human reflexes could handle.

Trawling for investors, Hassabis buttonholed Peter Thiel, the famed PayPal cofounder and Facebook investor, at a dinner party. He had only a few minutes to hook him. Knowing Thiel was an avid chess player, Hassabis pressed his offense by suggesting that the game had survived for so long because of the creative tension between the skills and weaknesses of knight and bishop. Thiel suggested Hassabis come back the next day to make a proper pitch.

Brute force had never been enough to beat Go. The game presents too many options to consider every outcome, even for a computer.

Once one Silicon Valley billionaire hears about you, others do too. Through Thiel, Hassabis met Elon Musk, who told Google CEO Larry Page about DeepMind. Google soon bought the company for a reported $650 million.

After joining the search giant, Hassabis showed off the Atari demo at a meeting that included Google cofounder Sergey Brin. And the two discovered they had a common passion. In grad school at Stanford, Brin played so much Go that Page worried Google might never happen.

So when Brin met Hassabis, they chatted about the game. “You know, DeepMind could probably beat the world Go champion in a couple years,” Hassabis told him. “If we really put our minds to it.”

“I thought that was impossible,” Brin replied.

That was all Hassabis needed to hear. Game, as they say, on.

 

AFTER GAME TWO CONCLUDES, Silver slips into the control room set up for AlphaGo, just down the hall from the match. Its brain is no more here than anywhere, distributed as it is among hundreds of computers around the planet. But in front of these banks of displays Silver can peer a little ways into AlphaGo’s mind, monitor its health, and track its running predictions of each game’s outcome.

With a few keystrokes, Silver calls up the record of AlphaGo’s decisions during the game. He zooms in on what happened right before Move 37.

Before DeepMind and AlphaGo, AI researchers attacked Go with machines that aimed to predict the results of each move in a systematic way, while a match was happening—to tackle the problem with brute computer force. This is pretty much how IBM’s Deep Blue beat Kasparov at chess in 1997. I covered that match as a cub reporter at PC Magazine, and as with Lee versus AlphaGo, people thought it was a signal moment for AI. Weirdly, just as in game two of the Lee match, Deep Blue made a move in its game two against Kasparov that no human would ever make. Kasparov was just as flummoxed as Lee, but Kasparov didn’t have the same fight in him; he resigned almost immediately—folded under the pressure.

But brute force had never been enough to beat Go. The game simply presents too many options to consider every outcome, even for a computer. Silver’s team went with a different approach, building a machine that could learn to play a reasonably good game before ever playing a match.

Inside the DeepMind offices near King’s Cross station in London, the team fed 30 million human Go moves into a deep neural network, a network of hardware and software that loosely mimics the web of neurons in the human brain. Neural networks are actually pretty common; Facebook uses them to tag faces in photos. Google uses them to identify commands spoken into Android smartphones. If you feed a neural net enough photos of your mom, it can learn to recognize her. Feed it enough speech, it can learn to recognize what you say. Feed it 30 million Go moves, it can learn to play Go.
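
To make that concrete, here is a toy sketch (my illustration, not DeepMind's code) of what "feeding moves into a network" means: a single softmax layer learns, from example position-and-move pairs, to put high probability on the move a human actually played. Scale the board to 19 by 19, the examples to 30 million, and the single layer to many, and you have the rough shape of AlphaGo's first network.

```python
import math

# A toy "policy network" (illustration only): a function from a board
# position to a probability for each possible move, trained to imitate
# example human moves.

BOARD_CELLS = 9   # a 3x3 toy board; real Go has 361 points

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

class TinyPolicyNet:
    def __init__(self):
        # one weight per (board cell, candidate move) pair
        self.w = [[0.0] * BOARD_CELLS for _ in range(BOARD_CELLS)]

    def probs(self, board):
        # board: 9 numbers (1 = our stone, -1 = theirs, 0 = empty)
        logits = [sum(row[i] * board[i] for i in range(BOARD_CELLS))
                  for row in self.w]
        return softmax(logits)

    def train(self, examples, lr=0.5, epochs=200):
        # gradient descent on cross-entropy: raise the probability of the
        # move the human actually played, lower the rest
        for _ in range(epochs):
            for board, move in examples:
                p = self.probs(board)
                for m in range(BOARD_CELLS):
                    grad = (1.0 if m == move else 0.0) - p[m]
                    for i in range(BOARD_CELLS):
                        self.w[m][i] += lr * grad * board[i]

# Two "human games": in each position the expert answered a threat.
examples = [
    ([1, -1, 0, 0, 0, 0, 0, 0, 0], 2),   # expert blocked at cell 2
    ([0, 0, 0, 0, 1, -1, 0, 0, 0], 3),   # expert extended at cell 3
]
net = TinyPolicyNet()
net.train(examples)
board, expert_move = examples[0]
best = max(range(BOARD_CELLS), key=lambda m: net.probs(board)[m])
print(best == expert_move)  # True: the net has learned to imitate
```

A policy like this assigns a probability to every legal move, which is the sense in which AlphaGo could rate a choice as one an expert human would make only 1 time in 10,000.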

But knowing the rules isn’t the same as being an ace. Move 37 wasn’t in that set of 30 million. So how did AlphaGo learn to play it?

AlphaGo knew—to the extent that it could “know” anything—that the move was a long shot. “It knew that this was a move that professionals would not choose, and yet, as it started to search deeper and deeper, it was able to override that initial guide,” Silver says. AlphaGo had, in a sense, started to think on its own. It was making decisions based not on a set of rules its creators had encoded in its digital DNA but on algorithms it had taught itself. “It really discovered this for itself, through its own process of introspection and analysis.”

In fact, the machine had calculated the odds that an expert human player would have made the same move at 1 in 10,000. AlphaGo did it anyway.

After it learned to play the game from those human moves, Silver pitted the machine against itself. It played game after game after game versus a (slightly) different version of its own neural network. As it played, it tracked which moves generated the greatest reward in the form of the most territory on the board—the reinforcement learning technique Silver had studied in grad school. AlphaGo began to develop its own inhuman repertoire.
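
The self-play loop can be sketched in miniature (again my illustration, not DeepMind's code). Here a simple policy plays thousands of games against itself at Nim, a toy game where players alternately take 1 or 2 stones from a pile and whoever takes the last stone wins. A table of move values stands in for the neural network, and every move on the winning side is reinforced; no human examples are needed, because the reward signal alone teaches it optimal play (always leave the opponent a multiple of 3).

```python
import random

# Toy self-play reinforcement learning (illustration only). Game: Nim --
# take 1 or 2 stones from a pile; whoever takes the last stone wins.

random.seed(42)
value = {}  # (pile, stones_taken) -> running estimate of that move's worth

def choose(pile, explore):
    moves = [t for t in (1, 2) if t <= pile]
    if explore and random.random() < 0.2:       # sometimes try something new
        return random.choice(moves)
    return max(moves, key=lambda t: value.get((pile, t), 0.0))

for _ in range(20000):                          # game after game after game
    pile, player, history = 10, 0, []
    while pile > 0:
        take = choose(pile, explore=True)
        history.append((player, pile, take))
        pile -= take
        winner = player                         # last mover took the last stone
        player = 1 - player
    for who, p, t in history:                   # reinforce: +1 win, -1 loss
        reward = 1.0 if who == winner else -1.0
        old = value.get((p, t), 0.0)
        value[(p, t)] = old + 0.1 * (reward - old)

print(choose(4, explore=False))  # learned the winning reply: take 1, leaving 3
```

Swap Nim's pile for a Go board and the table for a deep network, and this is the loop by which AlphaGo built its own inhuman repertoire.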

But that was only part of the trick. Silver’s team then fed millions of these inhuman moves into a second neural network, teaching it to look ahead to likely outcomes the way Kasparov (or Deep Blue) reads the future of a chess game. It couldn’t calculate every possible line the way a chess machine can; in Go, that was still impossible. But after tapping all the knowledge it had gathered playing so many games against itself, AlphaGo could begin to predict how a game of Go would probably play out.
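
The look-ahead idea can also be sketched by hand (a stand-in, not AlphaGo's actual search): read only a few moves deep, then ask a value function to guess the likely winner at the frontier instead of playing every line to the end. In AlphaGo that value function is a trained network; here it is a simple rule for a toy game of Nim (take 1 or 2 stones from a pile; whoever takes the last stone wins).

```python
# Depth-limited search with a value estimate at the frontier (toy sketch).

def value_estimate(pile):
    # Stand-in for a learned value network: in Nim, the side to move loses
    # a pile that is a multiple of 3 (with best play), and wins otherwise.
    return -1.0 if pile % 3 == 0 else 1.0

def negamax(pile, depth):
    # Value of the position for the side to move.
    if pile == 0:
        return -1.0                     # opponent took the last stone: we lost
    if depth == 0:
        return value_estimate(pile)     # cut the search off, trust the guess
    return max(-negamax(pile - t, depth - 1) for t in (1, 2) if t <= pile)

def best_move(pile, depth=2):
    # Pick the move whose resulting position looks worst for the opponent.
    return max((t for t in (1, 2) if t <= pile),
               key=lambda t: -negamax(pile - t, depth - 1))

print(best_move(10))  # take 1, leaving a losing pile of 9
```

Replace the toy game with Go, the rule with a deep network, and this shallow search with a guided tree search, and you have the outline of how AlphaGo weighs its candidate moves.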

Being able to guess at an outcome from starting conditions you’ve never seen before? That’s called intuition. And what AlphaGo intuited in game two was Move 37, an insight beyond what even the best human players could see. Even its creators didn’t see that one coming. “When I watch these games, I can’t tell you how tense it is,” Silver tells me after his trip to the control room. “I really don’t know what is going to happen.”

YOU DON’T PAY $650 million for a company just to have it build a computer that can play board games. Deep learning and neural networks underpin about a dozen Google services, including its almighty search engine. Reinforcement learning, AlphaGo’s other not-so-secret weapon, is already teaching the company’s lab robots to pick up and move all sorts of objects. And you can see how important the tournament is to Googlers. Eric Schmidt—chair and former CEO—flies in before game one. Jeff Dean, the company’s most famous engineer, is there for the first game. Sergey Brin flies in for games three and four, and follows along on his own wooden board.

But more is at stake than a business. During the tournament, I took a walk with Hassabis through Jongno-gu, the 600-year-old cultural and political heart of Seoul. As we chatted, a young woman, eyes wide, recognized Hassabis, whose face was all over Korean TV and newspapers. And then she mimed having a fainting spell, as if he were Taylor Swift or Justin Bieber.

“Did you see that?” I said.

“Yes,” Hassabis answered, deadpan. “It happens all the time.”

He might not be kidding. Computer engineers don’t usually have fans, but 8 million people play Go in Korea, and Lee is a national hero. In China, more than 280 million viewers watched the tournament live.

So perhaps it makes sense that when Lee loses the first game and then the second, the giddy excitement those fans feel is cut with something darker. As game two ends, a Chinese reporter named Fred Zhou stops me in the commentary room, happy to speak with someone who appreciates AlphaGo as a feat of technology, not just a Go killer.

But then I ask him how he feels about Lee’s defeat. Zhou points to his chest, to his heart. “It made me sad,” he says.

I felt that sadness too. Something that belonged uniquely to humans didn’t anymore. What many of us watching the contest unfold came to realize is that machines have crossed a threshold. They’ve transcended what humans can do. Certainly machines can’t yet carry on a real conversation. They can’t think up a good joke. They can’t play charades. They can’t duplicate good old common sense. But AlphaGo’s relentless superiority shows us that machines can now mimic—and indeed exceed—the kind of human intuition that drives the world’s best Go players.

Lee goes on to lose game three, and AlphaGo secures victory in the best-of-five series. At the press conference afterward, with Hassabis sitting next to him, Lee apologizes for letting humanity down. “I should have shown a better result, a better outcome,” he says.

As Lee speaks, an unexpected feeling begins gnawing at Hassabis. As one of AlphaGo’s creators, he is proud, even elated, that the machine has achieved what so many thought it couldn’t. But even he feels his humanness rise. He starts to hope that Lee will win one.

TWO HOURS INTO game four, Lee is deep in another hole. He plays an aggressive game, attacking particular areas of the sprawling game board. But AlphaGo plays a more expansive style, taking a more holistic approach that weighs the entire board. In Move 37, AlphaGo placed its black stone in an area near only one other stone, away from the main action. Once again, in game four, the machine is using this enigmatic approach to take control of the contest.

AlphaGo has already won the tournament. Lee isn’t playing for the win anymore. He’s playing for humanity. Seventy-seven moves in, he seems to stall. He rests his chin in his right hand. He sways forward and back. He swivels in his chair and rubs the back of his neck. Two minutes pass, then four, then six.

Then, still gripping the back of his neck with his left hand, he strikes. With the first two fingers of his right hand, Lee puts a white stone near the very center of the board, directly between two black stones. It’s the 78th stone on the board, a “wedge move” between two vast and crowded swaths of territory. It effectively cuts AlphaGo’s defenses in half. And the machine blinks. Not literally, of course. But its next move is horrendous. Lee shoots a pointed stare at Huang, as if Huang is the opponent rather than a billion circuits.

In AlphaGo’s control room, the people running the machine stop what they’re doing and stare at their monitors. Before Lee’s brilliant Move 78, AlphaGo was putting its chances of winning at 70 percent. Eight moves later, the odds drop off the table. Suddenly AlphaGo isn’t Deep Blue’s successor—it’s Kasparov’s. It simply can’t believe a human being would make that move—the odds are a familiar 1 in 10,000.

Just like a human, AlphaGo can be taken by surprise. Four hours and 45 minutes into the game, AlphaGo resigns. Just like us, it can lose.

“All the thinking that AlphaGo had done up to that point was sort of rendered useless,” Hassabis says. “It had to restart.”

THE FINAL GAME has begun, and I’m supposed to watch with Hassabis and his team. But just before I head to meet them, a Googler finds me in the press room. “We’re so sorry,” she says, “but the team has changed their mind. They don’t want a reporter in the room for the final match.”

After she walks away, I turn to Geordie Wood, WIRED’s photographer. “You know what that means?” I say. “AlphaGo thinks it’s losing.”

It is. Early in the game AlphaGo makes a rookie mistake. In a crowded area on the lower half of the board, the machine places its white stone too close to Lee’s line of black and loses the entire territory. AlphaGo’s intuition failed it; like a human, the machine still has blind spots.

But as the game stretches into a third hour, AlphaGo claws its way back into the contest. At the three-and-a-half-hour mark, Lee’s play clock runs out. Under the match rules, he now has to make each move in less than a minute or else forfeit, but a wide swath of space on the top right-hand side of the board remains unclaimed. Time and again, he waits until the last second to place his next stone.

Then AlphaGo’s clock runs out too. Both players start moving at what looks like an impossible pace. The board fills with stones. For the first time in the series, the game looks as though it will play out to the very end—that neither side will resign before the final points are tallied. But five hours in, the gulf between Lee and AlphaGo grows too wide. Lee resigns. AlphaGo is fallible but still dominant.

IN ALL THE world, only one other person could credibly claim to know how Lee felt: Fan Hui, the three-time European champ and AlphaGo’s de facto trainer. He had lost to the machine five games to nil in a closed-door match back in October, the training montage for the bigger contest in Seoul. Afterward, Fan joined DeepMind as a kind of player for hire, playing game after game against the machine—games he kept losing, one after the other.

But as Fan’s losses piled up against AlphaGo, a funny thing happened. Fan came to see Go in an entirely new way. Against other humans, he started winning more—including four straight games against top players. His ranking shot up. AlphaGo was training him.

So, I ask Fan during the tournament, what should we think of Lee’s fight against the machine?

“Be gentle with Lee Sedol,” Fan says. “Be gentle.”

These days, the world’s biggest, richest tech companies are using the same kinds of technologies on which AlphaGo was built to seek competitive advantage. Which app can recognize a photo better? Which can respond to a voice command? Soon these same kinds of systems may help robots interact with their real-world environments more like people do.

But these practical uses all seem banal next to AlphaGo’s inhuman humanity. A subculture has sprung up around AlphaGo in a way that hasn’t happened around, say, the Google Photo app. In Düsseldorf, Germany, J. Martin—a professor of game design, media, and communications—now runs a Twitter account dedicated to Move 37. After reading my online coverage of the tournament in Seoul, a 45-year-old computer programmer from Florida named Jordi Ensign emailed me to let me know she had AlphaGo’s Move 37 tattooed on the inside of her right arm. On the inside of her left arm, Lee’s Move 78—a move the Go world has dubbed God’s Touch.

In the hours after the fourth game, Lee sat down with Hassabis. The former games prodigy told Lee he understood the pressures. He understood his creativity and his drive. “I was a games player too,” Hassabis said. “If my life had gone a different way … I know what kind of dedication it takes to get to that level, the amount of sacrifice.”

Lee replied that playing against the machine had rekindled his passion for Go. As with Fan Hui, AlphaGo had opened his eyes to a new side of the game. “I have improved already,” Lee said. “It has given me new ideas.” He has not lost a match since.

Before the tournament, Hassabis told the world that AlphaGo’s AI tech could drive a new kind of scientific research, where machines point humans toward the next big breakthrough. At the time, without evidence, those claims rang a bit hollow—typical tech hype. But not anymore. The machine did a very human thing even better than a human. But in the process it made those humans better at what they do. Yes, you could see Move 37 as an early sign of machines asserting their superiority to their human creators. Or you could see it as a seed: Without Move 37, we wouldn’t have Move 78.

Author: Cade Metz

Source: www.wired.com