The following excerpt has been selected exclusively for StartupNation readers from “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future” by Kevin Kelly, published by Viking, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2016 by Kevin Kelly.
In Chapter 2 of “The Inevitable,” titled Cognifying, Kelly discusses the concept of artificial intelligence. In the excerpt below, he introduces the three recent breakthroughs that will make artificial intelligence more prominent in the years to come.
Cheap parallel computation
Thinking is an inherently parallel process. Billions of neurons in our brain fire simultaneously to create synchronous waves of computation. To build a neural network—the primary architecture of AI software—also requires many different processes to take place simultaneously. Each node of a neural network loosely imitates a neuron in the brain—mutually interacting with its neighbors to make sense of the signals it receives. To recognize a spoken word, a program must be able to hear all the phonemes in relation to one another; to identify an image, it needs to see every pixel in the context of the pixels around it—both deeply parallel tasks. But until recently, the typical computer processor could ping only one thing at a time.
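The idea that every node weighs all of its incoming signals at once can be sketched in a few lines of code. This is a minimal toy illustration, not anything from the book; the weights and inputs below are made-up numbers chosen only to show the shape of the computation.

```python
import math

def node(inputs, weights, bias):
    # A single node "listens" to every incoming signal simultaneously,
    # sums the weighted total, and squashes it through an activation.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

def layer(inputs, weight_rows, biases):
    # A layer is many such nodes; each one reads the entire input.
    # Because every node's calculation is independent, they can all
    # run at the same time -- the parallelism GPUs happen to excel at.
    return [node(inputs, ws, b) for ws, b in zip(weight_rows, biases)]

# Hypothetical numbers: a 3-"pixel" input feeding two nodes,
# each of which sees all three pixels in context.
pixels = [0.0, 0.5, 1.0]
weights = [[0.2, -0.4, 0.6], [0.1, 0.1, 0.1]]
out = layer(pixels, weights, [0.0, 0.0])
print(len(out))  # one activation per node
```

On a GPU, each node's sum would be computed by a separate core at the same instant, which is why Ng's cluster could do in a day what serial processors needed weeks for.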
That began to change more than a decade ago, when a new kind of chip, called a graphics processing unit, or GPU, was devised for the intensely visual—and parallel—demands of video games, in which millions of pixels in an image had to be recalculated many times a second. That required a specialized parallel computing chip, which was added as a supplement to the PC motherboard. The parallel graphics chips worked fantastically, and gaming soared in popularity. By 2005, GPUs were being produced in such quantities that they became so cheap they were basically a commodity. In 2009, Andrew Ng and a team at Stanford realized that GPU chips could run neural networks in parallel.
That discovery unlocked new possibilities for neural networks, which can include hundreds of millions of connections between their nodes. Traditional processors required several weeks to calculate all the cascading possibilities in a neural net with 100 million parameters. Ng found that a cluster of GPUs could accomplish the same thing in a day. Today neural nets running on GPUs are routinely used by cloud-enabled companies such as Facebook to identify your friends in photos or for Netflix to make reliable recommendations for its more than 50 million subscribers.
Big data
Every intelligence has to be taught. A human brain, which is genetically primed to categorize things, still needs to see a dozen examples as a child before it can distinguish between cats and dogs. That’s even more true for artificial minds. Even the best-programmed computer has to play at least a thousand games of chess before it gets good. Part of the AI breakthrough lies in the incredible avalanche of collected data about our world, which provides the schooling that AIs need. Massive databases, self-tracking, web cookies, online footprints, terabytes of storage, decades of search results, Wikipedia, and the entire digital universe became the teachers making AI smart. Andrew Ng explains it this way: “AI is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms but the fuel is the huge amounts of data we can feed to these algorithms.”
Better algorithms
Digital neural nets were invented in the 1950s, but it took decades for computer scientists to learn how to tame the astronomically huge combinatorial relationships between a million—or a hundred million—neurons. The key was to organize neural nets into stacked layers. Take the relatively simple task of recognizing that a face is a face. When a group of bits in a neural net is found to trigger a pattern—the image of an eye, for instance—that result (“It’s an eye!”) is moved up to another level in the neural net for further parsing. The next level might group two eyes together and pass that meaningful chunk on to another level of hierarchical structure that associates it with the pattern of a nose. It can take many millions of these nodes (each one producing a calculation feeding others around it), stacked up to 15 levels high, to recognize a human face. In 2006, Geoff Hinton, then at the University of Toronto, made a key tweak to this method, which he dubbed “deep learning.” He was able to mathematically optimize results from each layer so that the learning accumulated faster as it proceeded up the stack of layers. Deep-learning algorithms accelerated enormously a few years later when they were ported to GPUs. The code of deep learning alone is insufficient to generate complex logical thinking, but it is an essential component of all current AIs, including IBM’s Watson; DeepMind; Google’s search engine; and Facebook’s algorithms.
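The stacking Kelly describes—each level’s detections becoming the next level’s raw material—can be sketched as a toy feedforward pass. This is a hand-made miniature, assuming invented weights and a tiny 8-value input; real face-recognition nets have millions of nodes and learned weights, but the flow of information upward through the stack is the same.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def run_layer(inputs, weight_rows, biases):
    # Each node sums over all of its inputs; the whole layer's
    # output then becomes the input to the layer above it.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weight_rows, biases)]

def run_stack(inputs, layers):
    # Low-level detections ("an eye") are passed upward and combined
    # into higher-level patterns ("two eyes plus a nose"), level
    # after level, until the top layer emits a verdict.
    for weight_rows, biases in layers:
        inputs = run_layer(inputs, weight_rows, biases)
    return inputs

# Hypothetical weights: an 8-value input narrowed through three
# stacked layers (8 -> 4 -> 2 -> 1), the eye-to-face hierarchy in miniature.
net = [
    ([[0.5] * 8] * 4, [0.0] * 4),
    ([[0.5] * 4] * 2, [0.0] * 2),
    ([[0.5] * 2] * 1, [0.0] * 1),
]
verdict = run_stack([0.1] * 8, net)
print(len(verdict))  # a single top-level activation
```

Hinton’s “deep learning” tweak concerned how the weights in such a stack are trained, so that error corrections accumulate usefully as they propagate through many layers rather than washing out.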
This perfect storm of cheap parallel computation, bigger data, and deeper algorithms generated the 60-years-in-the-making overnight success of AI. And this convergence suggests that as long as these technological trends continue—and there’s no reason to think they won’t—AI will keep improving.
To learn more about “The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future” by Kevin Kelly, view his video (below) from his talk at the 2016 SXSW Conference.
“The Inevitable” is available now wherever fine books are sold and via Penguin Random House.
Reviews of “The Inevitable”
“Anyone can claim to be a prophet, a fortune teller, or a futurist, and plenty of people do. What makes Kevin Kelly different is that he’s right. In this book, you’re swept along by his clear prose and unassailable arguments until it finally hits you: The technological, cultural, and societal changes he’s foreseeing really are inevitable. It’s like having a crystal ball, only without the risk of shattering.”
—David Pogue, Yahoo Tech
“This book offers profound insight into what happens (soon!) when intelligence flows as easily into objects as electricity.”
—Chris Anderson, author of “The Long Tail”
“How will the future be made? Kevin Kelly argues that the sequence of events ensuing from technical innovation has its own momentum…and that our best strategy is to understand and embrace it. Whether you find this prospect wonderful or terrifying, you will want to read this extremely thought-provoking book.”
—Brian Eno, musician and composer
“Kevin Kelly has been predicting our technological future with uncanny prescience for years. Now he’s given us a glimpse of how the next three decades will unfold with The Inevitable, a book jam-packed with insight, ideas, and optimism.”
—Ernest Cline, author of “Ready Player One”