Technology is progressing at an exponential rate, but are we thinking big enough to take advantage of it? Astro Teller, Google’s Director of New Projects, is determined to rise to the challenge. He says that while we usually overestimate what might be possible in a year, we often underestimate what could be possible in five years, when a number of different technological trends may have come together in ways we couldn’t have foreseen.
It’s easy to make fun of the past. Remember when Thomas Watson, the head of IBM, said that the world would only ever need five computers? How about when DEC founder Ken Olsen declared, “There is no reason for any individual to have a computer in his home”? Or when Bill Gates decided that 640K was all the memory a PC would ever need?
Never mind that these stories are apocryphal; we repeat them because they ring true. When you first used a computer, saw a PC, went to a website, or picked up a cell phone, did you imagine that these devices would someday be able to do even a fraction of the things that we now take for granted? Probably not, and since ignorance loves company, we take comfort in believing that brilliant people like Watson, Olsen, and Gates were just as short-sighted as the rest of us. Even if, in fact, they weren’t.
There’s no shame in exhibiting a failure of imagination; it’s a quirk of human psychology. When we come up against things we can’t forecast (for example, what would we do with 1,000 times more computing power in our phones?), we assume that if we can’t imagine it, it isn’t possible or won’t materialize.
That’s because we think in a linear fashion, subconsciously projecting our current pace of progress into the future. But technology is changing at a non-linear pace. Progress is speeding up, which is why, as Larry Page said at our Zeitgeist conference (paraphrasing Bill Gates), “People tend to overestimate what can happen in the next year but underestimate what can happen in the next five.”
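The gap is easy to make concrete. In the sketch below (a back-of-the-envelope illustration, assuming purely for argument’s sake that some capability doubles every 18 months), a linear projection based on the first year’s gain matches the exponential curve almost exactly at twelve months, then falls badly behind by year five:

```python
# Illustrative only: contrast a linear projection with exponential growth,
# assuming capability doubles every 18 months (a Moore's-law-style rate
# chosen purely for this example).

DOUBLING_PERIOD_YEARS = 1.5

def exponential(years):
    """Capability after compounding the assumed doubling rate."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

def linear(years):
    """Capability if the first year's absolute gain simply repeats."""
    first_year_gain = exponential(1) - 1
    return 1 + first_year_gain * years

for years in (1, 5):
    print(f"{years} year(s): linear x{linear(years):.2f}, "
          f"exponential x{exponential(years):.2f}")

# 1 year(s): linear x1.59, exponential x1.59   (nearly identical)
# 5 year(s): linear x3.94, exponential x10.08  (linear undershoots by 2.5x)
```

The linear guess is right on schedule at one year and off by a factor of two and a half at five, which is precisely the asymmetry Page was describing.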
While we can’t rewire our tendency to think linearly, we can train ourselves to recognize when that tendency is kicking in and consciously overcome it. This is sort of my job at Google. I work on a team that tries to find new opportunities to use major technical breakthroughs to solve big problems that could affect billions of people.
We start by thinking about what should be possible. How do we make that call? We know that computing power, bandwidth, and storage are getting better and cheaper. Information is ubiquitous, and data that used to be locked up in thousands of silos around the world (in universities, businesses, and governments) is now moving online and becoming widely accessible. These factors are leading us to a critical point: The rate at which computers are getting better at understanding semantic information (language, symbols, images, even facial expressions) is increasing.
If we assume that these ‘exponential trends’ are set to continue, we can stretch our minds to consider incredible advances. We do this by applying a powerful but simple rule of thumb: If a human can do it, so can a computer. This helps us differentiate between things that are impossible (like time travel) and those that are just really difficult (like self-driving cars). In computer-science terms, a human is two video cameras, a pair of microphones, four actuators, and a remarkably powerful CPU, and we manage to drive cars just fine. Why would we think a robot couldn’t?
Once we decide that something is possible, we look at whether it’s useful. This is a critical step. There are plenty of technically challenging things that would be completely useless in real life. But if you believe that something is both possible to achieve and, once realized, tremendously beneficial, it becomes a fairly simple equation. Consider those self-driving cars again. Does it take a bigger leap of imagination to believe that they will someday exist? Or that they won’t? We bet that they will, and get to work.
Another good example is GPS. Twenty-five years ago it was possible to determine an object’s geographical position outdoors to within 25m, but the capability only seemed useful for military, scientific, and marine applications. Even 15 years ago, consumer uses of the technology, such as maps with navigation or location services like Foursquare and Google Latitude, were difficult for most of us to imagine. Today, of course, GPS is a ubiquitous consumer technology. I can look at the map on my phone and it shows with 25m accuracy that I’m standing next to my car on the Google campus in Mountain View, California.
But GPS isn’t done. We now know that it’s possible to improve accuracy to 2.5m and to locate positions indoors as well as outside. But is it useful? We think so. How about the next two orders of magnitude? It should be possible, we believe, to pinpoint geographic location within 25cm, and perhaps to eventually whittle that down to 2.5cm. As for usefulness, many people can’t see it, but I’m betting that these potential improvements in localization accuracy will turn out to be every bit as valuable as the previous steps. I just can’t say exactly how.
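Seen as a ladder rather than a series of steps, the progression is easier to hold in mind. The sketch below simply restates it, pairing each order of magnitude with the uses named above and leaving the last two rungs as the open bets they are:

```python
# Illustrative only: the GPS accuracy ladder described above, from
# today's 25m down to a speculative 2.5cm, one order of magnitude per rung.
accuracy_ladder = [
    (25.0,  "outdoor maps, navigation, location services"),
    (2.5,   "positioning indoors as well as outdoors"),
    (0.25,  "usefulness not yet obvious; the bet is that it will be"),
    (0.025, "usefulness not yet obvious; the bet is that it will be"),
]

for metres, use in accuracy_ladder:
    print(f"{metres:>6.3f}m  ->  {use}")
```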
So what’s next? Fortunately, when you live in the twenty-first century and are in the imagination business, speed is your friend. Imagine that your PC, laptop, tablet, or phone is a thousand times more powerful. Imagine that the wireless or wired networks connecting it to the internet are a thousand times faster as well. Imagine that every bit of recorded information that has ever been created is available online, while trillions of sensors around the planet are creating exabytes of new data every second. Imagine that the world’s most powerful supercomputers are a thousand times more powerful, and that you can use them whenever and wherever you want.
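None of this requires magic, just patience. As a rough illustration, again assuming an 18-month doubling time, a thousandfold improvement is only about ten doublings away:

```python
import math

# Illustrative only: how long a thousandfold improvement takes if a
# capability doubles every 18 months (an assumed, Moore's-law-style rate).
target_factor = 1000
doubling_period_years = 1.5

doublings = math.log2(target_factor)        # ~10.0 doublings
years = doublings * doubling_period_years   # ~14.9 years

print(f"{target_factor}x = {doublings:.1f} doublings = about {years:.0f} years")
```

On that assumption, the thousand-times world arrives in roughly 15 years: far enough away to defeat linear intuition, close enough to start designing for now.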
What products or services will these new capabilities unleash? Could we develop software systems to read long papers and provide an accurate executive summary as fast as a search query is answered today? Sure, why not? That sounds incredibly useful. Could computers write their own software based on a system designer’s natural language specifications (‘Please take this app and develop different versions to run on the most popular consumer platforms’)? Absolutely.
If something rides the rails of exponentially improving computer and data capability, and if its benefits are sufficiently powerful, it is likely to happen – whether we can imagine it today or not.