This story picks up where my piece, “The Third Coming: Is Artificial Intelligence the Swan Song for Meatspace?” left off. In it, I make the case that artificial intelligence (AI) is already here, that it is nowhere near as smart as Tron warned us it would be, and that it is becoming ubiquitous. I touch briefly on a few core concepts of this new – and growing – AI presence, and on how it is already shaping our interactions with – and expectations of – our shared social experience of day-to-day reality, or “Meatspace.” I’d like to go into more detail here, if you are interested in joining the conversation or following along.
If we redefine AI as a vast array of interconnected systems that we access primarily through our “smart” devices or wearables, it becomes easy to see that an app – by definition – is nothing more than a tool that helps human beings manage their experience of the world through a familiar technology. A few short years ago, when smartphones were still relatively rare, the notion that we would one day “wear” our smart devices was science fiction at best. Our best references came from Star Trek or cyberpunk fiction, which offered glimpses of a world where wearables are common – a world that seemed impossibly remote at the time, but that is coming true all around us in 2016.
In the movie Zoolander (2001), Derek – the model / savant who is the story’s titular character – makes and receives his wireless calls on an incredibly tiny flip-phone. Part of what made that joke so funny at the time was that mobile phone technology had been shrinking every year since its commercial introduction in the 1980s. The assumption was that phones would soon disappear altogether, and it was not an unreasonable one. But first – they had to get a whole lot bigger.
In 2007, Apple CEO and visionary guru Steve Jobs introduced the world to the iPhone, and the media went gaga for it. The news wasn’t all positive, however: many felt the device was simply “too large,” and that consumers generally favored a smaller, sleeker handheld for day-to-day communications. That objection was mainly a matter of customer training. Three decades into the home computer revolution, consumers naturally had expectations of what a desktop, laptop, tablet, phone or MP3 player should be. Apple confounded those expectations by releasing a device that would become – for many people – the best of all possible worlds, in a relatively small and attractive footprint. Other phone and tablet manufacturers quickly released their own smartphones, driving down prices. Suddenly, our phones were bigger than ever before, with larger screens, more memory and better cameras. The race was on to build an experience our grandparents never imagined: the sum of all human knowledge and experience – or whatever part of it could be accessed over the internet, anyway – held in the palm of your hand for less than $75 a month.
That wasn’t the only innovation in connectivity and communication, however. In fact, few realized just how much the market for this kind of “universal access” to information would grow and segment over the decade that followed. Smartphones now come in a variety of configurations. Some are quite large – almost tablets, really. Others are tiny, their interfaces designed to be worn on the wrist as an extension of the phone in your pocket or on your belt.
It’s not just phones, either. Most new televisions are designed to function as a platform for their own versions of popular “apps,” like Netflix, Amazon Prime or Pandora. Even automobile manufacturers have jumped in with both feet, offering entertainment options in the “cabins” of our cars that include music, navigation and phone integration for making and receiving calls while mobile. Navigation is now crowd-sourced in apps like Waze, which offer real-time analysis of traffic, police activity and road construction – to better guide the connected driver through increasingly crowded commutes.
Gamers are being reintroduced to the idea of virtual reality (VR) – a concept at least as old as “The Lawnmower Man” (1992) or the original “Tron” (1982). Some of the least expensive options are simple cardboard attachments for smartphones. High-end experiences require expensive headgear and involve more wires than a Pentium-class office PC from the early 2000s. The Oculus Rift and HTC Vive have launched for PC, and both Sony and Microsoft promise their own branded VR hardware for their respective consoles in the next year or so.
Google attempted to introduce the concept of a hands-free wearable with Glass – a product designed to replace or augment a user’s smartphone with a pair of glasses. Glass was probably a little ahead of its time, but not by much. The augmented reality (AR) experience promised by Glass can be had today on any smartphone with a good camera and the right app: one simply holds the phone as if pointing a movie camera, and reads the overlay displayed on top of the images on the screen. Meanwhile, progress is being made on “smart” contact lenses, which may eventually include nanoscale bio-sensors for measuring blood glucose and arrays of LEDs powered by ambient light, whether from the sun or artificial sources. Early generations will likely be rudimentary, displaying as little as a single pixel in the visual field, but it is only a matter of time before smart screens can be worn – as either contact lenses or implants – and paired with other wearables to create a true mechanical augmentation of our human experience of “seeing” in Meatspace.
Imagine a world where the “legally blind” can “see” well enough to navigate their daily lives without assistance, and where everyone receives prompts overlaid on their experience of our shared environment, and you will quickly understand the ramifications of this growing technology. Combine that tech with smart wearables that respond to voice commands – and eventually sub-vocal speech – and our experience of “reality” begins to look very strange indeed.
Moreover, the eventual intersection of all of these technologies will likely be a mix of augmented and/or virtual reality, voice-activated interfaces and cloud-dwelling “dumb,” single-purpose AIs – and the future of our man-machine interface starts to look a lot like techno-telepathy. Each of us will employ a variety of “agents” to help us manage the vast and growing glut of information available to us at any given moment over Wi-Fi and Bluetooth. A credulous reader may be forgiven if this “future of wearables” looks more like “magic” than science fiction.
Our eyes can register only a narrow range of light frequencies, but our wearables will eventually allow us to see in the dark, and to detect illness or disease at a glance. Moreover, this enhanced “experience” will be backed up and saved to the cloud, to be experienced by others anonymously, through their own interfaces, in VR. If the past is any sort of guide at all, this tech will likely be used for porn before it is adopted for education.
In part three of this series, I will discuss the ramifications of such a world, and the concept of hard limits to our technology. It may be coming sooner than we think.
Spoiler alert: Killer AI is probably the least of our problems.