Back to the Real Future
⎔
What is real? That is going to be the question of the future. Not because we’ll have the cognitive surplus to consider those questions we left behind in the smoky, tapestry-draped, blacklit dorm rooms of our youth, but because daily experiences, so subtly technological as to look and sound and feel as natural as, well, nature, will provoke us to ask.
Imagine, for instance, visiting the pyramids. I’ve never been there, but I’ve imagined it countless times. When I was a child, a book of black and white line drawings could take me there. Now, I can “go” there with Google’s StreetView, which, I must say, is pretty incredible as far as experiences I can have while sitting at my desk are concerned. But as excellent as Google’s surrogacy is, it can only get me so close. I’m still a few meters from the stones themselves. What if I want to get closer? A couple clicks of the zoom don’t quite do it; the image gets bigger but it also gets blurrier.
How about the Mona Lisa? Imagine visiting the Louvre, and standing before Leonardo’s masterpiece. Now that’s something I’ve actually seen, and, unfortunately, it looks much more like this than what you’re probably imagining. Most of what I saw on the day I visited the Louvre I saw through the glowing rectangle of someone else’s camera screen. I waited a long time to get closer, but standing a few meters from the painting — which is only two-and-a-half feet tall and hung behind a thick panel of bulletproof glass that reflects every flash of the hundreds of cameras pointed at it constantly — I wondered, why bother? A Google image search, honestly, delivers a better experience of viewing this particular painting than actually being in the same room with it does. But, being here, in my office, an entire ocean away, is certainly not a substitute for being in Paris. Nor is StreetView, though it gets me closer.
Now imagine standing by the pyramids again, or, if you prefer, in the Mona Lisa gallery at the Louvre. This time, you’re wearing glasses that let you zoom in, close enough to see the texture of the stones at the base of the pyramid, or Leonardo’s brushstrokes. Farther, and in greater detail, than you could see with your naked eye, even through the things and people that stand in your way. This is possible today. Almost ten years ago, Microsoft’s Live Labs demoed Photosynth, an imaging technology that “stitched” together existing photos of well-known landmarks, creating a virtual space that you could explore through a computer screen. Couple that with the Oculus, and you basically have it. Add a decade or two to the equation, and perhaps you can do it without a big, black plastic box strapped to your head. Maybe we’ll iterate from something like Google Glass, which still lets us be functional — albeit dumb-looking and obnoxious — humans in the physical world, to contact lenses, to something embedded directly into our brain. My guess is we’ll skip the contact lenses, though. There’s probably a limit to how small a camera can get, and we’ll have an easier time convincing the brain it’s seeing something it isn’t than shrinking a camera down to the point where it doesn’t hurt to blink over it. So, it’s probably going to happen. But here’s the question: are you really seeing what you’re seeing? Does it matter? In this case, probably not. That the super high-res image of the Mona Lisa is not the actual Mona Lisa is not going to matter one bit to you when the actual Mona Lisa is buried under fifty tourist heads and iPhone screens. You’ll get that the image you’re seeing was taken by somebody else at some other time, but the trick will be good enough for your brain, and after all, you’ll still be standing in Paris. The best of both worlds, right?
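If you’re curious what that “stitching” amounts to in practice, here’s a minimal sketch using OpenCV’s high-level Stitcher API. To be clear, this is just my illustration of the general technique, not Photosynth’s actual pipeline, and the filenames are hypothetical stand-ins for any set of overlapping photos of one scene.

```python
# A minimal sketch of panorama-style photo stitching, in the spirit of
# Photosynth: combine overlapping photos of a landmark into one larger,
# explorable image. Filenames are hypothetical.
import cv2

paths = ["giza_01.jpg", "giza_02.jpg", "giza_03.jpg"]
images = [cv2.imread(p) for p in paths]

# The Stitcher finds features shared across the photos, estimates how
# the cameras relate, and warps everything into a single composite.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(images)

if status == 0:  # 0 means Stitcher_OK
    cv2.imwrite("giza_panorama.jpg", panorama)
else:
    print(f"Stitching failed (status code {status})")
```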
But what if, while you’re standing there gazing through skulls and screens at your Pseudo Lisa, you suddenly hear the raspy voice of a man in your head — Leonardo himself, telling you about what it was like to paint the Mona Lisa hundreds of years ago? It’s a museum director’s dream. A completely immersive experience. And it, too, is possible today. Forget those handheld players with earphones, or the iPhone apps with guided tours. I’m talking direct to your brain. All you need is a good script, a good actor who can do Leonardo in twenty different languages, and, oh yeah, a sonic beam. But it’s been done. You may remember Holosonic’s audio spotlight technology that was used to project a focused “beam” of sound from a SoHo billboard for the Paranormal State television show directly into the heads of unwitting passersby. People were pretty freaked out by that. Maybe you also heard about the Talking Window demo that used “bone conduction” technology to release high-frequency oscillations that the brain converts into sound. Some people were freaked out by that one, too, not because of the whole hearing-voices-in-your-head thing, but because that particular implementation required that your face actually touch the grubby window of a public train. But again, standing there at the Louvre, studying the Mona Lisa in greater detail than the eye could ever grant, with Leonardo’s soliloquy in your head, “is it all real?” is a meaningful question to ask. You know it isn’t, but how many experiences like this would you need to have throughout your daily life before it simply didn’t matter anymore? These kinds of enhancements and augmentations can’t be expected to be limited to just entertainment and tourism. After all, two of the working examples I’ve already mentioned are for advertising. So yeah, throw a little Tupac hologram into the mix and you can expect to have Steve Jobs himself tell you why you should buy the iPhone 11 while you’re standing at the Apple Store in 2020. Too soon? Please. You can’t expect any company to be respectful of the dead when there’s money to be made.
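For the curious: the audio spotlight trick is, at its core, amplitude modulation. Ride ordinary audio on an ultrasonic carrier, and the air itself demodulates it back into audible sound along a narrow beam. Here’s a rough sketch of that idea, assuming nothing about Holosonic’s actual hardware; every number in it is illustrative.

```python
# A rough sketch of how a parametric "audio spotlight" works: ordinary
# audio is amplitude-modulated onto an ultrasonic carrier. Ultrasound
# travels as a tight beam, and nonlinearities in the air demodulate it
# back into audible sound along that beam. All values are illustrative.
import numpy as np

SAMPLE_RATE = 192_000   # high sample rate, needed to represent ultrasound
CARRIER_HZ = 40_000     # a typical ultrasonic carrier frequency

t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
voice = np.sin(2 * np.pi * 440 * t)        # stand-in for a recorded voice
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)

# Classic AM: the audio rides on the carrier's amplitude. A real emitter
# would drive an ultrasonic transducer array with this signal.
beam = (1 + 0.5 * voice) * carrier
```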
It goes deeper still. Technology will augment experiences by adding things to them, but it will also do so by taking things away. That’s what the Active Listening project is all about. After a wildly successful Kickstarter campaign, they’re well on their way to delivering wireless earbuds that will let you “optimize the way you hear the world.” Specifically, by filtering out the stuff you don’t want to hear. A neat idea, sure. And certainly fascinating in the way in which we can pinpoint particular needles in the haystack of audible frequencies. But, to what extent is the collage of sounds — some harsh, some annoying — a necessary and good part of living in the world? And is removing things you don’t like an optimal way to experience it? Yes, the early adopter will be the douchey business class traveler who just can’t bear to hear that whining brat in coach shrill over the civilized clinking of his cocktail tumbler. But what about when it finds its way to the rest of us? Might sound filtering be dangerous? What if filtering out the high register of your neighbor’s alarm clock also filters out the sound of your building’s fire alarm? What if filtering out traffic puts you in front of a Mack truck because you didn’t hear it coming? Perhaps we’ll figure all that out. But we are still left with the same question: is the silence of your flight real when you’ve filtered out all the sounds you don’t want to hear? Does it matter, so long as you are the one in control of the filtering?
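As an aside: the needle-in-the-haystack filtering itself isn’t magic. In offline form, it’s a few lines of signal processing. Here’s a minimal sketch with scipy, assuming a made-up 2 kHz “alarm” tone as the offending sound; doing this in real time, inside an earbud, is the hard part.

```python
# A minimal sketch of selective hearing: carve a narrow band of
# frequencies (say, a ~2 kHz alarm tone) out of an audio signal while
# leaving everything else intact. The signal here is synthetic.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100  # samples per second

def remove_band(signal, low_hz, high_hz, rate=SAMPLE_RATE):
    """Attenuate frequencies between low_hz and high_hz (a band-stop filter)."""
    sos = butter(4, [low_hz, high_hz], btype="bandstop", fs=rate, output="sos")
    return sosfilt(sos, signal)

# One second of fake ambient sound: low street rumble at 200 Hz plus a
# piercing 2 kHz alarm tone.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
ambient = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)

# Filter out the alarm; the rumble passes through nearly unchanged.
quieter = remove_band(ambient, 1800, 2200)
```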
There are plenty of other examples of technologically additive and subtractive experiences, but they’re not just limited to sight and sound. Even taste is hackable. This VR headset, created by Japanese researcher Takuji Narumi, can alter the image of food being eaten by its wearer — making it larger or smaller, for instance — while the six tubes connected to it can release strong smells that, matched to the image, can completely change a subject’s perception of taste. Narumi intends his device to have a variety of uses, including weight loss and hospital rehabilitation. Clinical trials are underway with a group of longtime Soylent devotees whose palates are at a “zero point,” having only tasted gruel for the last few years. Just kidding on that last part, but hey, someone’s gotta bankroll this thing, and why not start with Valley richies who have already demonstrated an enthusiasm for living like a robot? They’re gonna love this just as much as those Active Listening buds. But as easy as it is to mock those who will surely be the first to enthusiastically use these kinds of technologies, the question of their impact on how life is experienced will trickle down just as the application of these technologies does. And with all of these technological enhancements rewiring our brains, it’s a sobering thought that perhaps we won’t remember what it was like before them, anyway.
All of these technologies put us in an altered state. Of seeing and hearing and even tasting things that aren’t there. So what of reality? Are any of these things so different from walking about the world wearing earbuds? With a perpetual personal soundtrack that has nothing to do with the places you go other than that you and your device are there? Though they may be smaller, less visible, and more perceptually rich, they are still fundamentally about altering our environment — something we do in countless and subjective ways today. So why does it feel different? The filter bubble, as initially coined, was the unintentional result of the data-mined social networking experience, but what happens when we intentionally create filter bubbles of our own that follow us everywhere we go? How loose can the weave of the fabric of society get before it no longer holds together on the basis of shared experience? And how many people will have been driven mad in the process of augmenting our experience? The lack of a definitive what-is will only contribute to a proliferation of alternatives, some more harmless and isolated than others, some widespread and crazy (see Project Blue Beam).
What is real? And how many people have to experience it for it to be so? That’s the question, isn’t it? If this kind of technology becomes pervasive enough, then the question of what is really there becomes much more difficult to answer, doesn’t it? So much of reality is the combination of subjective experience and cultural agreements about the meaning of shared subjective experiences. If displaced experiences — whether as benign as “supersight” from a tourist path beside the pyramids at Giza, as subversive as personal filter bubbles, or as manipulative as psyops warfare — become the norm, then reality itself will become much more complicated to interpret. Reality is often defined self-referentially; it’s what is, as opposed to, say, what could or should be. To expand the vernacular to include technological qualification, as in to more narrowly define reality as that which is unmediated or uncreated by technology, is, at this point in human development, impossible. A future in which a new layer of experience — ungrounded, unwired, but fully sensory — is a daily reality is inexorable, just as it is inexorable today that a walk in the park will be interrupted by a buzzing in your pocket or a glance at someone else’s screenglow. Ubiquity is, as Kevin Kelly so aptly put it, what technology wants. Not necessarily ubiquity of objects of technology, but ubiquity of signal; experience of the technological kind. Every technology is a string of reality, within which is an entire world of experience, provided one simply look or hear or feel. But how will we find our footing on the shifting-sand reality of truly ubiquitous technological experience? I wonder.
♬
Heavy Rotation: Sparks by Imogen Heap, which, somehow, I didn’t hear about f o r a n e n t i r e y e a r. What?! It was even featured on First Listen, which I’m usually all over. Anyway, finding it has been the musical equivalent of finding that five-dollar bill from last year wadded up in your winter coat pocket, except I’d say it’s worth way more than five bucks. And since we’ve been talking about technologically mediated experiences vs. experiences that are inherently technological, Sparks is a perfect sonic accompaniment to that conversation.
☾
Recent Tabs: One million miles from here, just a tiny bit along the way to the Sun, a camera mounted to DSCOVR sends 11 photos of Earth back to NASA every day. In other evidence of the-Earth-is-amazing, check out this video tour of the Lowline, the world’s first underground and sunlit garden. I have no idea how I missed Imogen Heap’s musical gloves demo, but man, I’m glad I saw it eventually. She is inspiring. So is Marian Bantjes. Yale’s new website is pretty nice. Meanwhile, scientists are trying to use a drug called rapamycin to extend the lives of 20 dogs in Seattle. It’s worked in preliminary tests on mice, but an interesting side-hypothesis presented in the article is that mice commonly live for about two years, so they may have more “room for improvement” than longer-lived species. In any case, my pup and I support this research. As opposed to the let’s-mutate-our-dogs approach of these Chinese DNA-edited superdogs, a fresh hell that is, sadly, much further along. I’m sure this bodes well for the planet and won’t end in some Jurassic Park-like disaster. “…most startups claiming to promote the sharing economy are really just neoliberal extravagances that will further enrich the smartphone-toting white elite.” Finally, if you must indulge your Back to the Future Part II nostalgia a bit more, watch this clip, which will explain the deeper symbolic truths of the film, sheeple!
Written by Christopher Butler