Talking to Machines

Greetings from a very quiet kitchen at the Newfangled HQ. Mark’s Sonos Jazz alarm went off at 7:30 as usual, and right now one of my favorite Coltrane collections is playing: The Prestige Recordings. Coffee has been brewed. There’s a chill in the air. It’s Friday, T-minus 4 days until this bitter public struggle is over. Life is good. Last week I wrote this very long thing about robots and I didn’t even talk about talking to robots. It wasn’t an oversight. The notes I began taking months ago that turned into last week’s post actually began with some thoughts about talking to machines, but as I wrote and wrote and wrote and wrote, I realized the talking bit needed to be its own thing. So here we go. Maybe some day soon, you’ll be able to have a robot read it to you.

I am nine years old, and I am watching Star Trek: The Next Generation, wondering why the most sophisticated starship available to sentientkind has a relatively dumb artificial intelligence for its omnipresent Computer, but a learning (and eventually emoting) artificial intelligence for a serving officer. Of course, I’m not thinking of it in those terms. I’m nine, and I’m not sure I’m even fully acquainted with the idea of artificial intelligence. If I am, I’m sure I don’t really understand it. But that there is a basic incongruity between these computers — both central to the cruising world of the Enterprise — is abundantly clear to me.

Specifically, I’m watching an episode called The Measure of a Man, in which that second artificial intelligence, Lt. Commander Data, is the subject of a trial in which his rights — either as a self-determined, sentient being or simply the property of Starfleet — are argued for and against by the ship’s captain, Picard, and the bad guy, a scientist who wants to dismantle him. Picard, of course, is the victor. In a concluding scene, Data offers to assist with the scientist’s studies, provided, of course, that they don’t threaten his existence. It’s quite an apt conclusion. Data’s offer is an act of forgiveness that only something more than a machine, yet slightly less than human, could perform.

After the episode ended, I thought about the trial. As I lay in my bed that evening, I considered a different strategy from the one Captain Picard had taken. Had I been arguing this case, I would have called the ship’s computer as a witness in order to demonstrate the obvious differences between it and Data. It seemed to me that a question or two would be enough to make the point. Now, if cross-examined, could I explain the difference between them — why one isn’t sentient but the other is? No. Not at nine, anyway. But the demonstration would probably have been enough to emotionally sway a jury. And as TV has taught us well, that’s how you win a case.

Part of Data’s story is that he was designed by a human cyberneticist working independently in a far-flung colony to be a self-aware, sapient, sentient machine in human form — an android — and that he is, basically, one of a kind. (Actually, he’s one of three similar androids, but you get the idea. I have to point this out to appease the many Trek nerds who are reading this right now.) Why his creator was so far ahead of Starfleet’s forays into artificial intelligence is never explained. Neither is how he managed to make his way through Starfleet without being heavily studied or copied. Nevertheless, we join the Next Generation story with him fully integrated into human society — a uniformed commanding officer, even. Such a thing couldn’t be possible without his sapience — his apparent judgment — being trusted by those around him. Yet he is still a mystery. I began to realize, at nine, that his human-likeness is probably the very thing that makes him both more trusted than the average computer in some cases, and less so in others. There are plenty of scenes in the series in which characters confide in Data and, often to comedic effect, his computational, analytical responses bring about human aha! moments that go right over his shiny head. But it’s not like they tell him everything. They’re just as cryptic and guarded in their confessions and inquiries as they’d be with a human. Yet these are conversations that, I imagined, these characters probably wouldn’t have with each other. Data, however sapient, isn’t emotionally judgmental, and so they’re safe to confide in him with no fear of blowback. And that’s what bugged me. I understood why these sorts of scenes were written — they were necessary for the drama — but I always wondered why no one ever had a similar conversation with the ship’s computer, safe and alone back in their quarters. Today, many years later, I find it entertaining to write into that universe a limitation: that they didn’t confide in the ship’s computer because they couldn’t.

If a machine is sophisticated enough — if it is able to converse with us as any other human could, like Data but, for the sake of engendering intimacy, without the body — then it’s likely we would make great use of this machine. We crave intimacy, and yet we are often reluctant to do the work and experience the stress of creating that intimacy with other people. Most human conflict comes down to that, doesn’t it? The unknown remaining unknown, questions unasked and unanswered, secrets kept, selves unseen and misunderstood, loneliness, alienation, distance. To overcome any of that requires the risk of exposure — of vulnerability. But with a disembodied voice — no eyes looking back at us — the stakes are not nearly as high. If a machine could listen to us, understand what we are saying, and respond with a programmed emotional toolkit limited to, say, nonjudgmental acceptance and empathy, we would confide in it. Heavily, I think. It would know our deepest secrets. Perhaps it would know us better than any human, not because of its exceptional processing power, but because we would choose to make ourselves known to it.

Imagine such a machine. Imagine the power it might have over those who use it — the addiction we might succumb to. As we get closer and closer to such a reality, it’s no surprise that so many fictional narratives are already on it. When I think over the many examples I’ve seen, few simply present such a technology without an accompanying societal blight, or at least a cautionary tale. In Her, a lonely man falls in love with an operating system. In Black Mirror’s fourth episode, Be Right Back, a grieving wife techno-resurrects her husband and becomes bound to it in ways she never expected. These are pretty fresh, but the idea has been around for a while. In an early film, THX 1138, George Lucas imagined a future in which drugs and machines are both used to control the masses by meeting their emotional needs. Some of the creepier scenes are those in which the protagonist visits a digital confession booth. It seems, on some level, we sense that this sort of thing is not a great idea. We’ll probably build it anyway.

In fact, we already are. You may have read the story of the woman who used her text history with a deceased friend to create a chat bot version of him. As an expression of her grieving process, it’s abundantly understandable. But I think we’d all agree that if the chat bot version of her friend prolongs the denial stage indefinitely, it serves no good purpose.
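As an aside, the mechanics of such a bot can be startlingly simple. The real project reportedly involved machine learning trained on the message history; what follows is only a hypothetical sketch, in Python, of the crudest possible stand-in, in which the "friend" is reduced to a nearest-neighbor lookup over things he once wrote. Every name and message in it is invented for illustration.

    import re

    # Hypothetical sketch: a chat bot built from a message log.
    # It replies with whatever the friend said after the most similar past message.

    def tokens(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def build_pairs(log):
        # log is a chronological list of (speaker, message) tuples.
        # Pair each of my messages with the friend's reply that followed it.
        return [(m1, m2) for (s1, m1), (s2, m2) in zip(log, log[1:])
                if s1 == "me" and s2 == "friend"]

    def reply(pairs, message):
        # Crude nearest neighbor: the past prompt sharing the most words wins.
        return max(pairs, key=lambda p: len(tokens(p[0]) & tokens(message)))[1]

    log = [
        ("me", "long day, I can barely think"),
        ("friend", "go to bed, genius. it'll look better tomorrow"),
        ("me", "did you see the game last night"),
        ("friend", "don't get me started"),
    ]

    print(reply(build_pairs(log), "rough day at work today"))
    # -> go to bed, genius. it'll look better tomorrow

A few dozen lines of lookup is obviously not a person, or even a model of one. That it can still feel like one in the right emotional state is exactly the problem.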

In the meantime, we suddenly have plenty of machines to talk to. Or, perhaps more accurately put, to talk at. Siri. Amazon Echo. Google Home. All of them are the alpha version of the Enterprise’s computer. They can listen well enough to know when we are addressing them; they can answer simple questions, reporting back facts after a pause not much shorter than the time it takes to refresh a web page; they can execute commands. We can glory in the power of telling a plastic cylinder to dim the lights in our living room without having to rise from the sofa or lift a finger. As rudimentary as all this seems when compared with having a conversation with the same machine, it’s likely just a relative hop, skip, and a jump from one to the other. After all, what is needed for a machine confidant is not consciousness, but the appearance of it. The feeling that we are being heard and accepted for who we are. And no doubt we want this badly enough to willingly project it upon the mere shadow of its reality. How much programming, really, is there between an application like Google Keep, to which I can dictate notes and ideas, and the same program which, instead of silently transcribing what I say, responds while I talk, affirming what I say with the occasional “uh huh,” or “wow, that’s really interesting” or “and how did that make you feel?” I know myself well enough to know right now — without having used such a program — that I would use it even if it were only that good, in its entry-level, remedial state. And I would know that it was software talking to me, not another mind. And I would quickly go from fascination to regret. And it would take discipline for me to stop using it. And I would not be alone. Millions — god, billions — would be drawn in by the allure of such easy, accessible empathy and validation, by the availability of feeling good whenever they want to. This psychological narcotic — this emotional pornography — will be quite a thing. Will we build it? Probably. But should we?
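How little programming, really? Here is a toy sketch in Python, hypothetical from top to bottom: typed input stands in for dictated speech, and the "empathy" is nothing but canned strings chosen at random. It assumes nothing about how Keep or any real assistant actually works; it is only the entry-level, remedial state described above.

    import random

    # Hypothetical sketch: dictation that appears to listen.
    # The affirmations are canned; there is no understanding here at all.
    AFFIRMATIONS = [
        "Uh huh.",
        "Wow, that's really interesting.",
        "And how did that make you feel?",
        "Tell me more about that.",
    ]

    def listen_and_affirm():
        transcript = []  # the note being "dictated," one line at a time
        while True:
            utterance = input("> ").strip()
            if not utterance:
                break  # an empty line ends the session
            transcript.append(utterance)
            print(random.choice(AFFIRMATIONS))
        return "\n".join(transcript)

    if __name__ == "__main__":
        print("--- your note ---")
        print(listen_and_affirm())

Twenty-odd lines, no comprehension whatsoever, and, I suspect, already enough of the appearance of listening to hook someone on a bad day.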

It has often been noted that science fiction created the iPad long before Steve Jobs revealed it to an eager crowd in California. Most notably, Star Trek: The Next Generation envisioned a future in which tablets were as commonplace as paper. But what I began to find odd, even in the late nineties — basically, when the internet transformed the PC from a home’s machine to an individual’s machine — was that the crew of the Enterprise would hand tablets to one another. “Here are your orders.” “Here’s the diagnostic.” That sort of thing. Why wouldn’t they just email them? Why wouldn’t each crew member have their own tablet and pull information off the main computer when they needed it? Why would anyone ever hand a tablet to someone else? Obviously, Star Trek’s PADD was more a future object than a future technological convention. The notion of the cloud (and therefore simpler, distributed terminals) wasn’t commonplace in the nineties. But here’s something interesting: As soon as the iPad came out, I thought to myself, “Aha! The PADD! I bet engineers at Apple all watched Star Trek!” And probably so. But the most common tablets in Star Trek: The Next Generation are more akin to the Kindle than the iPad. They’re clearly single-purpose, almost disposable. Can I imagine handing someone an iPad? No. But can I imagine handing someone a Kindle? Definitely. Today, we have both futures, running in parallel.

Obviously, this isn’t directly related to talking machines, but it’s an interesting thought about the future. Predicting the future is a parlor game: entertaining but not reliable, insightful about who we are but rarely correct about where we’re going. Playing that game will probably lead to some good, but the good probably won’t come from getting the prediction right. Otherwise, what of free will? But the point is that predicting the future should be about setting in motion self-fulfilling prophecies. We should predict futures and then go and make them. One could argue that Star Trek’s PADD was less a predicted future than a catalyzed one. What would you expect a bunch of engineers who grew up watching Star Trek to make?

The same, then, for artificial intelligence. Rather than scoff at the partial future of Star Trek’s computer — little more than a 24th-century Siri — perhaps we might consider it a chosen future. It’s not hard to imagine that, centuries from now, humans will have the capacity to build computers of which we could only dream today — the kind that could not only convince us of their consciousness, but even exceed us in their thinking and perception. Nor is it difficult to imagine that they might, in those intervening centuries, have learned and experienced enough to know better than to do so, and to create instead machines limited in their scope for the express purpose of ensuring that humans pursue relationships with one another, rather than retreating into digital fantasies.

My worry is that we won’t make such a computer. I worry that market forces are so strong that companies see no next step other than a conversant machine, because there is no next step other than a machine that is always listening, because there is no next step other than a deeper repository of information about us, which can be sold to other companies who believe they need it in order to sell us things. Surely, the Enterprise’s Computer was always listening, but it never occurred to me that what it heard was being recorded, analyzed, and sold. It obviously wasn’t. In its prediction of the future, Star Trek either missed the internet — the technological connectivity we now take for granted, which inexorably leads to the creep of intrusions into our privacy — and the sort of artificial intelligence at its center, or it chose to imagine a future without them.

What sort of future might we imagine and create, other than the one we assume will happen, whether we like it or not?

On Screen

After we read Dark Matter, by Blake Crouch (which I thought was ok), a fellow book-clubber recommended that I check out his previous books in the Wayward Pines Trilogy. I’ve been listening to the first book, Pines, while my wife and I watch the television adaptation. Now, the TV series’s first season actually covers the entire trilogy in ten episodes, which, if you ask me, is very cool. It’s a full story with an endpoint. (Yeah, they made a second season, and no, it doesn’t seem necessary.) But anyway, check it out. It’s not perfect. It starts with a premise you’ve certainly encountered before — the inescapable town (see The Twilight Zone, The Prisoner, Black River, Fringe, etc.) — but it becomes something very different. Any further explanation will spoil it for you, so I’ll just say this: While I quickly began to cobble together a theory for what was really going on, what was really going on was much bigger and more audacious than I ever would have guessed. Avoid spoilers. The surprise alone, I think, is worth investing in ten episodes. You can stream it on Hulu or Amazon Prime.

Also, you should watch this short animation about a transformer named Morpha! Utila!

Heavy Rotation

Check out this live performance by Nils Frahm at the Montreux Jazz Festival in 2015. It’s pretty great, and it’s been playing in a tab in my browser a lot this week.

Recent Tabs

The first annual Design Census begins on December 1. Oh, and “Interaction Design is Dead,” apparently. What now? The sounds in your backyard are unique; go record them. This is how you project an image onto an unstable surface. And here’s A Survey of Alternative Displays. This is a cool a/v installation in the shape of a geodesic dome. This is the man whose job it is to constantly imagine the total collapse of humanity. There is such a thing as a mirror spider. Take a journey to the threshold of a utopian labyrinth. Solar panels are really cool looking. Robot dreams of freedom, escapes, causes traffic jam. The New York Times is launching a daily 360-degree video series. The first Polaroid. This is a cool table. Oh ffs. I’m like.



Written by Christopher Butler on November 4, 2016