COO of Newfangled.com, author of The Strategic Web Designer, infrequent designer, bookworm, science fiction enthusiast...
Gaming Reality, Part 2
(Started this yesterday. Not sure how many entries there will be.)
Civilization is a mirror.
I don’t mean the real thing, by the way—that’s obvious—I mean the game. Civilization, the computer simulation game.
In Civ, going from settler to civilization takes a lot of work. Hours, even. In the real world, millennia. Much of that work is accidental, which—it could be argued—creates much more work than the same process if it were intentional at every step. There has only ever been one civilization project—one for which history provides the continuity. But in the game, civilization is a player’s side-project, or more accurately, civilizations are side-projects; they can be started, paused, saved, restarted, erased, destroyed, built again, and, of course, won. Winning, though, is what always presented the greatest struggle to me as a mini-civilization starter: I don’t recall ever actually winning. But I also don’t recall caring whether I won or not. Had I cared to win—had that actually motivated me to play—I suppose I probably could have won at some point. But I did just enough mental calculus to figure out what would be required of me to win—time and an interest in war—to realize that I was clearly playing for other reasons. (I just Googled “how to win at Civilization II” to make sure I wasn’t exaggerating, and verified that yes, most strategies require military conquest at some point, and yes, I was a wimpy player.)
As has been echoed throughout the rest of my life, I enjoyed the setup far more than what came after. I deeply enjoyed the process of exploring undeveloped terrain, figuring out where the resources were, starting settlements, connecting them, getting trade going, pursuing knowledge and developing technology, etc. The setup is safe. It’s a design-in-a-bubble scenario, a controlled environment. But as soon as “first contact” was made with some other population, I began to lose interest. Usually because that new civilization made it clear they wanted to kill me. I’d keep playing, but the inevitability of war was obvious, and that bugged me.
I get that war is difficult, if not impossible, to avoid in the real world. I’ve been privileged to live at a time and in a place where war can be avoided—I’ve had a choice to not fight that most human beings have not had. I’m thankful for that, because I doubt I’d be as useful and resilient as I’d wish in a war. Who knows, really — I’ve never been in one to find out. So I assume I don’t have the war instinct. But I see the war instinct all around me, even in small ways. I’ve seen it in otherwise normal people turning into monsters when there are too many cars and too few spaces in the grocery store parking lot (this can be truly scary). I’ve seen it in lines waiting to be seated precisely at 11am when the latest hipster brunch place opens. I’ve seen it in people who would probably consider themselves intellectually beyond war—thoughtful, liberal, lovers of the grey area—yet who fall victim to that common human fear that rises so quickly to the surface when there are too many of us and too little of what we think we need. After all, isn’t that really what most warfare is all about—competing for control of limited resources? Whether that be energy, food, water, land, or people?
Perhaps that’s why I couldn’t flip the militant switch when I played Civilization — I wasn’t competing for anything. I was playing a game. I had leisure time. There was no real threat to kick in that fighting instinct that I imagine we all have. So I’m left to wonder why some can flip that switch, even when they’re safely sitting at home staring at a screen?
Gaming Reality, Part 1
I just finished reading 11/22/63, by Stephen King. The gist: a schoolteacher named Jake is shown a time portal by a friend, Al, who discovered it in the back room of his diner. On their side, it is 2011. On the other side, it is 1958. Each trip made back to 1958, no matter how long it lasts for the traveler, only takes 2 minutes of 2011 time. Each re-entry to 1958 resets the previous one. Those are the rules, which Al discovered after years of making small trips back and forth to score cheap food supplies and explore. But then he becomes obsessed with making serious changes. His final trip lasts for five years as he attempts to prevent the assassination of Kennedy. He comes home from this trip two minutes later, but five years older, dying of cancer, and hoping that Jake will finish the job. The rest of the story is Jake’s. It’s a time travel story, yes, and also a period piece — and quite an enthralling one, too. But it’s also a story about a game, and what happens when you break the rules.
Al and Jake never learn why the rules are what they are. Why does this portal exist? Why is it always the same day and time in 1958 on the other side? Why is every trip, no matter how long it is for the traveler, only two minutes long by 2011 clocks? Why can Al buy 10lbs of ground beef from the store across the street (on the 1958 side), bring it home to 2011, turn around and return to 1958 — which resets his first trip, remember — and buy the same 10lbs of meat again? Well, they never figure out why, but they learn these rules nonetheless. Where it gets interesting is when Jake tries to make serious changes and discovers another rule: the past does not want to be changed. “The past is obdurate” is repeated over and over again throughout the book. How obdurate?
Large trees mysteriously fall and block essential routes; sickness — of the violent flu and blinding migraine variety — strikes just when strength for action is needed; balance is lost suddenly, at the most inopportune moments and in the most precarious of places, like the top of steep staircases; holes appear in pockets through which important keys fall out and lose themselves on the street; cars break down, again and again and again; spare tires go flat in the trunk; strangers get in the way and some become violent; and on and on and on. That obdurate.
Here’s where the whole gaming thing kicks in. Rules and cheat codes.
What seems to be happening here is that there are the ground rules that Al and Jake figure out through basic game play (discover portal, go through portal, explore, return home, repeat), but then there are meta rules — the kind that kick in when Al and Jake want to cheat. It’s as if the universe is saying, “OK, I’ll give you a cheat code, but don’t abuse it.” There are consequences like this in the original SimCity — which I recently downloaded and ran in a DOS emulator on my Mac after watching Jerry’s Map for the tenth time and thinking more and more about world building. You begin with $10,000. You can build things, which cost money, and you can charge taxes, which makes money. There are ideal ratios of residential to commercial to industrial properties, the specifics of which I still haven’t quite figured out. Those are the basic rules. But you can cheat. Holding down SHIFT and typing F-U-N-D instantly gives you another $10,000. Do that too many times, though — I think it’s 8 in a row — and all hell breaks loose. Earthquakes. Fires. Tornadoes. Godzilla. Your stolen empire will suddenly be a shambles. You can disable disasters as a game setting, which makes for pleasant playing so long as you don’t cheat code it up too many times. Then SimCity snatches that control back.
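If you wanted to sketch that two-tier rule system, it might look something like the toy Python below. The $10,000 amount and the eight-in-a-row trigger come from my recollection above; the disaster list and the idea that honest play resets the counter are my own assumptions for illustration, not the game’s actual logic.

```python
import random

# Toy model of the FUND meta-rule: the basic rule gives you money,
# the meta rule punishes abuse. Thresholds and disasters are guesses.
FUND_AMOUNT = 10_000
ABUSE_THRESHOLD = 8
DISASTERS = ["earthquake", "fire", "tornado", "monster attack"]

class City:
    def __init__(self, funds=10_000):
        self.funds = funds
        self.consecutive_cheats = 0

    def earn_taxes(self, amount):
        # Playing by the basic rules resets the cheat counter.
        self.funds += amount
        self.consecutive_cheats = 0

    def fund_cheat(self):
        # The cheat code: free money, but the universe keeps count.
        self.funds += FUND_AMOUNT
        self.consecutive_cheats += 1
        if self.consecutive_cheats >= ABUSE_THRESHOLD:
            return random.choice(DISASTERS)  # all hell breaks loose
        return None

city = City()
for i in range(ABUSE_THRESHOLD):
    disaster = city.fund_cheat()
    status = disaster or f"+${FUND_AMOUNT:,}"
    print(f"Cheat #{i + 1}: {status}. Funds: ${city.funds:,}")
```

The point of the sketch is just the shape of the thing: the withdrawal is free until the counter says it isn’t.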
In a way, I like this idea. The universe is a repository of stability. If you want to make withdrawals, you have to also make deposits from time to time.
Book Report: Earth Abides
So again, at the end of the twenty-second year, they gathered at the rock, and Ish with his hammer and cold-chisel cut 22 into the surface of the rock just below 21. They were all there at the rock, because the day was fair, and warm for winter, and the mothers had brought even the youngest babies. Then after the numeral had been cut, all those who were old enough to talk called out Happy New Year as it had been in the Old Times, and as it was still at this time.
But when Ish asked, following the ritual set in the last years, what should be the name of the year, there came only sudden silence.
At last the one to speak was Ezra, the good helper, who knew the ways of men:
“Too much happened this year, and whatever name we give the year will have a bad sound to us. People find comfort in numbers, and no bad thoughts. Let us give this year no name, but remember it only as the Year 22.”
(Earth Abides, 275)
This is how Earth Abides nearly concludes. Its remaining 37 pages contain an unofficial “epilogue” of the story of Isherwood Williams (Ish, for short), whose twilight years are spent observing his legacy: repopulation — a triumph after a global pandemic brings humanity to the brink of extinction — and regression. The progress he had hoped to continue through maintaining the order, literacy, science and culture of 20th century America has slipped through his hands, despite his best efforts. And yet, this book does not end unhappily… nor especially happily. Its ending is like any other moment in human history: complicated. Through Isherwood’s eyes, Stewart describes a picture of the future that is as nuanced as any other cultural transition in history — a struggle to preserve against an inexorable force of change, the exchange of traditions and customs for new ones, the loss of history. It is, by our standards, a history rewound, yet it is the future. Ish’s story is his processing of that future, from struggle, to resignation, to acceptance, to a hint of thanksgiving. It left me wondering, what does progress really mean, and what sort of hope can those of us clinging to the now have for the future?
The plot of Earth Abides, in a nutshell, is this: A pandemic wipes out the majority of people in the United States (the fate of the world beyond is not mentioned). Its protagonist, Ish, is one of a very small group of survivors who come together and struggle to survive and maintain civilization. The rest is their story.
It’s important to note that Earth Abides was published in 1949. 1949! It’s one of the earlier post-apocalyptic novels, and yet it never feels dated or out of touch. In fact, Stewart seems to intentionally avoid discussing the kinds of things that tend to date novels — contemporary slang, current events, and media. Contemporary technology — the internet, mobile phones, etc. — is absent, but then I don’t recall many, if any, references to what would have been contemporary in 1949: telephones, television, rockets, etc. (though there is a brief mention of radio and records very early on in the story). Instead, Stewart focuses on very basic things — Isherwood’s travels (so, yes, cars), food, shelter, reading, education. Aside from an occasional hint, one could comfortably imagine that the book describes now, not 60+ years ago.
The most fascinating element of Stewart’s writing, though, is how he adapts it to correspond to the trajectory of Ish’s community, beginning with the sort of voice, construction and detail you’d expect in a contemporary novel — or any of fiction’s subgenres, like speculative fiction, scifi, etc. — but gradually simplifying it as the novel progresses, to the point of concluding with a voice that sounds much more like the telling of a fable or mythology. It’s subtle, but it’s certainly there and a key to understanding Stewart’s point — that just as there is expansion of culture, there is also contraction; that the future brings gain, and it also brings loss; that culture is far more fragile than people. That the future might yield people and a culture less interested in the sort of factual detail upon which we live today. That might be troubling, but it’s not out of the question. Stewart seduces us into considering that possibility fairly, through language.
It’s a beautiful and thoughtful book. Highly recommended.
P.S. This post was inspired by Robin Sloan’s Summer Reading, which I really think captures the spirit of what the web was meant to be. You should check it out.
Do you have any books to recommend?
Some Recent Publications
I’ve got some stuff to share…
The UTNE Reader contacted my editor at PRINT requesting to re-print my column from the February issue, Future Daydream. We agreed to the re-print, which is in the July/August issue, available in stores now. They’ve run the piece as The Boy in the Bubble, which is an interesting title, I think. Here’s a clip:
“On the list of problems to solve, communication has sat at the top for far too long, and consequently, our countless solutions are what fill screens today. After a decade of focusing primarily upon the social applications of interactive technology, we need to turn our attention to other matters and use our many communication tools to address the interaction problems of 21st-century urbanity: resource management, transportation, energy, and infrastructure. It would be a shame to be remembered as the generation that tweeted while the world crumbled around us.”
“…an important part of design is telling a compelling story: of why something doesn’t work, how it could be done better, and why our solution could help. Without grounding that story empirically, all we have is a hunch. That’s where measurement comes in. It connects action to defensible outcomes. We designers need to be proactively involved in measurement. Knowing how to gather and interpret interactive data will better position us strategically, not to mention prevent inheriting unhelpfully data-glutted reports and being held accountable to someone else’s vague interpretation of them. But, it’s more important that the data ground our vision in reality than be used to build credibility with clients. We want our clients to trust our judgement, but our measurement process should lead them to the same conclusions we’ve made if they were to do it themselves. Unfortunately, measurement is a discipline that tends not to be under the umbrella of design. It’s time to change that.”
“Amateur data visualization is the product of something deeper, and it shouldn’t be so easily dismissed. That doesn’t exonerate bad data-viz, but understanding its roots is more productive than continuing to dwell on the controversy. If you look at how data visualization is used today, you’ll find something essential to the human experience: play.”
Would love your feedback, as always…?
Just a few things I’ve learned that are on my mind as I draw closer to my 32nd birthday…
I’m nearing the end of my thirty-second year on this planet. There’s nothing particularly special about that, but as this year wraps up, I’ve been reflecting upon a few things that I’ve learned. These aren’t remarkable things. They’re things that, if you’d explained them to me at any point after, say, my seventh birthday, I’d probably have nodded and gotten the gist of what you were saying. But there’s a difference between getting something and really understanding it. And like anything else, there’s a price for that understanding that few seven-year-olds can pay.
So here are a few of those things, collected in the spirit of “there’s nothing too obvious to be written down and shared now and again.”
(There is no priority or purpose to this order. I only used numbers because people like lists.)
1. Your ego wields far too much control over your decision making.
Recognize that you AND your ego will one day die. That will reveal the triviality of much of what you spend so much time pursuing. It will create a taste for new, better things, many of which are invisible to the world around you — especially to those people from whom you’ve wanted acceptance, respect and praise.
There’s more to life than being seen.
2. Don’t try to change people.
You have probably heard the words attributed to Mahatma Gandhi: “Be the change that you wish to see in the world.” The idea, in other words, is not to sit back desiring change while hoping that others will bring it about for you. And the idea behind that, of course, is the underlying supposition that you, yourself, are not in need of change. Everyone else is, but not you. Except that’s all wrong. You do need to change. Yes, so do other people, but don’t think you can do anything about them without doing something about you. If you’re willing to accept that, then ask yourself, are you willing to change?
Start by accepting others, as they are right now. And don’t stop expecting brokenness from others until you are no longer broken. Right, that means never. There is no program, no secret knowledge, no book, no guru, no practice — no, not even a long lifetime of hours — that will lead to perfection. Not for you, not for them either. Accept your imperfection, your humanity, and enjoy the life that comes with it.
3. To the extent that you can, change yourself.
Acceptance isn’t the same thing as apathy. In fact, I’ve learned that acceptance is the first step toward positive change. Accepting your permanent imperfection is the same as giving up on the futile quest of solving countless problems all at once — the kind of project so big that you could waste years just trying to start. Start by acknowledging the truth about yourself and from that, tease out the stuff that’s changeable. Start small. There may be big things you wish to change that you cannot.
I spent the majority of my twenties in denial of who I am. There were people close to me — people I loved dearly — who were so very different in just about every way, and rather than accepting our differences, even celebrating them, I decided that their ways were better than mine. They were extroverts, and I, though profoundly introverted, believed I should be as well. They were spontaneous, jubilant, and playful, and I, quiet and reserved, believed I should not only come out of my shell, but destroy it and never re-enter. I observed well and was a quick study, and though I could sometimes convincingly play the part of a man remade, I was just wearing a very heavy mask. Nobody can do that forever. It’s painful. And really, nobody else wants to be around a guy in a mask, either. It’s painful for them, too, especially when they love you and you love them. So, I ended that decade sobered. The price of confronting who I really am included the end of some of the most important relationships I’d had in my life so far and a new, deep sense of fragility and humility that I doubt will ever diminish. It’s better, really, that they don’t. There is no true life without the awareness of all that one has to lose, and of all that one is not.
I’ve realized that I can’t change the fact that I am a neurotic, hyper-observant, deeply anxious introvert with an unfortunately long memory for the troubling stuff and a short one for joy. No, I can’t change any of that. But I can practice some things that will bring the balance I need. For instance, I’m learning to practice being present, in this moment, which, though very, very hard for me, is the antidote for anxiety. I get glimpses of this working now and again and even those fleeting moments of peace and contentment make it worth the work. I can use my hyper-observance to gather the facts I need to cross-examine my own emotional conclusions: Oh really? You’re hopelessly alone and misunderstood? What about that time that so-and-so said or did that thing? Why would they do that if what you believe is true? Sound familiar?
That’s just me. Ask yourself: what about you can change?
4. Recognize that you will never truly understand someone else’s feelings for you, for better or worse.
5. Get some therapy.
It doesn’t have to be permanent. But if you’re willing to be fully honest with a therapist you trust, you stand to gain a lot.
6. Learn to imagine the people around you as children.
This will naturally encourage empathy and compassion, rather than the ugly feelings we tend to have about even those closest to us: envy, bitterness, even fear. Imagine that person with whom you may be angry as a small, helpless child. If you knew them as a child, retrieve that image from your memory of them then. I guarantee your anger will subside.
(That is, unless you just don’t like children. If that’s the case, I can’t help you ;-) )
This is an incomplete list.
Seeing Time, Part 6
“Cyberspace, especially, draws us into the instant.” (James Gleick, Faster, 286)
…which is probably why futurism is dying right now. But I should probably qualify that, because words like future and futurism seem to be used all the time. The kind of futurism I’m talking about is the kind that involves imagination of the long-term variety, not the kind that involves relatively short-term predictions of things with relatively short-term impact — things like who will win the next election, what the next iPhone will look like, etc. I’m not the only one who feels this way; I posted back in October a short note about how we’re distracted by the now as a result of this kind of short-term futurism, which was actually just a “hear, hear” to Matthew Sheret’s post, The Future is a Blank Canvas Pinned to a Brick Wall. (Note to self: I need to get more creative with my blog post titles.) And Sheret’s post was really just a response to a quote from William Gibson:
“We have too many cards in play to casually erect believable futures.”
In any case, my main point back then was this:
“What’s happening, as far as I can tell, is that our imagination is being inhibited. We’re so focused on the now — that email, text message, instant message, Twitter DM or @, Facebook post, you know what I mean — that our sense of the “next” is being squeezed down to the momentary rather than something larger…there’s no data to prove this. But I do appeal to our ability to sense what is clearly happening. The reduction of the scope of our imagined future from years, to seasons, to moments. Sure, there could be other factors at play, such as loss of hope due to global conflict, economic collapse, environmental issues, general entropy, but amidst that is a significant shift in the pace of life that has stolen the quiet moment of reflection from us.”
This seems related, so…
I was chatting with a colleague this morning about the various bad news (political/social strife, natural disasters, economic struggles, etc.) and he mentioned that there’s an old rule of thumb for stock traders — that 80% of people forget about news after 3 days, but then the rest forget after 21 days. Not exactly a long-term perspective. But given the volume of news today — the 24-hour news cycle — you can’t really blame us for dumping our news cache, can you?
Maybe it’s another one of those strange examples of existential time-dilation, related to what the Directorate of Time said about how “the more we have experienced, the faster time flows.” So the more we experience, even peripherally, the more distorted our sense of now, then, and later will be. Add to that the fact that we’re complicit in allowing rumors about possible entertainment gadgets (not to mention “reports” of such-and-such a celebrity being seen wearing something-or-other at someplace) to qualify as “news,” occupying the same level of importance as a dispatch from a war-torn country. If some guy’s musings about an unreleased cellphone’s feature set are news, then some kind of time-dilating, imagination-suppressing phenomenon must be to blame…
I think that’s it for seeing time.
Seeing Time, Part 5
(Now that I’m on to a fifth installment of this series, I realized I can just provide a link to all of them by linking to the tag, Seeing Time. Duh. I work primarily on the internet.)
I also realize that titling these “Seeing Time” is a little grandiose when they should really just be called “random brain jazz after reading a really out of date tech book.” Just so you know that I realize. On that note:
“The printed word began as advanced technology for rapid transmission of data into the brain. In terms of bits per second, there was no better way to get information, or a story, or facts, from out there to in here.” (James Gleick, Faster, 283)
Here’s my question about this idea: if the printed word accelerated the transmission of data into the brain, what did it do for the retention of data by the brain? As I read this, I thought of how information was transmitted throughout ancient history — much of it orally, which is a method we no longer depend upon, not to mention a skill we no longer have.
So once we gained the ability to write down information, we began storing it anywhere but our own brains — cave walls, stone and clay tablets, papyrus scrolls, vellum codices, and so on. We outsourced our memory, even then, to technology, which is interesting to consider as this concept of cyborgification (so to speak) is all the rage right now because of how we’re doing the exact same thing only with digital media and computers. I suppose the really pertinent question is not whether we are outsourcing our memory to technology — we are, just as we have been since we first learned to write — but whether we are doing so to a reliable storage agent. All that we know of ancient human history is the result of the reliability of the stone, clay, and paper storage units for information. Yet, digital media is much more vulnerable. The volatility of file formats, proprietary (and therefore hazardous) database structures and languages, the rapid succession of devices, and, of course, the market forces driving technological innovation today all make digital media a poor substitute for analog storage methods — perhaps even for oral tradition — when it comes to the retention of cultural knowledge.
James Burke touches on this at the outset of his series, The Day the Universe Changed, too…
By the way, I spent a few minutes Googling around on the history of communications technology and found some interesting stuff. For instance, the World History Site has a very long, categorized timeline of dates in the history of cultural technologies. While browsing there I made a serendipitous discovery. The timeline lists:
1824: Peter Mark Roget proposes the theory of persistence of vision.
Apparently, this is the same Roget for whom the Thesaurus is named. Neat!
There’s more where that list came from, by the way. Here’s a clearing house (of sorts) of information on communications technologies and world history. You could also hit up Wikipedia’s landing page on the history of technology. But then you’d be there for the rest of the day. There’s a ton there…
By the way, I have to end these with a question mark in order to trigger Tumblr’s “let people answer this” widget, so here you go?
Seeing Time, Part 4
“You are aware that the director of the Directorate of Time is something of a philosopher. He has written, ‘We experience time intervals as much shorter than when we were young.’ He even has equations for this: ‘Delta t(s) ~ Delta Exp/Total Exp’ and ‘dt(s) ~ dt/t or, integrated, t(s) ~ ln(t),’ by which he means, the more we have experienced, the faster time flows. Depressants like alcohol slow time, because the brain receives fewer inputs per second. You may feel, as so many do, that your life could be plotted on a scale where the years from age ten to age twenty seem as long (as eventful) as the years from age twenty to age forty or from forty to eighty. Exponential growth at its most damning. On this scale, the moment of birth is at negative infinity, and as for death…someone else might quote Woody Allen, but the director favors Epicurus: ‘Death is nothing to us, since when we are, death has not come, and when death has come, we are not.’” (James Gleick, Faster, 279-280)
As soon as I turned 30, I began spending an inordinate amount of time thinking about how my perception of time has changed. In particular, I began noticing exactly what the Directorate of Time describes — that the decade from age 10 to 20 seemed far longer than the following one, which brought me to 30. In some ways, the changes in my life were equally intense in both decades. Between 10 and 20, one doubles one’s age, which in and of itself is significant and isn’t as quickly done in later years. But also, one goes from childhood to early adulthood, experiencing hormonal and brain chemistry changes that greatly affect one’s sense of self. In the next decade, I experienced much more circumstantial volatility, but I suppose my sense of self was a bit more consistent. In any case, the distortion makes 40 seem as if it’s far closer than 9 years from now.
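To make the director’s back-of-the-envelope formula concrete, here is a minimal sketch of its integrated form, t(s) ~ ln(t): the felt length of a span of years is ln(end/start), so every doubling of age feels the same. The ages are the ones from the passage; the code itself is only an illustration of the arithmetic, not anything from Gleick.

```python
import math

# A moment dt at age t feels like dt/t, so the subjective length of the
# span from age a to age b is the integral of dt/t, i.e. ln(b) - ln(a).
def subjective_duration(age_start, age_end):
    return math.log(age_end / age_start)

for a, b in [(10, 20), (20, 40), (40, 80)]:
    print(f"ages {a}-{b} feel like {subjective_duration(a, b):.3f}")

# Each span prints ~0.693 (ln 2): every doubling of age feels the same
# length, which is exactly the claim that 10-to-20 feels as long as
# 20-to-40 or 40-to-80.
```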
And now I’m less than two months from 32.
Incidentally, neuroscientist António Damásio was recently interviewed on an episode of PRI’s To the Best of Our Knowledge on Memory, Mind & The Self about the role of memory in the development of selfhood in human beings:
“The critical thing in relation to human consciousness is not only the development of mind and the development of self, it’s the development of a self that has biographical characteristics. I think that that is really the passport into big-time consciousness of our kind. What you have there is the possibility of, of course, expanding memory so that it is not just a memory about categories of things, like say animals, mountains, and bodies of water and such, but also the possibility of having memories about specific individuals, specific events, including the events that have happened to you. Once you start having that kind of memory, then you have the possibility of creating some kind of record of what you have been. And eventually, once you get language, then you’re essentially creating the beginnings of culture.”
It seems that he is saying that one’s sense of distinction from other entities is the root of selfhood. I suppose that seems a bit obvious or circular, but it’s different from saying that one’s sense of self is (and always has been to humans) inherent to the experience of being alive. Damásio seems to be saying that our sense of self — the human sense of self — originates with our ability to recognize that we are not something else, that we have individuality in a categorical sense, before we had personality: individuality in the way we tend to see it today.
I wonder, back to Gleick’s passage (and my “I’m-post-30-now-and-running-out-of-time” feelings), whether my ability to forget enables the sense of time dilation the Directorate calculates. What might that dilation feel like (or would it exist at all) for a person with hyperthymesia? (Incidentally again, that same episode of To the Best of Our Knowledge includes an interview with Jill Price, a 46-year-old resident of New York who has forgotten virtually nothing since the age of 14, and whose case in particular led to the official diagnosis of hyperthymesia.) It’s tragic — the entry for Price notes that hyperthymesia is characterized by a propensity for “spending vast quantities of time thinking about one’s past,” which must be torturous given the detail available. Price confirms this in her interview, which makes me feel thankful for my smaller dispensation of torturous autobiographical recall…
Is anyone out there experiencing psychological time dilation…or perhaps real time dilation?
Seeing Time, Part 3
“‘The historical record shows that humans have never, ever opted for slower,’ points out the historian Stephen Kern. We fool ourselves with false nostalgia—a nostalgia for what never was. Whenever we speed up the present, as a curious side-effect we slow down the past. ‘If a man travels to work on a horse for twenty years,’ Kern says, ‘and then an automobile is invented and he travels in it, the effect is both an acceleration and a slowing…That very acceleration transforms his former means of traveling into something it had never been—slow—whereas before it had been the fastest way to go.’ Until the futurist Filippo Marinetti began talking about speeding up rivers, ‘the Danube had never seemed so deliciously slow.’ Peering back through history, we see scenes in a kind of slow motion that did not exist then. We have invented it.” (277-278)
Sure, there’s a distortion of perception that happens when some new technology changes how we do things today and think about how we did them yesterday, but it’s not always just about speed. For instance, texting is not necessarily faster than a phone call. A text-based conversation could elapse over a longer period of time than a phone call, but it feels faster because we don’t have to focus on that conversation in the same way we do when we’re talking, live, over the phone. With texting, you can quickly send a message and then stop thinking about it until you receive a reply. Ultimately, it probably requires the same overall amount of attention, but in a less concentrated way. But texting is another available method that we didn’t have before. It gives us the choice of communicating one way rather than another. We once had only letters. Over time, our choices have multiplied: telegram, telephone, fax, email, instant message, text message, etc. I guess that means that, in light of Gleick’s passage, acceleration is one consideration and distribution is another.
The last part about speeding up rivers made me think about a slideshow I saw recently documenting Lost Rivers, which “depicts places poised between loss and beauty, acknowledging the price of urbanization while seeking to reclaim a sense of connection with these natural spaces.” It seems that attempting to “speed up rivers,” or in general, messing with nature to accelerate industry, is not a good idea…
But back again to slow, because the rivers made me think of the opening scene of Andrei Rublev, which depicts a group of 15th century Russian monks launching a hot air balloon. Some run along a river, making their way to the launch point. As the monks prepare, many references to time:
Hold on a second. Come on, quick! Come on, fast! We won’t have enough time. Just a second.
And then, spectacular footage shot from the balloon as it slowly crosses the landscape and eventually makes landfall near a horse — the contemporary vehicle of travel — by the river.
Seeing Time, Part 2
“Your sense of acceleration has not blinded you to the brevity of the present moment.” (273)
This is the first sentence of the last chapter, titled The End. I’m still not exactly sure what it means. Sometimes I feel quite the opposite — that much of what we do only makes sense given a denial of our transience on this planet (building projects, storing up wealth, the way we spend the time we do have, etc.). I’ve often had this realization while waiting on line in stores, where I’ll be looking around at the people around me and wondering, “what are we all doing here, living like this, when we’re all going to die?” On the other hand, perhaps Gleick is right — that our ambitions and daily toil are also just as motivated by the awareness that we won’t be here forever, so the simple fact that we have desires validates what we are willing to do (standing on line, for example) to achieve them…
Meanwhile, last evening I watched (thanks to Frank’s recommendation) the first episode of Ways of Seeing, in which John Berger points out that the way we perceive a painting is changed by its surroundings — the sound around us when we’re looking at it, the wall behind it, whether it’s been reproduced and displayed somewhere else, etc. He meditates quite a bit on the pace of seeing, which caused me to marvel that even in our rushing about to do all we might do in life before we no longer have it, we still stop to look at things.
Then I thought, what if someone were to choose a painting, say for instance Da Vinci’s Virgin of the Rocks (Berger talks about it at length in the first episode) which has been placed in many different locations since it was first painted in the late 15th/early 16th century, and create a sonic story of its life so far? The image would never change, but the sound we hear would. Early in the piece, you might hear the echoing goings on of the Milan church for which it was painted. Then you’d hear the sounds of transport as it made its way to England. You might hear hushed conversations of spectators around it, perhaps the bragging of its owner of the time. Eventually, more transport sounds, installation in the British National Gallery, etc. It would be a purely sonic version of Noah’s Everyday, and rather than the brooding soundtrack he deftly chose — which turns mundanity into something more dramatic — the sounds of mundanity would bring life to the painting, which would remain still. Someone should do this…
This makes me think of a book I had as a child — The Little House — which tells the story of a house as its surroundings radically change over time. The image above, by the way, comes from a brief article comparing the storybook image to a real-life example of a house that refuses to give in to the pressures of progress. Anyway, The Little House is kind of an analog version of Everyday, isn’t it?
Seeing Time, Part 1
Today I was flipping through a book I read sometime in the last two years: Faster, by James Gleick. It’s been on my mind since I just started reading his newest book, The Information. This passage, in his Afterword, resonated more with me in this pass (apparently) than the first time I read it:
“We struggle to perceive the process of change even as we ourselves are changing. After all, flux is our style, if not our destiny. We don’t exist in a steady state, and we don’t have a motionless platform from which to observe the changing world around us. Sometimes we fail to perceive profound transformations that we’ve been staring at; sometimes we blink and we notice a revolution. The most profound comment on this is still Richard Feynman’s; he was sitting outdoors in New Mexico, looking up at a blue and turbulent sky and talking about the evolution of his field, theoretical physics. ‘It is really like the shape of clouds,’ he said. ‘As one watches them they don’t seem to change, but if you look back a minute later, it is all very different.’” (287)
As soon as I read this, I immediately recalled a few picture-every-day type projects I’d seen on the web. They’re the sort of thing that only the late 20th century could have produced; the technology necessary for that much documentation was just not available or practical beforehand. Here’s one that began way back in 1976 and continues to track a family’s appearance each year. Here’s another one — particularly sad, I should say. Here, of course, is the famous Noah-takes-a-picture-of-himself-everyday-for-six-years video that was later spoofed by the Simpsons. I am sure there are many, many more… This one is the most recent I’ve seen, and perhaps the most stunning. The subject photographed himself every day between 1991 and 2007, and animated the images (which, incidentally, also track the position of the Earth relative to the Sun). I had to watch it several times in order to really process the physical change he experienced over 17 years. Even at such a rapid speed, you track with him and lose sight of the earlier images. By the time you reach the last image, you know he looks older, but you can’t exactly describe why. The last time I watched it, I also noticed how the intervals between haircuts grew much smaller as he aged.
But I also wonder if the relationship between the availability of that technology and the output of that kind of condensed observation project is parabolic — whether we’ll reach a point of such overwhelming media immediacy that nobody will care anymore to mine the insights that condensing it produces.
But then I think of a couple of other stories—one book and one film—that push this idea of constant/ubiquitous surveillance even further. The film is The Final Cut, the story of a “cutter,” a sort of life-editor/sin-eater who edits the continuous footage our lives produce via implants into the final cuts screened after one dies. The book is The Light of Other Days, which tells the story of the radical societal impact of a time-viewer, a device just as good at showing you an event 100 years in the past as one less than a second ago, which is to say that it makes everything, everywhere, everywhen available.
Anyway, this stuff is just on my mind. What do you think?
Networked Cities and Crumbling Infrastructure
I’ve been following the ongoing conversation around the internet of things and the networked city and have enjoyed it very much.
(To follow along, check out—in no particular order—Really Interesting Group, mammoth, City of Sound, frog, The Infrastructurist, Berg, Dentsu, Stamen, and Quiet Babylon. Very light on Americans, by the way. Something to think about, and perhaps feel a bit of shame about.)
There’s a feeling of being right on the cusp of something—that, soon, many things will be profoundly different for us not just in the world of screens that we’ve already been immersed in, but also in the physical world in which we exist (or, for lack of a less cynical way of saying it, the world in between screens). The only problem is that this future world—the networked city we’re imagining now—just isn’t that appealing to me. The idea of being followed around by the ghost in the machine, being addressed by name on public transportation (because of technology, not because I know the bus driver), being sold things in the park by disembodied voices, etc., is dystopic to me. All we’re doing is imagining a future in which the virtual world we already know so well, characterized primarily by commerce, is manifest as an invisible layer over the organic, physical one. How interesting, really, is that?
What we really need to do is apply the technology in ways that will network the city to make our day to day experience less virtual. Invisibility is actually pretty key to that. So is a focus on us, but not from the perspective of finding more ways to reach us with advertising (no matter how soft or “social” it may seem), but from the perspective of making the work we do more productive, more efficient, safer, more enjoyable, etc.
This occurred to me dramatically when I read an article recently bemoaning the fact that the United States’ physical infrastructure has declined to such a great extent as to rank us shockingly low in global terms. Yes, us. It’s sad that we walk around crumbling cities with shiny new gadgets in our hands. Our infrastructure is hurting. It really shows where our focus is—I could string together quite a nice metaphor about these screens being the reflecting pool to our Narcissus, but that’s been done. But what’s even more troubling to me is how quickly our narcissistic trance could be broken and transformed to anger and entitlement by a dumb and harmful accident due to lack of upkeep on a bridge or road or something similar. I can imagine the response, the outraged questioning: “Why wasn’t this prevented? How could we let this happen in America?” I know I risk oversimplifying things by imagining that the answer lies in our unproductive online distraction. But hey, I’m going to say so anyway. Maybe we’d get more done, and have a more reliable physical experience, if we weren’t so obsessed with our virtual one.
So how would this tweak to the motivation actually change what a networked city could be? One example came to mind right away, and could be the confluence of several technological trends of interest right now. Imagine if every municipal trashcan had a sensor in it that could detect when it had reached capacity. That sensor could report back to a main database. That simple full/not full report could be measured over time, and once the data set represented a large enough span of time, we could begin to do analysis on it to predict in advance when the trashcans would be full. Couple that with an algorithm that would apply the full/not full statistical analysis to the pickup crews and their routes, and we could create a system that plans and assigns routes based upon realtime data. I believe that would be a truly smart system that would create all kinds of efficiencies: better route planning would, of course, save time and fuel by reducing the waste of hitting cans and streets that don’t need attention, but also reduce monotony for the workers, which I’m willing to bet would increase their happiness and reduce turnover.
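As a thought experiment, here is a minimal sketch of what the prediction half of that system might look like. The report format, the linear fill-rate estimate, the 24-hour horizon, and the 90% capacity threshold are all assumptions invented for illustration; a real system would presumably fit something smarter to the historical data.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Report:
    timestamp: datetime
    fill_level: float  # 0.0 (empty) to 1.0 (full)

def estimated_fill_rate(reports):
    """Fill fraction per hour, guessed from the first and last reports."""
    first, last = reports[0], reports[-1]
    hours = (last.timestamp - first.timestamp).total_seconds() / 3600
    return (last.fill_level - first.fill_level) / hours if hours else 0.0

def needs_pickup(reports, horizon_hours=24, capacity=0.9):
    """True if the can is projected to exceed capacity within the horizon."""
    rate = estimated_fill_rate(reports)
    projected = reports[-1].fill_level + rate * horizon_hours
    return projected >= capacity

now = datetime(2012, 6, 1, 8, 0)
cans = {
    "5th & Main": [Report(now - timedelta(hours=12), 0.2), Report(now, 0.6)],
    "Park entrance": [Report(now - timedelta(hours=12), 0.1), Report(now, 0.15)],
}
route = [name for name, reports in cans.items() if needs_pickup(reports)]
print("Today's route:", route)  # only the fast-filling can makes the list
```

The interesting part is not the code but the inversion it represents: the route is derived from what the cans report rather than from a fixed schedule.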
My firm began working with a client several years ago that created software to coordinate municipal road projects, in particular between systems that don’t ordinarily know what the others are doing. Their tool would allow workers to know when a street is being or has already been dug up, whether to fix an electricity problem, a communications issue, or a sewer, water, or gas main. They created it because many cities and towns impose moratoriums on digging in order to reduce traffic problems, so that if a street is dug up it can’t be dug up again for a period of years after. You can imagine, then, why coordination is so important. If a street is cut in order to do maintenance on an electrical line, then resealed before the sewer team can do the work they may need to do, the sewer work is delayed significantly. I didn’t realize it at the time, but their tool was anticipating where this networked city concept could go. The only limitation is that their tool (as far as I know right now) has to be adopted on the municipal level and is not being run in the cloud. To distribute it widely enough to make it really effective, it would have to be adopted and installed on enough machines and mobile devices to fill it with enough data to make it worthwhile. It’s just kind of clunky right now. But if each of the municipal systems were networked, the same kind of analysis system I described for the trashcans could be created to detect and anticipate problems and then plan maintenance routines that are efficiently coordinated across systems.
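In the same sketchy spirit, and not as a description of the client’s actual product, here is a toy version of the coordination logic: before a cut is resealed, check whether other departments have pending work on that street, and flag streets that are still inside a digging moratorium. The five-year moratorium and every name in the example are hypothetical.

```python
from datetime import date, timedelta

# Toy coordination check: bundle pending work into open cuts, and block
# new digs on streets still under a (hypothetical) five-year moratorium.
MORATORIUM = timedelta(days=5 * 365)

resealed = {"Elm St": date(2010, 9, 1)}   # street -> last reseal date
pending_work = {
    "Elm St": ["sewer main inspection"],
    "Oak Ave": ["gas line replacement"],
}
open_cuts = {"Oak Ave"}                   # streets currently dug up

def check_street(street, today=date(2011, 6, 1)):
    if street in open_cuts and pending_work.get(street):
        return f"{street}: open cut; bundle in {pending_work[street]} before resealing"
    sealed_on = resealed.get(street)
    if sealed_on and today - sealed_on < MORATORIUM:
        wait_until = sealed_on + MORATORIUM
        return f"{street}: under moratorium until {wait_until}; new digs must wait or seek a waiver"
    return f"{street}: clear to schedule work"

for street in ["Elm St", "Oak Ave"]:
    print(check_street(street))
```

The value of a networked version would be that every department sees the same answer to that check, instead of each discovering the moratorium after the asphalt has already been poured.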
We don’t need more advertising systems, but we do need smarter infrastructure. We certainly have the technology to do this—we’ve spent at least the last decade amassing huge amounts of data from consumer technology use and continue to gather it at unprecedented levels; surely we have something better to do with this computing power than find new ways to do advertising (see here and here and here for an indictment of how we’re using our tech and time, and here and here for an indication of what’s technically possible).
Ethical Technology, Part 5
Well, I think I’m winding down here. I can tell that I’m close to being out of things to say (for now, anyway); my mind has begun to wander back to my last multi-part monoramble on seeing time… Perhaps I’ll also resurrect that one sometime soon.
In the meantime, the last few notes I had are probably not enough to chatter on about individually, so I’ll just briefly mention them here.
* * *
Is there a difference in the ethics of a user and the ethics of a creator? I think there is. When I began listing some ideas about ethics and technology, this was the first thing I wrote down. I never really fleshed it out, and still haven’t really. But what I’m thinking of is this: We all are users. There are no creators that are not also users. But there are some users that are not creators. Many of them, in fact. Therefore, creators have an ethical responsibility that users not only do not have, but cannot have. The user-creator has a scope of understanding that the user-consumer does not share, which creates the responsibility. What, specifically, is within the purview of that responsibility is probably very debatable. Does the user-creator have responsibility for issues of economy (the influence a technology might have on the world around it), sustainability (the resources required to create, distribute, and maintain a technology), or the well-being of the user (whether the use of a technology could have negative physical, mental or social repercussions)? Are these questions being asked before an idea becomes a product? Are these questions being asked before a product is advertised?
* * *
The internet is a country. Or, perhaps better said, corporations dealing in data must begin to think deeply about the political and diplomatic issues that arise from what they do. In response to efforts within some countries to restrict their citizens’ internet access, Secretary of State Clinton recently said:
“When ideas are blocked, information deleted, conversations stifled and people constrained in their choices, the Internet is diminished for all of us.”
I agree in principle, just as I agree that many people in the world would be largely better off if their country looked a bit more like ours. However, some of the freedoms and systems that we enjoy are inextricable from our culture, and therefore wouldn’t always be simple to export elsewhere. Political systems shape culture expansively, and are not as easily spread as, say, fast food franchises. What I’m getting at here is that I’m not sure we can say so simply that the internet is this meta-political, meta-cultural entity to which countries are subordinate. Perhaps this will end up being true—that the technological manifestation of the world population will supersede political boundaries in a way that restricts the level of control that sovereign nations have enjoyed for millennia—but at this point in time, it seems to me that internet companies should be acknowledging the political and diplomatic restrictions that are in place now. That is, unless they want to be international activists. If so, then go for it. But if not, I’d ask why a business dealing in information shouldn’t have to play by the same rules overseas as a business dealing in hamburgers.
* * *
Ok, one last thing…
Here’s an ethical rule that seems to be an internet extension of the Golden Rule: Ask not for that which you would not give yourself. This is a huge problem in marketing: I, as an individual, am unlikely to trust those with whom I feel obliged to share personal information (corporations, banks, etc.). In those transactions, I am subject to their terms and have no means by which to control the relationship. This structure is responsible for the trend we see online today, where applications and services enable users to sign in with pre-existing social accounts (Facebook, Twitter, and the like) rather than have to create entirely new ones. But this is not necessarily a benefit to the user. Sure, it’s convenient in the short-term—one less username/password to remember—but it also creates connections by which your information is now shared in a greater network than you probably intended. If you accepted that convenience rather than creating a unique user account for every service, you’d be shocked to discover some day that Twitter knew your comings and goings on Mint.com, down to the transaction, and was therefore able to build a detailed profile of your interests, decisions, etc., enrich it with your tweets, and then sell it to advertisers. Of course, this specific situation is not happening right now with Twitter, but it could. It is happening with Facebook, which is the largest advertising network on the internet today, and operating in exactly this way. Regardless of whether your Facebook account information has any overlap with any other service or website, Facebook has taken it upon itself to follow you everywhere you go on the web, and sells that insight to anyone who wants to advertise to its millions of users. The exchange is socialization and convenience for the monetization of human beings.
Ethical Technology, Part 4
So far, my ramblings have gone from monopolies of information to the filter bubble to the economics of the internet.
Today, what about automation in general? Is it ultimately dehumanizing?
Cyborgology had a good post recently about automation called Commentary on Race Against Machine, in which they noted that Norbert Wiener, mathematician, father of cybernetics and author of The Human Use of Human Beings, wrote that automation is essentially…
“…the precise economic equivalent of slave labor.”
In other words, Wiener believed that if any process can be automated or maintained/accelerated by robotics, then the only hope to compete “organically” (i.e. using human beings) would be to employ slave labor. I think what that really means is that there is a limit to growth, or that growth is unsustainable without either moving toward full automation or treating people like slaves. In either case, the swath of society that is poor grows.
So we need to think a bit more critically about progress. What it means and what we’re willing to do to create it. Oh, and what the world might actually be like if that progress is made.
“Any increase in productivity required a corresponding increase in the number of consumers capable of buying the product.”
That comes from an Economist story about automation and artificial intelligence that begins with an anecdote about Ford Motor Company. Henry Ford famously paid his workers top wages in order to ensure that they could afford the product they had a part in creating. In doing so, he attracted the most talent 1914 had to offer to Ford and away from competitors. But the article goes on to point out that today’s technological progress—far beyond replacing a man with a motor—has reached a level of sophistication which renders not just the doer obsolete, but the entire job. Today, artificial intelligence is beginning to surpass the cognitive abilities of humankind, which means that the technologically-induced obsolescence previously limited to manual labor will now expand to swallow creative work as well. Indeed, if Eric Schmidt is right, we have already turned a corner from employing machines to help us with the work our thinking has produced to asking machines what we should be thinking about. He has said:
“I actually think most people don’t want Google to answer their questions. They want Google to tell them what they should be doing next.”
Huh. I wonder, who, exactly, does he mean by “people”? Everything in me bristles at this, but you know what? I may just be a relic of the past. (Yes, I’m fishing for you to tell me otherwise.)
But back to Henry Ford. What I’m wondering is what happens when you pair automation with his idea of making sure his workers could afford the cars they made. If economies are inexorably moving toward greater levels of automation, and putting vast swaths of working people out of work and into poverty, at what point does the number of poor exceed the number of consumers who would buy the products being produced by automation? There must be a fine line past which imbalances are too disruptive, if not irreparable. In other words, does this situation not ultimately undermine itself? I feel that it certainly does, though that does not necessarily preclude it from happening. Myopia has always been a critical flaw of society.
Doug Rushkoff has been thinking out loud that perhaps society is simply moving away from work as its central pursuit. After all, if it gets done, why worry that it’s not us doing it? So he asks, “Are Jobs Obsolete?” I think he’s probably on to something, but unfortunately, I don’t expect working society to go quietly into the night. Even if the trajectory of technology would otherwise support us spending our time differently—he points out that this is, after all, what technology is for—I wonder if good, old-fashioned human nature isn’t too large an obstruction to this kind of societal shift. It prompts many very serious questions. Who are we if we’re not working? If we’re not producing? I’m sure the answers are simple to the enlightened, but one person’s enlightenment is another person’s crazy. In between us and the kind of tomorrow Rushkoff describes is, I imagine, quite a bit of conflict.
Shifting this just a bit:
Something that fascinated me while I was reading The Numerati a few years ago was not that new software was capable of doing the large-scale data analysis that Stephen Baker described in the book—stuff that people used to do, though only slightly more accurately than a coin toss—but that the major role of human beings in all of it was sales. The few who had the brainpower to conceive of the algorithms upon which the machine “intelligence” was built did so with self-sufficiency in mind. The goal was not to design a complete system, but to design the beginning of one that would learn and develop itself into one beyond the sophistication of which its creators were capable. In the interim, the only other role of note is facilities maintenance—ensuring that the world of the machine mind doesn’t run out of juice and blink out of existence. But surely that, too, is a role the machine could take on itself at some point not too far off. Ultimately, this sort of technology leaves its creators behind.
So, this is the transitional space we’re in. It’s an uncharted, hazy place in which huge leaps are being made in machine intelligence and automation that are largely unseen and unacknowledged by the majority of the working populace. When progress outpaces our ability to perceive it—to really get our minds around it—conflict is unavoidable. The “gains” of this sort of technology can only look like losses to everyone else. And of course, someone stands to profit, because someone will own that technology. But those someones become even fewer in number, an even more obscure elite. This is what prompts revolutions, which is why, before any real utopia is possible, we must first survive an even more severe winnowing of opportunity and equality.