Being Online in 2014
Hours into the long, slow drive up north for the holidays, with nothing but wilderness around me, I was still thinking about the web.
I had left in the wee hours of the morning, wide awake and reflecting upon some of the recent pieces I’d read about the stream of information that the internet has become, how it’s shaped our communication, and what relevance our individual plots of the virtual landscape still have today. My mind had been wandering the winding trail these thoughts had worn, exploring what being “online” means today, with all of its distractions, temporal rifts, and emotional temperature.
In the second decade of the 21st century and the third decade of the web, the difference between online and offline has become quite difficult to grasp — hence the quotes. But that doesn’t mean that the difference between online and offline is no longer meaningful. If it weren’t so meaningful, we wouldn’t be so busy talking about it, would we? Some of us are fighting to discard the distinction altogether, while some of us fear that unless the distinction is preserved, we will lose our humanity. There is, of course, plenty of room in between and indeed plenty of nuance to consider. You can be online and offline, and you can do both without fear, I think. But there’s still some sorting out to do. My mind was trying to do some of it on the road that day.
As the landscape around me — my fellow travelers in their cars, the trees, the hills, the signs and roadside stops — became just a steady blur, I realized that traveling the modern highway — a theoretically offline experience — is very much like being online.
The number of fast food restaurants in America must be out of scale with American population growth. Must be! By mid-Pennsylvania, I estimated that I’d probably seen a sign for fast food every ten minutes or so, on average. Of course I didn’t measure this, but I’d wager that the actual interval is close to my estimate. As an observant traveler, it is difficult to reject the conclusion that there are far more immediate options for fast food than ever before. After all, how else could it be possible for a place like Starbucks to be as ubiquitous as it is? Starbucks is “the largest coffeehouse company in the world, with 20,891 stores in 62 countries, including 13,279 in the United States, 1,324 in Canada, 989 in Japan, 851 in the People’s Republic of China, 806 in the United Kingdom, 556 in South Korea, 377 in Mexico, 291 in Taiwan, 206 in the Philippines, 179 in Turkey, 171 in Thailand, and 167 in Germany.” Those are the current numbers. Ubiquitous is no exaggeration. Oh, and all that growth has been since Starbucks’ founding in 1971 — 43 years ago. “Since 1987,” when growth really picked up, “Starbucks has opened on average two new stores every day.” (These quotes are from the Wikipedia entry; emphasis mine.) Starbucks’ primary competitor, Dunkin’ Donuts — who, as a New Englander, I would have guessed still exceeded Starbucks’ size and reach — has grown to 15,000 restaurants total since its founding in 1950. A two-decade head start and still, a donut company is beat by a coffee company. In America.
Donuts and coffee are probably not what come to mind when you think of fast food. Fast food is hamburgers and French fries and soda. Fast food is McDonald’s. But McDonald’s ubiquity is hardly as surprising — at least not to me. Since its founding in 1940, McDonald’s has grown to over 34,000 restaurants worldwide. To get a proper sense of scale, Burger King, founded in 1953, currently has only 13,000 locations. Only? If you evenly distributed Burger King’s US locations, there would be an average of 171 per state. Wendy’s, founded in 1969, has 6,650. You’d think that with the big three brands giving us over 50,000 places to eat hamburgers and French fries, there couldn’t possibly be demand for more, or room for another burger joint in the mix. Except there’s Five Guys. A latecomer to the feeding frenzy, Five Guys managed to expand to five locations between 1986 and 2001. Just five. But since 2003, they’ve grown to 1,000 with 1,500 more currently in development. I know of three of them that I could conveniently visit on my way home from work. Five Guys is currently the fastest-growing fast food chain in the United States.
That’s just burgers. We also love pizza and tacos. Here are a few more to consider: Domino’s has 10,000 locations. Pizza Hut has over 11,000. Taco Bell, 6,446.
Eat up.
Taking into account all the brands I haven’t named, we have hundreds of thousands of options available to us. All of a sudden, that any one of them would be within five minutes’ reach no longer seems so incredible. The data, on the other hand, remain astounding. Consider this: The number of places we can eat — restaurants, convenience shops, stands, kiosks, and of course our beloved food trucks — has grown 1,000-fold since Starbucks got started. Forty years of capitalism setting the table.
This is the law of supply and demand at work. Our demand, apparently, is unrelenting. Five Guys would have no chance if we didn’t want to eat more hamburgers. In fact, in 1970, right around when Wendy’s was getting started, there were 3,200 calories available in the marketplace for each American to consume per day. By 1990, when Five Guys was more like three-and-a-half Guys, the number of calories had swelled to 3,900 per day per American. Incidentally, surveys show that people own up to consuming an additional 200 calories per day over the last 30 years. So where are the other 500? Oh, we ate those too. We just don’t admit it.
The marketplace of devourable calories extends beyond restaurants. It includes grocery stores, too. And as you can probably imagine, the last 40 years have been quite good to the grocery store chains. In the 1970s, a typical grocery store’s inventory included 10,000-15,000 bar codes. Today, there are probably around 600,000 bar codes in your nearest Wegmans. A third of them represent unique items, while the rest represent different sizes of the same item. So at minimum, there has been a 1,600% foodsplosion.
If you are at all intentional about what you eat, these numbers should be pretty shocking. On the one hand, had I been asked, I certainly would have said that the food situation — in general — in this country is far worse today than when I was a kid. The existence of Red Bull alone would back that up. But on the other hand, many things have improved. That there are so many opportunities to buy fresh, non-processed, whole foods at a reasonable price (but yes, crazy expensive compared with what you might pick up at Costco) is a major improvement over what I grew up with. That the local food movement exists and food culture itself is so central to our generation is, in my opinion, a good thing. But it is a luxury. That foodsplosion I’ve been numerically describing is a result of demand, not just for food, but for food at the lowest cost per calorie. The more that food is purchased, the cheaper it can be. So naturally, advertising is going to play a major role. Did you know that the average American child sees 10,000 advertisements for food products per year on television? First of all, on television? Isn’t this supposed to be the post-TV generation? Hardly. So that number isn’t taking into account what kids are seeing online, or on billboards, buses, clothing, or periodicals, or hearing on the radio. That’s just TV. On average, that’s a bit over 27 messages each day.
It should come as no surprise, then, that between 1980 and 2002, the average soda intake in America more than doubled. Soda ads are big time. But for teenage boys, intake tripled. Age has a lot to do with susceptibility to such messages. We should be concerned, then, about who understands this and who is acting on it. The shareholder value movement, which kicked off in 1981, put enormous pressure on corporations to show growth in profits every 90 days. Shareholders were impatient and demanded quicker returns. In order to feed that beast, corporations had to grow demand. That’s where the advertising came in. Then President Reagan helpfully loosened restrictions on marketing — particularly to children. If you can sing several different versions of a McDonald’s or Coca-Cola ad, this is why.
The last few decades have given us more to eat, and more choices of what we eat. We may not be paying for that luxury in cash — compare the price of a McDonald’s “burger” to, say, one made from a cow that lived in the same state as you and actually saw something green in its life — but we are obviously paying for it in quality. More food than we could ever possibly want, but virtually no nutrition to speak of.
(Much of the data I referenced comes from a recent CBC Ideas audio documentary called “Stuffed.” (Part 1, Part 2) It goes into this issue in far greater detail.)
So this is our food. This is our economy. This is the landscape around us. This is us.
Over the course of my journey, I succumbed to the beckoning of two exit signs offering me my choice of fast food. Given all the time in the world, I bet I’d have stopped at more and fussed over my choices for longer. And yeah, sure, I could have found a salad within a few miles of most exits if I tried, but I could see the golden arches from the road.
Yes, I thought, being on the road is a lot like being online.
Choice can be an existentially maddening luxury.
A few months ago, Mark and I were flipping through a hilariously thick binder of carpet samples, trying to choose one for our new office space. Some of them stood out for how horrifying they were, whereas others stood out because, hey, they were actually kind of nice looking. But most of them looked exactly the same. It was maybe a moment or two before Mark and I exchanged a look that is among the many coded glances we’ve worked out over the years — this one summing up, “I’d rather pick one quickly and hope for the best than waste any more time looking at these!” in one raised eyebrow. We both could sense the immense timesucking power this choice could have over us. We chose quickly.
On the other hand, I’ve wasted what must be years of my life making choices. If I Feltroned my life, a big-ass slice of the pie chart would be labeled “analysis paralysis.” But if I actually Feltroned my life, I’d edit that out so you’d think that I was smart every hour of the day. Who would admit to a true accounting of time spent over a lifetime agonizing over trivial choices?
Hours choosing what game to play, leaving only a few minutes left to play before bedtime. Hours deciding on ice cream flavors. Hours browsing stacks of books until they’ve completely erased my brain’s storage of what books I want to read and what books I’ve read already. Hours browsing stacks of music until they’ve completely stripped me of any artist or genre preferences. Hours choosing college and then college courses. Hours holding identical yogurts in my hands while hundreds more call for consideration from the shelves in front of me. Hours discussing where to eat for dinner. Hours holding pants up to my waist because obviously I’m not going to try them on that would take way too long and I’m a busy man! Hours scanning component cables on Amazon when really whothefuckcarestheyareallexactlythesame! Hours holding my mouse over an arrow while Netflix’s movie ribbon moves covers just too fast to be read but slow enough to induce a strange form of motion sickness only to give up in disgust and think, “There’s nothing to watch!” Really? Is there nothing to watch? Or is it that having everything to watch is virtually the same thing?
And then there’s the web browser. The black hole on my desk, in my lap, in the palm of my hand.
I’ve often found myself mindlessly clicking between the tabs of my browser without any clear purpose. Email. Click. Twitter. Click. Email. Click. Calendar. Click. Basecamp. Click. Digg. Click. Email. Click. Facebook. Click. Instagram. Click. Email. Click. This can go on and on. Most of the time I don’t even come to my senses on my own. Instead of saying to myself, “Chris! What on Earth are you doing?” something else snaps me out of this trance. You know, like an instant message, or a text. It’s just one black hole versus another! I’m caught in the midst of a science-fictional battle of time eaters wanting to assimilate me into their drone horde. Rarely am I ever snapped out of this nonsense by the realization that I don’t have to stay in this attention trap where I’m clicking my life away, that I can go and do something — anything! — else. That resistance isn’t futile.
No, because the enormity of choice — of any and all the information in the world — overpowers my own sense of intentionality. It simply runs me over. Who am I to resist reading that article, or that e-book, or that tweet, or that post — learning more about everything that exists — when the alternative is not reading it, not learning, not knowing? What’s more important, my time or being informed? Well, the truth is that’s a fool’s wager. If we could resist the pull of information for even just a moment, we would see that. We would see that we are the fools.
Choice sounds nice. But more is nothing more than more.
It’s easy to see that when the quantity/quality imbalance adds inches to your waist, but far harder to see when it slowly robs you of time and clarity of mind.
Here is a list of things I do online on a regular basis, not including things that are just consumption of information (e.g. reading articles, listening to podcasts, etc.):
- Write email
- Send text messages
- Post status messages, share links, and have conversations on Twitter
- Post images on Instagram
- Write documents on Google Drive
- Publish articles on a variety of different URLs
- Post status messages, images, and links on Facebook
- Add or update information on LinkedIn
- Video chat on Skype, FaceTime, and Google+
Every single thing on this list represents dependency. That hasn’t always been true.
Ten years ago, I had my own email server. Today, my two main email accounts are provided by Google. Ten years ago, I did not text, tweet, or really do any social networking (as that term has come to be understood) of any kind. I had a Friendster account, but come on. I did not video chat, but would have loved to had there been a reliable and mass-adopted way of doing so. (I used Skype regularly ten years ago, but if memory serves, it had not yet added video calling.) I wrote articles using my locally installed text editor — regularly but not nearly as often as today — which I published on my own site.
Dependency, of course, is not necessarily a bad thing. I’m quite happy to use Google’s email, productivity, and video conferencing tools. They’re far and away better than anything I was using ten years ago and, as of this moment, continue to offer enough value in exchange for my privacy that I’m not seriously considering abandoning them. That, of course, could change any time — these matters are precarious, to say the least. But that speaks to the fact that dependency, while not objectively bad, is not always great. There’s always a balance to consider: convenience, scope, and power on one side; control, privacy, and ownership on the other.
It is that exchange that makes me question many of the items on my list. What have I to gain from the time and effort spent giving my information over to corporations who reserve the right to do with it what they will? An age-old question, insofar as the last decade represents an “age” — one we might call the age of everyone having a price.
When it comes to some of these items, I still feel there is much to gain. Twitter, for example, remains a platform I am quite enthusiastic about. Because it is purely about person-to-person connection and exchange of information, that I might give Twitter, the corporate entity, my time and “intellectual property” (bearing in mind the silliness of calling most of what I share there “intellectual property”) is absolutely fine with me, because I gain a continual and meaningful connection to people and information I value. As is well known, Twitter is no more a public service or altruistic effort than any other Silicon Valley venture; there is something in it for them. Whether it comes from me and millions of other users handing it over directly or by way of some sort of advertising remains to be seen. But when it does, the balance of cost and value will determine whether I and those millions stick around. It’s likely we will, because there is no Twitter without others. It can’t be replicated easily (case in point: App.net) or on one’s own. One is the loneliest tweeter.
Aside from Twitter, there are many other “places” I can spend my time and energy. Engaging on Facebook, Instagram, Flickr, LinkedIn, Google+, Snapchat, or wherever takes time. And those platforms take what you put there. They take your data — your words and your pictures and your connections. They take it all and use it for purposes that have nothing to do with why you put it there in the first place. Is that OK? Sure it is; we’re there voluntarily. But is that OK with you and me?
Is my engagement with these platforms equitable, healthy, or a good use of my time? Would I be better off consolidating those efforts, focusing on the networks that offer the most relational value while otherwise keeping my creative work close by? I don’t know. I’m asking. I’m asking for myself. I’m asking over time because I find my answers — so far, anyway — inconsistent. I’m still working through it all, and I’m interested in hearing your answers to help me do that. I want us all to ask enough that we emerge doing at least some of this stuff differently.
What if I were to do all the creative things I currently do online — writing and posting images, in particular — on my own? At my homestead, as Frank so aptly put it. What might I lose or gain? Would it even be possible? Some of it surely would. Others, not so much.
I could still post images, and I’d own them all. This sounds great! But I certainly couldn’t replicate the experience Instagram offers. I could build a gallery. That’s pretty much it. There would be no engagement. A few people might pity me and look at what I put there every once in a while, but let’s face it: probably not. If you want to look at pictures your friends took, you’re going to do it where you can look at everything easily. Your friend’s lunch, your cousin’s baby, your neighbor’s trip to Italy, all in one flowing stream. You’re not going to open a browser (like an idiot) and tap out my stupid URL and then wait (like an idiot) for my poorly coded grid of unoptimized images to load so you can just look at them and not like them or not reblog them or not tag them or not comment on them. If you just want to look at things, you go to an art gallery, right? And I can promise you, whatever self-indulgence I post on my site isn’t going to be as good. All these things are true, right? Nobody goes to websites just to look at things anymore. Right?
Maybe. Maybe not.
What about the creative experience for me? Well, if doing this the old-fashioned way is too much work to consume, it’s probably even more work to create. Actually, it’s going to be a straight up pain in the ass. I’m going to go out and take pictures with my phone and then upload them to my computer sometime later. Then I’m going to do all kinds of photoshopping to make them look cooler and organize them in a folder and then FTP that folder to my website’s server. Then I’m going to make a page that pulls those pictures and maybe even give each one a clever title and caption. All of this is going to take hours. If I stick with Instagram, I’m going to take a picture, tap it a few times to make it look pretty in a square and then hit “share.” It’s going to show up in my tidy profile grid as well as all of my friends’ streams. They’re going to like it and comment on it. All of this is going to take seconds. The almost complete lack of latency is what makes Instagram so powerful; it’s the insta part! Ain’t gonna be no insta at my homestead.
What about writing? Well, writing would actually be pretty much the same, I think. Over the last few years, I’ve seen contextual engagement around my writing — on-page commentary — decline while adjacent engagement — sharing and commentary on social networks — has increased significantly. As long as it can be read and shared, it doesn’t really seem to matter where it is, aside from the ownership issue, of course. I own it if it’s at home. I don’t if it’s someplace else.
In both cases — creating and sharing images and writing and sharing text — doing it all on my own will increase latency big time. It’s already a lot of work to create content, but to create and maintain the platform where that content lives — and to do the translating work to get that content from my machines to the web — is even more work. Does that mean that if I commit to doing this stuff on my own I will do less of it? Perhaps. Is that a bad thing? I’m not so sure. Less might equal more.
Ten years ago I had a section of my personal site that was a bit of a mashup of all the media types I’ve mentioned. I posted short articles and images — anything that was on my mind, really. I called it the thinktank (admittedly, a bit grandiose) and was mostly inspired by what Hyperkit was doing. But I typically didn’t post anything more often than every few days. In between, I was making things. Not to mention spending far less time in front of screens (but that’s another rant). Oh, and Hyperkit is still around and making beautiful stuff, but they’ve moved their “think tank” — their journal — to Tumblr. I’m not sure whether to feel sad about that. I think I do. OK. I definitely feel sad about it, but I don’t think I can justify why without another major digression, and you’re already bored enough.
Ten years ago was an interesting time on the web. We were undergoing a transition that threw little websites like mine — simple HTML pages; no CMS; manually cobbled together and FTP’ed to the server — into serious question. As I’ve already explained, they were labor intensive. It took a long time to get a simple post put together and published. A good CMS took care of that problem. Also, they were needles in an exploding haystack. Much more serious programming skills were needed if you were going to integrate the engagement tools of the time, like comments and RSS feeds. As those things began to be offered “out of the box” by platforms like WordPress and Blogger, people like me began to migrate. Our homesteads were emptied out and left with notes on the front door, instructing visitors where they could find us — pages of links to our stuff that lived elsewhere. Some of us persevered with our personal sites and just figured out the tech; some of us waited long enough for WordPress to offer installable versions and just started using that. The transition, though, was one of evolving our engagement from just looking at stuff to sharing and talking about that stuff.
But now, is engagement reliant upon any individual technology? Not really. Content management systems and feeds don’t really offer the boost that they used to. Content management is commoditized and RSS (though still with its devoted users) has been made irrelevant by the social graph. Today, all you really need is a URL. If it’s sharable, it’s engageable.
Now, it seems, is another transition period. Or, perhaps better said, a moment for consideration. Because of social patterns and the investment that has been made in building systems for them, creators again have a choice of where to be. The exchange of freedom for facilities is one that need not be carte blanche. This is a good thing. A great thing, really. But it means we must confront the questions that I’ve been wrestling with: What do I want to create? Why? Do I want to own what I make? And how much time do I want to spend making it? These have always been relevant creative questions, but their answers are uniquely technological. Terms of service apply.
As for me, I’m not exactly ready to make any major declarations. But I must say that I’m leaning toward the homesteader’s perspective. And just to be absolutely clear, this is not a “Farewell, Internet!” piece. I make my living on the internet. I like the internet. I’m not going anywhere. In fact, I’ve moved back into this particular homestead and I intend to spend more time here. I even brought back the “thinktank,” but decided to pay homage to one of my heroes, Buckminster Fuller, and call it the chronofile. If this is at all a farewell, it is a farewell to wasted time, to fractured attention, to passivity. This is about going back to using the internet, rather than being used by it.
These are issues of consumption: whether food or information, our options and our choices matter. These are issues of creation: what we make, why, how much, and where we put it all matter. These are issues of time: how much we spend doing different things, and how what we choose to do has the power to either stretch or shrink it.
Time, for me, is the most important of these. It’s the one I’ve experienced changing more over the last ten years than anything else, whether technology, or format, or custom. The internet is indeed a stream, and it runs through the landscape of time. And like any other stream-to-landscape relationship, there is erosion: a slow, steady pulling away of moments that alters the face of time itself. On that note, I’ll leave you with the words of William Gibson, who is able to capture this phenomenon far better than I:
“Our ‘now’ has become at once more unforgivingly brief and unprecedentedly elastic. The half-life of media-product grows shorter still, ’til it threatens to vanish altogether, everting into some weird quantum logic of its own, the Warholian Fifteen Minutes becoming a quark-like blink. Yet once admitted to the culture’s consensus-pantheon, certain things seem destined to be with us for a very long time indeed. This is a function, in large part, of the rewind button. And we would all of us, to some extent, wish to be in heavy rotation.
And as this capacity for recall (and recommodification) grows more universal, history itself is seen to be even more obviously a construct, subject to revision. If it has been our business, as a species, to dam the flow of time through the creation and maintenance of mechanisms of external memory, what will we become when all these mechanisms, as they now seem intended ultimately to do, merge?
The end-point of human culture may well be a single moment of effectively endless duration, an infinite digital Now.”
(Oh, by the way, William Gibson wrote these words in 2003. 2003. Their poignancy aside, that over a decade has passed since they were written only further emphasizes the time-bending power of the stream.)
Written by Christopher Butler