April 5, 2013
Imagine a world where the only media you consume serves to reinforce your particular set of steadfast political beliefs. Sounds like a pretty far-out dystopia, right? Well, in 1969, Internet pioneer Paul Baran predicted just that.
In a paper titled “On the Impact of the New Communications Media Upon Social Values,” Baran (who passed away in 2011) looked at how Americans might be affected by the media landscape of tomorrow. The paper examined everything from the role of media technology in the classroom to the social effects of the portable telephone — a device that did not yet exist but that, he predicted, had the potential to disrupt our lives immensely with unwanted calls at inopportune times.
Perhaps most interestingly, Baran also anticipated the political polarization of American media; the kind of polarization that media scholars here in the 21st century are desperately trying to better understand.
Baran understood that with an increasing number of channels on which to deliver information, there would be more and more preaching to the choir, as it were. Which is to say, once people of the future found a newspaper or TV network or blog (which obviously wasn’t a thing yet) that perfectly fit their ideology and continuously told them that their beliefs were correct, they would see little reason to communicate meaningfully with others who didn’t share those beliefs.
Baran saw the media’s role as a unifying force that contributed to national cohesion; a shared identity and sense of purpose. With more specialized channels at their disposal (political or otherwise), Americans would have very little overlap in the messages they received. This, Baran believed, would lead to political instability and increased “confrontation” on the occasions when disparate voices would actually communicate with each other.
Baran wrote in 1969:
A New Difficulty in Achieving National Cohesion. A stable national government requires a measure of cohesion of the ruled. Such cohesion can be derived from an implicit mutual agreement on goals and direction — or even on the processes of determining goals and direction. With the diversity of information channels available, there is a growing ease of creating groups having access to distinctly differing models of reality, without overlap. For example, nearly every ideological group, from the student underground to the John Birchers, now has its own newspapers. Imagine a world in which there is a sufficient number of TV channels to keep each group, and in particular the less literate and tolerant members of the groups, wholly occupied? Will members of such groups ever again be able to talk meaningfully to one another? Will they ever obtain at least some information through the same filters so that their images of reality will overlap to some degree? Are we in danger of creating by electrical communications such diversity within society as to remove the commonness of experience necessary for human communication, political stability, and, indeed, nationhood itself? Must “confrontation” increasingly be used for human communication?
National political diversity requires good will and intelligence to work comfortably. The new visual media are not an unmixed blessing. This new diversity causes one to hope that the good will and intelligence of the nation is sufficiently broad-based to allow it to withstand the increasing communication pressures of the future.
The splintering of mass media in the United States over the past half century has undoubtedly led to the stark “differing models of reality” that Baran describes. The true believers of any ideology will toe the party line and draw strength from their particular team’s media outlets. But the evidence remains inconclusive when it comes to the average American. Simply put, there’s not a lot of evidence that people who aren’t already highly engaged politically will be influenced by partisan media sources to become more radical or reactionary, as the case may be.
Writing in the Annual Review of Political Science this year, Markus Prior explains, “Ideologically one-sided news exposure may be largely confined to a small, but highly involved and influential, segment of the population.” However, “there is no firm evidence that partisan media are making ordinary Americans more partisan.”
Stepping back and looking at ourselves from the perspective of a future historian, it’s easy to argue that we could still be in the early days of highly polarized mass media. The loosening and eventual elimination of the FCC’s fairness doctrine in the 1980s saw the rise of talk radio hosts unhindered by the need to give opposing viewpoints equal airtime. The rise of the web in the mid-1990s then opened even more channels through which political voices could deliver their messages over the young Internet. User-generated online video took off with the birth of YouTube in the mid-2000s, allowing for the dissemination of visual media without many of the regulations content creators must normally adhere to when broadcasting over the public airwaves. The rise of social media in this decade has given everyone from your grandmother to hate groups a platform to air their grievances. And tomorrow, who knows?
Just how much more polarized our nation’s mainstream political voices can become remains to be seen. But it may be safe to say that when it comes to a lack of message overlap and increased political diversity in new forms of media, Paul Baran’s 1969 predictions have long since become a reality.
January 24, 2013
“Internet introverts are socially dysfunctional; they write online, talk online, view the world from online, order books and pizza online.” Ordering pizza… online? Who ever dreamt of such a crazy thing? This warning about our increasingly isolated and dysfunctional lives reads like it could’ve been written today. But it actually comes from the May 3, 1996 edition of the Los Angeles Times, where Michael Shulman assured readers that ordering pizza online was the exclusive domain of the cyberspace hermit.
I like pizza. Like I really like pizza. But it’s difficult being a pizza lover in Los Angeles. Mostly because, as major cities go, L.A. has the worst pizza in the country.
Don’t get me wrong, L.A. has some of the best food around, besting comparably sized cities like New York and Chicago in just about every category. But when it comes to pizza, it’s incredibly hard to find a decent slice here in the city of Blade Runner. So when I found a good “pizza pizza” place (my highly refined distinction for that greasy variety of pizza that doesn’t include flatbreads or any fancy ingredients I wouldn’t have been able to pronounce when I was five years old) I was blown away. I was really excited that I had my new go-to for that staple of the modern American diet. I was excited… until I realized that anytime I wanted good pizza I’d have to pick up the stupid phone. Sadly, this particular pizza place didn’t have online ordering. In the year 2013, making your customers pick up the phone for delivery is roughly the equivalent of showing up at their door with a sack of flour, some tomato paste and a bucket of mushrooms and telling them they’ll have to cook the pizza themselves. It’s just not done. How old fashioned. How unfuture.
Compared with even a decade ago, we’re much more comfortable with buying goods online. And businesses that sell things online often find unexpected benefits to the transaction. For instance, a 2006 study found that customers ordering pizza online typically spent about 15% more per order than those ordering over the phone.
So when I saw this video of a computer ordering pizza in 1974 (via some nifty voice tech, rather than say the ARPANET) I immediately started thinking about all the futuristic visions of pizza ordering from the late 1980s and early ’90s.
Apple’s “Grey Flannel Navigator” (1988)
Ordering pizza is such a common exercise that it was the perfect foil in future-oriented movies and videos to help place the viewer in a familiar but slightly more technologically advanced setting. The 1988 Apple concept video “Grey Flannel Navigator” used just such a device. The video shows the office of the near future and how one might be able to collaborate and work from home. Midway through the video our protagonist goes to the fridge and apparently doesn’t find anything to his liking. He faithfully returns to his computer and pulls up a window labeled “Pizza-for-One” which guarantees delivery in 30 minutes — or it’s free!
The Net (1995)
The 1995 movie (sorry, “cyber thriller”) The Net has a similar sequence, though by the mid-1990s that kind of service was becoming slightly less absurd. Indeed, the anachronism of the pizza ordering in The Net is that Sandra Bullock’s character pays not by credit card but in cash.
ACLU’s Total Surveillance Society (2004)
But it’s not all sunshine and mushrooms when it comes to ordering pizzas in the future. The ACLU released a video in 2004 that used a pizza order as the backdrop for a dystopian world where everyone’s personal information is kept online — and used against them.
Today, every national pizza chain in the U.S. has online ordering. But it does raise the question: now that we live in “the future,” what will ordering a pizza in tomorrow’s future look like? Whatever it looks like, I’m sure only the socially dysfunctional, overworked office drones who are being spied on by Big Brother will do it.
January 15, 2013
There are many different ways to talk about the future, but few are more self-centered than guessing how the generations of tomorrow may judge you and yours.
Garrison Keillor did just that in a 1996 piece for The New York Times Magazine. Some of Keillor’s observations ring true for those of us here in the year 2013: he predicts that air travel will only become more and more cumbersome, and he imagines that Americans’ growing dissatisfaction with stagnant wages may become an issue. But the vast majority of the piece reads as cranky “get off my lawn” nostalgia. Which is to say, he’s romanticizing a past that never existed in the service of bemoaning a future that will never arrive. He begins by calling contemporary culture “trash” (being careful to clarify that the New York Times doesn’t qualify as such) and pretty much goes downhill on the future of humanity from there.
But it’s his vision of the media landscape of the future that’s most interesting to me. Maybe because in many ways he didn’t go far enough (only 1,000 movies available on the Internet?) and bizarrely longs for some antiquated version of celebrity that he implies is somehow more pure. But his dominant fear, that the way we consume media would change rapidly heading into the 21st century, was one prophecy fully realized. It’s just up to those of us living in “the future” to decide whether any of those changes are a good thing.
Even just holding this 1996 issue of The New York Times Magazine in my hand makes me acutely aware of how much has changed in the world of publishing since then. The magazine is thick at 216 pages and bursting at the seams with slick colorful ads — a sign of healthy profits for any media outlet in the mid-90s. But as more and more eyeballs (and ad dollars) have shifted to the digital realm, it’s hard to judge a mag by its deadtree count.
Keillor writes about the death of the newspaper and frustrations with getting Internet images to load:
People are going to miss it a lot — they’ll think: What a wonderful thing a newspaper was! You opened it and there it was, you didn’t have to wait three minutes for the art to download, and when your wife said, “Give me a section,” you did.
Of course, few Americans in the year 2013 are waiting three minutes for an image to load online but I personally identify with those who would stubbornly cling to something like the deadtree Sunday Times; something most easily enjoyed (and more importantly shared) over a cup of coffee with some pulp and ink on your fingers. You have no idea how much it pains me to identify with Mr. Guy Noir himself in this case.
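As for that three-minute wait, a rough back-of-the-envelope calculation suggests Keillor wasn’t exaggerating much. The figures below (a 14.4 kbps dial-up modem, typical of the mid-90s, and a 300 KB image) are illustrative assumptions, not numbers from Keillor’s article:

```python
# Back-of-the-envelope: how long a mid-90s dial-up connection took
# to download a single image. Both figures are illustrative assumptions.

def download_seconds(size_kb: float, modem_kbps: float) -> float:
    """Seconds to transfer size_kb kilobytes at modem_kbps kilobits/sec."""
    bits = size_kb * 1024 * 8          # kilobytes -> bits
    return bits / (modem_kbps * 1000)  # kbps -> bits per second

# A 300 KB image over a 14.4 kbps modem:
seconds = download_seconds(300, 14.4)
print(f"{seconds / 60:.1f} minutes")  # roughly 2.8 minutes
```

So a single decent-sized photo really could eat most of three minutes on a typical home connection of the era.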
Later in the piece Keillor romanticizes the celebrity of the past — the “real” ones — like Frank Sinatra. He worries that in the future we won’t have any common language with which to talk around the water cooler or the dinner table. And Keillor shudders to think about the overwhelming amount of media (10,000 CDs on the Internet, oh my!) future generations will have at their disposal:
People will feel nostalgia for celebrities, real ones, like there used to be back when there were three TV networks and Americans watched the same shows at the same time and talked about them the next day at work. Television was common currency. Sunday afternoons you watched the NFL game with your dad on the couch and then you went to the table and ate pot roast and mashed potatoes. Everybody else did the same thing.
Every American knew Sinatra by sight and by voice, but when you scattered the audience among 200 cable-TV channels and 1,000 movies you could watch on the Internet and 10,000 CDs you could download, there weren’t many true celebrities anymore. People will miss them. There will be new celebrities, thousands of them, but not many people will know who they are.
Like I mentioned, I share some of Keillor’s strange nostalgic notions about deadtrees and sharing a newspaper over breakfast. But what’s most interesting to me is not so much his premature nostalgia for 1996 but his rather stereotypical nostalgia for the 1950s. For a man whose art has focused almost exclusively on the idyllic past that never was, I suppose this makes perfect sense.
NYTimes.com doesn’t seem to have the article digitized but you can read the piece in its entirety at Deseret News. Amy Crehore‘s 1996 oil painting “Nostalgia Man” appeared alongside Keillor’s original article and is republished here with permission.
December 14, 2012
“Who hasn’t heard about the Internet? It’s mentioned on television, in the magazines, and on the radio. Everyone’s talking about it, and everyone wants to get connected to it.” So began the 1995 book, simply titled The Internet by Kerry Cochrane.
Do you remember your first time on the Internet? Mine was pretty typical for a kid in suburban America. It was 1995 and my parents had signed up for a free America Online trial using one of the millions of CD-ROMs that seemed to arrive at our house daily. My dad bought an external 14.4k modem for our Mac Performa and I remember tying up the phone line while talking in random chatrooms and looking up cheat codes for Dark Forces. The Internet was a precious commodity to me then — a metered experience that I had to track diligently so as not to waste a single minute. That is, until AOL offered flat-rate unlimited monthly billing in 1996.
Today the Internet has become a normal and essential part of our lives, one we all seem to take for granted. Since high school I’ve probably used the Internet every single day, with very few exceptions. These days I get irrationally angry when a website doesn’t load within seconds. Those of us in the generation with vague memories of life before the Internet now poke fun at just how oddly futuristic our behavior would’ve seemed just a couple of decades earlier.
But this future of online shopping and instant access to much of the world’s knowledge was not a given in 1995. As Kerry Cochrane explained in the introduction to her short book, everyone was talking about it. But there were plenty of skeptics. Clifford Stoll wrote an article for Newsweek in the February 27, 1995, issue expressing skepticism about this new-fangled contraption:
We’re promised instant catalog shopping — just point and click for great deals. We’ll order airline tickets over the network, make restaurant reservations and negotiate sales contracts. Stores will become obsolete. So how come my local mall does more business in an afternoon than the entire Internet handles in a month? Even if there were a trustworthy way to send money over the Internet — which there isn’t — the network is missing a most essential ingredient of capitalism: salespeople.
But despite the skepticism — which was completely warranted when you think about what the Internet looked like in 1995 — there were people like Cochrane who were explaining to kids like myself what kind of worlds the Internet had to offer.
The first three chapters of Cochrane’s book The Internet are devoted to explaining the basics of things like email and how to find your way around the Internet using Archie, Gopher, Veronica and the World Wide Web. But the last chapter is where things really get interesting. Titled “Fun Places on the Internet,” the fourth and final chapter is like a bizarre time capsule of the Internet’s baby pictures. Because even though the Internet’s “birth” can be traced to the first host-to-host connection at UCLA in 1969, the mid-1990s was really when the Internet went mainstream.
Some of the “Fun Places” shown in the book were expected, like an early e-card site and the Smithsonian home page, while others were a bit strange, like a random elementary school in Cottage Grove, Minnesota. Do you remember your first time online? I’d love to hear about it in the comments. And yes, I realize many of you were online long before I was even born, so consider this a golden opportunity to be snarky about that fact.
And be sure to check out K. Annabelle Smith’s “Evolution of the Homepage” from this past June.
June 27, 2012
In 1987, Bill Gates became the world’s youngest self-made billionaire, making the Forbes 400 Richest People in America list with a net worth of $1.25 billion, up from a measly $900 million the year before. Gates was just 31 years old, and Microsoft Windows was still very much in its infancy, the operating system having been introduced just a couple of years earlier in November 1985. The world of 1987 was an exciting one for Gates, and he saw even more exciting things ahead.
The January 1987 issue of OMNI magazine featured predictions from 14 “great minds” about what the future held; specifically the world of 20 years hence. Bill Gates predicted that the world of 2007 would be filled with flat panel displays, diverse forms of interactive entertainment, highly advanced voice recognition software and the ability to access vast quantities of information at the touch of a button — this was a capital I, capital A, Information Age.
Gates explains the typical home of 2007:
You’re sitting at home. You have a variety of image libraries that will contain, say, all the world’s best art. You’ll also have very cheap, flat panel-display devices throughout your house that will provide resolution so good that viewing a projection will be like looking at an original oil painting. It will be that realistic.
And the information accessed with the help of these displays will seem limitless. His idea of a world database sounds quite similar to the 1981 predictions of Neil Ardley that we looked at a few months back.
In 20 years the Information Age will be here, absolutely. The dream of having the world database at your fingertips will have become a reality. You’ll even be able to call up a video show and place yourself in it. Today, if you want to create an image on a screen — a beach with the sun and waves — you’ve got to take a picture of it. But in 20 years you’ll literally construct your own images and scenes. You will have stored very high-level representations of what the sun looks like or how the wind blows. If you want a certain movie star to be sitting on a beach, kind of being lazy, believe me, you’ll be able to do that. People are already doing these things.
Gates predicts the perfection of a technology that has been around for decades, but one that many people of 2012 might associate with the name Siri: voice recognition.
Also, we will have serious voice recognition. I expect to wake up and say, “Show me some nice Da Vinci stuff,” and my ceiling, a high-resolution display, will show me what I want to see—or call up any sort of music or video. The world will be online, and you will be able to simulate just about anything.
I would love to see an iPhone commercial where Zooey Deschanel or Samuel L. Jackson says, “Siri, show me some nice Da Vinci stuff.”
Gates continues by explaining that you’ll be able to realistically simulate racing formula cars in Daytona, but he worries about what it might mean when people no longer have any reason to leave the house.
There’s a scary question to all this: How necessary will it be to go to real places or do real things? I mean, in 20 years we will synthesize reality. We’ll do it super-realistically and in real time. The machine will check its database and think of some stories you might tell, songs you might sing, jokes you might not have heard before. Today we simply synthesize flight simulation.
Gates believed that all of our technological advancements would also mean the end of credit cards and checks — old technologies replaced by voice and fingerprint recognition.
A lot of things are going to vanish from our lives. There will be a machine that keys off of physiological traits, whether it’s voiceprint or fingerprint, so credit cards and checks — pretty flimsy deals anyway — have to go.
Gates also welcomed the death of what he calls “passive entertainment.”
I hope passive entertainment will disappear. People want to get involved. It will really start to change the quality of entertainment because it will be so individualized. If you like Bill Cosby, then there will be a digital description of Cosby, his mannerisms and appearance, and you will build your own show from that.
Later in the article Gates is cautious and believes that we may eventually test just how much information the human mind can take.
Probably all this progress will be pretty disruptive stuff. We’ll really find out what the human brain can do, but we’ll have serious problems about the purpose of it all. We’re going to find out how curious we are and how much stimulation we can take. There have been experiments in which a monkey can choose to ingest cocaine and the monkey keeps going to create some pretty intense experiences through synthesized video-audio. Do you think you’ll reach a point of satisfaction when you no longer have to try something new or make something better? Life is really going to change; your ability to access satisfying experiences will be so large.
Gates ends his article by explaining that he doesn’t think we can really extrapolate with much accuracy from the year 1987.
But in the next 20 years you won’t be able to extrapolate the rate of progress from any previous pattern or curve because the new chips, these local intelligences that can process information, will cause a warp in what it’s possible to do. The leap will be unique. I can’t think of any equivalent phenomenon in history.
I’d argue that the vast majority of Gates’ predictions are actually fairly accurate. Here in the year 2012 we’ve seen many of his ideas about the world of 2007 become a reality. But perhaps the most interesting prediction of the bunch is about interactive entertainment. It’s fascinating that the Internet has given rise to a remix culture that values slightly different modes of interaction — from the creation of a new video itself right down to the comments — though they’re typically unsanctioned by the original artists and rights holders.
For the time being, it would seem that modern copyright law makes these forms of remix entertainment targets for litigation — despite many obvious examples of fair use. And it’s not just remix culture but the right to parody itself that has been under attack with the rise of the Internet. An animated parody show about Bill Cosby himself, called House of Cosbys, received a cease and desist letter in 2005 for even daring to imitate Bill Cosby’s voice and likeness. And if you’ve ever seen House of Cosbys you can probably attest that it’s likely not what Bill Gates had in mind when he was picturing the future.
The image above is a screenshot from this video.