May 16, 2013
Visions of driverless cars zipping around on the highways of the future are nothing new; automated highways were depicted at least as far back as the 1939 New York World’s Fair, and the push-button driverless car was a common dream in such midcentury utopian artifacts as the 1958 Disneyland TV episode “Magic Highway, U.S.A.” But here in the 21st century there’s a growing sense that the driverless car might actually (fingers crossed, hope to die) be closer than we think. And thanks to the progress being made by companies like Google (not to mention just about every major car company), some even believe that driverless vehicles could become a mainstream reality within just five years.
Despite all the well-known sci-fi predictions of the 20th century (not to mention those of the 21st, like in the movies Minority Report and I, Robot), many people forget the very earnest and expensive investment in this vision of the future from recent history: the multi-million-dollar push by the U.S. Congress to build an automated highway system in the 1990s.
In 1991 Congress passed the Intermodal Surface Transportation Efficiency Act, which authorized $650 million to be spent over the course of the next six years on developing the technology that would be needed for driverless cars running on an automated highway. The vision was admittedly bold, seeing as how primitive all of the components needed for such a system were at that time. Even consumer GPS technology — which today we take for granted in our phones and vehicles — wasn’t a reality in the early 1990s.
The real-world benefits of automated highways were thought to be improved safety (by removing human error from the equation), improved travel times, and better fuel economy.
The National Automated Highway System Consortium was formed in late 1994 and comprised nine core organizations, both public and private: General Motors, Bechtel Corporation, the California Department of Transportation, Carnegie Mellon University, Delco Electronics, Hughes Electronics, Lockheed Martin, Parsons Brinckerhoff, and the University of California, Berkeley.
The goal was eventually to allow for fully automated operation of an automobile — what a Congressional report described as “hands-off, feet-off” driving.
The program was not without its detractors. In December of 1993 Marcia D. Lowe at the Worldwatch Institute wrote a scathing op-ed in the Washington Post. Perhaps unsurprisingly, Lowe mentions “The Jetsons.”
Computer-equipped cars driving themselves on automated highways. A scene out of “The Jetsons?” Not exactly.
Smart cars and highways have quietly emerged as the latest and most-expensive proposal to solve the nation’s traffic problems. Government spending on the little known Intelligent Vehicle and Highway Systems program is expected to exceed $40 billion over the next 20 years. (By comparison, in the first 10 years of the Strategic Defense Initiative, Washington spent $30 billion.)
Even more astonishing is the total lack of organized opposition to the idea, despite evidence that smart cars and highways may well exacerbate the very problems they are supposed to solve.
The program was required to demonstrate its technical feasibility by 1997, and did so in San Diego, California. On July 22 of that year the demonstration’s test vehicles rode down 7.6 miles of the HOV lane on Interstate 15. The Associated Press even reported that the prototype highway should be running by 2002.
During the lead-up to the San Diego demonstration in 1997, the NAHSC produced a video called “Where The Research Meets The Road.” You can watch the video below.
Needless to say, the program didn’t deliver driverless cars and automated highways to Americans. So what was the problem? The legislation didn’t really give the Department of Transportation any direction on how it should go about the research — only that it needed to demonstrate the technology by 1997. But perhaps the biggest problem was that the legislation never clearly defined what was meant by “fully automated highway system.”
May 10, 2013
The Omni Future Almanac was published in 1982 — a year when America would see double-digit inflation and double-digit unemployment. Despite all this, the authors of the book were generally optimistic about the future of the nation. Technology, they explained, would solve many of the problems facing the country, and the American people would surely work smarter and simplify their lives, all while improving everyone’s standard of living.
From the book:
By 2000, most Americans will be experiencing a new prosperity. The problems of shrinking energy supplies and spiraling costs will be offset by developments in computers, genetic engineering, and service industries that will bring about lifestyle changes that will in turn boost the economy. Basically, Americans will be able to simplify their lives and spend less money supporting themselves. Indeed, energy conservation will force Americans to become more resourceful fiscally and to spend less on many items.
But what about prices of the future? That double-digit inflation stoked fears that prices for common food items in the future would skyrocket.
The average price of a pound of beef in the year 2010? The book predicted it would be $22.75. The actual cost? About $3.75.
The prices of a loaf of bread? They predicted it would hit $8. Actual cost? About $2.50.
But which single commodity did they predict would level out in the 21st century? Somewhat shockingly, gasoline.
That’s right, the book predicted that a gallon of gas (which cost about $1 in 1980) would peak at $4 in 1990 and then level off to $2 not only in the year 2000 but maintain that price into the year 2010 as well.
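Taking the book’s figures at face value, a quick back-of-the-envelope sketch (using only the beef and bread numbers quoted above, since those are the ones with actual prices to compare against) shows just how far off the almanac was:

```python
# Predicted (2010) vs. approximate actual prices, as quoted in this post.
predictions = {
    "pound of beef": (22.75, 3.75),
    "loaf of bread": (8.00, 2.50),
}

for item, (predicted, actual) in predictions.items():
    ratio = predicted / actual
    print(f"{item}: predicted ${predicted:.2f}, actual ~${actual:.2f}, "
          f"off by a factor of {ratio:.1f}")
```

By this rough measure the beef prediction overshot by a factor of about six, and the bread prediction by about three.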
But those staggering prices for basic sustenance don’t look quite so scary when you consider what they thought the average American would be paid.
A secretary of the year 2010? $95,000. A factory worker? $95 an hour.
Of course, wages for secretaries, factory workers and public high school teachers haven’t even kept pace with inflation. But at least a subway ride isn’t yet $20.
May 2, 2013
“The ubiquity and power of the computer blur the distinction between public and private information. Our revolution will not be in gathering data — don’t look for TV cameras in your bedroom — but in analyzing information that is already willingly shared.”
Are these the words of a 21st century media critic warning us about the tremendous quantity of information that the average person shares online?
Nope. It’s from a 1985 article for the Whole Earth Review by Larry Hunter, who was writing about the future of privacy. And it’s hard to imagine Mr. Hunter having predicted the Age of Facebook — or its most pervasive fears — any more accurately.
Hunter begins his article by explaining that he has a privileged peek into the computerized world that’s just over the horizon:
I live in the future. As a graduate student in Artificial Intelligence at Yale University, I am now using computer equipment that will be commonplace five years from now. I have a powerful workstation on my desk, connected in a high-speed network to more than one hundred other such machines, and, through other networks, to thousands of other computers and their users. I use these machines not only for research, but to keep my schedule, to write letters and articles, to read nationwide electronic “bulletin boards,” to send electronic mail, and sometimes just to play games. I make constant use of fancy graphics, text formatters, laser printers — you name it. My gadgets are both my desk and my window on the world. I’m quite lucky to have access to all these machines.
He warns, however, that this connectedness will very likely come with a price.
Without any conspiratorial snooping or Big Brother antics, we may find our actions, our lifestyles, and even our beliefs under increasing public scrutiny as we move into the information age.
Hunter outlines the myriad ways that corporations and governments will be able to monitor public behavior in the future. He explains how bloc modeling helps institutions create profiles that can be used for either benign or nefarious purposes. The credit service companies that began selling much more specific demographic information to credit card companies in the early 1980s, we can guess, generally fall into the nefarious column:
How does Citicorp know what your lifestyle is? How can they sell such information without your permission? The answer is simple: You’ve been giving out clues about yourself for years. Buying, working, socializing, and traveling are acts you do in public. Your lifestyle, income, education, home, and family are all deducible from existing records. The information that can be extracted from mundane records like your Visa or Mastercard receipts, phone bill, and credit record is all that’s needed to put together a remarkably complete picture of who you are, what you do, and even what you think.
And in 1985, of course, none of this buying, working and socializing happened through mediums like Facebook or Twitter. Hunter explains that this information can be used in a number of different ways to build complex pictures of the world:
While the relationship between two people in an organization is rarely very informative by itself, when pairs of relationships are connected, patterns can be detected. The people being modeled are broken up into groups, or blocs. The assumption made by modelers is that people in similar positions behave similarly. Blocs aren’t tightly knit groups. You may never have heard of someone in your bloc, but because you both share a similar relationship with some third party you are lumped together. Your membership in a bloc might become the basis of a wide variety of judgements, from who gets job perks to who gets investigated by the FBI.
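The bloc-modeling idea Hunter describes can be sketched in a few lines of code: people who hold similar relationships to the same third parties get lumped into the same bloc, even if they have never met. The names, relationships and similarity threshold below are all invented for illustration; real bloc modeling in network analysis uses more sophisticated clustering.

```python
# Each person is described only by their relationships to third parties.
relations = {
    "alice": {"works_at:acme", "banks_at:citi"},
    "bob":   {"works_at:acme", "banks_at:citi"},
    "carol": {"works_at:globex", "banks_at:chase"},
}

def similarity(a, b):
    """Jaccard similarity of two people's relationship sets."""
    return len(a & b) / len(a | b)

# Greedily assign each person to an existing bloc if they are similar
# enough to its first member; otherwise start a new bloc.
blocs = []
for person, rels in relations.items():
    for bloc in blocs:
        if similarity(rels, relations[bloc[0]]) >= 0.5:
            bloc.append(person)
            break
    else:
        blocs.append([person])

print(blocs)  # alice and bob land in one bloc; carol stands alone
```

The unsettling part, as Hunter notes, is the last step: judgments about you flow from your bloc membership, not from anything you did individually.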
In the article Hunter asks when private information should be considered public, a question that is increasingly difficult to answer with the proliferation of high-quality cameras in our pockets and, for some of us, on our heads.
We live in a world of private and public acts. We consider what we do in our own bedrooms to be our own business; what we do on the street or in the supermarket is open for everyone to see. In the information age, our public acts disclose our private dispositions, even more than a camera in the bedroom would. This doesn’t necessarily mean we should bring a veil of secrecy over public acts. The vast amount of public information both serves and endangers us.
Hunter explains the difficulty in policing how all of this collected information might be used. He makes reference to a metaphor by Jerry Samet, a professor of philosophy at Bentley College, who explained that while we consider it an invasion of privacy to look inside someone’s window from the outside, we have no objection to people inside their own homes looking at those outside on the public sidewalk.
This is perhaps what makes people so creeped out by Google Glass. The camera is attached to the user’s face. We can’t outlaw someone gazing out into the world. But the added dimension that someone might be recording that for posterity — or collecting and sharing information in such a way — is naturally upsetting to many people.
Why not make gathering this information against the law? Think of Samet’s metaphor: do we really want to ban looking out the window? The information about groups and individuals that is public is public for a reason. Being able to write down what I see is fundamental to freedom of expression and belief, the freedoms we are trying to protect. Furthermore, public records serve us in very specific, important ways. We can have and use credit because credit records are kept. Supermarkets must keep track of their inventories, and since their customers prefer that they accept checks, they keep information on the financial status of people who shop in their store. In short, keeping and using the kind of data that can be turned into personal profiles is fundamental to our way of life — we cannot stop gathering this information.
And this seems to be the same question we ask of our age. If we volunteer an incredibly large amount of information to Twitter in exchange for a free communications service, or to Visa in exchange for the convenience of making payments by credit card, what can we reasonably protect?
Hunter’s prescription sounds reasonable, yet somehow quaint, almost three decades later. He proposes treating information as a form of intangible property, not unlike copyright.
People under scrutiny ought to be able to exert some control over what other people do with that personal information. Our society grants individuals control over the activities of others primarily through the idea of property. A reasonable way to give individuals control over information about them is to vest them with a property interest in that information. Information about me is, in part, my property. Other people may, of course, also have an interest in that information. Citibank has some legitimate interests in the information about me that it has gathered. When my neighbor writes down that I was wearing a red sweater, both of us should share in the ownership of that information.
Obviously, many of Hunter’s predictions about the way in which information would be used came true. But it would seem that there are still no easy answers to how private citizens might reasonably protect information about themselves that’s collected — whether that’s by corporations, governments or other private citizens.
Chillingly, Hunter predicted some of our most dire concerns when Mark Zuckerberg wasn’t yet even a year old: “Soon celebrities and politicians will not be the only ones who have public images but no private lives — it will be all of us. We must take control of the information about ourselves. We should own our personal profiles, not be bought and sold by them.”
What do you think? Does our age of ubiquitous sharing concern you? Do you think our evolving standard of what is considered private information generally helps or hurts society?
April 26, 2013
“Who needs a car in L.A.? We got the best public transportation system in the world!” says private detective Eddie Valiant in the 1988 film Who Framed Roger Rabbit?
Set in 1947, the movie follows Eddie, a car-less Angeleno, and tells the tale of an evil corporation buying up the city’s streetcars in its greedy quest to force people out of public transit and into private automobiles. Eddie Valiant’s line was a wink at audiences in 1988, who knew quite well that public transportation had become little more than a punchline.
Aside from Detroit there’s no American city more identified with the automobile than Los Angeles. In the 20th century, the Motor City rose to prominence as the home of the Big Three automakers, but the City of Angels is known to outsiders and locals alike for its confusing mess of freeways and cars that crisscross the city — or perhaps as writer Dorothy Parker put it, crisscross the “72 suburbs in search of a city.”
Los Angeles is notorious for being hostile to pedestrians. I know plenty of Angelenos who couldn’t in their wildest dreams imagine navigating America’s second largest city without a car. But I’ve spent the past year doing just that.
About a year and a half ago I went down to the parking garage underneath my apartment building and found that my car wouldn’t start. One thing I learned when I moved to Los Angeles in 2010 was that a one-bedroom apartment doesn’t come with a refrigerator, but it does come with a parking space. “We only provide the essentials,” my apartment’s building manager explained to me when I asked about this regional quirk of the apartment rental market. Essentials, indeed.
My car (a silver 1998 Honda Accord with tiny pockets of rust from the years it survived harsh Minnesota winters) probably just had a problem with its battery, but I really don’t know. A strange mixture of laziness, inertia, curiosity and dwindling funds led me to wonder how I might get around the city without wheels. A similar non-ideological adventure began when I was 18 and thought “I wonder how long I can go without eating meat?” (The answer was apparently two years.)
Living in L.A. without a car has been an interesting experiment; one where I no longer worry about fluctuations in the price of gas but sometimes shirk social functions because getting on the bus or train doesn’t appeal to me on a given day. It’s been an experiment where I wonder how best to stock up on earthquake disaster supplies (I just ordered them online) and how to get to Pasadena to interview scientists at JPL (I just broke down and rented a car for the day). The car — my car — has been sitting in that parking spot for over a year now, and for the most part it’s worked out pretty well.
But how did Los Angeles become so automobile-centric? How did Angeleno culture evolve (or is it devolve?) to the point where not having a car is seen as such a strange thing?
Los Angeles owes its existence as a modern metropolis to the railroad. When California became a state in 1850, Los Angeles was just a small frontier town of about 4,000 people, dwarfed by the much larger Californian cities of San Francisco and Sacramento. The town was plagued by crime; some accounts claimed that L.A. suffered a murder a day in 1854. But this tiny, violent town, referred to as Los Diablos (the devils) by some people in the 1850s, would become a boomtown ready for a growth explosion by the 1870s.
From the arrival of the transcontinental railroad in 1876 until the late 1920s, the City of Angels experienced incredibly rapid population growth. And this growth was no accident. The L.A. Chamber of Commerce, along with the railroad companies, aggressively marketed the city as a paradise — a place where all your hopes and dreams could come true. In the late 19th century Los Angeles was thought to be the land of the “accessible dream,” as Tom Zimmerman explains in his book Paradise Promoted.
Los Angeles was advertised as the luxurious city of the future; a land of both snow-capped mountains and beautiful orange groves — where the air was clean, the food was plentiful and the lifestyle was civilized. In the 1880s, the methods of attracting new people to the city involved elaborate and colorful ad campaigns by the railroads. And people arrived in trains stuffed to capacity.
With the arrival of the automobile in the late 1890s, the City of Angels began experimenting with the machine that would dramatically influence the city’s landscape. The first practical electric streetcars started running in the late 1880s, replacing the rather primitive horse-drawn railways of the 1870s. The mass transit system was actually born of real estate developers, who built lines not only to provide long-term access to their land, but also, in the very immediate sense, to sell that land to prospective buyers.
By the 1910s there were two major transit players left: the Los Angeles Railway streetcar company (LARY, often known as the Yellow Cars) and the Pacific Electric Railway (PE, often known simply as the Red Cars).
No one would mistake Who Framed Roger Rabbit? for a documentary, but the film has done a lot to cement a particular piece of L.A. mythology into the popular imagination. Namely, that it was the major car companies who would directly put the public transit companies out of business when they “purchased” them in the 1940s and shut them down. In reality, the death of L.A.’s privately-owned mass transit would be foreshadowed in the 1910s and would be all but certain by the end of the 1920s.
By the 1910s the streetcars were already suffering from widespread public dissatisfaction. The lines were seen as increasingly undependable and riders complained about crowded trains. Some of the streetcars’ problems were a result of the automobile crowding them out in the 1910s, congesting the roads and often causing accidents that made service unreliable. Separating the traffic of autos, pedestrians and streetcars was seen as a priority that would not be realized until the late 20th century. As Scott L. Bottles notes in his book Los Angeles and the Automobile, “As early as 1915, [the L.A. Public Board of Utilities] called for plans to separate these trains from regular street traffic with elevated or subway lines.”
The recession-plagued year of 1914 saw the explosive rise of the “jitney,” an unlicensed taxi that took passengers for just a nickel. The private streetcar companies refused to improve their service in a time of recession, and as a result drove more and more people to alternatives like the jitney or buying their own vehicles.
The Federal Road Act of 1916 would jumpstart the nation’s funding of road construction and maintenance, providing matching funding to states. But it was the Roaring Twenties that would set Los Angeles on an irreversible path as a city dominated by the automobile. L.A.’s population of about 600,000 at the start of the 1920s more than doubled during the decade. The city’s cars would see an even greater increase, from 161,846 cars registered in L.A. County in 1920 to 806,264 registered in 1930. In 1920 Los Angeles had about 170 gas stations. By 1930 there were over 1,500.
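The registration and gas station figures above imply startling compound growth rates — on the order of 17 and 24 percent a year, respectively. A tiny sketch, using only the numbers quoted in this post:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two counts."""
    return (end / start) ** (1 / years) - 1

# L.A. County, 1920 -> 1930, per the figures cited above.
print(f"cars registered: {cagr(161_846, 806_264, 10):.1%} per year")
print(f"gas stations:    {cagr(170, 1_500, 10):.1%} per year")
```

At those rates the car population was quintupling in a decade — far outpacing even the city’s booming human population, which merely doubled.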
This early and rapid adoption of the automobile in the region is the reason that L.A. was such a pioneer in the area of automotive-centric retailing. The car of the 1920s changed the way that people interacted with the city and how they purchased goods, for better and for worse. As Richard Longstreth notes in his 2000 book, The Drive-In, The Supermarket, and the Transformation of Commercial Space in Los Angeles, the fact that Southern California was the “primary spawning ground for the super service station, the drive-in market, and the supermarket” was no coincidence. Continuing the trend of the preceding decades, the population of Los Angeles swelled tremendously in the 1910s and ’20s, with people arriving by the thousands.
“This burgeoning middle class created one of the highest incidences of automobile ownership in the nation, and both the diffuse nature of the settlement and a mild climate year-round yielded an equally high rate of automobile use,” Longstreth explains. The city, unencumbered by the geographic restrictions of places like San Francisco and Manhattan, quickly grew outward rather than upward, fueled by the car and quite literally fueled by the many oil fields right in the city’s backyard. Just over the hills that I can see from my apartment building lie oil derricks — strange metal robots in the middle of L.A. dotting the landscape, bobbing for that black gold to which we’ve grown so addicted.
Los Angeles would see and turn down many proposals for expanded public transit during the first half of the 20th century. In 1926 the Pacific Electric built a short-running subway in the city but it did little to fix the congestion problems that were happening above ground.
In 1926 there was a big push to build over 50 miles of elevated railway in Los Angeles. The city’s low density made many skeptical that Los Angeles could ever support public transit solutions to its transportation woes in the 20th century. The local newspapers campaigned heavily against elevated railways downtown, even going so far as to send reporters to Chicago and Boston to get quotes critical of those cities’ elevated railways. L.A.’s low density was a direct result of the city’s most drastic growth occurring in the 1910s and ’20s, when automobiles allowed people to spread out and build homes in far-flung suburbs without being tied to public transit to reach the commercial and retail hub of downtown.
As strange as it may seem today, the automobile was seen by many as the progressive solution to the transportation problems of Los Angeles in the 1920s. The privately owned rail companies were inflating their costs and making it impossible for the city to buy them out. Angelenos were reluctant to subsidize private rail, despite their gripes with service. Meanwhile, both the city and the state continued to invest heavily in freeways. In 1936 Fortune magazine reported on what it called rail’s obsolescence.
Though the city’s growth stalled somewhat during the Great Depression it picked right back up again during World War II. People were again moving to the city in droves looking for work in this artificial port town that was fueling the war effort on the west coast. But at the end of the war the prospects for mass transit in L.A. were looking as grim as ever.
In 1951 the California assembly passed an act that established the Los Angeles Metropolitan Transit Authority. The Metro Transit Authority proposed a monorail between the San Fernando Valley and downtown Los Angeles. A 1954 report issued to the Transit Authority acknowledged the unique challenges of the region, citing its low density, high degree of car ownership and current lack of any non-bus mass rapid transit in the area as major hurdles.
The July 1954 issue of Fortune magazine saw postwar expansion brought on by the car as an almost insurmountable challenge for the urban planner of the future:
As a generation of city and regional planners can attest, it is no simple matter to draw up a transit system that will meet modern needs. In fact, some transportation experts are almost ready to concede that the decentralization of urban life, brought about by the automobile, has progressed so far that it may be impossible for any U.S. city to build a self-supporting rapid-transit system. At the same time, it is easy to show that highways are highly inefficient for moving masses of people into and out of existing business and industrial centers.
Somewhat interestingly, that 1954 proposal to the L.A. Metro Transit Authority called its monorail prescription “a proper beginning of mass rapid transit throughout Los Angeles County.” It was as if the past five decades had been forgotten.
Longtime Los Angeles resident Ray Bradbury never drove a car. Not even once. When I asked him why, he said that he thought he’d “be a maniac” behind the wheel. A year ago this month I walked to his house, which was about a mile north of my apartment (uphill), and arrived dripping in sweat. Bradbury was a big proponent of establishing monorail lines in Los Angeles. But as Bradbury wrote in a 2006 opinion piece in the Los Angeles Times, he believed the Metro line from downtown to Santa Monica (which now stretches to Culver City and is currently being extended to reach Santa Monica) was a bad idea. He believed that his 1960s effort to promote monorails in Los Angeles made a lot more sense financially.
Bradbury said of his 1963 campaign, “During the following 12 months I lectured in almost every major area of L.A., at open forums and libraries, to tell people about the promise of the monorail. But at the end of that year nothing was done.” Bradbury’s argument was that the taxpayers shouldn’t have to foot the bill for transportation in their city.
With the continued investment in highways and the public repeatedly voting down funding for subways and elevated railways at almost every turn (including our most recent ballot’s Measure J which would have extended a sales tax increase in Los Angeles County to be earmarked for public transportation construction) it’s hard to argue that anyone but the state of California, the city of Los Angeles, and the voting public are responsible for the automobile-centric state of the city.
But admittedly the new Metro stop in Culver City has changed my life. Opened in June of last year, it has completely transformed the way that I interact with my environment. While I still may walk as far as Hollywood on occasion (about 8 miles), I’m able to get downtown in about 25 minutes. And from Downtown to Hollywood in about the same amount of time.
Today, the streetcars may be returning to downtown L.A., with construction starting as early as 2014, pending quite a few more hurdles. Funding has nearly been secured for the project, which would again put streetcars downtown by 2016.
But even with all of L.A.’s progress in mass transit, my car-less experiment will probably come to a close this year. Life is just easier with a car in a city that still has a long way to go in order to make places like Santa Monica, Venice, the Valley and (perhaps most crucially for major cities trying to attract businesses and promote tourism) the airport accessible by train.
But until then my car will remain parked downstairs. I’ll continue to walk almost everywhere, and you can be sure I’ll dream of the L.A. monorails that never were.
April 24, 2013
When I was in second grade I made a diorama of a city of the future. This was the early 1990s, and the diorama was supposed to represent the year 2000—somehow still light-years away for a young kid during the George H. W. Bush administration. My little diorama city had cars that ran on a magnetic track, some tall, awkwardly shaped buildings, and a way of recycling rainwater that supposedly (at least in my juvenile mind) was great for the environment.
Children of the 20th century (present bloggers excluded, perhaps) had some fascinating visions for the future. They tended to be pretty optimistic, but each generation betrays its own fears for the world of tomorrow. In the 1960s, kids imagined flying cars and jetpacks, tempered by fears around the Cold War. In the 1970s, kids expected their future to be filled with robot maids and vacations to Mars, but they also worried about violence, the price of gas and skyrocketing unemployment.
In this film from 1983 we hear from American kids about their visions for cities of the future. The kids have constructed and drawn cities that include peoplemovers run by computer, underground shops and even horse-drawn transportation. The end of this clip shows a kid who warns that humanity will be destroyed if we don’t find an alternative to gasoline soon—a fear that made a lot of sense to children of the 1970s and 1980s, but maybe less so to children of the 1960s or 1990s.
What did you envision the world of the future looking like when you were a kid? How do you think the time in which you grew up influenced your outlook?