January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018 we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines to thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone and data-diving device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition: through repeated exposure to images, computers learn to identify objects.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management group, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
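It’s easy to see the shape of that pipeline in code. Here’s a minimal sketch, assuming a folder of labeled example photos and using only a simple color-distribution feature with an off-the-shelf classifier; it illustrates the general technique Smith describes, not IBM’s actual system:

```python
# A toy version of the pipeline John Smith describes: images -> color
# features -> a classifier that separates beach scenes from city scenes.
# Illustrative only; the folder names and classifier choice are assumptions.
import glob
import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def color_histogram(path, bins=8):
    """Represent an image by its color distribution: an 8-bin
    histogram per RGB channel, normalized to sum to 1."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    hists = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0]
             for c in range(3)]
    feats = np.concatenate(hists).astype(float)
    return feats / feats.sum()

# Learn from labeled examples: 1 = beach scene, 0 = downtown cityscape.
beach, city = glob.glob("beach/*.jpg"), glob.glob("city/*.jpg")
X = np.array([color_histogram(p) for p in beach + city])
y = np.array([1] * len(beach) + [0] * len(city))

model = LogisticRegression(max_iter=1000).fit(X, y)

# The trained model can now score a scene it has never examined.
print(model.predict_proba([color_histogram("mystery.jpg")]))
```

Real systems add texture, edge and motion features and far more capable models, but the shape of the process is the same: turn pictures into features, then learn to discriminate.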
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough forward steps that computers will, for instance, start playing a much bigger role in medical diagnosis–that they could actually become better than doctors at spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable people with paralysis to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close-ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. The next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.
December 27, 2012
In the spirit of the post-holiday season, allow me to present my final list of 2012: six innovators who are pushing technology in fresh directions, some to solve stubborn problems, others to make our lives a little fuller.
Watch for more from all of them in the new year.
1. Keep your hands off my robot: We’ve all seen videos of adorably cute robots, but when you actually have to work with one, they apparently can be less than lovable. That’s where Leila Takayama comes in. She’s a social scientist with Willow Garage, a San Francisco-area company that develops robots, and her job is to figure out how to get humans to connect with mechanical co-workers.
She’s seen cases where robots have gotten on people’s nerves so much that they park them in a closet. One of the keys, she’s found, is to make robots seem more fallible–like having them shake their heads when they fail at something. Oddly enough, Takayama says, a reaction like that can make a robot “seem more competent.”
She’s worked on robots designed to help elderly people, recommending that the number of cameras on the robots’ heads be reduced because too many could make people uneasy. More recently, she’s been analyzing a remote-presence robot called the Texai, which is operated directly by humans rather than running on its own. And she’s discovered some interesting things, such as how people who operate the robot don’t like it when other people stand too close to it or touch its buttons. “There comes a point for a lot of people,” she says, “when they feel as if the robot is their body.”
Another key question she’s wrestling with: Is it better to have a robot at eye level with a person when he or she is sitting or standing?
2. One day even lamp posts won’t be dumb: As Chris Harrison sees it, the world is full of surfaces, so why are we spending so much time touching little screens or tapping on cramped keyboards? Harrison, a researcher at Carnegie Mellon University, has been a leader in finding ways to turn everyday objects–a couch, a doorknob, a glass of water–into interactive devices.
His approach is to use the natural conductivity of objects–or attach electrodes to those that aren’t conductive–and connect them to a controller that responds to different types of signals. A couch, for instance, could be wired to turn on the TV if someone sits on it in a certain spot. Or you could turn off all the lights in your place by twisting the doorknob or tapping on a table. Almost anything with a surface could be connected to a computer, letting you make things happen with simple gestures or touches.
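To give a flavor of the software side of that couch-to-TV idea, here’s a hedged sketch: match a measured signal profile against templates recorded for known touches, then fire the mapped action. The templates, the fake measurement and the actions are all invented for illustration; Harrison’s actual system senses far richer signals:

```python
# Hypothetical sketch of mapping sensor signal profiles to actions.
# Real systems capture richer, swept-frequency signals; the template
# values and actions below are stand-ins for illustration.
import numpy as np

# Signal "templates" recorded ahead of time for known interactions.
TEMPLATES = {
    "sit_left_cushion": np.array([0.9, 0.7, 0.2, 0.1]),
    "sit_right_cushion": np.array([0.2, 0.3, 0.8, 0.9]),
    "tap_doorknob": np.array([0.5, 0.5, 0.5, 0.5]),
}

ACTIONS = {
    "sit_left_cushion": lambda: print("TV on"),
    "sit_right_cushion": lambda: print("Lamp on"),
    "tap_doorknob": lambda: print("All lights off"),
}

def classify(profile):
    """Pick the stored template closest to the measured profile."""
    return min(TEMPLATES,
               key=lambda name: np.linalg.norm(TEMPLATES[name] - profile))

# In a real controller this would be a sensor read; here, a fake measurement.
measured = np.array([0.85, 0.72, 0.25, 0.12])
ACTIONS[classify(measured)]()   # -> "TV on"
```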
3. Finally, a tatt for Grandma: There’s no question that health tech is booming–although that’s not always a good thing, considering that health apps don’t always live up to their hype. But Nanshu Lu, an engineering professor at the University of Texas, has created a product that could have a huge impact on how we monitor what’s going on inside our bodies.
She has refined what are known as “epidermal electronics”–basically, electronic tattoos that can track your vital signs, including your temperature, heartbeat and brain and muscle activity. Lu has managed to develop ultra-thin, water-soluble silicon patches that contain tiny sensors and can actually bond with skin. No adhesives necessary. They last through showers and exercise, never losing their ability to gather your most personal data. The hope is that one day her tattoos will be able to treat diseases.
4. In phones we trust: When you’re out on the road or on vacation in a new place, it can get frustrating to have to search for info on your smart phone. Really, if your phone is so smart, shouldn’t it be able to anticipate your needs and feed you info as you need it, based on where you are and what time of day it is?
That’s the premise behind the mobile apps software developed by Flybits, brainchild of Hossein Rahnama, director of the Digital Media Zone at Toronto’s Ryerson University. Flybits is already being used at several Canadian airports and Toronto’s transit system to coordinate with a traveler’s itinerary and provide information that’s both personalized and contextually relevant, such as directions to the car rental counters or the gate to your connecting flight after you get off a plane.
The company has also developed software it calls Flybits Lite, which lets you know which friends and other contacts are taking in the same concert or watching the same movie you are.
5. Do you really want to know how many times you’ve ordered donuts?: It would be easy to dismiss the Memoto Camera as the epitome of 21st-century self-indulgence. It’s a postage-stamp-sized wearable camera that documents your life by taking two photos every minute, or roughly 2,000 pictures a day.
For most of us that’s one big load of digital tedium. Martin Kallstrom, the man behind the concept and CEO of the Swedish startup Memoto, would acknowledge as much. But he also knows how many memorable moments are missed–“the day your daughter took her first step, or that night you laughed the night away with friends.”
Clearly, he’s not alone in believing that a “lifelogging” camera is an idea whose time has come. He and his partners had hoped to raise $75,000 on Kickstarter. By the time the fundraising campaign ended earlier this month, online backers had pledged more than $550,000.
6. And no, it won’t fetch you a beer: For several years now, Steve Castellotti has been all about brain-powered machines. But his latest innovation, Puzzlebox Orbit, is taking the concept to the public. It’s a little helicopter you control with your mind.
Given that this is not something we do every day, it comes enclosed in a protective sphere so the rotor blades don’t chop up the furniture. It also comes with a device called the Puzzlebox Pyramid, which serves as a combination base/remote control unit for the chopper. But since your mind is doing the controlling, the Pyramid’s role is to wirelessly transmit your brain activity from a headset you wear. It also lets you know how you’re doing–a circle of LED lights on the Pyramid’s face is designed to reflect your level of concentration or relaxation.
Thanks to a funding boost from Kickstarter, Castellotti and his chief engineer and partner Hao Zhang plan to start selling the Puzzlebox Orbit for about $90 next year. But Castellotti believes it won’t become just another pricey toy that ends up in the basement. He sees it as a teaching tool that can be used in schools to introduce kids to neuroscience, and as a way for people to start becoming familiar with the potential of biofeedback.
To spur that process, the company will make its source code and hardware schematics available and encourage developers to hack away. For example, says Castellotti, a “motivated experimenter” might hack the Puzzlebox system so his TV would automatically change channels when his concentration level stays too low for too long. Say so long to vegging out.
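Here’s a rough sketch of what that hack might look like in software: poll a concentration value from the headset and fire an action once it has stayed below a threshold for too long. The read_attention() function and the numbers are invented for illustration; the real Puzzlebox interface may differ:

```python
# Hedged sketch of Castellotti's "motivated experimenter" example: change
# the channel when concentration stays too low for too long. The headset
# interface (read_attention) and thresholds are assumptions, not the real
# Puzzlebox API.
import random
import time

def read_attention():
    """Stand-in for a headset read: returns a concentration level, 0-100."""
    return random.randint(0, 100)

THRESHOLD = 40        # below this counts as "vegging out"
PATIENCE_SECS = 10    # how long low attention is tolerated

low_since = None
while True:           # runs until interrupted (Ctrl-C)
    attention = read_attention()
    if attention >= THRESHOLD:
        low_since = None                  # focus recovered; reset the timer
    elif low_since is None:
        low_since = time.time()           # start the low-attention clock
    elif time.time() - low_since > PATIENCE_SECS:
        print("Changing channel...")      # the experimenter's TV hookup goes here
        low_since = None
    time.sleep(1)
```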
Video bonus: Take a look at Chris Harrison’s most recent project, called Skinput. It involves the use of an armband with bio-acoustic sensors that can turn a body into a touch screen.
Video bonus bonus: And here’s a Puzzlebox Orbit tutorial that was part of the Kickstarter pitch for this nifty brain-controlled toy.
December 14, 2012
There are times when I wonder why so many scientists are spending so much time trying to recreate something as fickle and full of fogginess as the human brain.
But who am I kidding? Those dyspeptic moments inevitably pass, as anyone who’s been following this blog knows. Every few months, it seems, I’m back writing about the latest attempt to build machines that can learn to recognize objects or even develop cognitive skills.
And now there’s Spaun.
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mindbogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about mimicking how the brain works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its 2.5 million simulated neurons–our brains have 100 billion–are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
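To give a sense of what “simulated neurons” means here, below is a sketch of a leaky integrate-and-fire unit, a classic simplified spiking neuron of the kind large-scale brain models are built from. The parameters are illustrative, not Spaun’s:

```python
# A minimal leaky integrate-and-fire neuron, the kind of simplified
# spiking unit large-scale brain models are built from. Parameters are
# illustrative, not Spaun's actual neuron model.
dt = 0.001        # timestep: 1 ms
tau = 0.02        # membrane time constant: 20 ms
v_thresh = 1.0    # spike threshold
v = 0.0           # membrane voltage
spikes = []

for step in range(1000):                 # simulate one second
    current = 1.2                        # constant input current
    v += dt / tau * (current - v)        # integrate input, leak toward rest
    if v >= v_thresh:                    # threshold crossed:
        spikes.append(step * dt)         #   record a spike...
        v = 0.0                          #   ...and reset the voltage

print(f"{len(spikes)} spikes in 1 s")    # the neuron's firing rate
```

Spaun wires millions of such units into populations that encode and transform signals, which is how a stream of pixels from its “eye” can end up steering an arm.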
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.
Here are a few other recent developments in brain research and artificial intelligence:
- I can’t get this song out of your head: Scientists in Berlin wired guitarists with electrodes as they played duets and found that when the players had to closely coordinate their playing, their brain activity became synchronized. But when they weren’t coordinated, when one was leading and the other following, their brain activity was distinctly different.
- One day the brain may actually understand itself: A team of MIT neuroscientists has developed a way to monitor how brain cells coordinate with each other to control specific behaviors, such as telling the body to move. Not only could this help them map brain circuits to see how tasks are carried out, but it also may provide insight into how psychiatric diseases develop.
- Deep thinking is so yesterday: The top prize in a recent competition sponsored by pharmaceutical giant Merck went to a team of researchers from the University of Toronto who used a form of artificial intelligence known as deep learning to help discover molecules that could become new drugs.
- So robots will learn how to stare at smart phones?: To teach robots how to function in social situations, scientists at Carnegie Mellon University are tracking groups of people with head-mounted cameras to see when and where their eyes converge in social settings.
- Unfortunately, they keep trying to hide nuts: By using the deceptive behavior of birds and squirrels as a model, researchers at Georgia Tech have been able to develop robots that can trick each other.
Video bonus: Check out a demo of Spaun in action.
October 15, 2012
The International Association of Chiefs of Police held its convention in San Diego earlier this month, and one of the booths drawing a lot of attention belonged to a California company called AeroVironment, Inc.
It’s in the business of building drones.
One of its models–the Raven–weighs less than five pounds and is the most popular military spy drone in the world. More than 19,000 have been sold. Another of its robot planes–the Switchblade–is seen as the kamikaze drone of the future, one small enough to fit into a soldier’s backpack.
But AeroVironment is zeroing in on a new market–police and fire departments too small to afford their own helicopters, but big enough to have a need for overhead surveillance. So in San Diego, it was showing off yet another model, this one called the Qube.
The camera never blinks
AeroVironment likes to tout the Qube as just what a future-thinking police department needs: a flying machine that fits in the trunk of a cop car–it weighs less than five pounds and is just three feet long–climbs as high as 500 feet and stays airborne as long as 40 minutes.
Outfitted with high-resolution color and thermal cameras that transmit what they see to a screen on the ground, the Qube is being marketed as a moderately priced surveillance tool ($50,000 and up) for keeping fleeing criminals in sight or serving as eyes in the sky for SWAT teams dealing with hostage situations or gunmen they can’t see.
A few police departments have already taken the plunge into what are officially known as Unmanned Aerial Vehicles (UAVs)–big cities like Miami, Houston and Seattle, but also smaller towns, such as North Little Rock, Ark., Ogden, Utah, and Gadsden, Ala. Most used Homeland Security grants to buy their drones, and they all had to be specially authorized by the Federal Aviation Administration (FAA) to fly them.
So far, they haven’t flown them all that much, because the FAA doesn’t yet allow drones to be used in populated areas or near airports, at altitudes above 400 feet, or beyond the view of the operator. But that’s going to change, with the FAA estimating that by the end of the decade at least 15,000 drones will be licensed to operate over the U.S.
I spy a pool party
So how is this going to work? What’s to keep all those unmanned aircraft from hitting planes or helicopters or crashing into buildings? And what’s going to prevent them from spying on private citizens or shooting video of pool parties?
The FAA is wrestling with all that now and, given the need to ensure both safe skies and individual privacy, the agency may have a hard time nailing down regulations by August 2014, the deadline Congress set earlier this year with the goal of opening up public airspace to commercial drones in the fall of 2015.
The feds are already behind schedule in selecting six locations in the U.S. where they’ll test drones to see if they can do what their manufacturers say they can do and, more important, whether they can be kept from flying out of control. Later this month, however, at Fort Sill, Oklahoma, the Department of Homeland Security will start grading different drones on how well they perform when lives are at stake–say, in a hostage situation, a hazardous waste spill or a search and rescue mission.
For a technology still largely seen as a deadly, and controversial, weapon for going after suspected terrorists, it couldn’t hurt to be able to show how a drone can help find a lost kid or save an Alzheimer’s patient wandering through the woods.
Not so private eyes
Still, the idea of police departments or government agencies having access to flying cameras makes a lot of people uneasy. This summer, when a rumor started on Twitter that the EPA was using drones to spy on American farmers, it shot through the blogosphere, was repeated on TV, and then in condemning press releases issued by several congressmen–even though it wasn’t true.
As Benjamin Wittes and John Villasenor pointed out in the Washington Post earlier this year, the FAA isn’t a privacy agency; it’s loaded with aviation lawyers. Yet it will be dealing with some very dicey issues, such as how to define invasion of privacy from public airspace and who can get access to video shot by a drone.
To quote Wittes and Villasenor:
“The potential for abuses on the part of government actors, corporations and even individuals is real — and warrants serious consideration before some set of incidents poisons public attitudes against a field that promises great benefits.”
Judging from a pair of surveys on the subject, the public is already pretty wary. Of those recently surveyed by the Associated Press, about a third said they are “extremely concerned” or “very concerned” about how drones could affect their privacy.
Another national poll, taken this summer by the Monmouth University Polling Institute, found that while 80 percent of the people surveyed like the idea of drones helping with search and rescue missions and 67 percent support using them to track runaway criminals, about 64 percent said they are “very concerned” or “somewhat concerned” about losing their privacy.
And they definitely don’t like the notion of police departments using them to enforce routine laws. Two out of three people surveyed said they hate the idea of drones being used to issue speeding tickets.
When robots fly
Here’s more recent research on flying robots:
- No crash courses: NASA scientists are testing two different computer programs to see if they can help drones sense and then avoid potential mid-air collisions. In theory, an unmanned aircraft would be able to read data about other flying objects and change its speed and heading if it appeared to be on a collision course (see the sketch after this list).
- What goes up doesn’t have to come down: Two recent innovations could dramatically increase the flight time of both giant drones and handheld ones. Lockheed Martin has found a way to recharge its huge Stalker drones wirelessly using lasers, allowing them to stay airborne for as long as 48 hours. And Los Angeles-based Somatis Technologies is working on a process to convert wind pressure and vibrations into energy, which could triple the battery life of hand-launched drones to almost three hours.
- Get your protest souvenir photos here: Russia is stepping up its drone program and will continue to use them to monitor street protests.
- The face is familiar: The Congressional Research Service released a report last month suggesting that law enforcement agencies could, in the near future, outfit drones with facial recognition or biometric software that could “recognize and track individuals based on attributes such as height, age, gender and skin color.”
- Talk to me when it makes honey: Harvard researchers have been working on a tiny robotic bee–not much larger than a quarter–for five years, and now it can not only take off on its own power but also pretty much fly where they want it to go.
- Two blinks to get rid of red eye: Chinese scientists have designed quadcopters that can be controlled by human thought and be told to take a photo by the blink of an eye.
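For the curious, the sense-and-avoid work in the first item above rests on a textbook calculation: project both aircraft along their current velocities and find their closest point of approach. Here’s a sketch, with illustrative numbers and an illustrative separation threshold, not NASA’s actual software:

```python
# Closest-point-of-approach check, a standard building block of
# collision avoidance (a sketch, not NASA's actual software).
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Return (time, distance) of closest approach for two aircraft
    with positions p1, p2 (meters) and velocities v1, v2 (m/s),
    assuming both hold straight-line courses."""
    dp = p2 - p1                      # relative position
    dv = v2 - v1                      # relative velocity
    speed2 = dv @ dv
    t = 0.0 if speed2 == 0 else max(0.0, -(dp @ dv) / speed2)
    miss = np.linalg.norm(dp + dv * t)
    return t, miss

# Two aircraft flying head-on with a small lateral offset.
p1, v1 = np.array([0.0, 0.0, 300.0]), np.array([60.0, 0.0, 0.0])
p2, v2 = np.array([4000.0, 100.0, 310.0]), np.array([-55.0, 0.0, 0.0])
t, miss = closest_approach(p1, v1, p2, v2)
if miss < 150:                        # illustrative separation minimum
    print(f"Conflict in {t:.0f} s, miss distance {miss:.0f} m: change heading")
```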
Video bonus: This promo video by AeroVironment sure makes it feel like the Qube drone could have its own TV series.
September 25, 2012
About a year ago I wrote about the first meeting of the 100 Year Starship Symposium (100YSS), a conference designed to keep scientists focused on what it will take for humans to be able to travel outside our solar system.
Luckily, they still have about a century to figure it out. NASA and DARPA, the research arm of the Defense Department, are behind the project, and the latter has kicked in $500,000 to start wrestling with the ridiculously difficult challenge of traveling trillions of miles in space by 2100.
Last week, at the second 100YSS meeting, there actually was a bit of progress to note. Along with a discussion of how many pairs of underpants would be required to make such a trip and a rendition of the “Star Trek” theme song by Lt. Uhura herself came a report that warp drive might actually be possible–that it would require far less energy than previously thought for a spaceship to travel several times faster than the speed of light.
Good news, but still a long, long way from making real something we used to see happen on TV every week. It reminded me, though, of the iterative and often methodical process of science, and of how the focus on innovation is too often about the potential of new ideas and technology and not enough about how they actually evolve in the real world.
So here are updates on five innovations I’ve written about in the past year. Some are already making their mark; others remain on a low boil.
1) When robots play nice: Robots work great by themselves, but mix them in with humans and it can get a little dicey. Most robots, while amazingly efficient and powerful, can also be dangerous to people nearby because, to put it simply, they don’t know we’re there.
That’s not the case, however, with a new model designed by Boston-based Rethink Robotics. It’s called Baxter and it’s been given the artificial intelligence to slow its motions when it detects a person approaching. And, to alert humans that it’s aware of their presence, its face turns red.
Next month Rethink will start selling Baxter, which can be trained by humans to do different tasks. The goal is to expand the robot market beyond big factories by providing a model that’s safe and relatively inexpensive–Baxter will cost $22,000, a steal by robot standards.
2) Replicator 2! Coming soon to an office near you!: Much has been written about 3-D printing as the future driver of manufacturing. But Bre Pettis, CEO of Brooklyn-based MakerBot Industries, has always believed in the more personal side of 3-D printers. He thinks they belong in people’s homes right next to their PCs.
Since 2009, the company has sold 13,000 of its MakerBot models. But buyers have largely been hobbyists who ordered their printers online. Now the company is taking things up a notch. Last week Pettis unveiled The Replicator 2, a sleek, stylized and more expensive model, one designed to fit right into the suitably applianced home. Also last week, MakerBot opened its first real store, in Manhattan no less.
Ah, but there’s also a bit of a dark side to giving people the power to print objects at home. Last month, a Wisconsin engineer showed readers of his blog the working gun he made.
3) Every picture tells a story. Or three: When it came on the market early this year, the Lytro camera had some people saying it would do for cameras what the iPhone did for cell phones. It made photos interactive, allowing you to change what’s in focus in an image after the fact. Chicago’s Museum of Science and Industry was impressed enough to include a Lytro in its 2012 Smart Home exhibit.
The Lytro still may transform photography, but not this year. Probably not next year, either. For now at least, most people seem perfectly content with the photos they can take on their smart phones, and they aren’t ready to pay $400 for a camera shaped like a stick of butter that allows them to do something with photos they’re not in the habit of doing.
This summer, Lytro founder Ren Ng stepped down as CEO, a move he said would allow him to focus on the company’s vision and not get bogged down in day-to-day operations. This likely has a lot to do with how quickly Lytro, which raised $50 million in private funding, has grown. It still isn’t able to fill online orders immediately–it won’t share sales figures–but Ng says it has reduced the wait time to about a month.
In case you haven’t seen how Lytro photography works, here’s a sampling.
4) Apple has spoken: A lot of attention has already been paid to the new features of the iPhone 5–its bigger screen, 4G speed, longer battery life. But it’s also worth noting something it doesn’t have–a Near-Field Communication (NFC) chip.
That’s what turns a smart phone into a mobile wallet, enabling it to make payments by waving it at checkout devices in stores. There was much speculation that if Apple gave NFC its blessing, it would push the technology mainstream in the U.S.
But Apple balked, in part because not many stores in the U.S. have been willing to upgrade their checkout systems with NFC devices. Customers haven’t exactly been clamoring for them, and besides, if Apple’s not buying in, why bother, say store owners. (Ah, the vicious circle.)
This is not good news for Isis, a partnership of mobile carriers, including Verizon and AT&T, and credit card companies, such as American Express and Capital One. The day after Apple introduced its new smart phone–minus an NFC chip–Isis announced that it was delaying the launch of its NFC mobile payments service.
5. But who’s going to blow the horn?: Since I first wrote about it in July 2011, Google’s driverless car has received big boosts in Nevada, which last spring became the first state to issue license plates to autonomous vehicles, and California, where last month, in an extremely rare case of bipartisanship, Democrats and Republicans joined forces to overwhelmingly pass a self-driving car law. It directs the state’s Department of Motor Vehicles and the California Highway Patrol to develop safety and performance standards for robotic vehicles.
But Google’s just getting warmed up. It’s following up its success in lobbying officials there by pushing similar legislation in Florida, Arizona, Hawaii and Oklahoma. And this is a concept that’s trending: BMW and Audi are known to be working on their own versions, and no less prestigious an organization than the Institute of Electrical and Electronics Engineers (IEEE) recently predicted that by 2040, 75 percent of the vehicles on the road won’t have human drivers.
Still, it’s not all open road ahead. Automakers have raised questions about their liability if they start selling driverless cars–although Google is quick to point out that its fleet of autonomous Priuses has so far logged 300,000 miles without a single accident. And a consumer watchdog group in California fought the driverless car legislation, raising privacy concerns about how all the data gathered by the vehicles would be used. Could you start receiving ads based on where your car drives?
Video bonus: This was probably inevitable. A candidate in Florida has come under fire for his support of driverless cars, and now one of his opponent’s campaign ads features an old lady with a walker nearly run down at a stop sign by, you guessed it, a car without a driver. In case you miss the point, the large type next to her asks: “Will Driverless Cars REALLY Slow for Pedestrians?”