December 4, 2013
On “60 Minutes” the other night, Amazon founder Jeff Bezos made drones fun again. They’re usually associated with clandestine warfare, but Bezos showed interviewer Charlie Rose–along with the millions of others watching–how the unmanned aircraft can be cool little gizmos that become a part of our daily lives–in this case by delivering stuff you ordered from Amazon right to your doorstep.
Bezos used the program to reveal the wonders of Amazon’s “octocopter,” a mini-drone with the capability of achieving the Holy Grail of e-commerce–deliveries within 30 minutes. This is still years away, as Bezos acknowledged, but it’s clear he thinks drones will one day be as ubiquitous as Domino’s drivers.
Bezos’ demo had the desired effect–his octocopter was all over the Internet on Cyber Monday, burnishing Amazon’s reputation as a company gliding along the cutting edge of customer service. Some derided the whole thing as little more than a beautifully orchestrated publicity stunt, given the not insignificant hurdles commercial drones still need to clear. Others, such as The Telegraph in the U.K., piled on: the paper produced a list of nine things that could go “horribly wrong”–from drone hackers to long weather delays to packages falling from the sky.
The truth is, we won’t really know all that can go wrong–or right–with commercial drones until closer to 2020, at least in the U.S. It could happen sooner, but the Federal Aviation Administration (FAA) has been moving slowly and cautiously–not surprising, considering that we’re talking about tens of thousands of pilotless vehicles buzzing around in public airspace. Extensive drone testing at six still-to-be-named locations won’t begin until next year, almost a year and a half behind the schedule set by Congress.
Me, my drone and I
But let’s step back for a minute and forget about messy things like political and legal realities. If Bezos is right, more personal drones are inevitable. Many, no doubt, will be used to make deliveries. (That already appears to be happening in China.) But what else will they be able to do?
Plenty, if you believe some of the ideas that have been floated. And those little flying machines could become a lot more personal than most of us would have imagined.
Consider the possibilities:
1) I’m ready for my selfie: Not long ago, a group of designers from a product strategy firm named frog staged a workshop with the purpose of imagining ways that drones could become a much bigger part of our lives. One idea was an aircraft called the Paparazzi, and, true to its name, it would be all about following you around and recording your life in photos and videos. It would then feed everything directly to your Facebook page. Yes, it sounds ridiculously self-indulgent, but then again, who could have imagined our obsession with self-portraits on phones?
2) Cut to the chase: Here’s another idea from the frog workshop, a drone they named the Guardian Angel. Described as the “ultimate accessory for serious runners,” it would act as a trainer or exercise companion by flying ahead and setting the pace. It could conceivably tap into data from a heart monitor a runner is wearing and push him or her harder to get pulse rate up. Or it could use data from a previous run and let a person race against himself. In short, these drones would be like wearable tech that you don’t actually wear.
3) Take that, Siri: Researchers at M.I.T., meanwhile, have developed a personal drone app they’ve named Skycall, which serves as a personal tour guide. Sure, you can listen to your smartphone give you directions, but this app/drone combo would actually show you the way. It works like this: You tell the app on your phone where you want to go, and it identifies and contacts the nearest unmanned aircraft. The drone shows up, like a flying cab, and leads you to your destination.
4) Allow me to revel in my greatness: A British drone maker has designed one that’s a variation of the Paparazzi mentioned above, although his is geared more to outdoor types, such as mountain bikers, snowboarders and surfers. It tracks a person through a smartphone and, from overhead, takes a steady stream of photos and videos to capture his or her awesomeness for posterity.
5) An idea whose time has already come: Finally, Dan Farber, writing for CNET the other day, raised the prospect of what he called a “Kindle Drone.” He sees it as a device about the size of a baseball, loaded with sensors and a camera, that would serve as a guard and personal assistant. On one hand, it could roam your house gathering data and generally making sure everything’s in order. On the other, you could direct it to go find your phone.
Now that has potential.
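The dispatch step behind the Skycall idea in item 3 is easy to sketch: the app reports your position, a service picks the nearest idle drone, and the drone flies a path for you to follow. Here is a minimal Python sketch–all the names, coordinates and the straight-line flight path are illustrative, since M.I.T. hasn’t published Skycall’s internals:

```python
import math

def nearest_drone(user, drones):
    """Pick the closest available drone to the user's position.
    `user` is an (x, y) point; `drones` maps a drone id to its point.
    Plain Euclidean distance stands in for real geographic distance."""
    return min(drones, key=lambda d: math.dist(user, drones[d]))

def waypoints(start, goal, steps=4):
    """Evenly spaced points the drone flies ahead of the walker."""
    (x0, y0), (x1, y1) = start, goal
    return [(x0 + (x1 - x0) * i / steps, y0 + (y1 - y0) * i / steps)
            for i in range(1, steps + 1)]

# A user at the origin summons a guide to a spot four blocks east.
drone = nearest_drone((0, 0), {"tower-1": (1, 1), "tower-2": (5, 5)})
path = waypoints((0, 0), (4, 0))
```

In this toy setup the service picks “tower-1” and leads the walker with four evenly spaced waypoints; a real system would add airspace rules, battery checks and actual maps.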
Video bonus: Here’s a drone in action in China, delivering a cake from the air.
Video bonus bonus: It’s safe to say this is the only engagement ring delivered by drone.
Video bonus plus: Need to map the Matterhorn? No problem–drones at your service.
More from Smithsonian.com
July 17, 2013
Yes, he’s the founder of Space X, the first commercial venture to send a cargo spacecraft to the International Space Station.
And yes, he’s the co-founder of PayPal and chairman of SolarCity, the largest provider of solar power systems in the U.S.
And yes, he’s head of Tesla Motors, which produced the world’s first all-electric sports car, its first electric luxury car and actually turned a profit in the first quarter of 2013.
But earlier this week Elon Musk did something that made even some of his fans wonder if he’s about to fly a little too close to the sun. Or maybe that he’s spent a little too much time out in the sun.
What Musk did was tweet about an invention he calls the “Hyperloop,” promising that in less than a month, he’ll be revealing more details, including its design.
In case you missed it, Musk first started talking about the Hyperloop last summer, describing it as a “cross between a Concorde, a rail gun and an air hockey table,” and suggesting that a sun-powered tube could whisk vehicles between San Francisco and Los Angeles in half an hour.
He referred to it as the “fifth mode” of transportation, but one that, as he sees it, could leave the other four–planes, trains, boats and cars–in the dust. Here’s what else he told Pando Daily in that interview:
“How would you like something that never crashed, was immune to weather, that goes three or four times as fast as the bullet trains we have now or about twice the speed of an aircraft, that would get you from downtown L.A. to downtown San Francisco in under 30 minutes and it would cost you much less than any other type of transportation.”
A few months later, he would tell Bloomberg News that the Hyperloop would also allow you to leave as soon as you arrive “so there is no waiting for a specific departure time.”
Sounds great. And I assume that you’ll also be able to get giant donuts that turn fat into muscle.
Okay, that’s probably not fair. In truth, Musk’s idea is not all that far-fetched. As Business Insider pointed out recently, it sounds a bit like a 21st century version of a concept pitched by a Rand Corporation physicist named R.M. Salter way back in 1972. He proposed something he called Very High Speed Transit, or VHST, which was essentially an underground tube that could shoot pods from New York to Los Angeles in a little more than 20 minutes.
As Salter saw it, the vehicles would have been driven by electromagnetic waves much as a surfboard rides the ocean’s wave. The VHST would have used all its kinetic energy to accelerate, and that power would be returned when it decelerated, through energy regeneration.
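Salter’s energy argument is easy to put rough numbers on: kinetic energy is ½mv², so nearly all of a trip’s energy goes into getting up to speed, and regenerating most of it on deceleration leaves a small net bill. A back-of-the-envelope Python sketch, with the mass, speed and recovery fraction chosen purely for illustration, not taken from Salter’s paper:

```python
def kinetic_energy_mj(mass_kg, speed_mps):
    """Kinetic energy, KE = 1/2 * m * v^2, reported in megajoules."""
    return 0.5 * mass_kg * speed_mps ** 2 / 1e6

# Illustrative figures only: a 10,000 kg pod cruising at 2,000 m/s
# stores 20,000 MJ of kinetic energy.
trip_energy = kinetic_energy_mj(10_000, 2_000)   # 20000.0 MJ

# If regenerative braking returns 80% of that on deceleration,
# the grid only has to supply the remaining ~20% per trip.
net_energy = trip_energy * (1 - 0.8)             # ~4000 MJ
```

The point of the arithmetic is the ratio, not the absolute numbers: the better the regeneration, the closer each trip comes to paying for itself.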
It’s not clear how the Hyperloop would work–that’s what Musk will share next month. What is known is that a Colorado company named ET3 is working on a system using vacuum-sealed tubes that it says could propel capsules as fast as 4,000 miles per hour, while exposing passengers to the G-forces of an ordinary car ride. It’s been reported that ET3 hopes to have a three-mile test track functioning by the end of the year. But Musk is not known to have any connection to the company.
He promises that he won’t patent the Hyperloop concept, that he wants to keep it open source. Musk says he’s looking for “critical feedback” and that he’d welcome partners–so long as they’re like-minded.
As he tweeted on Monday: “Happy to work with the right partners. Must truly share philosophical goal of breakthrough tech done fast w/o wasting money on BS.”
There’s been news in the other modes of transportation recently, too. Here’s some of the latest.
- You are here: Researchers at the Toyota Technological Institute in Chicago have devised a system that allows cars to know where they are without relying on GPS. By using two cameras and software that determines when and how the road curves, it can nail down a location by comparing the layout of the route and its intersections to a map of the area from OpenStreetMap. The designers claim that in 20 seconds, the system can figure out where you are, even if you’re in a tunnel.
- Siri, I’ve met something new: GM announced recently that some of its new models rolling out later this year will come with their own apps store. Instead of living in a smartphone, these apps would be directly accessible from your car. It’s part of the accelerating trend of turning cars into moving smartphones, with the goal of not just creating another source of revenue for car makers, but also allowing dealers to stay connected to their customers. Among the possibilities: Diagnostic apps that can monitor your car’s condition and send e-mail or text alerts if it needs servicing, Internet radio apps for a more customized selection of music, or news, traffic, and weather apps for real-time information on what’s happening on the road ahead.
- Talk fast, this is my stop: Coming soon to the Prague subway: A car on each train that’s set aside for singles. The idea is to give time-crunched singles a chance to meet up while riding to work or elsewhere. What’s not clear is how they’ll keep married lurkers out.
- Pump it up: A team of Canadian engineers recently conquered one of aviation’s greatest challenges by designing a helicopter of sorts that is powered by a human pumping pedals. For their effort, they won the Sikorsky Prize, a $250,000 challenge that had gone unclaimed since it was first offered by the American Helicopter Society 33 years ago.
- Is it me or did the window just try to sell me a car?: The British online broadcaster Sky Go, along with the German ad agency BBDO Düsseldorf, is planning to use a new technology that would allow windows on buses or trains to send ad messages directly into your brain. It works like this: When a commuter rests his or her head against a window, oscillations beamed into the glass are converted into sound through a process called bone conduction, and he or she will hear the ad message while other passengers remain oblivious.
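The Toyota localization trick in the first item above reduces to sequence matching: record how the road bends as you drive, then find the mapped road whose bend profile fits best. A toy Python sketch–the angle data, road names and scoring are invented for illustration, not Toyota’s actual software:

```python
def route_match_error(observed, candidate):
    """Total mismatch between the turn angles (in degrees) the cameras
    observed and a candidate road's turn angles from the map."""
    return sum(abs(o - c) for o, c in zip(observed, candidate))

def localize(observed, road_map):
    """Return the mapped road whose curvature profile best matches
    what the car just drove. `road_map` maps road name -> angle list."""
    return min(road_map, key=lambda r: route_match_error(observed, road_map[r]))

# Made-up map data: a winding street versus a constant ring-road curve.
roads = {"Main St": [0, 5, -30, 0], "Ring Rd": [10, 10, 10, 10]}
guess = localize([1, 4, -28, 0], roads)   # best fit: "Main St"
```

Because the match depends only on the road’s shape, not on any external signal, the same idea keeps working in a tunnel–which is exactly the claim the designers make.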
Video bonus: No one’s quite sure what Elon Musk’s Hyperloop will be, but the closest thing to it may be the “evacuated tube transport” concept being developed by ET3. Now this is 21st century travel.
Video bonus bonus: It doesn’t look like any helicopter you’ve ever seen, but the Atlas gets airborne through one guy pedaling.
June 7, 2013
Andrea, the first tropical storm of hurricane season, is churning up the East Coast today, and while it’s not expected to do much more than deliver a heavy drenching, it has kicked off the first wave of storm tracking.
Will it hug the coast or drift inland? Will it dump an inch of rain or three? Will it provide us with our first 2013 image of a TV reporter doing unintended slapstick on a beach?
Already we’ve been told that this could be one nasty season, with a prediction from the National Oceanic and Atmospheric Administration (NOAA) of seven to 11 hurricanes, of which three to six could be major–that’s with winds of 111 mph or higher. And hurricane experts at Colorado State University are pretty confident–they put the likelihood at 72 percent–that at least one of those major hurricanes will make landfall somewhere along the Gulf Coast or the Eastern seaboard. Keep in mind that Sandy was not considered a major hurricane when it swept in over New Jersey last fall.
Hurricane forecasting is much more science than crapshoot these days. Computer models have become amazingly accurate, considering how many variables need to be taken into account–temperature, wind speed, humidity, barometric pressure, topography–from many different locations at different times. All told, there can be hundreds of thousands of factors that need to be weighed. And the task is complicated by the fact that we only have about 60 years of good historical data to plug into the models.
Most of the real-time data that gets fed into the computers comes from dropsonde sensors that are dropped into the storms from big, heavy “hurricane hunters,” planes that are essentially flying laboratories. These are impressive machines. They also are quite expensive. One plane costs about $22 million.
Kamran Mohseni thinks there may be a better way to gather storm data. It’s about thinking small.
Mohseni, an engineering professor at the University of Florida, believes the next generation of hurricane hunters will be drones small enough to almost fit into the palm of your hand, but able to engage fierce hurricanes by riding the wind rather than trying to punch through it. Each drone weighs about as much as an iPod Nano–an asset, in his mind. “Our vehicles don’t fight the hurricane,” he says. “We use the hurricane to take us places.”
His take is that instead of relying on a few “super-duper” aircraft, why not use hundreds of little drones that, through their sheer numbers, could make the data that much more accurate–or, as he put it, “You get super duper on an aggregate level.”
Mohseni’s drones, with their sensors, would be launched with commands from a laptop, and then, with the help of mathematical models that predict where the best wind currents can be found, would be able to hitch a ride into the storm. Once there, the drones can be powered up or down as needed, with the goal of taking advantage of the wind’s power to explore the hurricane.
Riding the waves
But Mohseni is not just talking about flying drones. He also has developed underwater vehicles designed to mimic jellyfish as they move through the ocean. He envisions them as a tiny naval fleet working in tandem with a squadron of his flying drones, and that could allow scientists to also gather data from under the sea, which can be particularly difficult to collect.
He realizes, of course, that even though his drones–since they won’t resist the wind–aren’t likely to be blown apart, a lot of them will be lost once they take on a hurricane. But because they’re so small and light, they’re not likely to do much damage if they hit something. And he figures the data gained will be worth the expense.
Each of his drones costs about $250.
Eyes of the storm
Here are other recent developments in weather tech:
- It’s a wind win: The Canadian firm Aeryon Labs has developed an “Unmanned Aerial Vehicle” (UAV) designed to do military reconnaissance in bad weather. It promises that its SkyRanger drone can remain stable in winds of 40 mph, survive gusts of 55 mph and function in temperatures from -22 to 122º Fahrenheit.
- It was a dark and stormy flight: Later this summer NASA will send a pair of large unmanned aircraft loaded with instruments out over the Atlantic to study more closely how hurricanes form and build in intensity. Last fall, the agency used one of these drones, called Global Hawk, but will add another as it expands its focus to wind and rain bands inside hurricanes.
- After all, why shouldn’t clouds be able to get that inner glow: With the goal of seeing how lasers might affect cloud formation, researchers at the Karlsruhe Institute of Technology in Germany found that lasers can actually make a cirrus cloud glow. Unfortunately, lasers aren’t able to do this yet with real clouds; the scientists produced the effect on clouds created in the lab.
- Not to mention, an awesome shield against flying beer: And now, meet the Rainshader, an umbrella that looks more like a motorcycle helmet on a stick. Designed to protect you from rain at sporting events, it promises not to blow inside out, poke people in the eye, or drip on those sitting next to you. And, best of all, because it can be held to sit low on your head, it shouldn’t block anyone else’s view.
Video bonus: Watch Kamran Mohseni’s little hurricane hunters taking flight.
Video bonus bonus: And for old time’s sake, the lighter side of big storms.
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remains largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Center for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Center’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us”.
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s Box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
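The squirrel-style misdirection Arkin’s group has studied can be reduced to a few lines: patrol the real cache sites when alone, but cycle through empty decoy sites whenever a competitor is watching, so both patrols look alike from the outside. A hedged Python sketch–the site names and the sensing step are hypothetical, not Arkin’s code:

```python
def next_stop(caches, decoys, competitor_near, visits):
    """Deceptive patrol in the spirit of the gray squirrel: service the
    real cache sites when alone, but cycle through empty decoy sites
    whenever a competitor is watching, so both patrols look alike."""
    sites = decoys if competitor_near else caches
    return sites[visits % len(sites)]

# Alone, the robot tends its real caches; observed, it visits a decoy.
alone = next_stop(["cache-a", "cache-b"], ["decoy-x"], False, 0)
watched = next_stop(["cache-a", "cache-b"], ["decoy-x"], True, 0)
```

An observer who can’t tell the two modes apart wastes time staking out the decoys–which is precisely how the gray squirrel’s competitors get thrown off the track.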
If you believe New Yorker writer Gary Marcus, ethically advanced machines offer great potential beyond the battlefield.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
“What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
October 18, 2012
A few months ago Google shared with us another challenge it had taken on. It wasn’t as fanciful as a driverless car or as geekily sexy as augmented reality glasses, but in the end, it could be bigger than both. In fact, it likely will make both of them even more dynamic.
What Google did was create a synthetic brain, or at least the part of it that processes visual information. Technically, it built a mechanical version of a neural network, a small army of 16,000 computer processors that, by working together, was actually able to learn.
At the time, most of the attention focused on what all those machines learned, which mainly was how to identify cats on YouTube. That prompted a lot of yucks and cracks about whether the computers wondered why so many of the cats were flushing toilets.
But Google was going down a path that scientists have been exploring for many years, the idea of using computers to mimic the connections and interactions of human brain cells to the point where the machines actually start learning. The difference is that the search behemoth was able to marshal resources and computing power that few companies can.
The face is familiar
For 10 days, non-stop, 1,000 computers–using those 16,000 processors–examined random thumbnail images taken from 10 million different YouTube videos. And because the neural network was so big–it had more than a billion connections–it was able to learn to identify features on its own, without any real human guidance. Through the massive amount of information it absorbed, the network, by recognizing the relationships between data, basically taught itself the concept of a cat.
Impressive. But in the realm of knowledge, is this cause for great jubilation? Well, yes. Because eventually all the machines working together were able to decide which features of cats merited their attention and which patterns mattered, rather than being told by humans which particular shapes to look for. And from the knowledge gained through much repetition, the neural network was able to create its own digital image of a cat’s face.
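The “taught itself” part can be illustrated with a far humbler cousin of Google’s network: plain k-means clustering, which also discovers prototypes from unlabeled data through sheer repetition. A toy Python sketch–Google’s system was a billion-connection deep neural network, not k-means, and the sample points here are invented:

```python
def learn_prototypes(samples, k=2, iters=10):
    """Discover k prototype vectors from unlabeled samples by plain
    k-means: repeatedly assign each sample to its nearest prototype,
    then move each prototype to the mean of its group. No labels,
    no human guidance--structure emerges from repetition alone."""
    protos = [list(s) for s in samples[:k]]   # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for s in samples:
            nearest = min(range(k), key=lambda i: sum(
                (a - b) ** 2 for a, b in zip(s, protos[i])))
            groups[nearest].append(s)
        protos = [[sum(col) / len(g) for col in zip(*g)] if g else protos[i]
                  for i, g in enumerate(groups)]
    return protos

# Two blobs of unlabeled 2-D points; the prototypes settle on the blobs.
protos = learn_prototypes([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.0, 5.2]])
```

Nobody tells the algorithm there are two kinds of points–it notices. Swap the toy vectors for millions of YouTube thumbnails and the prototypes for learned visual features, and you have the flavor of what Google’s network did with cats.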
That’s a big leap forward for artificial intelligence. It’s also likely to have nice payoffs for Google. One of its researchers who worked on the project, an engineer named Jeff Dean, recently told MIT’s Technology Review that now his group is testing computer models that understand images and text together.
“You give it ‘porpoise’ and it gives you pictures of porpoises,” Dean explained. “If you give it a picture of a porpoise, it gives you ‘porpoise’ as a word.”
So Google’s image search could become far less dependent on accompanying text to identify what’s in a photo. And it’s likely to apply the same approach to refining speech recognition by being able to gather extra clues from video.
No question that the ability to use algorithms to absorb and weave together many streams of data, even different types of data, such as sound and images, will help make Google’s driverless car that much more autonomous. Same with Google glasses.
But now a slice of perspective. For all its progress, Google still has a long way to go to measure up to the real thing. Its massive neural network, the one with a billion connections, is, in terms of neurons and synapses, still a million times smaller than the human brain’s visual cortex.
A matter of intelligence
Here are more recent developments in artificial intelligence:
- A bee, or not a bee: A team of British scientists is attempting to create an accurate model of a honeybee’s brain. By reproducing the key systems that make up a bee’s perception, such as vision and scent, the researchers hope to eventually be able to install the artificial bee brain in a small flying robot.
- But does it take the cover into account?: New software called Booksai is using artificial intelligence to give you book recommendations based on the style, tone, mood and genre of things you already know you like to read.
- Do I always look this good?: Scientists at Yale have programmed a robot that can recognize itself in the mirror. In theory, that should make the robot, named Nico, better able to interact with its environment and humans.
- Lost in space no more: Astronomers in Germany have developed an artificial intelligence algorithm to help them chart and explain the structure and dynamics of the universe with amazing accuracy.
- Walk this way: Scientists at MIT have created a wearable intelligent device that creates a real-time map of where you’ve just walked. It’s designed as a tool to help first responders coordinate disaster search and rescue.
Video bonus: In France–where else?–an inventor has created a robot that not only prunes grape vines, but also has the intelligence to memorize the specific needs of each plant. And now it’s learning to pick grapes.