June 7, 2013
Andrea, the first tropical storm of hurricane season is churning up the East Coast today and while it’s not expected to do much more than deliver a heavy drenching, it has kicked off the first wave of storm tracking.
Will it hug the coast or drift inland? Will it dump an inch of rain or three? Will it provide us with our first 2013 image of a TV reporter doing unintended slapstick on a beach?
Already we’ve been told that this could be one nasty season, with a prediction from the National Oceanic and Atmospheric Administration (NOAA) of seven to 11 hurricanes, of which three to six could be major–that’s with winds of 111 mph or higher. And hurricane experts at Colorado State University are pretty confident–they put the likelihood at 72 percent–that at least one of those major hurricanes will make landfall somewhere along the Gulf Coast or the Eastern seaboard. Keep in mind that Sandy was not considered a major hurricane when it swept in over New Jersey last fall.
Hurricane forecasting is much more science than crapshoot these days. Computer models have become amazingly accurate, considering how many variables need to be taken into account–temperature, wind speed, humidity, barometric pressure, topography–from many different locations at different times. All told, there can be hundreds of thousands of factors that need to be weighed. And the task is complicated by the fact that we only have about 60 years of good historical data to plug into the models.
Most of the real-time data that gets fed into the computers comes from dropsonde sensors that are dropped into the storms from big, heavy “hurricane hunters,” planes that are essentially flying laboratories. These are impressive machines. They also are quite expensive. One plane costs about $22 million.
Kamran Mohseni thinks there may be a better way to gather storm data. It’s about thinking small.
Mohseni, an engineering professor at the University of Florida, believes the next generation of hurricane hunters will be drones small enough to almost fit into the palm of your hand, but able to engage fierce hurricanes by riding the wind rather than trying to punch through it. Each drone’s weight–about as much as an iPod Nano’s–is an asset in his mind. “Our vehicles don’t fight the hurricane,” he says. “We use the hurricane to take us places.”
His take: instead of relying on a few “super-duper” aircraft, why not use hundreds of little drones that, through their sheer numbers, could make the data that much more accurate? Or, as he put it, “You get super-duper on an aggregate level.”
Mohseni’s drones, with their sensors, would be launched with commands from a laptop, and then, with the help of mathematical models that predict where the best wind currents can be found, would be able to hitch a ride into the storm. Once there, the drones can be powered up or down as needed, with the goal of taking advantage of the wind’s power to explore the hurricane.
Riding the waves
But Mohseni is not just talking about flying drones. He also has developed underwater vehicles designed to mimic jellyfish as they move through the ocean. He envisions them as a tiny naval fleet working in tandem with a squadron of his flying drones, and that could allow scientists to also gather data from under the sea, which can be particularly difficult to collect.
He realizes, of course, that even though his drones–since they won’t resist the wind–aren’t likely to be blown apart, a lot of them will be lost once they take on a hurricane. But because they’re so small and light, they’re not likely to do much damage if they hit something. And he figures the data gained will be worth the expense.
Each of his drones costs about $250.
Eyes of the storm
Here are other recent developments in weather tech:
- It’s a wind win: The Canadian firm Aeryon Labs has developed an “Unmanned Aerial Vehicle” (UAV) designed to do military reconnaissance in bad weather. It promises that its SkyRanger drone can remain stable in winds of 40 mph, survive gusts of 55 mph and function in temperatures from -22 to 122º Fahrenheit.
- It was a dark and stormy flight: Later this summer NASA will send a pair of large unmanned aircraft loaded with instruments out over the Atlantic to study more closely how hurricanes form and build in intensity. Last fall, the agency used one of these drones, called Global Hawk, but will add another as it expands its focus to wind and rain bands inside hurricanes.
- After all, why shouldn’t clouds be able to get that inner glow: With the goal of seeing how lasers might affect cloud formation, researchers at the Karlsruhe Institute of Technology in Germany found that lasers can actually make a cirrus cloud glow. Unfortunately, lasers aren’t able to do this yet with real clouds; the scientists produced the effect on clouds created in the lab.
- Not to mention, an awesome shield against flying beer: And now, meet the Rainshader, an umbrella that looks more like a motorcycle helmet on a stick. Designed to protect you from rain at sporting events, it promises not to blow inside out, poke people in the eye, or drip on those sitting next to you. And, best of all, because it can be held to sit low on your head, it shouldn’t block anyone else’s view.
Video bonus: Watch Kamran Mohseni’s little hurricane hunters taking flight.
Video bonus bonus: And for old time’s sake, the lighter side of big storms.
More from Smithsonian.com
May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she is quick, it hardly qualifies as playful bantering. You ask questions, she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue so that they could gauge our feelings as well as–maybe better than–our best friends? And then they could respond, not with information, but with what might pass for empathy.
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
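The hold-off-before-you-send idea above can be made concrete with a toy heuristic. This is a hypothetical sketch, not any real product’s model: the feature names, thresholds, and weights are all invented, purely to show how typing speed, key force, and punctuation might be combined into a rough “agitation” score.

```python
# Hypothetical sketch: score a message's "agitation" from typing features.
# All thresholds and weights below are invented for illustration.

def agitation_score(chars_per_sec, avg_key_force, exclamations, all_caps_words):
    """Combine simple typing features into a 0-1 agitation estimate."""
    score = 0.0
    if chars_per_sec > 8:          # unusually fast typing
        score += 0.3
    if avg_key_force > 0.7:        # hard key presses (normalized 0-1)
        score += 0.3
    score += min(exclamations * 0.1, 0.2)
    score += min(all_caps_words * 0.1, 0.2)
    return min(score, 1.0)

def should_warn_before_send(message, chars_per_sec, avg_key_force):
    """Suggest holding off on the send button when agitation runs high."""
    exclaims = message.count("!")
    caps = sum(1 for w in message.split() if len(w) > 2 and w.isupper())
    return agitation_score(chars_per_sec, avg_key_force, exclaims, caps) >= 0.6

print(should_warn_before_send("FINE. DO IT YOURSELF!!!", 10.2, 0.9))   # True
print(should_warn_before_send("sounds good, see you at 7", 4.0, 0.3))  # False
```

A real system would presumably learn these weights from data rather than hand-code them, but the shape of the signal–fast, forceful, emphatic typing–is the same one described above.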
Seeing how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of elements of human faces. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
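The association step described above–combinations of facial elements mapping to emotions–can be illustrated with a toy lookup. To be clear, this mapping is an invented stand-in, not Affectiva’s actual database or algorithm; the cue names loosely echo facial-action coding, but the table and matching rule are assumptions for illustration only.

```python
# Toy illustration (not Affdex): map combinations of facial cues to emotions.
# The cue-combination table is an invented stand-in for a real database.
EMOTIONS = {
    frozenset({"brow_raise", "jaw_drop"}): "surprise",
    frozenset({"lip_corner_pull", "cheek_raise"}): "joy",
    frozenset({"brow_furrow", "lip_press"}): "anger",
    frozenset({"nose_wrinkle", "upper_lip_raise"}): "disgust",
}

def classify(cues):
    """Return the emotion whose cue combination best overlaps the observed cues."""
    best, best_overlap = "neutral", 0
    for combo, emotion in EMOTIONS.items():
        overlap = len(combo & cues)
        if overlap > best_overlap:
            best, best_overlap = emotion, overlap
    return best

print(classify({"lip_corner_pull", "cheek_raise"}))  # joy
print(classify({"lip_press"}))                       # anger
```

The real software works frame by frame over video, which is what lets it report reactions second by second rather than as a single verdict.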
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed to help autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
It also is able to see that while people say one thing, their faces can say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kind of programs we like if they’re able to develop a memory bank of our facial expressions. And politicians would be able to get real-time reactions to each line they utter during a debate and be able to adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
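The heart-rate trick el Kaliouby describes rests on a simple signal-processing idea: skin brightness pulses faintly with blood flow, so the heart rate shows up as the dominant frequency of that brightness over time. The sketch below, a hedged illustration rather than anyone’s actual product, simulates that signal with a synthetic sine wave; a real system would average pixel values from face video instead.

```python
# Minimal sketch of webcam heart-rate estimation: find the dominant
# frequency of a (simulated) facial-brightness signal. A real system
# would extract the signal from video of the face.
import numpy as np

def estimate_bpm(signal, fps):
    """Return the dominant frequency in the 0.7-3.0 Hz (42-180 bpm) band."""
    signal = signal - np.mean(signal)            # remove the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)       # plausible heart-rate range
    peak = freqs[band][np.argmax(power[band])]
    return peak * 60.0

# Simulate 10 seconds of 30 fps brightness data with a 1.2 Hz (72 bpm) pulse.
fps, seconds = 30, 10
rng = np.random.default_rng(0)
t = np.arange(fps * seconds) / fps
brightness = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t) \
             + 0.1 * rng.standard_normal(fps * seconds)
print(round(estimate_bpm(brightness, fps)))  # 72
```

Restricting the search to a plausible heart-rate band is what keeps slow lighting drift or fast camera noise from being mistaken for a pulse.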
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to have a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone? : Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind.
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert, using motion sensors and voice analysis, at interpreting a patient’s every gesture and voice inflection during a therapy session.
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
April 15, 2013
Last fall, shoppers outside a Macy’s store in Boston were given a chance to test drive a robot. They were invited, compliments of Brigham and Women’s Hospital, to sit at a console and move the machine’s arm the same way surgeons would in an operating room.
And why not? What says cutting-edge medicine more than robotic surgery? Who wouldn’t be impressed with a hospital where robot arms, with all their precision, replace surgeons’ hands?
The surgeons, of course, control the robots on computers where everything is magnified in 3D, but the actual cutting is done by machines. And that means smaller incisions, fewer complications and faster recoveries.
But earlier this year, the Food and Drug Administration (FDA) began surveying doctors who use the operating room robots known as the da Vinci Surgical System. The investigation was sparked by a jump in incidents involving da Vinci robots, up to 500 in 2012.
The California company that makes the da Vinci, Intuitive Surgical, says the spike has to do with a change in how incidents are reported, as opposed to problems with its robots. It’s also true that robot surgery is being done a lot more frequently–almost 370,000 procedures were done in the U.S. last year, which is three and a half times as many as in 2008.
And the procedures are getting more complicated. At first, the robots were used primarily for prostate surgeries, then for hysterectomies. Now they’re removing gall bladders, repairing heart valves, shrinking stomachs during weight loss surgery, even handling organ transplants.
Not surprisingly, the FDA survey has stirred up a swirl of questions about machine medicine. Have hospitals, in their need to justify the expense of a $1.5 million robot, ratcheted up their use unnecessarily? Has Intuitive Surgical placed enough emphasis on doctors getting supervised training on the machines? And how much training is enough?
It’s not an uncommon scenario for technological innovation. A new product gets marketed aggressively to companies–in this case hospitals–and they respond enthusiastically, at least in part because they don’t want to miss out on the next big thing.
But is newer always better? A study published recently in The Journal of the American Medical Association compared outcomes in 264,758 women who had either laparoscopic or robotically assisted hysterectomies at 441 different hospitals between 2007 and 2010. Both are minimally invasive methods.
But the researchers found no overall difference in complication rates between the two methods, and no difference in the rates of blood transfusion. The only big difference between the two is the cost–the robotic surgery costs one-third more than laparoscopic surgery.
Then there’s the matter of loosening training standards. When the FDA allowed the da Vinci system to be sold back in 2000, it was under a process called “premarket notification.” By claiming that new devices are similar to others already on the market, manufacturers can be exempted from rigorous trials and tough requirements. In this case, Intuitive Surgical was not formally required to offer training programs for surgeons.
The company did tell the FDA that it planned to require a 70-item exam and a three-day training session for doctors. But, as a recent New York Times article noted, Intuitive changed its policy just two years later. Instead it required surgeons to pass a 10-question online quiz and spend only a day in hands-on training.
So ultimately it’s up to the hospitals to set training standards. But in their rush to embrace the future, they can be tempted to avoid being too demanding. In one 2008 case that has resulted in a lawsuit against Intuitive, a patient suffered serious complications, including impotence and incontinence, while having his prostate gland removed. The surgeon, it turned out, had never done robotic surgery without supervision before.
A researcher at Johns Hopkins Hospital, Dr. Martin Makary, who has previously criticized hospitals for overhyping robotic surgery on their websites, has another study coming out soon that suggests that the problems involving da Vinci robots are underreported. “The rapid adoption of robotic surgery,” he contends, “has been done, by and large, without the proper evaluation.”
Dr. David Samadi, Chief of Robotics and Minimally Invasive Surgery at the Mount Sinai School of Medicine in New York, has a different way of looking at robotic surgery: “A good driver in a Lamborghini is going to win NASCAR. But someone who’s not a good driver in a Lamborghini…he’s going to flip the car and maybe kill himself.”
Here are some other ways robots are being used in hospitals:
- Down go the mean old germs: Doctors at Johns Hopkins Hospital in Baltimore have turned to robots to take on the superbugs that threaten to spread dangerous infections among patients. After a hospital room is sealed, the robots spend the next half hour spraying a mist of hydrogen peroxide over every surface. Other hospitals are taking a different approach in dealing with nasty bacteria–they’re using robots that zap germs with beams of ultraviolet light.
- And you’ll be able to see your face in the scalpel: GE is developing a robot that will keep the tools of the operating room sterile and organized. Instead of relying on humans doing this by hand–clearly not the most efficient process–the robot, by recognizing unique coding on each piece of equipment, will be able to sort scalpels from clamps from scissors, sterilize them and then deliver everything to the operating room.
- Bedside manner, without the bedside part: Earlier this year the FDA approved a medical robot called RP-VITA, which was developed by iRobot and InTouch Health. The machine moves around the hospital to rooms of patients identified by the doctor. Once in a room, it connects the doctor to the patient or hospital staff through the robot’s video screen.
- The buddy system: Researchers at Columbia University found that the pain ratings of hospitalized children dropped significantly when they interacted with “therapeutic robot companions.”
Video bonus: When da Vinci is good, it’s very, very good. Here’s a video of a surgeon using one to peel a grape.
Video bonus bonus: Okay, admittedly this has nothing to do with robotic surgery, but it’s the hottest robot video on the Web right now–an impressive, yet somewhat creepy demo of Boston Dynamics’ “Petman” in camo gear.
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remains largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
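The concept behind an “ethical governor” can be sketched as a layer of software that vets a proposed engagement against encoded constraints before allowing it. The rules and parameter names below are invented stand-ins for illustration; they are not Arkin’s actual constraint set, which is far more elaborate.

```python
# A hedged, rule-based sketch of the "ethical governor" concept: check a
# proposed engagement against encoded constraints before permitting fire.
# The specific rules below are invented for illustration, not Arkin's code.

def permit_engagement(target_is_military, civilians_present,
                      expected_collateral, max_acceptable_collateral):
    """Return True only if the proposed strike passes every constraint."""
    if not target_is_military:
        return False   # discrimination: only lawful military targets
    if civilians_present and expected_collateral > max_acceptable_collateral:
        return False   # proportionality: collateral harm must stay bounded
    return True

# The governor withholds fire on a valid target when civilian risk is too high.
print(permit_engagement(True, civilians_present=True,
                        expected_collateral=3,
                        max_acceptable_collateral=1))   # False
print(permit_engagement(True, civilians_present=False,
                        expected_collateral=0,
                        max_acceptable_collateral=1))   # True
```

The point of the architecture is that the checks run before the action, so a machine can refuse an order its constraints rule out–which is exactly the drone-and-civilians question posed above.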
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Center for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Center’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us”.
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
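The squirrel strategy reduces to a simple behavioral rule: tend the real cache when alone, patrol the decoys when watched. The simulation below is an illustrative sketch under that assumption, not the researchers’ actual robot code; an observer who guesses the cache from visit counts is reliably fooled.

```python
# Illustrative sketch (not the actual research code) of the squirrel-inspired
# decoy strategy: when a competitor is watching, visit fake cache sites so an
# observer inferring the cache from visit counts guesses wrong.
import random

def patrol(real_cache, decoys, competitor_watching, steps, rng):
    """Return the sequence of sites visited under the decoy rule."""
    visits = []
    for _ in range(steps):
        if competitor_watching:
            visits.append(rng.choice(decoys))   # misdirect the observer
        else:
            visits.append(real_cache)           # tend the real cache
    return visits

rng = random.Random(42)
log = patrol(real_cache="A", decoys=["B", "C"],
             competitor_watching=True, steps=20, rng=rng)

# The observer's best guess is the most-visited site -- never the real one.
guess = max(set(log), key=log.count)
print(guess in ("B", "C"), "A" not in log)  # True True
```

Slowing a competitor down is the whole payoff: every trip the observer makes to a decoy site is time not spent finding the real cache.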
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s Box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
To believe New Yorker writer Gary Marcus, ethically advanced machines offer great potential beyond the battlefield.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
January 23, 2013
During his inaugural speech Monday, President Barack Obama uttered a phrase that during last year’s presidential campaign was The-Words-That-Shall-Not-Be-Spoken.
He mentioned climate change.
In fact, President Obama didn’t just mention it, he declared that a failure to deal with climate change “would betray our children and future generations.”
But ask any Washington pundit if Congress will do anything meaningful on the subject and they’ll tell you that that’s as likely as D.C. freezing over in July.
Also this week, as it turns out, a study was released outlining the latest geoengineering idea for saving the planet in the event of an unstoppable downward spiral of the Earth’s climate.
This one would involve dumping billions of tons of olivine dust into the oceans, a process that, in theory at least, could significantly reduce carbon dioxide levels and also slow the increasing acidification of the oceans.
But there’s a catch. Actually, there are many. For starters, the German scientists who did the study estimate that it would require an undertaking as large as the entire world’s coal industry to mine enough olivine, and then it would take at least 100 large ships working 24/7 for a year to spread enough of the mineral dust around to have an impact. Plus, all that olivine dust would undoubtedly change the biology of the oceans in ways no one can really predict.
Back to nature
Okay, back to reality. The only response to climate change that’s truly moving forward is what’s known as adaptation. Or, put more simply, preparing for the worst.
It’s not likely that there will be another Hurricane Sandy this year. Maybe not next year either. But no one running a city, particularly along a coastline, can dare to think that the next devastating superstorm won’t come along for another 50 years.
So their focus is on minimizing the damage when it does hit. And, perhaps not surprisingly, they’re increasingly looking to nature’s resiliency to help them deal with nature’s wrath.
Case in point: One proposal to reduce future flooding of Lower Manhattan is built around the idea of converting part of that section of the city into wetlands and salt marshes. That’s right, the concrete jungle, or at least the lower end of it, would get very squishy.
As architect Stephen Cassell envisions the transformation, the edge of low-lying neighborhoods, such as Battery Park, would become a patchwork of parks and marshes that could sop up future storm surges. And on the more vulnerable streets, asphalt would be replaced with porous concrete that could soak up excess water like a bed of sponges.
It’s just one of several ideas that have been floated, but its mimicking of natural wetlands has a simple, rugged appeal. As Cassell told the New York Times:
“We weren’t fully going back to nature with our plan. We thought of it more as engineered ecology. But if you look at the history of Manhattan, we have pushed nature off the island and replaced it with man-made infrastructure. What we can do is start to reintegrate things and make the city more durable.”
Know your roots
But that’s almost prosaic compared to Skygrove, the concept for a skyscraper inspired by the mangrove tree. Mangroves, which often grow in swamps or along rivers, are known for their gnarly network of roots that keep their trunks above the water.
Architects at the New York firm of HWKN copied that model for a building that could sit above rising water. Instead of having a single foundation, the Skygrove would rest on a base of “roots” extending outward like fingers spread under the water.
Each root of the building–which is meant to be a vertical office park for the City of New York–would be independent of the others and self-sufficient, able to provide its own energy. And each would be designed to survive whatever extreme weather may come its way.
To believe the designers, the Skygrove is a model for the kinds of buildings we may see more often in what they call the “newly nebulous coastal zone.”
It’s nature’s way
Here are other new inventions based on mimicking nature:
- But do not try this on trees: A London industrial designer has created a super-strong bicycle helmet by modeling it after the heads of woodpeckers.
- No word yet on how it may affect human mating: A team of researchers has found that LED lights that copy the structure of a firefly’s “lantern” are 55 percent brighter.
- Okay, let’s clear the air: A Copenhagen chemist has invented an air-cleaning device that mimics the process through which the Earth’s atmosphere cleans itself. In response to sunlight, polluting gases rising into the sky form particles when they come across compounds such as ozone. And those newly formed particles are washed out of the atmosphere by rain. The invention, which removes industrial pollutants from the air, is now being tested at a Danish plant.
- But do they ever tell dogs “You’ll just feel a little stick?”: One day we could have less painful hypodermic needles thanks to a group of scientists who studied porcupine quills. They determined that the backwards-facing barbs on a quill help it enter skin easily and then stay in place. The researchers learned this by measuring how much force it took to push in and pull out porcupine quills jabbed into pig skin and raw chicken meat.
- Mussels and bodybuilding: A team of researchers from Penn State and the University of Texas, Arlington believe that a version of the powerful adhesive that allows mussels to stick stubbornly to underwater surfaces can be used in operating rooms to close and heal wounds.
Video bonus: An idea whose time, sadly, has come: robot cockroaches. It will creep you out.