May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she’s quick, it hardly qualifies as playful banter. You ask questions; she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture and facial cue, gauging our feelings as well as–maybe better than–our best friends do? And what if they then responded, not with information, but with what might pass for empathy?
We’re not there yet, but we’re moving quickly in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. That might mean capturing your face on camera and applying algorithms to every aspect of your expression, trying to make sense of each smirk and chin rub. Or it might mean reading your level of annoyance or pleasure by tracking how fast, and how hard, you tap out a text, or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting you hold off on pressing send.
Given how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But the field is picking up speed as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records people’s expressions and then, using proprietary algorithms, scrutinizes their facial cues, tapping into a database of almost 300 million frames of human facial elements. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
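Affectiva’s actual models are proprietary, but the underlying idea–score each candidate emotion from the combination of facial cues detected in a frame–can be sketched in a few lines of Python. Every cue name, weight and label below is invented for illustration:

```python
# Toy sketch of facial-cue classification. Real systems like Affdex are
# proprietary; the cues, weights and emotions here are made up.

# Hypothetical detector output: how strongly each facial cue appears
# in one video frame, on a 0.0-1.0 scale.
frame = {"lip_corner_pull": 0.8, "brow_furrow": 0.1, "nose_wrinkle": 0.0}

# Invented associations between cue combinations and emotions.
EMOTION_WEIGHTS = {
    "joy":     {"lip_corner_pull": 1.0, "brow_furrow": -0.5},
    "anger":   {"brow_furrow": 1.0, "lip_corner_pull": -0.3},
    "disgust": {"nose_wrinkle": 1.0},
}

def score_emotions(frame):
    """Score each emotion as a weighted sum of the detected facial cues."""
    return {
        emotion: sum(frame.get(cue, 0.0) * w for cue, w in weights.items())
        for emotion, weights in EMOTION_WEIGHTS.items()
    }

scores = score_emotions(frame)
print(max(scores, key=scores.get))  # -> "joy"
```

A real system would run this sort of scoring on every frame of video, which is what makes the second-by-second breakdowns described below possible.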
The software, known as Affdex, was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, originally to help autistic children communicate better. But it clearly had loads of potential in the business world, so M.I.T. spun the project off into a private company, which has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
It can also detect when people say one thing while their faces say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterward, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kinds of programs we like if they were able to develop a memory bank of our facial expressions. And politicians could get real-time reactions to each line they utter during a debate and adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
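That webcam trick has a name in the research literature–remote photoplethysmography: each heartbeat subtly changes how much light facial skin reflects, and the dominant frequency of that flicker is the pulse. Here is a bare-bones sketch of the idea, assuming the face region has already been cropped out of each video frame:

```python
# Minimal sketch of webcam pulse estimation (remote photoplethysmography).
# Assumes `face_frames` is a sequence of HxWx3 RGB arrays of the face.
import numpy as np

def estimate_bpm(face_frames, fps=30.0):
    # Average green-channel brightness of the face in each frame; the
    # green channel is where blood-volume changes show up most strongly.
    signal = np.array([frame[:, :, 1].mean() for frame in face_frames])
    signal = signal - signal.mean()            # drop the constant offset
    spectrum = np.abs(np.fft.rfft(signal))     # strength of each frequency
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Only consider plausible human heart rates: 40 to 180 beats a minute.
    band = (freqs >= 40 / 60.0) & (freqs <= 180 / 60.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                         # Hz -> beats per minute
```

A production system would also need face tracking, motion compensation and filtering, which is where most of the real engineering lives.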
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts, to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” giving users a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone?: Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes and how many emoticons you use, the phone should be able to determine whether you’re angry, surprised, happy, sad, fearful or disgusted. And based on the conclusion it draws, it could attach to your tweet the appropriate emoticon to tip off your followers to your state of mind. (A rough sketch of this kind of feature-based mood inference appears after this list.)
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical changes in the skin, while the smartphone tracks the person’s location and takes several photos a minute. At the end of the day, the user can review their experiences alongside all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but, using motion sensors and voice analysis, is also expert at interpreting a patient’s every gesture and voice inflection during a therapy session.
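Samsung hasn’t published how its typing research works, but the recipe described in the typing item above–turn behavior into a handful of numbers, then find the mood those numbers most resemble–can be sketched quickly. The features, example values and mood profiles here are all made up:

```python
# Sketch of feature-based mood inference from typing behavior, in the
# spirit of the Samsung research above. All numbers are invented.
import math

# Feature order: taps per second, phone shake (0-1),
# backspaces per 100 characters, emoticons per message.
MOOD_PROFILES = {
    "angry": (7.0, 0.8, 12.0, 0.1),
    "happy": (5.0, 0.3, 4.0, 2.0),
    "sad":   (2.0, 0.1, 6.0, 0.2),
}

def classify_mood(features):
    """Return the mood whose typical feature profile is closest."""
    return min(MOOD_PROFILES,
               key=lambda mood: math.dist(features, MOOD_PROFILES[mood]))

print(classify_mood((6.8, 0.7, 11.0, 0.0)))  # -> "angry"
```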
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
April 15, 2013
Last fall, shoppers outside a Macy’s store in Boston were given a chance to test drive a robot. They were invited, compliments of Brigham and Women’s Hospital, to sit at a console and move the machine’s arm the same way surgeons would in an operating room.
And why not? What says cutting-edge medicine more than robotic surgery? Who wouldn’t be impressed with a hospital where robot arms, with all their precision, replace surgeons’ hands?
The surgeons, of course, control the robots from consoles where everything is magnified in 3D, but the actual cutting is done by machines. And that means smaller incisions, fewer complications and faster recoveries.
But earlier this year, the Food and Drug Administration (FDA) began surveying doctors who use the operating room robots known as the da Vinci Surgical System. The investigation was sparked by a jump in incidents involving da Vinci robots, up to 500 in 2012.
The California company that makes the da Vinci, Intuitive Surgical, says the spike has to do with a change in how incidents are reported, as opposed to problems with its robots. It’s also true that robot surgery is being done a lot more frequently–almost 370,000 procedures were performed in the U.S. last year, three and a half times as many as in 2008.
And the procedures are getting more complicated. At first, the robots were used primarily for prostate surgeries, then for hysterectomies. Now they’re removing gall bladders, repairing heart valves, shrinking stomachs during weight loss surgery, even handling organ transplants.
Not surprisingly, the FDA survey has stirred up a swirl of questions about machine medicine. Have hospitals, in their need to justify the expense of a $1.5 million robot, ratcheted up its use unnecessarily? Has Intuitive Surgical placed enough emphasis on doctors getting supervised training on the machines? And how much training is enough?
It’s not an uncommon scenario for technological innovation. A new product gets marketed aggressively to companies–in this case hospitals–and they respond enthusiastically, at least in part because they don’t want to miss out on the next big thing.
But is newer always better? A study published recently in The Journal of the American Medical Association compared outcomes in 264,758 women who had either laparoscopic or robotically assisted hysterectomies at 441 different hospitals between 2007 and 2010. Both methods are minimally invasive.
The researchers found no overall difference in complication rates between the two methods, and no difference in the rates of blood transfusion. The only big difference was cost–robotic surgery runs about one-third more than laparoscopic surgery.
Then there’s the matter of loosening training standards. When the FDA allowed the da Vinci system to be sold back in 2000, it was under a process called “premarket notification.” By claiming that new devices are similar to others already on the market, manufacturers can be exempted from rigorous trials and tough requirements. In this case, Intuitive Surgical was not formally required to offer training programs for surgeons.
The company did tell the FDA that it planned to require a 70-item exam and a three-day training session for doctors. But, as a recent New York Times article noted, Intuitive changed its policy just two years later. Instead it required surgeons to pass a 10-question online quiz and spend only a day in hands-on training.
So ultimately it’s up to the hospitals to set training standards. But in their rush to embrace the future, they can be tempted to avoid being too demanding. In one 2008 case that has resulted in a lawsuit against Intuitive, a patient suffered serious complications, including impotence and incontinence, while having his prostate gland removed. The surgeon, it turned out, had never done robotic surgery without supervision before.
A researcher at Johns Hopkins Hospital, Dr. Martin Makary, who has previously criticized hospitals for overhyping robotic surgery on their websites, has another study coming out soon that suggests that the problems involving da Vinci robots are underreported. “The rapid adoption of robotic surgery,” he contends, “has been done, by and large, without the proper evaluation.”
Dr. David Samadi, Chief of Robotics and Minimally Invasive Surgery at the Mount Sinai School of Medicine in New York, has a different way of looking at robotic surgery: “A good driver in a Lamborghini is going to win NASCAR. But someone who’s not a good driver in a Lamborghini…he’s going to flip the car and maybe kill himself.”
Here are some other ways robots are being used in hospitals:
- Down go the mean old germs: Doctors at Johns Hopkins Hospital in Baltimore have turned to robots to take on the superbugs that threaten to spread dangerous infections among patients. After a hospital room is sealed, the robots spend the next half hour spraying a mist of hydrogen peroxide over every surface. Other hospitals are taking a different approach to nasty bacteria–they’re using robots that zap germs with beams of ultraviolet light.
- And you’ll be able to see your face in the scalpel: GE is developing a robot that will keep the tools of the operating room sterile and organized. Instead of relying on humans doing this by hand–clearly not the most efficient process–the robot, by recognizing unique coding on each piece of equipment, will be able to sort scalpels from clamps from scissors, sterilize them and then deliver everything to the operating room.
- Bedside manner, without the bedside part: Earlier this year the FDA approved a medical robot called RP-VITA, which was developed by iRobot and InTouch Health. The machine moves around the hospital to rooms of patients identified by the doctor. Once in a room, it connects the doctor to the patient or hospital staff through the robot’s video screen.
- The buddy system: Researchers at Columbia University found that the pain ratings of hospitalized children dropped significantly when they interacted with “therapeutic robot companions.”
Video bonus: When da Vinci is good, it’s very, very good. Here’s a video of a surgeon using one to peel a grape.
Video bonus bonus: Okay, admittedly this has nothing to do with robotic surgery, but it’s the hottest robot video on the Web right now–an impressive, yet somewhat creepy demo of Boston Dynamics’ “Petman” in camo gear.
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question overshadowed all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remains largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It won’t happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
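Arkin’s published architecture is far more elaborate than anything that fits here, but the core idea of an ethical governor–a layer that can veto a proposed action unless every explicit constraint is satisfied–can be caricatured in a few lines. All of the rules and fields below are invented stand-ins, not Arkin’s actual system:

```python
# Caricature of an "ethical governor": a veto layer that checks a
# proposed engagement against explicit constraints before allowing it.
from dataclasses import dataclass

@dataclass
class Situation:
    target_confirmed_hostile: bool
    civilians_in_blast_radius: int
    protected_site_nearby: bool   # e.g., a hospital or school

# Every constraint must hold for the action to be permitted.
CONSTRAINTS = [
    lambda s: s.target_confirmed_hostile,
    lambda s: s.civilians_in_blast_radius == 0,
    lambda s: not s.protected_site_nearby,
]

def governor_permits_fire(situation: Situation) -> bool:
    """Permit engagement only if every constraint is satisfied."""
    return all(rule(situation) for rule in CONSTRAINTS)

print(governor_permits_fire(Situation(True, 2, False)))  # False: civilians present
```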
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Center for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than us.
The danger, says Huw Price, one of the Center’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us”.
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
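That decoy strategy is simple enough to simulate. In this toy sketch (the sites, probabilities and counts are all invented), a patrolling robot sometimes visits empty locations, so an observer counting its stops can’t tell which sites actually matter:

```python
# Toy simulation of squirrel-style deception: a robot guarding caches
# occasionally visits empty decoy sites to mislead anyone watching.
import random

REAL_CACHES = ["site_A", "site_B"]
DECOY_SITES = ["site_C", "site_D", "site_E"]

def next_patrol_stop(deception_rate=0.5):
    """With some probability, visit a decoy instead of a real cache."""
    if random.random() < deception_rate:
        return random.choice(DECOY_SITES)
    return random.choice(REAL_CACHES)

# An observer tallying 1,000 stops sees decoys about as often as caches.
visits = [next_patrol_stop() for _ in range(1000)]
print({site: visits.count(site) for site in REAL_CACHES + DECOY_SITES})
```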
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
To hear New Yorker writer Gary Marcus tell it, ethically advanced machines offer great potential beyond the battlefield.
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
January 23, 2013
During his inaugural speech Monday, President Barack Obama uttered words that during last year’s presidential campaign were The-Words-That-Shall-Not-Be-Spoken.
He mentioned climate change.
In fact, President Obama didn’t just mention it, he declared that a failure to deal with climate change “would betray our children and future generations.”
But ask any Washington pundit if Congress will do anything meaningful on the subject and they’ll tell you that that’s as likely as D.C. freezing over in July.
Also this week, as it turns out, a study was released outlining the latest geoengineering idea for saving the planet in the event of an unstoppable downward spiral of the Earth’s climate.
This one would involve dumping billions of tons of dust made from the mineral olivine into the oceans, a process that, in theory at least, could significantly reduce carbon dioxide levels and also slow the increasing acidification of the oceans.
But there’s a catch. Actually, there are many. For starters, the German scientists who did the study estimate that it would require an undertaking as large as the entire world’s coal industry to mine enough olivine, and then it would take at least 100 large ships working 24/7 for a year to spread enough of the mineral dust around to have an impact. Plus, all that olivine dust would undoubtedly change the biology of the oceans in ways no one can really predict.
Back to nature
Okay, back to reality. The only response to climate change that’s truly moving forward is what’s known as adaptation. Or, put more simply, preparing for the worst.
It’s not likely that there will be another Hurricane Sandy this year. Maybe not next year either. But no one running a city, particularly along a coastline, can dare to think that the next devastating superstorm won’t come along for another 50 years.
So their focus is on minimizing the damage when it does hit. And, perhaps not surprisingly, they’re increasingly looking to nature’s resiliency to help them deal with nature’s wrath.
Case in point: One proposal to reduce future flooding of Lower Manhattan is built around the idea of converting part of that section of the city into wetlands and salt marshes. That’s right, the concrete jungle, or at least the lower end of it, would get very squishy.
As architect Stephen Cassell envisions the transformation, the edge of low-lying neighborhoods, such as Battery Park, would become a patchwork of parks and marshes that could sop up future storm surges. And on the more vulnerable streets, asphalt would be replaced with porous concrete that could soak up excess water like a bed of sponges.
It’s just one of several ideas that have been floated, but its mimicking of natural wetlands has a simple, rugged appeal. As Cassell told the New York Times:
“We weren’t fully going back to nature with our plan. We thought of it more as engineered ecology. But if you look at the history of Manhattan, we have pushed nature off the island and replaced it with man-made infrastructure. What we can do is start to reintegrate things and make the city more durable.”
Know your roots
But that’s almost prosaic compared to Skygrove, the concept for a skyscraper inspired by the mangrove tree. Mangroves, which often grow in swamps or along rivers, are known for their gnarly network of roots that keep their trunks above the water.
Architects at the New York firm of HWKN copied that model for a building that could sit above rising water. Instead of having a single foundation, the Skygrove would rest on a base of “roots” extending outward like fingers spread under the water.
Each root of the building–which is meant to be a vertical office park for the City of New York–would be independent of the others and self-sufficient, able to provide its own energy. And each would be designed to survive whatever extreme weather may come its way.
If the designers are right, the Skygrove is a model for the kinds of buildings we may see more often in what they call the “newly nebulous coastal zone.”
It’s nature’s way
Here are other new inventions based on mimicking nature:
- But do not try this on trees: A London industrial designer has created a super-strong bicycle helmet by modeling it after the heads of woodpeckers.
- No word yet on how it may affect human mating: A team of researchers has found that LED lights that copy the structure of a firefly’s “lantern” are 55 percent brighter.
- Okay, let’s clear the air: A Copenhagen chemist has invented an air-cleaning device that mimics the process through which the Earth’s atmosphere cleans itself. In response to sunlight, polluting gases rising into the sky form particles when they come across compounds such as ozone. And those newly formed particles are washed out of the atmosphere by rain. The invention, which removes industrial pollutants from the air, is now being tested at a Danish plant.
- But do they ever tell dogs “You’ll just feel a little stick?”: One day we could have less painful hypodermic needles thanks to a group of scientists who studied porcupine quills. They determined that the backwards-facing barbs on a quill help it enter skin easily and then stay in place. The researchers learned this by measuring how much force it took to push in and pull out porcupine quills jabbed into pig skin and raw chicken meat.
- Mussels and bodybuilding: A team of researchers from Penn State and the University of Texas, Arlington believe that a version of the powerful adhesive that allows mussels to stick stubbornly to underwater surfaces can be used in operating rooms to close and heal wounds.
Video bonus: An idea whose time, sadly, has come: robot cockroaches. It will creep you out.
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018 we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines into thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone/data-dive device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
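At a vastly smaller scale, the same idea–let the machine decide for itself which visual features matter, with no labels and no human guidance–can be illustrated with ordinary k-means clustering over image patches. This sketch uses random arrays as stand-ins for real patches:

```python
# Sketch of unsupervised visual feature learning: cluster raw image
# patches with k-means, and the cluster centers become the "features"
# the machine has decided to pay attention to. No human labels anything.
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((10_000, 64))   # stand-in for 8x8-pixel patches

def kmeans(data, k=16, iterations=20):
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iterations):
        # Assign each patch to its nearest center...
        distances = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = distances.argmin(axis=1)
        # ...then move each center to the mean of its assigned patches.
        for i in range(k):
            if (labels == i).any():
                centers[i] = data[labels == i].mean(axis=0)
    return centers

learned_features = kmeans(patches)   # nobody told it what to look for
```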
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
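Smith’s recipe maps almost line for line onto code. Here’s a minimal version, using color histograms as the features and an off-the-shelf classifier to learn the boundary; it assumes `beach_images` and `city_images` are lists of already-loaded RGB image arrays, and it’s a sketch of the general approach rather than anything IBM has described:

```python
# Minimal beach-vs-cityscape classifier: turn each image into a color
# distribution, then learn to discriminate between the two scene types.
import numpy as np
from sklearn.linear_model import LogisticRegression

def color_histogram(image, bins=8):
    """The 'distinct features': a normalized per-channel color histogram."""
    counts = [np.histogram(image[:, :, channel], bins=bins, range=(0, 255))[0]
              for channel in range(3)]
    hist = np.concatenate(counts).astype(float)
    return hist / hist.sum()

# Features and labels: 1 means beach, 0 means cityscape.
X = [color_histogram(img) for img in beach_images + city_images]
y = [1] * len(beach_images) + [0] * len(city_images)

classifier = LogisticRegression().fit(X, y)      # learn the boundary
print(classifier.predict([color_histogram(new_image)]))  # 1 if beach-like
```

Here `new_image` is any fresh image you want classified; in IBM’s version the features would be far richer (texture, edges, motion), but the learning step is the same in spirit.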
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough steps forward that computers will, for instance, start playing a much bigger role in medical diagnosis–that they could actually become better than doctors at spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. Next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.