May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she’s quick, it hardly qualifies as playful banter. You ask questions, she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue, so that they could gauge our feelings as well as–maybe better than–our best friends? And then they could respond, not with information, but with what might pass for empathy.
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
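The kind of signal-reading described above can be sketched in a few lines of Python. Everything here, including the feature names, the weights and the 0.7 warning threshold, is invented for illustration; it is not any real product’s algorithm.

```python
# Hypothetical sketch: infer "agitation" from typing speed, backspace rate and
# tap force, then decide whether to suggest holding off on "send".
# All features, weights and thresholds are invented for illustration.

def agitation_score(chars_per_sec, backspace_ratio, avg_tap_force):
    """Combine a few typing signals into a rough 0-1 agitation estimate."""
    # Normalize each signal into [0, 1] against invented "typical" maxima.
    speed = min(chars_per_sec / 10.0, 1.0)       # very fast typing
    errors = min(backspace_ratio / 0.5, 1.0)     # lots of corrections
    force = min(avg_tap_force / 2.0, 1.0)        # hammering the screen
    return 0.3 * speed + 0.4 * errors + 0.3 * force

def should_warn_before_send(score, threshold=0.7):
    """Suggest a cooling-off message when the estimate crosses a threshold."""
    return score >= threshold

calm = agitation_score(chars_per_sec=3.0, backspace_ratio=0.05, avg_tap_force=0.8)
angry = agitation_score(chars_per_sec=9.5, backspace_ratio=0.4, avg_tap_force=1.9)
print(round(calm, 2), should_warn_before_send(calm))    # low score, no warning
print(round(angry, 2), should_warn_before_send(angry))  # high score, warning
```

The weighting is arbitrary; a real system would learn it from labeled examples rather than hard-code it.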
Seeing how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of elements of human faces. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed to help autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
It’s also able to see that while people say one thing, their faces can say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kind of programs we like if they’re able to develop a memory bank of our facial expressions. And politicians would be able to get real-time reactions to each line they utter during a debate and be able to adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
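The principle behind webcam pulse reading is that the brightness of facial skin fluctuates faintly with each heartbeat, so the dominant frequency of that fluctuation, within a plausible heart-rate band, gives beats per minute. Here is a toy Python sketch of the idea run on a synthetic signal; it is not Affdex’s actual method, and real systems do far more filtering of real video.

```python
import math

# Toy pulse detector: find the strongest frequency in a brightness signal
# within a plausible heart-rate band, using a naive discrete Fourier transform.

def dominant_bpm(signal, fps, lo_bpm=40, hi_bpm=180):
    """Return the frequency (in beats/min) with the most energy in the band."""
    n = len(signal)
    mean = sum(signal) / n
    centered = [s - mean for s in signal]  # remove the DC component
    best_bpm, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        freq_hz = k * fps / n
        bpm = freq_hz * 60.0
        if not (lo_bpm <= bpm <= hi_bpm):
            continue
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_bpm, best_power = bpm, power
    return best_bpm

# Synthetic "brightness" trace: a 72 bpm pulse (1.2 Hz) sampled at 30 frames/sec.
fps, seconds = 30, 10
pulse = [100 + 0.5 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * seconds)]
print(round(dominant_bpm(pulse, fps)))  # 72
```

In practice the input would be the average green-channel value of the face region in each video frame, which carries the strongest blood-volume signal.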
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to have a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone? : Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind.
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert, using motion sensors and voice analysis, at interpreting a patient’s every gesture and voice inflection during a therapy session.
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
More from Smithsonian.com
May 8, 2013
Cell phones are so many things now–computer, map, clock, calculator, camera, shopping device, concierge, and occasionally, a phone. But more than anything, that little device that never leaves your person is one amazingly prolific data engine.
Which is why last October, Verizon Wireless, the largest U.S. carrier with almost 100 million customers, launched a new division called Precision Market Insights. And why, at about the same time, Madrid-based Telefonica, one of the world’s largest mobile network providers, opened its own new business unit, Telefonica Dynamic Insights.
The point of these ventures is to mine, reconstitute and sell the enormous amount of data that phone companies gather about our behavior. Every time we make a mobile call or send a text message–which pings a cell tower–that info is recorded. So, with enough computer power, a company can draw pretty accurate conclusions about how and when people move through a city or a region. Or they can tell where people have come from to attend an event. As part of a recent case study, for example, Verizon was able to say that people with Baltimore area codes outnumbered those with San Francisco area codes by three to one inside the New Orleans Superdome for the Super Bowl in February.
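The Superdome example boils down to a simple aggregation over anonymized records. Here is a toy Python version; the pings and the area-code groupings are fabricated purely for illustration.

```python
from collections import Counter

# Tally anonymized phone pings near a venue by area code, then compare
# two fan bases. All records and code groupings below are made up.

BALTIMORE_CODES = {"410", "443"}
SAN_FRANCISCO_CODES = {"415", "628"}

def tally_by_code(pings):
    """pings: iterable of (area_code, tower_id) tuples recorded near the venue."""
    return Counter(area for area, _tower in pings)

pings = [("410", "t1"), ("443", "t2"), ("410", "t1"), ("415", "t3"),
         ("410", "t2"), ("443", "t1"), ("443", "t3"), ("628", "t2")]
counts = tally_by_code(pings)
baltimore = sum(counts[c] for c in BALTIMORE_CODES)
san_francisco = sum(counts[c] for c in SAN_FRANCISCO_CODES)
print(baltimore, san_francisco)  # 6 2 (a three-to-one ratio)
```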
In a world enamored of geolocation, this is digital gold. It’s one thing to know the demographic blend of a community, but being able to find out how many people pass by a business, and where they’re coming from, adds a whole other level of precision to target marketing.
Follow the crowd
But this data has value beyond companies zeroing in on potential customers. It’s being used for social science, even medical research. Recently, IBM crunched numbers from 5 million phone users in the Ivory Coast and, by tracking people’s movements through the cell towers they connected to, was able to recommend 65 improvements to bus service in the city of Abidjan.
And computer scientists at the University of Birmingham in England have used cell phone data to fine-tune their analysis of how epidemics spread. Again, it’s about analyzing how people move around. Heretofore, much of what scientists knew about the spread of contagious diseases was based on guesswork. But now, thanks to so many pings from so many phones, there’s no need to guess.
It’s important to point out that no actual identities are connected to cell phone data. It all gets anonymized, meaning there shouldn’t be a way to track the data back to real people.
There shouldn’t be.
Leaving a trail
But a study published in Scientific Reports in March found that even anonymized data may not be so anonymous after all. A team of researchers from Louvain University in Belgium, Harvard and M.I.T. found that by using data from 15 months of phone use by 1.5 million people, together with a similar dataset from Foursquare, they could identify about 95 percent of the cell phone users with just four data points and 50 percent of them with just two data points. A data point is an individual’s approximate whereabouts at the approximate time they’re using their cell phone.
The reason that only four locations were necessary to identify most people is that we tend to move in consistent patterns. Just as everyone has unique fingerprints, everyone has unique daily travels. While someone wouldn’t necessarily be able to match the path of a mobile phone–known as a mobility trace–to a specific person, we make it much easier through geolocated tweets or location “check-ins,” such as when we use Foursquare.
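The test behind that finding is easy to state: sample a few (place, time) points from one person’s trace and check whether anyone else’s trace also contains them all. Here is a toy Python version; the traces are small and fabricated, so its uniqueness rate reflects the synthetic data, not the study’s 95 percent.

```python
import random

# Toy "unicity" check: a user is identified by k points if no other
# mobility trace contains all k of them. Traces are fabricated.

def is_unique(traces, user, points):
    """True if `user` is the only one whose trace contains all sampled points."""
    matches = [u for u, trace in traces.items() if points <= trace]
    return matches == [user]

def uniqueness_rate(traces, k, trials=200, seed=1):
    """Fraction of random (user, k-point) samples that identify the user."""
    rng = random.Random(seed)
    users = list(traces)
    hits = 0
    for _ in range(trials):
        user = rng.choice(users)
        points = set(rng.sample(sorted(traces[user]), k))
        hits += is_unique(traces, user, points)
    return hits / trials

# Fabricated traces: 100 users, each seen at ~20 random (tower, hour) points.
rng = random.Random(0)
traces = {u: {(rng.randrange(50), rng.randrange(24)) for _ in range(20)}
          for u in range(100)}
print(uniqueness_rate(traces, k=4))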
“In the 1930s, it was shown that you need 12 points to uniquely identify and characterize a fingerprint,” the study’s lead author, Yves-Alexandre de Montjoye, told the BBC in a recent interview. “What we did here is the exact same thing, but with mobility traces. The way we move and the behavior is so unique that four points are enough to identify 95 percent of the people.”
“We think this data is more available than people think. When you share information, you look around and you feel like there are lots of people around–in a shopping center or a tourist place–so you feel this isn’t sensitive information.”
In other words, you feel anonymous. But are you really? De Montjoye said the point of his team’s research wasn’t to conjure up visions of Big Brother. He thinks there’s much good that can come from mining cell phone data, for businesses, for city planners, for scientists, for doctors. But he thinks it’s important to recognize that today’s technology makes true privacy very hard to keep.
The title of the study? “Unique in the Crowd.”
Here are other recent developments related to mobile phones and their data:
- Every picture tells your story: Scientists at Carnegie Mellon University’s Human Computer Interaction Center say their research of 100 smartphone apps found that about half of them raised privacy concerns. For instance, a photo-sharing app like Instagram provided information that allowed them to easily discover the location of the person who took the photo.
- Cabbies with cameras: In the Mexican city of Tuxtla Gutiérrez, taxi drivers have been provided with GPS-enabled cell phones and encouraged to send messages and photographs about accidents or potholes or broken streetlights.
- Follow that cell: Congress has started looking into the matter of how police use cell phone data to track down suspects. The key issue is whether they should be required to get a warrant first.
- Follow that cell II: Police in Italy have started using a data analysis tool called LogAnalysis that makes it especially easy to visualize the relationships among conspiring suspects based on their phone calls. In one particular case involving a series of robberies, the tool showed a flurry of phone activity among the suspects before and after the heists, but dead silence when the crimes were being committed.
Video bonus: If you’re at all paranoid about how much data can be gleaned from how you use your mobile phone, you may not want to watch this TED talk by Malte Spitz.
April 26, 2013
I have good news and bad news for anyone who will be looking for a job in the coming years. The good news is that sometime in the future, job interviews may go away. Okay, maybe some companies will still do them for the sake of tradition, but they won’t matter all that much.
Which leads me to the bad news–Big Data is more likely to determine if you get a job. Your dazzling smile, charming personality and awesome resume may count for something, but it’s algorithms and predictive analysis that will probably seal your fate.
Here’s why. Enormously powerful computers are beginning to make sense of the massive amounts of data the world now produces, and that allows almost any kind of behavior to be quantified and correlated with other data. Statistics might show, for instance, that people who live 15 miles from work are more likely to quit their jobs within five years. Or that employees with musical skills are particularly well-suited for jobs requiring them to be multilingual. I’m making those up, but they’re not so far-fetched.
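Claims like these come down to measuring how strongly one trait tracks an outcome. A minimal Python sketch, computing a Pearson correlation over a fabricated employee table; the numbers are invented to mimic the made-up commute example above, not drawn from any real dataset.

```python
import math

# Pearson correlation between one measurable trait (commute distance)
# and a job outcome (months of tenure), over fabricated data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

commute_miles = [2, 4, 5, 8, 12, 15, 18, 20]
tenure_months = [40, 38, 36, 30, 22, 18, 12, 9]
r = pearson(commute_miles, tenure_months)
print(round(r, 2))  # strongly negative: longer commutes, shorter tenures
```

Of course, correlation alone says nothing about why the pattern exists, which is exactly the stance Evolv is described as taking.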
Some human resources departments have already started using companies that mine deep reserves of information to shape their hiring decisions. And they’re discovering that when computers mix and match data, conventional wisdom about what kind of person is good in a job doesn’t always hold true.
Run the numbers
Consider the findings of Evolv, a San Francisco company that’s making a name for itself through its data-driven insights. It contends, for instance, that people who fill out online job applications using a browser they installed themselves, such as Chrome or Firefox, perform their jobs better and change jobs less often. You might speculate that this is because the kind of person who downloads a browser other than the one that came with his or her computer is more proactive, more resourceful.
But Evolv doesn’t speculate. It simply points out that this is what data from more than 30,000 employees strongly suggests. There’s nothing anecdotal about it; it’s based on info gleaned from tens of thousands of workers. And that’s what gives it weight.
“The heart of science is measurement,” Erik Brynjolfsson, of the Sloan School of Management at M.I.T., pointed out in a recent New York Times article on what’s become known as work-force science. “We’re seeing a revolution in measurement, and it will revolutionize organizational economics and personnel economics.”
Evolv, which has largely focused its research on hourly employees, has spun other strands of H.R. gold from its data, such as:
- People who have been unemployed for a long time are, once they’re hired again, just as capable and stay on their jobs just as long as people who haven’t been out of work.
- A criminal record has long been a thick black mark for someone in the job market, but Evolv says its statistics show that a criminal background has no bearing on how an employee performs or how long they stick with a job. In fact, it has found that ex-offenders actually make better employees in call centers.
- Based on employee surveys, call center workers who are creative stay around. Those who are inquisitive don’t.
- The most reliable call center employees live near the job, have reliable transportation and use one or more social networks, but not more than four.
- Honesty matters. Data shows that people who prove to be honest on personality tests tend to stay on the job 20 to 30 percent longer than those who don’t.
And how do they gauge honesty? One technique is to ask people if they know simple keyboard shortcuts, such as control-V, which allows you to paste text. Later they’ll be asked to cut and paste text using only the keyboard to see if they were telling the truth.
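That honesty check reduces to comparing a claim against an observed behavior. Here is a toy Python version; the field names and applicants are invented for illustration.

```python
# Flag applicants whose claimed shortcut knowledge didn't hold up when
# they were later asked to demonstrate it. Data is fabricated.

def honesty_flags(applicants):
    """Return {name: True} for applicants who claimed but couldn't demonstrate."""
    flags = {}
    for name, info in applicants.items():
        claimed = info["claims_to_know_ctrl_v"]
        demonstrated = info["pasted_with_keyboard"]
        flags[name] = claimed and not demonstrated
    return flags

applicants = {
    "A": {"claims_to_know_ctrl_v": True,  "pasted_with_keyboard": True},
    "B": {"claims_to_know_ctrl_v": True,  "pasted_with_keyboard": False},
    "C": {"claims_to_know_ctrl_v": False, "pasted_with_keyboard": False},
}
print(honesty_flags(applicants))  # only "B" is flagged
```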
It’s getting creepy
Data-driven hiring has its flaws, of course. One is that it could result in unintended discrimination against minority or older employees. Minority workers, for example, tend to travel farther to their jobs. And that could create legal problems for a company that steers clear of long-distance employees because statistics show they don’t stay in jobs as long.
Then there’s the matter of what lengths a company will go to gather data on its workers. Where will it draw the line when it comes to tracking employees’ behavior in the name of accumulating data?
“The data-gathering technology, to be sure, raises questions about the limits of worker surveillance,” Marc Rotenberg, executive director of the Electronic Privacy Information Center, told The New York Times. “The larger problem here is that all these workplace metrics are being collected when you as a worker are essentially behind a one-way mirror.”
That’s a serious issue, but it’s not likely to slow the trend of replacing a boss’ gut reaction with the perceived wisdom of algorithms.
Case in point: Earlier this year eHarmony, the company that’s made its mark in online matchmaking, announced plans to tweak its algorithms and get into the business of hooking up employees and companies.
Big Data is watching
Here are other ways Big Data is having an impact:
- The roads less traveled: Delivery companies like FedEx and UPS are starting to see significant savings by using data analysis to guide drivers to less congested roads to avoid idling in traffic.
- Have phone, will travel: Scientists in Africa are using data gathered from cell phone usage to track the spread of diseases like malaria by seeing where people travel.
- Big C, meet Big D: The American Society of Clinical Oncology has launched a project to create a massive database of electronic records of cancer cases so doctors can apply analytics to determine how to best treat patients.
Video bonus: Still don’t get the whole Big Data thing? Photographer Rick Smolan shares his epiphany about it.
February 8, 2013
When John Brennan, President Obama’s choice to be the next head of the CIA, appeared before a Senate committee yesterday, one question supplanted all others at his confirmation hearing:
How are the decisions made to send killer drones after suspected terrorists?
The how and, for that matter, the why of ordering specific drone strikes remains largely a mystery, but at least one thing is clear–the decisions are being made by humans who, one would hope, wrestle with the thought of sending a deadly missile into an occupied building.
But what if humans weren’t involved? What if one day life-or-death decisions were left up to machines equipped with loads of data, but also a sense of right and wrong?
That’s not so far-fetched. It’s not going to happen any time soon, but there’s no question that as machines become more intelligent and more autonomous, a pivotal part of their transformation will be the ability to learn morality.
In fact, that may not be so far away. Gary Marcus, writing recently in The New Yorker, presented the scenario of one of Google’s driverless cars being forced to make a split-second decision: “Your car is speeding along a bridge at 50 miles per hour when an errant school bus carrying 40 innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all 40 kids at risk? If the decision must be made in milliseconds, the computer will have to make the call.”
And what about robotic weapons or soldiers? Would a drone be able to learn not to fire on a house if it knew innocent civilians were also inside? Could machines be taught to follow the international rules of war?
Ronald Arkin, a computer science professor and robotics expert at Georgia Tech, certainly thinks so. He’s been developing software, referred to as an “ethical governor,” which would make machines capable of deciding when it’s appropriate to fire and when it’s not.
Arkin acknowledges that this could still be decades away, but he believes that robots might one day be both physically and ethically superior to human soldiers, not vulnerable to the emotional trauma of combat or desires for revenge. He doesn’t envision an all-robot army, but one in which machines serve with humans, doing high-risk jobs full of stressful snap decisions, such as clearing buildings.
Beware of killer robots
But others feel it’s time to squash this type of thinking before it goes too far. Late last year, Human Rights Watch and Harvard Law School’s Human Rights Clinic issued a report, “Losing Humanity: The Case Against Killer Robots,” which, true to its title, called on governments to ban all autonomous weapons because they would “increase the risk of death or injury to civilians during armed conflict.”
At about the same time, a group of Cambridge University professors announced plans to launch what they call the Centre for the Study of Existential Risk. When it opens later this year, it will push for serious scientific research into what could happen if and when machines get smarter than we are.
The danger, says Huw Price, one of the Center’s co-founders, is that one day we could be dealing with “machines that are not malicious, but machines whose interests don’t include us”.
The art of deception
Shades of Skynet, the rogue artificial intelligence system that spawned a cyborg Arnold Schwarzenegger in The Terminator movies. Maybe this will always be the stuff of science fiction.
But consider other research Ronald Arkin is now doing as part of projects funded by the Department of Defense. He and colleagues have been studying how animals deceive one another, with the goal of teaching robots the art of deception.
For instance, they’ve been working on programming robots so that they can, if necessary, feign strength as animals often do. And they’ve been looking at teaching machines to mimic the behavior of creatures like the eastern gray squirrel. Squirrels hide their nuts from other animals, and when other squirrels or predators appear, the gray squirrels will sometimes visit places where they used to hide nuts to throw their competitors off the track. Robots programmed to follow a similar strategy have been able to confuse and slow down competitors.
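The squirrel-inspired strategy can be caricatured in a few lines: when an observer is watching, sometimes visit a worthless decoy site instead of a real cache, so the observer’s picture of which sites matter gets diluted. This is purely an illustrative toy, not Arkin’s actual software, and every name and probability in it is invented.

```python
import random

# Toy decoy strategy: with probability decoy_prob, a watched robot visits
# a decoy site rather than a real cache, diluting what observers learn.

def choose_site(real_sites, decoy_sites, observer_present, rng, decoy_prob=0.6):
    if observer_present and rng.random() < decoy_prob:
        return rng.choice(decoy_sites)
    return rng.choice(real_sites)

rng = random.Random(42)
real, decoys = ["R1", "R2"], ["D1", "D2", "D3"]
visits = [choose_site(real, decoys, observer_present=True, rng=rng) for _ in range(1000)]
decoy_share = sum(v in decoys for v in visits) / len(visits)
print(round(decoy_share, 2))  # close to the 0.6 decoy probability
```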
It’s all in the interest, says Arkin, of developing machines that won’t be a threat to humans, but rather an asset, particularly in the ugly chaos of war. The key is to start focusing now on setting guidelines for appropriate robot behavior.
“When you start opening that Pandora’s Box, what should be done with this new capability?” he said in a recent interview. “I believe that there is a potential for non-combatant casualties to be lessened by these intelligent robots, but we do have to be very careful about how they’re used and not just release them into the battlefield without appropriate concern.”
If New Yorker writer Gary Marcus is to be believed, ethically advanced machines offer great potential beyond the battlefield:
“The thought that haunts me the most is that human ethics themselves are only a work-in-progress. We still confront situations for which we don’t have well-developed codes (e.g., in the case of assisted suicide) and need not look far into the past to find cases where our own codes were dubious, or worse (e.g., laws that permitted slavery and segregation).
“What we really want are machines that can go a step further, endowed not only with the soundest codes of ethics that our best contemporary philosophers can devise, but also with the possibility of machines making their own moral progress, bringing them past our own limited early-twenty-first century idea of morality.”
Machines march on
Here are more recent robot developments:
- Hmmmm, ethical and sneaky: Researchers in Australia have developed a robot that can sneak around by moving only when there’s enough background noise to cover up its sound.
- What’s that buzzing sound?: British soldiers in Afghanistan have started using surveillance drones that can fit in the palms of their hands. Called the Black Hornet Nano, the little robot is only four inches long, but has a spy camera and can fly for 30 minutes on a full charge.
- Scratching the surface: NASA is developing a robot called RASSOR that weighs only 100 pounds, but will be able to mine minerals on the moon and other planets. It can move around on rough terrain and even over boulders by propping itself up on its arms.
- Ah, lust: And here’s an early Valentine’s Day story. Scientists at the University of Tokyo used a male moth to drive a robot. Actually, they used its mating movements to direct the device toward an object scented with female moth pheromones.
Video bonus: So you’re just not sure you could operate a 13-foot tall robot? No problem. Here’s a nifty demo that shows you how easy it can be. A happy model even shows you how to operate the “Smile Shot” feature. You smile, it fires BBs. How hard is that?
January 17, 2013
Utensil history was made last week and I, for one, took pleasure in seeing that we had finally evolved beyond the spork or, as some of you may know it, the foon.
But sadly, the unveiling of the HapiFork at the Consumer Electronics Show (CES) was not universally greeted with great jubilation, but rather with a fair amount of ridicule.
Produced by a Hong Kong company called HapiLabs, the HapiFork is a curious little thing. It looks like a fork and works like a fork, but it vibrates like a cellphone. And why it buzzes is the reason the media largely responded with one big group eyeroll.
See, the HapiFork is a fork with a simple and noble mission–to get you to stop eating like a pig. It buzzes to remind you to slow down.
It tracks not only the number of bites you’ve taken, but also how much time has passed between them and how long it takes you to finish the meal. The slower you eat, the fewer calories you consume. And because all the data can be stored on your smart phone, you can measure how much less of a chowhound you’ve become.
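The fork’s core logic, as described, is just bite-interval tracking. Here is a minimal Python sketch; the 10-second threshold and everything else in it is an assumption for illustration, not HapiLabs’ published specification.

```python
# Toy pacing fork: record bite times, buzz when bites come too close
# together, and summarize the meal at the end. Threshold is assumed.

class PacingFork:
    def __init__(self, min_gap_seconds=10):
        self.min_gap = min_gap_seconds
        self.bite_times = []

    def bite(self, t):
        """Record a bite at time t (seconds); return True if the fork buzzes."""
        buzz = bool(self.bite_times) and (t - self.bite_times[-1]) < self.min_gap
        self.bite_times.append(t)
        return buzz

    def summary(self):
        n = len(self.bite_times)
        duration = self.bite_times[-1] - self.bite_times[0] if n > 1 else 0
        return {"bites": n, "meal_seconds": duration}

fork = PacingFork()
buzzes = [fork.bite(t) for t in (0, 4, 16, 45)]
print(buzzes, fork.summary())  # [False, True, False, False] {'bites': 4, 'meal_seconds': 45}
```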
But some critics were not enamored of the concept, portraying the HapiFork as the essence of nanny technology, another “smart” gadget enforcer of data-driven moderation. How, the thinking goes, did it come to this, where forks are telling us to shut our pieholes?
The measure of a man
But maybe, given the obesity epidemic in the U.S. and Europe, it’s time to start listening to buzzing silverware. In fact, there are those who believe the current boom in mobile apps and devices that track our health and bad habits could play a big role in helping the U.S. get its outrageous health care costs under control.
A major health trend this year, according to a new report from PricewaterhouseCoopers, will be a shift by employers and insurance companies to encourage employees to be a lot more proactive when it comes to taking care of themselves. That’s in part due to incentives in the Affordable Care Act, but also because today’s technology–whether it’s sensors, WiFi or smart phones–has made it so much easier to track every move we make, every breath we take.
We’ll likely see more companies turn to employee wellness programs focusing on prevention and tapping into all that data that our smart phones and other health gadgets are able to gather about us. Already, start-ups such as the Boston-based Healthrageous are being hired by companies to work closely with their employees with chronic conditions, such as diabetes or hypertension or even sleep disorders. Healthrageous provides both a tracking device–say a blood glucose monitor for diabetics–and a customized plan to help employees reach their personal goals, which could be anything from fitting into pants you last wore 10 years ago to being able to play with your grandkids.
PUSH Wellness, in Chicago, also contracts out an employee wellness program, but with a different spin. It actually pays cash incentives to workers who meet goals that raise their “PUSH” score–a number based on a person’s Body Mass Index (BMI), blood pressure, cholesterol and fitness level. With PUSH, it’s not enough for an employee to exercise; they have to show real measurable results or there’s no pay out.
The big health insurance companies are getting in on the act, too. Last month, Aetna unveiled Passage, a fitness app it developed with Microsoft that allows people to feel like they’re running or biking in some of the world’s great cities–Rome, New York or Barcelona, for instance.
Also last month, Cigna announced that it has made available, for free, to the first 20,000 people who download them, four apps bundled together as the “Healthy Living App Pack.” One is designed to track your workouts, another to get you to relax, another to help you sleep. The fourth, Fooducate, is a food nutrition app designed to make you health savvy when you’re food shopping.
When sensors speak
Here are five other health devices that made a splash at CES last week:
- Would your wrist lie to you?: Another health wristband is coming on the market soon. Called the Fitbit Flex, it will be able to track your daily activity–steps taken, calories burned–and also how you’ve slept, plus wake you up with a little buzz in the morning. For motivation, a display of four LED lights shows how far along you are in meeting that day’s goal. And at $100, it will be less expensive than the competitors already out there, the Nike FuelBand and Jawbone’s Up.
- Keep running or we’ll play “Gangnam Style:” Or you can let little earbuds do the monitoring work. Coming out this spring are iRiver On headphones equipped with PerformTek Precision Biometrics technology that measures a range of body metrics, including heart rate, distance traveled, steps taken, respiration rate, speed, metabolic rate, energy expenditure, calories burned and recovery time.
- It was so much easier when pills looked like the Flintstones: For those dealing with a daily dose of multiple meds, there’s the uBox. The little box reminds people when it’s time to take their pills with a combination of beeps, blinking lights and smart phone reminders. And if you’ve already taken your meds, the box remains locked until it’s time for another set–the better to keep forgetful seniors from double dosing. It even lets other family members know if grandpa’s missed a med.
- Giving new meaning to “Let me hear your body talk”: Then there’s Metria, a small patch a person wears on their chest that measures heartbeat, skin hydration, breathing, steps taken and sleep patterns. (It records the duration and quality of sleep based on how much you’ve tossed and turned.) Each patch gathers information for seven days and can send it to a phone or tablet anywhere in the world. Metria’s designed primarily for elderly people who live alone, but the U.S. Air Force reportedly may use it to monitor pilots.
- Will walk for prizes: And bringing us back full circle to obesity is the ibitz PowerKey, a pedometer for kids. It doesn’t just track their activity, but rewards them with games, apps, shows and prizes for staying on the move. And yes, parents can check in on their kids’ progress on their own smart phones.
Video bonus: See why Stephen Colbert thinks the HapiFork is “unAmerican.”