May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, and while she’s quick with a reply, it hardly qualifies as playful banter. You ask questions; she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue, and gauge our feelings as well as–maybe better than–our best friends? And what if they then responded, not with information, but with what might pass for empathy?
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
Seeing how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of elements of human faces. The software has been refined to the point where it can associate various combinations of those elements with different emotions.
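In broad strokes, systems like this score a handful of facial “action units” (a brow furrow, a lip-corner pull, a smirk) in each video frame and then combine those scores into emotion estimates. Here is a minimal sketch of that combining step; the action units, weights and emotion labels below are invented for illustration and are not Affectiva’s actual model:

```python
# Illustrative only: combine hypothetical facial action-unit scores into
# emotion estimates. The units, weights and labels are made up for the sketch.

# Per-frame action-unit intensities in [0, 1], e.g. from a face tracker.
frame = {"brow_raise": 0.1, "brow_furrow": 0.7, "lip_corner_pull": 0.2, "smirk": 0.05}

# Hypothetical weights linking action units to emotions.
EMOTION_WEIGHTS = {
    "joy":       {"lip_corner_pull": 1.0, "brow_raise": 0.2},
    "confusion": {"brow_furrow": 0.8, "brow_raise": 0.4},
    "contempt":  {"smirk": 1.0},
}

def score_emotions(action_units):
    """Return a weighted score per emotion for one frame."""
    scores = {}
    for emotion, weights in EMOTION_WEIGHTS.items():
        scores[emotion] = sum(weights[au] * action_units.get(au, 0.0)
                              for au in weights)
    return scores

print(score_emotions(frame))  # e.g. {'joy': 0.22, 'confusion': 0.6, 'contempt': 0.05}
```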
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed to help autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you need to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which enables advertisers to identify, with more precision than ever before, what works in a commercial and what doesn’t.
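Second-by-second feedback of that sort is, at its core, an aggregation problem: average each instant’s emotion scores across all the viewers and look for the peaks and dips. A toy sketch, with random numbers standing in for real smile scores:

```python
# Illustrative sketch: average per-viewer smile scores into a
# second-by-second timeline for a 30-second ad. The data here is made up.
import random

SECONDS, VIEWERS = 30, 50
random.seed(0)
# smile_scores[viewer][second] in [0, 1], as a face tracker might report.
smile_scores = [[random.random() for _ in range(SECONDS)] for _ in range(VIEWERS)]

timeline = [sum(v[s] for v in smile_scores) / VIEWERS for s in range(SECONDS)]
peak = max(range(SECONDS), key=lambda s: timeline[s])
print(f"Strongest average smile at second {peak} ({timeline[peak]:.2f})")
```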
The software can also catch people saying one thing while their faces say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kinds of programs we like if they were able to develop a memory bank of our facial expressions. Politicians could get real-time reactions to each line they utter during a debate and adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
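That webcam trick works because skin color shifts almost imperceptibly with each pulse of blood. One common recipe is to average the green channel over the face in every frame and then find the dominant frequency of that signal within a plausible heart-rate band. Below is a rough sketch of the frequency step only, with a synthetic 72-beats-per-minute signal standing in for real video:

```python
# Illustrative sketch of webcam pulse detection: find the dominant frequency
# of the face's average green-channel brightness. A synthetic 72-bpm signal
# stands in for real video frames here.
import numpy as np

np.random.seed(0)
fps, seconds = 30, 10                        # camera frame rate and window length
t = np.arange(fps * seconds) / fps
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.01, t.size)

signal -= signal.mean()                      # remove the steady baseline
freqs = np.fft.rfftfreq(signal.size, d=1 / fps)
power = np.abs(np.fft.rfft(signal)) ** 2

band = (freqs > 0.7) & (freqs < 4.0)         # plausible heart rates: 42-240 bpm
pulse_hz = freqs[band][np.argmax(power[band])]
print(f"Estimated heart rate: {pulse_hz * 60:.0f} bpm")   # ~72 bpm
```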
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts, to track his or her mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to keep a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone?: Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind. (A rough sketch of how this kind of typing-signal scoring might work appears after this list.)
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert, using motion sensors and voice analysis, at interpreting a patient’s every gesture and voice inflection during a therapy session.
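As promised in the Samsung item above, here is a rough, purely illustrative sketch of typing-signal mood scoring. The features, numbers and nearest-centroid rule are assumptions made for the example, not Samsung’s method:

```python
# Illustrative sketch only: classify typing behavior into a mood by nearest
# centroid. All feature values and centroids are invented for the example.
import math

# Features: (taps per second, shake level, backspaces per 100 chars, emoticons per message)
CENTROIDS = {
    "angry": (6.5, 0.8, 12.0, 0.1),
    "happy": (5.0, 0.3,  4.0, 1.5),
    "sad":   (2.5, 0.2,  6.0, 0.3),
}

def classify(sample):
    """Return the mood whose centroid is closest to the observed features."""
    return min(CENTROIDS, key=lambda mood: math.dist(sample, CENTROIDS[mood]))

print(classify((6.0, 0.7, 10.0, 0.2)))   # -> 'angry'
```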
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
February 25, 2013
A year and a half into his presidency, John F. Kennedy challenged U.S. scientists to get Americans to the moon by the end of the decade. In his recent State of the Union address, Barack Obama hinted at what could become his version of reaching for the moon–he’d like scientists to solve the mystery of the brain.
Obama’s mission would be a heavier lift.
He didn’t go into much detail, other than citing brain research as a stellar example of how government can “invest in the best ideas.” But last week a story in the New York Times by John Markoff filled in a lot of the blanks. Obama’s grand ambition is something called the Brain Activity Map–it’s already being referred to simply as BAM–and it would require a massive collaborative research effort involving neuroscientists, government agencies, private foundations and tech companies, with the truly daunting goal of figuring out how the brain actually generates thoughts, memories and consciousness.
An answer for Alzheimer’s?
The White House is expected to officially unveil its big plan as early as next month as part of its budget proposal. The speculation is that it could cost as much as $3 billion over the next 10 years.
Now, it may seem a strange time to be pitching projects with a $300 million-a-year price tag, what with the budget-hacking sequestration expected to kick in later this week. That’s why even though Obama was light on the details, he did make a point of comparing the brain-mapping mission to the Human Genome Project–a major research initiative financed by the federal government to map all of the genes in human DNA. It ultimately cost $3.8 billion, but it reached its goal two years early, in 2003, and through 2010, according to an impact study, returned $800 billion to the economy.
No question that BAM could have a profound impact in helping scientists understand what goes on in the brain to cause depression or schizophrenia or autism. And it certainly could be a boon to pharmaceutical companies that have spent billions, without luck, to find a cure for Alzheimer’s disease. Since 1998, there have been more than 100 unsuccessful attempts to find a treatment for Alzheimer’s, which, by 2050, is expected to affect 115 million people around the world.
It’s all about the tools
Clearly there are plenty of medical reasons to try to unravel the brain, but what, realistically, are the prospects? Sure, brain scans have helped scientists see which parts of the brain are more active during different types of behavior, but that’s a 30,000-foot view. It tells them next to nothing about how individual brain cells transmit information and even less about how neural networks transform that into behavior.
In recent years, researchers have made big strides in understanding how the brain is organized through the Human Connectome Project, funded by the National Institutes of Health. But that’s designed to create more of a static map of neural connections.
The next crucial step is to be able to see, in real time, how information is processed through those connections and which different neurons become part of that process. Or as Harvard biologist George Church, one of the scientists who proposed BAM in a paper last year, has explained it: “We don’t just want to see the wires, but also the messages going over the wires.”
The key is how quickly technology can be developed that will allow scientists to follow a thought process by recording every blip of every one of the thousands, and possibly millions, of neurons involved. Current technology enables them to record the activity of roughly 100 neurons at a time, way too small a slice of the neural network to help explain much of anything. But, as Greg Miller noted in a recent piece on the Wired website, several cutting-edge biological or nano-tools are in the works, including one that could “pack hundreds of thousands of nanowire electrodes into flexible sheets that conform to the surface of the brain and eavesdrop on neurons with minimal tissue damage.”
Is bigger really better?
A lot of neuroscientists will be thrilled if BAM gets funded. But not all. Some have already pointed out that you really can’t compare it to the Human Genome Project, or to the mission to the moon, for that matter. Both of those endeavors, while very challenging, had clearly definable goals. But how do you identify success for BAM? Would being able to record the activity of hundreds of thousands of neurons really explain how thinking happens? No one really knows.
Other scientists are concerned that BAM, with its high profile, could drain dollars from other neuroscience research. Some writers have even raised the specter of mind control, particularly since one of the government agencies that would be involved is DARPA, the Defense Department’s agency that funds experimental technology.
Gary Marcus, writing in The New Yorker, makes the case that a project like BAM might be more effective if it weren’t so monolithic. He argues that it should be broken up into five smaller projects, each one focused on a different aspect of brain function.
But he also warns that should Congress balk at ponying up the money for a major neuroscience project, the U.S. runs the risk of sparking, ironically, a brain drain. In January, a group of European countries committed more than $1 billion to their own huge neuroscience endeavor, called the Human Brain Project, which will try to simulate all the processes of a brain within a computer.
“Whether it meets its grand goal or not, the European project will certainly lead to a significant number of smaller scientific advances. If the U.S. doesn’t follow suit, we will lose our lead in neuroscience, and will likely be left playing catch-up in some of the biggest game-changing industries on the horizon, like human-level artificial intelligence and direct brain-computer interfaces–even though both fields originated in the United States.”
Here are some other recent findings from brain research:
- Of mice and men watching mice: Researchers at Stanford were able to follow the brain activity of mice in real time after lacing their brains with fluorescent proteins, watching which parts of the animals’ brains glowed as they ran around a cage.
- Does that mean a bird can get a song stuck in its head?: A team of scientists at Duke University found that birds that can sing and mimic sounds have genes in their brains that turn on and off in ways similar to those in human brains.
- She lights up a womb: For the first time, MRIs of developing human fetuses showed communication signals between different parts of their brains. Scientists at Wayne State University in Michigan hope their research will lead to early treatments for autism and ADHD.
- Nothing yet, though, on how foot gets in mouth: Researchers at the University of California, San Francisco, have mapped the process of speech, laying out the neural network that makes it happen, from the nerves that control the jaws, lips and tongue to those that manipulate the larynx.
- Talk about a protein boost: There’s a biological explanation for why women talk more than men. Studies have shown that women speak an average of 20,000 words a day, while men average about 7,000. According to a study published in the Journal of Neuroscience last week, it may be because they tend to have higher levels of a protein in their brain that’s been linked to verbal communication.
Video bonus: A BBC journalist gets a tour of the wiring on his own brain.
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018 we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines to thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone/data-dive device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
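A toy version of the beach example in that quote, reduced to a single feature (average color) and the simplest possible classifier, might look like this. Real systems learn far richer features and far better decision rules, but the shape of the idea is the same:

```python
# Toy illustration of the beach-vs-city idea from the quote above: summarize
# each "image" by its average color, then label new images by the nearest
# class average. All the "images" here are tiny hand-made pixel lists.

BEACHES = [[(230, 210, 160), (80, 160, 220), (240, 220, 170)],
           [(220, 200, 150), (90, 170, 230), (235, 215, 165)]]
CITIES  = [[(120, 120, 125), (90, 90, 95), (60, 60, 70)],
           [(130, 130, 135), (100, 100, 105), (70, 75, 80)]]

def mean_color(image):
    """Average (r, g, b) over all pixels in one image."""
    return tuple(sum(p[c] for p in image) / len(image) for c in range(3))

def class_centroid(images):
    """Average color of a whole class of example images."""
    means = [mean_color(img) for img in images]
    return tuple(sum(m[c] for m in means) / len(means) for c in range(3))

CENTROIDS = {"beach": class_centroid(BEACHES), "city": class_centroid(CITIES)}

def label(image):
    """Assign the class whose centroid is closest to the image's mean color."""
    mc = mean_color(image)
    return min(CENTROIDS,
               key=lambda k: sum((mc[c] - CENTROIDS[k][c]) ** 2 for c in range(3)))

print(label([(225, 205, 155), (85, 165, 225), (238, 218, 168)]))  # -> 'beach'
```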
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough forward steps that computers will, for instance, start playing a much bigger role in medical diagnosis, that they could actually become better than doctors when it comes to spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paralyzed patients to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. Next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smartphone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.
December 14, 2012
There are times when I wonder why so many scientists are spending so much time trying to recreate something as fickle and full of fogginess as the human brain.
But who am I kidding? Those dyspeptic moments inevitably pass, as anyone who’s been following this blog knows. Every few months, it seems, I’m back writing about the latest attempt to build machines that can learn to recognize objects or even develop cognitive skills.
And now there’s Spaun.
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mind-bogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about how the process works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its 2.5 million simulated neurons–our brains have 100 billion–are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
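Spaun itself is built from millions of simulated spiking neurons, but the control loop described above (cortical modules propose tasks, the basal ganglia pick one) can be caricatured in a few lines of ordinary code. This is only an illustration of the routing idea, with invented numbers, not the model’s actual implementation:

```python
# Cartoon of the cortex/basal-ganglia loop described above, not Spaun itself:
# cortical "modules" each bid for control, and a winner-take-all step decides
# which task acts on the current input. All values here are invented.

def recognize_digit(stimulus):        # stand-in for Spaun's visual hierarchy
    return stimulus["digit"]

CORTICAL_TASKS = {
    "copy_digit":  lambda state: f"draw {state['digit']}",
    "count_up":    lambda state: f"say {state['digit'] + 1}",
    "recall_list": lambda state: f"recite {state['memory']}",
}

def basal_ganglia(utilities):
    """Winner-take-all: choose the task with the highest utility."""
    return max(utilities, key=utilities.get)

state = {"digit": recognize_digit({"digit": 4}), "memory": [2, 7, 9]}
utilities = {"copy_digit": 0.2, "count_up": 0.9, "recall_list": 0.1}  # set by the task cue

chosen = basal_ganglia(utilities)
print(chosen, "->", CORTICAL_TASKS[chosen](state))   # count_up -> say 5
```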
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.
Here are a few other recent developments in brain research and artificial intelligence:
- I can’t get this song out of your head: Scientists in Berlin fitted guitarists with electrodes while they played duets and found that when the players had to closely coordinate their playing, their brain activity became synchronized. But when they weren’t coordinated, when one was leading and the other following, their brain activity was distinctly different.
- One day the brain may actually understand itself: A team of MIT neuroscientists has developed a way to monitor how brain cells coordinate with each other to control specific behaviors, such as telling the body to move. Not only could this help them map brain circuits to see how tasks are carried out, but it also may provide insight into how psychiatric diseases develop.
- Deep thinking is so yesterday: The top prize in a recent competition sponsored by pharmaceutical giant Merck went to a team of researchers from the University of Toronto who used a form of artificial intelligence known as deep learning to help discover molecules that could become new drugs.
- So robots will learn how to stare at smart phones?: To teach robots how to function in social situations, scientists at Carnegie-Mellon University are tracking groups of people with head-mounted cameras to see when and where their eyes converge in social settings.
- Unfortunately, they keep trying to hide nuts: By using the deceptive behavior of birds and squirrels as a model, researchers at Georgia Tech have been able to develop robots that can trick each other.
Video bonus: Check out a demo of Spaun in action.
June 21, 2012
For all the speech lines we hear about jobs these days, rarely does anyone mention robots.
They do occasionally, but usually it’s saved for the “innovation” speeches. This is understandable. If you’re running for office, better to keep the two ideas separate: jobs are good because they’re, well, jobs, and robots are good because they mean progress. Mix the two together, though, and soon enough people will start asking how you’ll be able to create a lot of jobs if these really smart machines are doing more and more of the work.
No, I’m not going all Luddite on you. I’m in awe of machines and the remarkable things they can now do. But that’s the point. We’re not talking about the technology of the past, which clearly made humans more productive and allowed us to move into better-paying jobs requiring more specialized skills.
Now we’re creating machines that are much more than tools. They’re learning to think and adapt, and technologists such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, believe that within five to ten years, machines will be able to surpass the ability of humans to do routine work. As he told The Fiscal Times: “It’s the first time we’ve had this level of technology that allows machines to solve problems on their own, to interact with their environment, to analyze visual images, and to manipulate their environment based on that.”
Do robots know “Kumbaya?”
There are those, of course, who feel that Ford and other techno-downers have the human-robot thing all wrong. Futurist Ray Kurzweil, for one, is convinced that by mid-century, humans and robots will merge in some form. Maybe we’ll be able to live forever in a body of artificial parts. Or our consciousness will live on inside a computer, a kind of humanoid software. Whatever shape it takes, Kurzweil already has a name for it–the singularity.
Kurzweil’s take is that machines are gaining intelligence so quickly that it won’t be that long before they’re considerably more intelligent than humans. And he says we should be encouraged by this, not threatened. Technology will only continue to make our lives better, he contends, in ways we can’t yet imagine.
Five years ago, he likes to point out, who would have thought that hundreds of millions of people around the world would be walking around with devices as powerful as smartphones? Or that almost half a million people could have jobs in the business of making mobile apps?
Still, all of this doesn’t seem to bode well for people who don’t have the skills to play in that world. Earlier this month, Forbes, in an article titled “Is Your Job Robot-Proof?,” noted that “Today America needs 5 million fewer workers to produce a greater value of goods and services than it did in December 2007 when the recession began.”
And other recent news from the robot front provides more grist for worriers like Ford. Canon just announced that it has begun phasing out human workers in its plants, and that in a few years its cameras will be made solely by robots. Earlier this month DARPA, the R&D arm of the Pentagon, awarded $1.2 million to a Georgia start-up to develop machines that would allow U.S. factories to “produce garments with zero direct labor.” That might allow American clothing factories to actually undercut the costs of cheap labor in China.
Or maybe not. Foxconn, the giant Chinese company known both for manufacturing Apple products and for worker suicides, announced last year that it will create a “robot kingdom” of more than 1 million robots within the next few years.
If you could read my mind
But there’s been at least one recent development that’s more in line with Kurzweil’s vision of robot-human togetherness. Researchers at MIT say they’ve developed an algorithm that will enable robots to work side-by-side with humans. The software apparently will allow robots to learn the preferences of their human partners and anticipate their needs out on the factory floor. And if the machine has to move on to help another worker, it would be able to quickly adapt to him or her.
Julie Shah, head of the MIT research team, put it this way: “It’s an interesting machine-learning human-factors problem. Using this algorithm, we can significantly improve the robot’s understanding of what the person’s next likely actions are.”
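The team’s actual algorithm is more involved than this, but the flavor of “learn a partner’s preferences, then anticipate the next step” can be shown with a simple next-action predictor trained on observed work sequences. The task names and the counting model below are invented stand-ins, not the MIT method:

```python
# Illustrative stand-in for preference learning on a factory floor: count how
# often each action follows another in a worker's observed sequences, then
# predict the most likely next action so the robot can prepare for it.
from collections import defaultdict

observed_shifts = [
    ["drill", "bolt", "inspect", "drill", "bolt", "inspect"],
    ["drill", "bolt", "polish", "inspect"],
]

transitions = defaultdict(lambda: defaultdict(int))
for shift in observed_shifts:
    for current, nxt in zip(shift, shift[1:]):
        transitions[current][nxt] += 1

def predict_next(action):
    """Most frequently observed follow-up to `action` for this worker."""
    followers = transitions[action]
    return max(followers, key=followers.get) if followers else None

print(predict_next("bolt"))   # -> 'inspect' (seen twice, vs 'polish' once)
```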
Wonder if they’ll be able to smell fear?
Mo’ better machines
Here are more reports on robots rising:
- Snakes on a vein: Scientists at Carnegie-Mellon University in Pittsburgh have created tiny, snake-like robots that, armed with cameras, scissors, forceps and sensors, are helping doctors perform surgery on hearts and cancer tumors.
- The barista will not respond to lame attempts at flirtation: A start-up at the University of Texas has installed a coffee kiosk run by a robot barista in the campus’ academic center. Students can order their drink online or on their phone and receive a text when it’s good to go.
- So much for the career in sushi: Yes, the Japanese have been on to the robots-in-restaurants thing for a while. But now food machine manufacturer Suzumo has developed a sushi-making robot that can crank out 2,500 pieces in an hour.
- Don’t even think about pulling my finger: Researchers at the University of Southern California have given robots a sense of touch–one, in fact, that’s actually more sensitive than a human’s finger. Sensors can even tell where and in which direction forces are applied to the robot’s fingertip.
- That’s nice, but it still doesn’t do windows: Roomba, the king of household robots, is going wireless. iRobot announced earlier this week that its new Roomba 790, which retails for a mere $699, will come with a “wireless command center” that, among other things, will allow you to schedule it to clean your house while you’re not home.
Video bonus: From the land that nailed Robot Cute long ago comes i-SODOG, a robot pup that shakes, dances, responds to voice commands and can be trained through your smartphone. Ignore the background din of little awestruck Japanese kids. This is a toy you’ll want to take to work, to dates, to job interviews. How could they not hire you?