November 1, 2013
All of us have had a teacher who seemed to have eyes in the back of their head. Even while facing the blackboard, they saw everything: every note being passed, every answer being copied, every face being made.
Or at least it seemed that way. All they really had to do was guess right a few times about what was going on behind their backs and, well, that is how classroom legends are made.
But what if you took all the guessing out of the picture? What if cameras focused on every kid in the class? That’s what a New York company named SensorStar Labs has in mind, although the point would not be to catch miscreants, but rather to help teachers determine when they’ve lost the class.
Here’s how it would work. Using facial recognition software called EngageSense, computers would apply algorithms to what the cameras have recorded during a lecture or discussion to interpret how engaged the students have been. Were the kids’ eyes focused on the teacher? Or were they looking everywhere but the front of the class? Were they smiling or frowning? Or did they just seem confused? Or bored?
Teachers would be provided a report that, based on facial analysis, would tell them when student interest was highest or lowest. Says SensorStar co-founder Sean Montgomery, himself a former teacher: “By looking at maybe just a couple of high points and a couple of low points, you get enough takeaway. The next day you can try to do more of the good stuff and less of the less-good stuff.”
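SensorStar hasn’t published how EngageSense actually works, but the broad strokes are easy to imagine. Here’s a minimal sketch, in Python, of how the two cues the company mentions, gaze and expression, might be rolled up into the kind of score such a report could summarize. The weights and labels here are invented for illustration:

```python
# A toy engagement score combining gaze and expression.
# All weights and mood labels are invented, not SensorStar's.
def engagement_score(looking_at_teacher: bool, expression: str) -> float:
    """Gaze carries most of the weight; expression fills in the rest."""
    gaze = 1.0 if looking_at_teacher else 0.0
    mood = {"smiling": 1.0, "neutral": 0.6, "confused": 0.4, "bored": 0.0}
    return 0.7 * gaze + 0.3 * mood.get(expression, 0.5)

# One moment in a three-student classroom.
classroom = [(True, "smiling"), (True, "neutral"), (False, "bored")]
scores = [engagement_score(gaze, face) for gaze, face in classroom]
print(sum(scores) / len(scores))  # class-wide engagement at this moment
```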
No doubt some parents are going to have a lot of questions about what happens to all that video of their kids’ faces. But Montgomery is confident that most will agree to let their children be videotaped when they see how much it helps teachers polish their skills.
He’s convinced that in five years, teachers all over the country will be using it. First, though, he has to prove that the SensorStar algorithms can truly interpret the workings of young minds based simply on eye movement and facial expression.
That, of course, assumes teachers will jump right on board. Which is hardly a sure thing, given the response last year to a report that the Bill and Melinda Gates Foundation was helping to fund the development of sensor bracelets that could, in theory at least, track a student’s engagement level.
The wrist devices are designed to send a small current across the skin and then measure subtle changes in electrical charges as the nervous system responds to stimuli. These bracelets have been used in tests to gauge how consumers respond to advertising, and the thinking goes that if they can tell you how excited someone gets while watching a car ad, they can give you a sense of how jazzed a kid can get about fractions. (Or not.)
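What the bracelets measure is usually called electrodermal activity. As a rough illustration of the idea (not the actual method used in those advertising tests), sudden rises in skin conductance can be flagged as moments of arousal:

```python
# Illustrative only: flag moments of heightened arousal in a
# skin-conductance trace (in microsiemens) by looking for rapid
# rises between samples. Real electrodermal-activity analysis
# involves much more careful filtering than this.
def arousal_events(conductance, threshold=0.05):
    """Return indices where conductance jumps by more than
    `threshold` microsiemens between consecutive samples."""
    return [
        i for i in range(1, len(conductance))
        if conductance[i] - conductance[i - 1] > threshold
    ]

trace = [2.00, 2.01, 2.02, 2.10, 2.30, 2.31, 2.30, 2.50]
print(arousal_events(trace))  # -> [3, 4, 7]
```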
Not so fast, snapped skeptics. They were quick to point out that just because a second grader is excited doesn’t mean he or she is learning something. And while the bracelets’ boosters argue that their purpose is to help teachers, critics say that no one should be surprised if the sensors eventually are used to evaluate them. Some teachers suggested that they might have to work random screams into their lesson plans to keep the excitement level high.
Ultimately, it comes down to whether, like Bill Gates, you believe that accumulating and analyzing data from classroom behavior is the key to applying science to the learning process, or whether you think that teaching is more art than science, and that the connection between teachers and students is too complex and nuanced to be measured through a collection of data points.
Who’s your data?
- And you will not eat a salad your first six months in college: More and more colleges are using predictive analytics to give students a good idea of how they’ll fare in a class before they even sign up for it. By combining data from a student’s own academic record with data from students who have already taken the class, advisers can predict with increasing accuracy how likely a particular student is to succeed or fail (a minimal sketch of the idea follows this list).
- Please like this investment: Last week Facebook founder Mark Zuckerberg made his first investment in a startup company—he joined a team of investors putting $4 million in seed money behind a Massachusetts company named Panorama Education. It crunches data from surveys it conducts for K-12 schools, on subjects ranging from why some promising students end up failing to why bullying is particularly prominent among ninth grade boys.
- Acing the tests: A smartphone app called Quick Key has an optical scanner that can quickly grade SAT-style bubble answer sheets. Then it uploads the results to teachers’ electronic grade books and analyzes the data.
- Apple-picking time: Earlier this week, Apple CEO Tim Cook announced that iPads make up 94 percent of the tablets now used in schools. The company’s sales have slowed in the consumer market, so it’s been making a big push into education by offering discounts for bulk purchases.
- And they probably drew outside the lines: A new study from Michigan State University found that people who were involved in artistic activities while they were in school tended to be more innovative when they grew up—specifically that they were more likely to generate patents and launch businesses as adults.
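As promised above, here’s a minimal sketch of how that kind of course-success prediction can work. The features and numbers are invented, and real systems use far richer data, but the shape is the same: train a classifier on past students, then ask it about a new one.

```python
# A sketch of course-success prediction, not any college's actual model:
# predict pass/fail from a student's prior GPA and the grade earned in
# the course's prerequisite. Uses scikit-learn.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: [prior_gpa, prereq_grade], 1 = passed.
X = [[3.8, 3.5], [3.2, 3.0], [2.1, 1.9], [3.9, 3.7],
     [2.4, 2.0], [1.8, 2.2], [3.5, 3.1], [2.0, 1.5]]
y = [1, 1, 0, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# An adviser could then quote a probability of success to a student
# who is considering the class.
print(model.predict_proba([[2.6, 2.3]])[0][1])  # P(pass)
```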
Video bonus: Bill Gates offers his take on how he thinks teachers should be given feedback.
Video bonus bonus: Here’s a different twist on facial recognition in the classroom.
July 11, 2013
“There is no doubt that over time, people are going to rely less and less on passwords. People use the same password on different systems, they write them down and they really don’t meet the challenge for anything you want to secure.”
None other than Bill Gates said this. Back in 2004.
People in the business of keeping data secure will tell you that passwords should have gone the way of dial-up Internet by now. Sure, back in the day, when we only needed them for two or three websites and hackers weren’t nearly so diabolical, we could get away with using the same “123456” password for everything, without worrying that someone on the other side of the world was a click away from emptying our bank accounts.
Ah, sweet innocence. Now, we have an average of 24 different online accounts, for which we use at least six different passwords. And we need them for tablets and smartphones, too. If we’ve heeded the security gods (although most of us haven’t), we’ve abandoned the memorably quaint for strange, long combos of numbers, upper- and lower-case letters, and symbols that dare you to remember them. (Then again, most of us don’t seem to have a knack for this password thing, considering that year after year, the world’s most popular password is still the word “password.”)
Not that conjuring up the perfect password guarantees immunity from code crackers. Just last week the giant game company Ubisoft admitted that its database had been breached and advised those with Ubisoft accounts to change their passwords immediately. Last summer’s big cybersecurity caper was a hack of LinkedIn, in which more than 6 million hashed passwords were exposed.
It’s time, it would seem, for a better idea.
So, who figures to make the first big splash in the post-password world? Right now, a lot of the betting is on Apple, with speculation that the killer feature of the iPhone 5S coming out later this year will be a fingerprint scanner, perhaps embedded under the home button. Some Apple watchers think the iWatch, also expected on the market by the end of 2013, will likewise come with scanner capabilities that would allow the device to verify the user’s identity. Apple tipped its hand last year when it paid $356 million for AuthenTec, a company that develops fingerprint scanners.
Other big names pushing for the password’s demise are Google and PayPal, two of the key players in an industry group known as the FIDO (Fast IDentity Online) Alliance. FIDO isn’t boosting any particular approach to identity recognition; mainly it plans to set industry standards. But it is promoting what’s known as two-step verification as a move in the right direction.
This is when you’d be identified by a combination of “something you know”—such as a password—with “something you have”—such as a token that plugs into your device’s USB port—or “something you are”—such as your fingerprint. This combo of a password and a device you carry around with you—Google security experts have suggested a log-in finger ring—would be a lot safer than a simple password, and would let you use an easy-to-remember password, since the account can’t be hacked without your ring or your fingerprint.
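One widely deployed version of “something you have” (it predates FIDO and isn’t specific to it) is the time-based one-time password, or TOTP: your device and the server share a secret, and each independently derives a short code from that secret and the current time. A standard-library sketch:

```python
# A sketch of TOTP (RFC 6238) using only Python's standard library.
# The shared secret below is, obviously, a placeholder.
import hmac, hashlib, struct, time

def totp(secret: bytes, interval: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

shared_secret = b"not-a-real-secret"
print(totp(shared_secret))  # same code on device and server for ~30s
```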
And once fingerprint sensors or face and voice recognition software become more common, it will be that much easier for passwords to simply fade away.
That feels inevitable to Michael Barrett, chief information-security officer of PayPal and president of FIDO. “Consumers want something that’s easy to use and secure,” he says. “Passwords are neither.”
A fingerprint scanner on your phone is only the beginning. There are a number of other inventive, and yes, even bizarre ideas for replacing passwords. Among them:
- Coming soon to a stomach near you: Let’s start strange. At a conference in late May, Regina Dugan, head of advanced research at Motorola, suggested that one day you’ll be able to take a pill every day that would verify your identity to all of your devices. The pill would have a tiny chip inside, and when you swallow it, the acids in your stomach would power it up. That creates a signal in your body which, in essence, becomes the password. You could touch your phone or your laptop and be “authenticated in.” No, it’s not happening any day now, but the FDA has already approved its precursor—a pill that can send information to your doctor from inside your body. In other words, it’s a lot more plausible than it sounds.
- So, how about a tattoo that spells “password”?: But that’s not all Dugan projected for the future. She also showed off an electronic tattoo. Motorola, now owned by Google, is working with a company named MC10, which has developed this “stretchable” tattoo with its own antenna and sensors embedded in it. It’s so thin, it can flex with your skin. And it would serve as your password, communicating with your devices and verifying that you are who you say you are.
- Now, what are all these keys for?: Back to the present. A Canadian company called PasswordBox is now offering a free app that remembers and automatically enters all your passwords across all your platforms. It signs you into websites, logs you into apps, and lets you securely share your digital keys with friends and loved ones, all through an app for your smartphone and a Chrome browser extension for your desktop. Its pitch is one-click login everywhere (a sketch of how such password vaults generally work follows this list).
- Would my heart lie?: Another Canadian company called Bionym is building its business around the fact that heartbeats, like fingerprints, are unique. Its approach is to turn your heartbeat into a biometric pass code that’s embedded in a wristband which, in turn, uses Bluetooth to let your machines know you’re the real deal.
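As for how apps like PasswordBox generally work under the hood: the usual design stretches one master password into an encryption key and uses that key to lock the whole vault. The sketch below illustrates that common pattern, not PasswordBox’s actual implementation, and uses the third-party cryptography package:

```python
# A sketch of the standard password-vault pattern: derive a key from
# one master password, then encrypt the vault with it. Illustrative,
# not PasswordBox's implementation.
import base64, hashlib, json
from cryptography.fernet import Fernet

def vault_key(master_password: str, salt: bytes) -> bytes:
    """Stretch the master password into a Fernet-compatible key."""
    raw = hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                              salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet wants base64 of 32 bytes

salt = b"per-user-random-salt"            # stored alongside the vault
key = vault_key("correct horse battery staple", salt)
vault = {"example.com": "hunter2", "bank.example": "s3cr3t!"}

token = Fernet(key).encrypt(json.dumps(vault).encode())   # what's stored
print(Fernet(key).decrypt(token).decode())                # what you see
```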
Video bonus: Let’s go back to the future with John Chuang, a researcher at the UC Berkeley School of Information. He’s working on the idea of allowing people to verify their identities through their brain waves. Okay, at least hear him out.
Video bonus bonus: The Internet Password Minder is a stroke of…something. Even Ellen DeGeneres was impressed, in a funny way.
How You Type Could Become Your New Password
May 22, 2013
As much time as we spend with our cell phones and laptops and tablets, it’s still pretty much a one-way relationship. We act, they respond. Sure, you can carry on a conversation with Siri on your iPhone, but while she’s quick, it hardly qualifies as playful banter. You ask questions, she gives answers.
But what if these devices could really read our emotions? What if they could interpret every little gesture, every facial cue, so that they could gauge our feelings as well as, maybe better than, our best friends? And what if they then responded, not with information, but with what might pass for empathy?
We’re not there yet, but we’re quickly moving in that direction, driven by a field of science known as affective computing. It’s built around software that can measure, interpret and react to human feelings. This might involve capturing your face on camera and then applying algorithms to every aspect of your expressions to try to make sense of each smirk and chin rub. Or it might involve reading your level of annoyance or pleasure by tracking how fast or with how much force you tap out a text or whether you use emoticons. And if you seem too agitated–or drunk–you could get a message suggesting that you might want to hold off pressing the send icon.
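That last idea is concrete enough to sketch. Here’s a toy version of an agitation check based on typing behavior; the thresholds are invented, and real affective-computing systems would draw on far more signals:

```python
# A toy version of the keystroke idea described above (an illustration,
# not any product's algorithm): infer agitation from typing speed and
# backspace rate before a message goes out. All thresholds are made up.
def looks_agitated(chars_typed: int, seconds: float,
                   backspaces: int) -> bool:
    """Crude heuristic: very fast typing with lots of corrections."""
    speed = chars_typed / seconds            # characters per second
    error_rate = backspaces / max(chars_typed, 1)
    return speed > 7.0 and error_rate > 0.15

if looks_agitated(chars_typed=240, seconds=30, backspaces=40):
    print("You seem worked up. Send anyway?")
```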
Given how difficult it is for us humans to make sense of other humans, this notion of programming machines to read our feelings is no small challenge. But it’s picking up speed, as scientists sharpen their focus on teaching devices emotional intelligence.
Every move you make
One of the better examples of how affective computing can work is the approach of a company called, appropriately, Affectiva. It records expressions and then, using proprietary algorithms, scrutinizes facial cues, tapping into a database of almost 300 million frames of human facial expressions. The software has been refined to the point where it can associate various combinations of those facial elements with different emotions.
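Affectiva’s algorithms are proprietary, but the underlying pattern-matching idea can be sketched simply: reduce a face to a handful of measured features, then label it with the closest known emotional profile. The features and profiles below are invented for illustration:

```python
# A toy nearest-profile emotion classifier. The two features and the
# profile values are invented; real systems track far more facial cues.
import math

# Hypothetical per-emotion feature centroids: (brow_raise, mouth_curve)
EMOTION_PROFILES = {
    "joy":      (0.2,  0.9),
    "surprise": (0.9,  0.3),
    "sadness":  (0.1, -0.6),
}

def classify(features):
    """Return the emotion whose profile is closest to the features."""
    return min(
        EMOTION_PROFILES,
        key=lambda e: math.dist(features, EMOTION_PROFILES[e]),
    )

print(classify((0.3, 0.8)))  # -> "joy"
```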
When it was developed at M.I.T.’s Media Lab by two scientists, Rosalind Picard and Rana el Kaliouby, the software, known as Affdex, was designed to help autistic children communicate better. But it clearly had loads of potential in the business world, and so M.I.T. spun the project off into a private company. It has since raised $21 million from investors.
So how is Affdex being used? Most often, it’s watching people watching commercials. It records people as they view ads on their computers–don’t worry, you have to opt in for this–and then, based on its database of facial cues, evaluates how the viewers feel about what they’ve seen. And the software doesn’t provide just an overall positive or negative verdict; it breaks down the viewers’ reactions second by second, which lets advertisers identify, with more precision than ever before, what works in a commercial and what doesn’t.
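What would a second-by-second readout look like downstream of the face analysis? Something like this sketch, which assumes each video frame has already been scored for “valence,” meaning how positive the viewer’s expression is, on a scale from -1 to 1:

```python
# A sketch of the second-by-second breakdown (not Affdex's pipeline).
# Assumes per-frame valence scores have already been extracted.
from statistics import mean

def per_second_trace(frame_valence, fps=30):
    """Collapse per-frame valence scores into one value per second."""
    return [
        round(mean(frame_valence[i:i + fps]), 2)
        for i in range(0, len(frame_valence), fps)
    ]

# Three seconds of video: neutral, then a smile, then a frown.
frames = [0.0] * 30 + [0.8] * 30 + [-0.5] * 30
print(per_second_trace(frames))  # -> [0.0, 0.8, -0.5]
```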
It also is able to see that while people say one thing, their faces can say another. During an interview with the Huffington Post, el Kaliouby gave the example of the response to an ad for body lotion that aired in India. During the commercial, a husband playfully touches his wife’s exposed stomach. Afterwards, a number of women who had watched it said they found that scene offensive. But, according to el Kaliouby, the videos of the viewers showed that every one of the women responded to the scene with what she called an “enjoyment smile.”
She sees opportunities beyond the world of advertising. Smart TVs could be that much smarter about what kind of programs we like if they’re able to develop a memory bank of our facial expressions. And politicians would be able to get real-time reactions to each line they utter during a debate and be able to adapt their messages on the fly. Plus, says el Kaliouby, there could be health applications. She says it’s possible to read a person’s heart rate with a webcam by analyzing the blood flow in his or her face.
“Imagine having a camera on all the time monitoring your heart rate,” she told the Huffington Post, “so that it can tell you if something’s wrong, if you need to get more fit, or if you’re furrowing your brow all the time and need to relax.”
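The webcam heart-rate trick el Kaliouby mentions works because blood flow causes tiny color changes in facial skin. Here’s a bare-bones sketch of the signal processing (real systems do far more filtering and face tracking): average the green channel over the face in each frame, then find the dominant frequency of that signal.

```python
# A sketch of webcam pulse estimation from facial color changes.
# Assumes a face detector has already produced per-frame mean green
# intensities; real systems are considerably more robust.
import numpy as np

def heart_rate_bpm(green_means, fps=30.0):
    """Estimate pulse from per-frame mean green intensity of the face."""
    signal = np.asarray(green_means) - np.mean(green_means)
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs > 0.7) & (freqs < 4.0)        # plausible 42-240 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# Synthetic 10-second trace with a 1.2 Hz (72 bpm) pulse component.
t = np.arange(0, 10, 1 / 30.0)
trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(heart_rate_bpm(trace)))  # -> 72
```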
So what do you think, creepy or cool?
Here are five other ways machines are reacting to human emotions:
- And how was my day?: Researchers at the University of Cambridge have developed an Android mobile app that monitors a person’s behavior throughout the day, using incoming calls and texts, plus social media posts to track their mood. The app, called “Emotion Sense,” is designed to create a “journey of discovery,” allowing users to have a digital record of the peaks and valleys of their daily lives. The data can be stored and used for therapy sessions.
- And this is me after the third cup of coffee: Then there’s Xpression, another mood-tracking app created by a British company called EI Technologies. Instead of relying on people in therapy to keep diaries of their mood shifts, the app listens for changes in a person’s voice to determine if they are in one of five emotional states: calm, happy, sad, angry or anxious/frightened. It then keeps a list of a person’s moods and when they change. And, if the person desires, this record can automatically be sent to a therapist at the end of every day.
- What if you just hate typing on a phone?: Scientists at Samsung are working on software that will gauge your frame of mind by how you type out your tweets on your smartphone. By analyzing how fast you type, how much the phone shakes, how often you backspace mistakes, and how many emoticons you use, the phone should be able to determine if you’re angry, surprised, happy, sad, fearful, or disgusted. And based on what conclusion it draws, it could include with your tweet the appropriate emoticon to tip off your followers to your state of mind.
- Just don’t invite your friends over to watch: Using a sensor worn on the wrist and a smartphone camera worn around the neck, researchers at M.I.T. have created a “lifelogging” system that collects images and data designed to show a person which events represented their emotional highs and lows. The system, called Inside-Out, includes a bio-sensor in a wristband that tracks heightened emotions through electrical charges in the skin while the smartphone tracks the person’s location and takes several photos a minute. Then, at the end of the day, the user can view their experiences, along with all the sensor data.
- Your brow says you have issues: This probably was inevitable. Researchers at the University of Southern California have created a robotic therapist that not only is programmed to encourage patients with well-timed “Uh-huhs,” but also is expert, using motion sensors and voice analysis, at interpreting a patient’s every gesture and voice inflection during a therapy session.
Video bonus: Want to see how bizarre this trend of devices reading human emotions can get? Check out this promotion of Tailly, a mechanical tail that picks up your level of excitement by tracking your heart rate and then wags appropriately.
January 11, 2013
Since the beginning of mankind, we’ve wanted our kids to get smarter. Since the beginning of the 21st century, we’ve wanted our phones to get smarter.
So when are we going to start wanting our TVs to get smarter? Or will we always be content with them being dumb, as long as they’re big and dumb? Okay, maybe not dumb, but most of us don’t yet feel a compelling need to have our TVs think like computers, as long as the picture looks pretty up there on the wall.
Which always makes things interesting at the Great Gadgetpalooza also known as the Consumer Electronics Show (CES). For the past several years, the big electronics companies that focus on hardware, such as Samsung and Panasonic, and the big tech companies that focus on software, such as Google, have been rolling out nifty products at the annual Las Vegas spectacle with the promise that this is the year that Smart TV goes mainstream.
Boob tube no more
And so it’s been at this year’s version of CES, which ends today. Samsung has done its part to convince us that the time has come for us to love TVs for their brains by unveiling what it calls its S-Recommendation engine.
It’s software that, as Samsung puts it, not only understands what you like, but recommends things it thinks you’ll like. (Sure, Amazon’s been doing this for years, but this is your big, dumb TV we’re talking about.) And it doesn’t just suggest TV shows; it can also throw in streaming programming from the Web, or even video you’ve shot on your smartphone.
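Samsung hasn’t said what’s inside the S-Recommendation engine, but a minimal sketch of the standard “people who watched what you watched also watched...” approach looks like this (the viewing data is invented):

```python
# A toy user-based recommender, illustrating the general technique
# rather than Samsung's engine. Viewing histories are sets of shows.
def recommend(target, histories):
    """Suggest shows watched by the viewers who overlap most with you."""
    seen = histories[target]

    def overlap(other):
        return len(seen & histories[other])

    neighbors = sorted((u for u in histories if u != target),
                       key=overlap, reverse=True)
    suggestions = set()
    for u in neighbors[:2]:        # the two most similar viewers
        if overlap(u) > 0:         # skip viewers with nothing in common
            suggestions |= histories[u] - seen
    return suggestions

viewing = {
    "you":   {"Downton Abbey", "Sherlock"},
    "user2": {"Downton Abbey", "Sherlock", "Call the Midwife"},
    "user3": {"Top Gear"},
}
print(recommend("you", viewing))  # -> {'Call the Midwife'}
```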
The goal ultimately is to get you to do all those things you’re now doing on your smartphone or your tablet–say, watch Hulu or Skype with a family member or check out your Facebook page–on your TV instead. To encourage that behavior, Samsung has revamped its Smart Hub so you can flip through all of your entertainment options in five different index screens–one that tells you what’s on regular old TV now or soon, another that lists movies and on-demand TV, a third that pulls in photos or music or video stored on any other devices around the house, a fourth where you can Skype or pull up Facebook and a fifth that provides access to any apps you’ve downloaded.
And neither of the above requires pushing a lot of buttons on a remote. The S-Recommendation engine responds to voice commands and the Smart Hub is designed to be controlled with hand gestures.
For its part, Panasonic has rolled out a feature it calls My Home Screen, which allows each member of your family to create his or her own homepage on the TV, where easy access is provided to their favorite digital content, streaming video and apps. Some of the company’s Viera models actually come with their own cameras that tell the TV who turned it on. And as a smart TV should, it dutifully brings up that person’s home screen.
Plus, Panasonic unveiled “Swipe and Share 2.0”, which lets users move photos from a tablet or phone to a big TV screen, where they can then be edited with a touch pen.
But can you love a TV?
So that seals it, right? This must be the year when TVs take back center stage, especially now that they’re finally learning to care about our needs, right?
Maybe not. We’ve built some pretty strong personal connections to our cell phones and tablets. And a lot of people think it’s going to take a while for us to develop that kind of bond with a TV, no matter how smart it is.
As Greg Stuart, CEO of the Mobile Marketing Association, told Ad Age earlier this week: “People don’t have that kind of interactive relationship with their TV. The TV on the wall is a family device. It’s a multi-user device. If I want to share something, it’s going to be with a personal device, and that’s going to be my tablet or my mobile.”
TV or Not TV?
Here are other recent TV innovations:
- Robert, 6th Earl of Grantham, meet Tony Soprano: One day, thanks to Samsung, two people will be able to watch full-screen versions of Downton Abbey and The Sopranos reruns at the same time. By adapting 3D technology, the company has created a TV that can display a different, full-resolution image to each viewer, depending on whether they’re sitting to the left or the right of the screen. Of course, both people would have to wear special glasses that come with headphones so each hears only the sound for his or her show, but is that such a big price to pay for domestic peace?
- Read my lips. No more Gangnam Style: LG, the other South Korean TV giant, has upgraded its “Magic Remote” so that it now responds to natural language. You say the name of a show or even something like “videos with Gangnam-style dancing,” and your choice pops up on the screen.
- I got my MoVo workin’: Also at CES, the Chinese TV manufacturer TCL showed off an HD TV called MoVo that uses facial recognition software to identify who’s watching and then make programming suggestions customized for that person.
- Okay, who blinked?: Meanwhile, Haier, another Chinese company, has developed a technology it calls Eye Control TV where, yes, you can change channels by moving your eyes.
- Ah, to be 65 and only see ads for meds: It was only a matter of time. A company called Gracenote will soon begin trials on a technology that, based on your viewing habits and personal data, will personalize the TV ads you see. Isn’t that special?
Video bonus: You didn’t make it to the big electronics show this year? Not to worry. Here’s the Samsung demo of its S-Recommendation engine. Remember, people tend to gush a lot at CES.
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme: the five human senses. Not that they’re saying that by 2018 we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines into thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to a combo smartphone/data-diving device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management group, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
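Smith’s beach example is simple enough to make runnable. Here’s a miniature version of the process he describes, an illustration rather than IBM’s system: represent each image by a color distribution, then train a classifier to separate beaches from cityscapes. The feature values below are invented.

```python
# A miniature of feature-based scene classification using scikit-learn.
# Features are a made-up color distribution: fractions of sand-colored,
# sky-blue, and gray pixels in each training image.
from sklearn.neighbors import KNeighborsClassifier

X = [
    [0.5, 0.4, 0.1],   # beach: lots of sand and sky
    [0.6, 0.3, 0.1],   # beach
    [0.1, 0.2, 0.7],   # cityscape: mostly gray
    [0.0, 0.3, 0.7],   # cityscape
]
y = ["beach", "beach", "city", "city"]

model = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(model.predict([[0.55, 0.35, 0.10]]))  # -> ['beach']
```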
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough steps forward that computers will start playing a much bigger role in medical diagnosis, and that they could actually become better than doctors at spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. The next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smartphone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.