January 30, 2013
Admittedly, it’s a little hard to imagine smell scientists, but research published earlier this week has those who study the sense of smell taking sides.
It comes down to how our noses detect odors. The long-standing explanation is that our noses have receptors that respond based on the shapes of odor molecules. Different molecules fit together with different receptors, the thinking goes, and when a match is made, the receptor tips off the brain that our nose has picked up a whiff of coffee or perhaps a very different smell emanating from the bottom of our shoe.
But a conflicting and more exotic theory received a boost in the new study by researchers in Greece. It holds that we can also sense smells through quantum physics, in this case the vibration of odor molecules. As Mark Anderson posits at Scientific American, “Does the nose, in other words, read off the chemical makeup of a mystery odorant—say, a waft of perfume or the aroma of wilted lettuce—by ‘ringing’ it like a bell?”
I know what you’re thinking: What difference does this make as long as I can still smell bacon?
Sniffing out trouble
But actually it does matter, because the more we understand the process of smelling, the more effective we can be at recreating it in machines. In fact, just last month IBM, in its annual “5 in 5” forecast–a list of technologies it believes will hit the mainstream in five years–focused exclusively on the development of the five human senses in machines.
To mimic smelling, tiny sensors would be integrated into smartphones or other mobile devices and, much as a breathalyzer determines alcohol levels, they would gather data from the smell of your breath–detecting chemicals that humans wouldn’t perceive–and send it to a computer in your doctor’s office. The thinking is that eventually this would be a core component of home health care–the ability to “smell” diseases remotely, such as liver or kidney ailments, asthma or diabetes.
Or on a more basic level, as IBM’s Hendrik Hamann put it: “Your phone might know you have a cold before you do.”
IBM is also working with health care organizations to equip patient and operating rooms with sensors that can help address one of the biggest problems hospitals face today–how do you keep them hygienic? Hundreds of sensors will basically sniff for cleanliness, identifying the chemical compounds that create odors, some of which are undetectable by humans. The staff can say they cleaned a room; the sensors will know if and when they did.
Every breath you take
The smell tests might even detect cancer. Last fall, in a study in the Journal of Thoracic Oncology, researchers from Israel and Colorado reported that breath analysis could distinguish between benign and malignant lung tumors with 88 percent accuracy. Plus, the breath test could determine the specific type and stage of the lung cancers.
And at the Cleveland Clinic, Dr. Peter Mazzone, director of the lung cancer program, is testing a sensor array that changes color when a patient’s breath passes over it. In a study of 229 patients, the test, using a machine developed by the California firm Metabolomx, was able to distinguish those with lung cancer with more than 80 percent accuracy.
Meanwhile, Mazzone and his team are collecting as many breath samples as possible from patients, both with and without lung cancer. The goal is to match breath patterns with physical conditions. “My vision,” Mazzone told the Wall Street Journal, “is being able to say, ‘This is a 60-year old with emphysema who smoked for 30 years—what’s the chance of there being cancer there?’ But we have to teach the device what it looks like first.”
Or, perhaps more accurately, what it smells like.
Here are other recent discoveries scientists have made about smell:
- Me, my smell and I: Research in Germany concluded that not only can we identify our own body odor, but that we prefer it. For the study, women were asked to select which of their armpit odors they liked more. They showed a clear preference for the one perfumed with a solution that included elements of their own scent.
- Can robots wear Axe?: The U.S. Navy is looking to use scent-sniffing robots to move 1,000-pound bombs on ships. The idea is that a human would control the lead robot and it would dispense the equivalent of a robot pheromone that a swarm of other robots would follow like army ants.
- I love the smell of gridlock in the morning: When people are anxious, their sense of smell becomes more acute, according to a recent study at the University of Wisconsin-Madison.
- Why your dog can sniff out a chicken leg from a block away: And from the University of Chicago comes research finding that animals are able to focus their sense of smell much as humans can focus their eyes. Through their finely honed sniffing techniques, they apparently can bring scents to receptors in different parts of the nose.
- There’s the rub: And finally, a study in the U.K. has found that thanks to a genetic variation, two percent of the population never has underarm body odor. Yet more than three-quarters of them still use deodorant because, well, that’s what people do.
Video bonus: Stuart Firestein, chairman of the biology department at Columbia University, tells you all you want to know about how our nose does its job.
Video bonus bonus: A Chinese airline that checks out the armpit odors of people interviewing to be pilots.
More from Smithsonian.com
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018, we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly-evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines to thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone/data dive device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
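Smith’s recipe–images in, features out, then a learned rule for telling scenes apart–can be sketched in a few lines of Python. This is a toy illustration with synthetic stand-in “images,” not anything IBM has described: color histograms serve as the features, and a simple nearest-centroid rule stands in for the learner.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Turn an RGB image into a feature vector: one normalized
    color histogram per channel, concatenated together."""
    feats = []
    for channel in range(3):
        hist, _ = np.histogram(image[..., channel], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

class NearestCentroidScene:
    """Learn one average feature vector per scene label, then tag new
    images with whichever label's centroid their features sit closest to."""
    def fit(self, images, labels):
        feats = np.array([color_histogram(im) for im in images])
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [feats[[lab == c for lab in labels]].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, image):
        dists = np.linalg.norm(self.centroids_ - color_histogram(image), axis=1)
        return self.labels_[int(np.argmin(dists))]

# Synthetic stand-ins: "beach" pixels skew sandy/bright, "city" pixels skew gray.
rng = np.random.default_rng(0)
beach = [np.clip(rng.normal([200, 180, 120], 30, (16, 16, 3)), 0, 255) for _ in range(20)]
city = [np.clip(rng.normal([90, 90, 95], 30, (16, 16, 3)), 0, 255) for _ in range(20)]

clf = NearestCentroidScene().fit(beach + city, ["beach"] * 20 + ["city"] * 20)
test_image = np.clip(rng.normal([200, 180, 120], 30, (16, 16, 3)), 0, 255)
print(clf.predict(test_image))  # beach
```

Real systems swap the histograms for richer features (texture, edges, motion) and the centroid rule for a trained classifier, but the shape of the pipeline is the same.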
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough forward steps that computers will, for instance, start playing a much bigger role in medical diagnosis, that they could actually become better than doctors when it comes to spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. Next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.
December 27, 2012
In the spirit of the post-holiday season, allow me to present my final list of 2012: six innovators who are pushing technology in fresh directions, some to solve stubborn problems, others to make our lives a little fuller.
Watch for more from all of them in the new year.
1. Keep your hands off my robot: We’ve all seen videos of adorably cute robots, but when you actually have to work with one, they apparently can be less than lovable. That’s where Leila Takayama comes in. She’s a social scientist with Willow Garage, a San Francisco area company that develops robots, and her job is to figure out how to get humans to connect with mechanical co-workers.
She’s seen cases where robots have gotten on people’s nerves so much that they park them in a closet. One of the keys, she’s found, is to make robots seem more fallible. Like having them shake their heads when they fail at something. Oddly enough, Takayama says, a reaction like that can make a robot “seem more competent.”
She’s worked on robots designed to help elderly people, recommending that the number of cameras on the robots’ heads be reduced because too many could make people uneasy. More recently, she’s been analyzing a robot called Project Texai, which is operated directly by humans, rather than running on its own. And she’s discovered some interesting things, such as how people who operate the robot don’t like it when other people stand too close to it or touch its buttons. “There comes a point for a lot of people when they feel as if the robot is their body.”
Another key question she’s wrestling with: Is it better to have a robot at eye level with a person when he or she is sitting or standing?
2. One day even lamp posts won’t be dumb: As Chris Harrison sees it, the world is full of surfaces, so why are we spending so much time touching little screens or tapping on cramped keyboards. Harrison, a researcher at Carnegie-Mellon University, has been a leader in finding ways to turn everyday objects–a couch, a doorknob, a glass of water–into interactive devices.
His approach is to use the natural conductivity of objects–or attach electrodes to those that aren’t–and connect them to a controller that responds to different types of signals. A couch, for instance, could be wired to turn on the TV if someone sits on it in a certain spot. Or you could turn off all the lights in your place by twisting the doorknob or tapping on a table. Almost anything with a surface could be connected to a computer and allow you to make things happen with simple gestures or touches.
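One published way to make a single wired object respond to “different types of signals” is swept-frequency capacitive sensing: each grip or touch changes the object’s electrical response across a range of frequencies, so the controller can match a new measurement against stored profiles. The sketch below is hypothetical–the gesture names, response profiles, and actions are all invented–but it shows the matching step:

```python
import numpy as np

def nearest_gesture(sweep, profiles):
    """Match a measured frequency sweep to the closest stored gesture profile."""
    return min(profiles, key=lambda g: np.linalg.norm(profiles[g] - sweep))

# Invented capacitive response profiles over a 1 kHz - 1 MHz sweep; in a
# real system these would be recorded during a short calibration phase.
freqs = np.linspace(1e3, 1e6, 50)
profiles = {
    "no_touch": np.exp(-freqs / 4e5),
    "one_finger": 0.8 * np.exp(-freqs / 4e5),
    "full_grip": 0.5 * np.exp(-freqs / 4e5),
}
actions = {"one_finger": "tv_on", "full_grip": "lights_off"}

# A noisy measurement of someone gripping the doorknob:
noisy_sweep = profiles["full_grip"] + np.random.default_rng(1).normal(0, 0.01, 50)
gesture = nearest_gesture(noisy_sweep, profiles)
print(actions.get(gesture))  # lights_off
```

The appeal of the approach is that the object itself stays dumb; all the gesture knowledge lives in the controller’s stored profiles.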
3. Finally, a tatt for Grandma: There’s no question that health tech is booming–although that’s not always a good thing, considering that health apps don’t always live up to their hype. But Nanshu Lu, an engineering professor at the University of Texas, has created a product that could have a huge impact on how we monitor what’s going on inside our bodies.
She has refined what are known as “epidermal electronics,” but basically they’re electronic tattoos that can track your vital signs, including your temperature, heart beat and brain and muscle activity. Lu has managed to develop ultra-thin, water-soluble silicon patches that contain tiny sensors and can actually bond with skin. No adhesives necessary. They last through showers and exercise, never losing their ability to gather your most personal data. The hope is that one day her tattoos will be able to treat diseases.
4. In phones we trust: When you’re out on the road or on vacation in a new place, it can get frustrating to have to search for info on your smart phone. Really, if your phone is so smart, shouldn’t it be able to anticipate your needs and feed you info as you need it, based on where you are and what time of day it is?
That’s the premise behind the mobile apps software developed by Flybits, brainchild of Hossein Rahnama, director of the Digital Media Zone at Toronto’s Ryerson University. Flybits is already being used at several Canadian airports and Toronto’s transit system to coordinate with a traveler’s itinerary and provide information that’s both personalized and contextually relevant, such as directions to the car rental counters or the gate to your connecting flight after you get off a plane.
The company has also developed software it calls Flybits Lite, which lets you know friends and other contacts who are taking in the same concert or watching the same movie you are.
5. Do you really want to know how many times you’ve ordered donuts?: It would be easy to dismiss the Memoto Camera as the epitome of 21st century self-indulgence. It’s a postage-stamp sized wearable camera that documents your life by taking two photos every minute, or roughly 2,000 pictures a day.
For most of us that’s one big load of digital tedium. Martin Kallstrom, the man behind the concept and CEO of the Swedish startup Memoto, would acknowledge as much. But he also knows how many memorable moments are missed–“the day your daughter took her first step, or that night you laughed the night away with friends.”
Clearly, he’s not alone in believing that a “lifelogging” camera is an idea whose time has come. He and his partners had hoped to raise $75,000 on Kickstarter. By the time the fundraising campaign ended earlier this month, online backers had pledged more than $550,000.
6. And no, it won’t fetch you a beer: For several years now, Steve Castellotti has been all about brain-powered machines. But his latest innovation, Puzzlebox Orbit, is taking the concept to the public. It’s a little helicopter you control with your mind.
Given that this is not something we do every day, it comes enclosed in a protective sphere so the rotor blades don’t chop up the furniture. It also comes with a device called the Puzzlebox Pyramid, which serves as a combination base/remote control unit for the chopper. But since your mind is doing the controlling, the Pyramid’s role is to wirelessly transmit your brain activity from a headset you wear. It also lets you know how you’re doing–a circle of LED lights on the Pyramid’s face is designed to reflect your level of concentration or relaxation.
Thanks to a funding boost from Kickstarter, Castellotti and his chief engineer and partner Hao Zhang plan to start selling the Puzzlebox Orbit for about $90 next year. But Castellotti believes it won’t become just another pricey tool that ends up in the basement. He sees it as a teaching tool that can be used in schools to introduce kids to neuroscience, and also as a way for people to start to become familiar with the potential of biofeedback.
To spur that process, the company will make its source code and hardware schematics available and encourage developers to hack away. For example, says Castellotti, a “motivated experimenter” might hack the Puzzlebox system so his TV would automatically change channels when his concentration level stays too low for too long. Say so long to vegging out.
Video bonus: Take a look at Chris Harrison’s most recent project, called Skinput. It involves the use of an armband with bio-acoustic sensors that can turn a body into a touch screen.
Video bonus bonus: And here’s a Puzzlebox Orbit tutorial that was part of the Kickstarter pitch for its nifty brain-controlled toy.
December 14, 2012
There are times when I wonder why so many scientists are spending so much time trying to recreate something as fickle and full of fogginess as the human brain.
But who am I kidding? Those dyspeptic moments inevitably pass, as anyone who’s been following this blog knows. Every few months, it seems, I’m back writing about the latest attempt to build machines that can learn to recognize objects or even develop cognitive skills.
And now there’s Spaun.
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mind-bogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about how the process works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its 2.5 million simulated neurons–our brains have 100 billion–are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
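That division of labor can be caricatured in a few lines of Python: a “basal ganglia” selector routes what the “eye” saw to one of several “cortical” routines. The task names and routing table here are invented for illustration–the real model does this with 2.5 million spiking neurons, not a dictionary lookup.

```python
# "Cortical" routines: each one handles the digits the "eye" reported.
def count_task(digits):
    return len(digits)

def copy_task(digits):
    return list(digits)

def recall_task(digits):
    # Report the digits back in presented order, as in a working-memory task.
    return list(digits)

CORTEX = {"count": count_task, "copy": copy_task, "recall": recall_task}

def basal_ganglia(task_cue, digits):
    """Select which cortical routine the current task cue routes input to."""
    return CORTEX[task_cue](digits)

seen = [3, 7, 7, 2]                    # what the "eye" just saw
print(basal_ganglia("count", seen))    # 4
print(basal_ganglia("copy", seen))     # [3, 7, 7, 2]
```

The point of the caricature: Spaun’s novelty is not any single task but the selector that lets one system switch among them.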
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.
Here are a few other recent developments in brain research and artificial intelligence:
- I can’t get this song out of your head: Scientists in Berlin fitted guitarists with electrodes as they played duets and found that when they had to closely coordinate their playing, their brain activity became synchronized. But when they weren’t coordinated, when one was leading and the other following, their brain activity was distinctly different.
- One day the brain may actually understand itself: A team of MIT neuroscientists has developed a way to monitor how brain cells coordinate with each other to control specific behaviors, such as telling the body to move. Not only could this help them map brain circuits to see how tasks are carried out, but it also may provide insight into how psychiatric diseases develop.
- Deep thinking is so yesterday: The top prize in a recent competition sponsored by pharmaceutical giant Merck went to a team of researchers from the University of Toronto who used a form of artificial intelligence known as deep learning to help discover molecules that could become new drugs.
- So robots will learn how to stare at smart phones?: To teach robots how to function in social situations, scientists at Carnegie-Mellon University are tracking groups of people with head-mounted cameras to see when and where their eyes converge in social settings.
- Unfortunately, they keep trying to hide nuts: By using the deceptive behavior of birds and squirrels as a model, researchers at Georgia Tech have been able to develop robots that can trick each other.
Video bonus: Check out a demo of Spaun in action.
November 5, 2012
A vote in tomorrow’s presidential election could be viewed one of two ways.
It’s either the culmination of months of weighing the arguments on countless issues and making a choice based on a commingling of knowledge and personal principle.
Or you voted Republican or Democratic because, to paraphrase accidental pundit Lady Gaga, you were born that way.
Okay, in the spirit of punditry, the latter is a bit of an oversimplification, but it does reflect the thinking of an emerging field called political neuroscience. Its focus has been on using brain scans to see if people of different political persuasions are different all the way down to their genes.
Or put more bluntly, do their brains work differently?
Right brain, left brain
The latest research came out last week, a study at the University of South Carolina that concluded that the brains of self-identified Democrats and Republicans aren’t hard-wired the same.
Specifically, the scientists found more neural activity in areas of the brain believed to be linked with broad social connectedness in Democrats (friends, the world at large) and more activity in areas linked with tight social connectedness in Republicans (family, country).
This was in line with what previous studies have suggested, that people who say they’re Democrats tend to take a more global view on issues while those who call themselves Republicans tend to see things through more of an American filter.
But the findings also ran counter to previous research suggesting Democrats are, by biological nature, more empathetic souls than Republicans. Not so, according to the South Carolina study; it’s just that Republicans are more likely to focus their empathy on family members or people they know.
That’s your amygdala talking
In case you missed it, a study that stirred up much debate last year–done at University College London–likewise zeroed in on apparent links between political beliefs and brain biology. It found that research subjects who considered themselves conservative tended to have a larger amygdala, the section of the brain in the temporal lobes that plays a major role in the processing of emotions.
Self-defined liberals, meanwhile, generally had a larger volume of gray matter in the anterior cingulate cortex, a part of the brain associated with coping with uncertainty and handling conflicting information.
One of the study’s authors, Ryota Kanai, cautioned against jumping to conclusions. The scientists found nothing to indicate that political orientation is encoded in the brain, or that brain structure alone can shape the way you vote. But this kind of research, he noted, suggests political beliefs may not develop solely from social experience, that they also could have biological roots.
I think, therefore I scan
Of course, not everyone is impressed with this line of inquiry. Plenty of critics deride the increasing value given to brain scans as scientific evidence. Steven Poole, writing recently on the New Statesman website, referred to it as “neurobollocks.” He argued:
“The human brain, it is said, is the most complex object in the known universe. That a part of it ‘lights up’ on an fMRI scan does not mean the rest is inactive; nor is it obvious what any such lighting-up indicates; nor is it straightforward to infer general lessons about life from experiments conducted under highly artificial conditions.”
And you can guess how this kind of research plays out in the political arena–from liberals claiming it shows they are, by biological definition, rational thinkers who embrace science and are open to outsiders and new ideas, to conservatives saying it reinforces their belief that their principles are deep and heartfelt and that they really are born patriots.
Or consider Chris Mooney, a writer who’s been tilling this ground for a while. Generally, he makes the case, as he did in a piece for The Atlantic earlier this year, that it’s important to understand that our brain wiring plays a role in why we can seem “impervious to facts, logic, and reason” from the other side. But then his book on the subject came out with a title that didn’t exactly genuflect to scientific impartiality: The Republican Brain: The Science of Why They Deny Science–and Reality.
Can’t we all just get along?
Okay, so let’s assume that there is a connection between biology and belief. You can bet political consultants will be all over this, looking for ways to refine messages and ads so they tap right into the brains of their party faithful.
Andrea Kuszewski, a researcher who has written about political neuroscience, would rather put a positive spin on what it could mean for politics. She says this kind of knowledge could help open up communication, or at least ease hostility between the country’s two major political parties.
“Each side is going to have to recognize that not everyone thinks like them, processes information like them, or values the same types of things,” she wrote last week. “With the state our country is in right now, I don’t think we have any choice but to cowboy up and do whatever needs to be done in order to reach some common ground.”
But Roger Newman-Norlund, author of the South Carolina study mentioned above, says that while getting people of opposing parties to understand why they don’t think alike is a good start, he’s not expecting a kumbaya moment any time soon.
“The brain differences could be a result of genetics, experiences, or a combination of both,” he said. “It takes a lot of effort to see the other side and we’re not going to wake up one day and all start getting along.”
Here’s other recent research into psychology and politics:
- Just don’t shake their baby’s hand: According to a new study published in the Journal of Cognitive Neuroscience, scientific evidence supports what every politician knows — people like and trust you more if you shake their hand.
- Why didn’t someone tell the campaigns about this?: A study at the University of Miami came to the conclusion that negative ads are most effective if they’re used in moderation. If they air too often, at least according to the research, they can bring a backlash.
- Text me maybe: Ten percent of people who donated to the presidential campaigns this year did so on their cell phones. Analysis by the Pew Research Center’s Internet and American Life Project also found that Democrats were more likely to contribute to President Obama’s campaign either online or through a cellphone, while supporters of Republican Mitt Romney tended to donate through traditional mail, by phone or in person.
- Right face: Using a computer program called FaceGen, UCLA researchers concluded that Republican congresswomen look more “feminine” than their Democratic counterparts.
- The height stuff: In case you didn’t realize it, the taller presidential candidate has won 58 percent of the time. Mitt Romney is 6’2, Barack Obama is 6’1.
Video bonus: When was the last time you toured your brain? Here’s a chance to get inside your head.
Video bonus bonus: And here are 10 things the brain does that it won’t help you understand.