February 25, 2013
A year and a half into his presidency, John F. Kennedy challenged U.S. scientists to get Americans to the moon by the end of the decade. In his recent State of the Union address, Barack Obama hinted at what could become his version of reaching for the moon–he’d like scientists to solve the mystery of the brain.
Obama’s mission would be a heavier lift.
He didn’t go into much detail, other than citing brain research as a stellar example of how government can “invest in the best ideas.” But last week a story in the New York Times by John Markoff filled in a lot of the blanks. Obama’s grand ambition is something called the Brain Activity Map–it’s already being referred to simply as BAM–and it would require a massive collaborative research effort involving neuroscientists, government agencies, private foundations and tech companies, with the truly daunting goal of figuring out how the brain actually generates thoughts, memories and consciousness.
An answer for Alzheimer’s?
The White House is expected to officially unveil its big plan as early as next month as part of its budget proposal. The speculation is that it could cost as much as $3 billion over the next 10 years.
Now, it may seem a strange time to be pitching projects with a $300 million-a-year price tag, what with the budget-hacking sequestration expected to kick in later this week. That’s why even though Obama was light on the details, he did make a point of comparing the brain-mapping mission to the Human Genome Project–a major research initiative financed by the federal government to map all of the genes in human DNA. It ultimately cost $3.8 billion, but it reached its goal two years early, in 2003, and through 2010, according to an impact study, returned $800 billion to the economy.
No question that BAM could have a profound impact in helping scientists understand what goes on in the brain to cause depression or schizophrenia or autism. And it certainly could be a boon to pharmaceutical companies that have spent billions, without luck, to find a cure for Alzheimer’s disease. Since 1998, there have been more than 100 unsuccessful attempts to find a treatment for Alzheimer’s, which, by 2050, is expected to affect 115 million people around the world.
It’s all about the tools
Clearly there are plenty of medical reasons to try to unravel the brain, but what, realistically, are the prospects? Sure, brain scans have helped scientists see which parts of the brain are more active during different types of behavior, but that’s a 30,000-foot view. It tells them next to nothing about how individual brain cells transmit information and even less about how neural networks transform that into behavior.
In recent years, researchers have made big strides in understanding how the brain is organized through the Human Connectome Project, funded by the National Institutes of Health. But that’s designed to create more of a static map of neural connections.
The next crucial step is to be able to see, in real time, how information is processed through those connections and which different neurons become part of that process. Or as Harvard biologist George Church, one of the scientists who proposed BAM in a paper last year, has explained it: “We don’t just want to see the wires, but also the messages going over the wires.”
The key is how quickly technology can be developed that will allow scientists to follow a thought process by recording every blip of every one of the thousands, and possibly millions, of neurons involved. Current technology enables them to record the activity of roughly 100 neurons at a time, way too small a slice of the neural network to help explain much of anything. But, as Greg Miller noted in a recent piece on the Wired website, several cutting-edge biological or nano-tools are in the works, including one that could “pack hundreds of thousands of nanowire electrodes into flexible sheets that conform to the surface of the brain and eavesdrop on neurons with minimal tissue damage.”
Is bigger really better?
A lot of neuroscientists will be thrilled if BAM gets funded. But not all. Some have already pointed out that you really can’t compare it to the Human Genome Project, nor the mission to the moon, for that matter. Both of those endeavors, while very challenging, had clearly definable goals. But how do you identify success for BAM? Would being able to record the activity of hundreds of thousands of neurons really explain how thinking happens? No one really knows.
Other scientists are concerned that BAM, with its high profile, could drain dollars from other neuroscience research. Some writers have even raised the specter of mind control, particularly since one of the government agencies that would be involved is DARPA, the Defense Department’s agency that funds experimental technology.
Gary Marcus, writing in The New Yorker, makes the case that a project like BAM might be more effective if it wasn’t so monolithic. He argues that it should be broken up into five smaller projects, each one focused on a different aspect of brain function.
But he also warns that should Congress balk at ponying up the money for a major neuroscience project, it runs the risk of sparking, ironically, a brain drain. In January, a group of European countries committed more than $1 billion to their own huge neuroscience endeavor called the Human Brain Project, which will try to simulate all the processes of a brain within a computer.
“Whether it meets its grand goal or not, the European project will certainly lead to a significant number of smaller scientific advances. If the U.S. doesn’t follow suit, we will lose our lead in neuroscience, and will likely be left playing catch-up in some of the biggest game-changing industries on the horizon, like human-level artificial intelligence and direct brain-computer interfaces–even though both fields originated in the United States.”
Here are some other recent findings from brain research:
- Of mice and men watching mice: Researchers at Stanford were able to follow the brain activity of mice in real time after lacing their brains with fluorescent proteins. They were able to watch which parts of their brains glowed as they ran around a cage.
- Does that mean a bird can get a song stuck in its head?: And a team of scientists at Duke University found that birds that can sing and mimic sounds have genes in their brains that can turn on and off in ways similar to human brains.
- She lights up a womb: For the first time, MRIs of developing human fetuses showed communication signals between different parts of their brains. Scientists at Wayne State University in Michigan hope their research will lead to early treatments for autism and ADHD.
- Nothing yet, though, on how foot gets in mouth: Researchers at the University of California, San Francisco, have mapped the process of speech, laying out the neural network that makes it happen, from the nerves that control the jaws, lips and tongue to those that manipulate the larynx.
- Talk about a protein boost: There’s a biological explanation for why women talk more than men. Studies have shown that women speak an average of 20,000 words a day, while men average about 7,000. According to a study published in the Journal of Neuroscience last week, it may be because they tend to have higher levels of a protein in their brain that’s been linked to verbal communication.
Video bonus: A BBC journalist gets a tour of the wiring of his own brain.
More from Smithsonian.com
January 7, 2013
Here in Washington we have heard of this thing you call “advance planning,” but we are not yet ready to embrace it. A bit too futuristic.
Still, we can’t help but admire from afar those who attempt to predict what could happen more than a month from now. So I was impressed a few weeks ago when the big thinkers at IBM imagined the world five years hence and identified what they believe will be five areas of innovation that will have the greatest impact on our daily lives.
They’ve been doing this for a few years now, but this time the wonky whizzes followed a theme–the five human senses. Not that they’re saying that by 2018, we’ll all be able to see, hear and smell better, but rather that machines will–that by using quickly evolving sensory and cognitive technologies, computers will accelerate their transformation from data retrieval and processing engines to thinking tools.
See a pattern?
Today, let’s deal with vision. It’s a logical leap to assume that IBM might be referring to Google’s Project Glass. No question that it has redefined the role of glasses, from geeky accessory that helps us see better to combo smartphone/data dive device we’ll someday wear on our faces.
But that’s not what the IBMers are talking about. They’re focused on machine vision, specifically pattern recognition, whereby, through repeated exposure to images, computers are able to identify things.
As it turns out, Google happened to be involved in one of last year’s more notable pattern recognition experiments, a project in which a network of 1,000 computers using 16,000 processors was, after examining 10 million images from YouTube videos, able to teach itself what a cat looked like.
What made this particularly impressive is that the computers were able to do so without any human guidance about what to look for. All the learning was done through the machines working together to decide which features of cats merited their attention and which patterns mattered.
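That kind of label-free learning can be sketched in miniature. The toy below is not the Google system (which used 16,000 processors and deep networks on YouTube frames); it just shows the core idea–an algorithm discovering two hidden groups in unlabeled data–using plain k-means on invented two-dimensional “features”:

```python
import random

random.seed(0)

# Two synthetic populations of 2-D "image features" standing in for
# cat-like and non-cat-like frames (values invented for illustration).
cats = [(random.gauss(1.0, 0.2), random.gauss(1.0, 0.2)) for _ in range(50)]
other = [(random.gauss(-1.0, 0.2), random.gauss(-1.0, 0.2)) for _ in range(50)]
data = cats + other

def kmeans(points, k=2, iters=20):
    """Plain k-means: the algorithm never sees a label."""
    centers = [points[0], points[-1]]  # deterministic start for reproducibility
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
            groups[nearest].append(p)
        # Move each center to the mean of the points assigned to it.
        centers = [(sum(p[0] for p in g) / len(g),
                    sum(p[1] for p in g) / len(g)) for g in groups]
    return groups

groups = kmeans(data)
# The discovered clusters line up with the two hidden populations.
print([len(g) for g in groups])  # → [50, 50]
```

Nobody told the algorithm there were cats and non-cats; the grouping emerges from the structure of the data alone, which is the (vastly simplified) spirit of the YouTube experiment.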
And that’s the model for how machines will learn vision. Here’s how John Smith, a senior manager in IBM’s Intelligent Information Management, explains it:
“Let’s say we wanted to teach a computer what a beach looks like. We would start by showing the computer many examples of beach scenes. The computer would turn those pictures into distinct features, such as color distributions, texture patterns, edge information, or motion information in the case of video. Then, the computer would begin to learn how to discriminate beach scenes from other scenes based on these different features. For instance, it would learn that for a beach scene, certain color distributions are typically found, compared to a downtown cityscape.”
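Smith’s beach-versus-cityscape recipe can be sketched in a few lines. This toy version stands in for real color histograms with two invented numbers per image (a “blue” fraction and a “gray” fraction) and uses the simplest possible discriminator, a nearest-centroid rule–not IBM’s actual system:

```python
import random

random.seed(1)

# Hypothetical stand-in features: (blue fraction, gray fraction) per image.
# Beach scenes skew blue; cityscapes skew gray.
def fake_scene(kind):
    if kind == "beach":
        return (random.gauss(0.7, 0.05), random.gauss(0.3, 0.05))
    return (random.gauss(0.3, 0.05), random.gauss(0.7, 0.05))

train = [(fake_scene("beach"), "beach") for _ in range(30)] + \
        [(fake_scene("city"), "city") for _ in range(30)]

# Learn the typical feature vector for each class from the examples.
def centroid(label):
    pts = [f for f, l in train if l == label]
    return tuple(sum(col) / len(pts) for col in zip(*pts))

centroids = {label: centroid(label) for label in ("beach", "city")}

# Classify a new scene by whichever learned centroid its features sit closer to.
def classify(features):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda l: dist(features, centroids[l]))

print(classify((0.75, 0.25)))  # a blue-heavy scene → beach
```

The “color distributions are typically found” phrase in Smith’s explanation is exactly what the centroids capture; real systems just do it with far richer features and far better discriminators.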
How smart is smart?
Good for them. But face it, identifying a beach is pretty basic stuff for most of us humans. Could we be getting carried away about how much thinking machines will be able to do for us?
Gary Marcus, a psychology professor at New York University, thinks so. Writing recently on The New Yorker’s website, he concludes that while much progress has been made in what’s become known as “deep learning,” machines still have a long way to go before they should be considered truly intelligent.
“Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like ‘sibling’ or ‘identical to.’ They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used.”
The folks at IBM would no doubt acknowledge as much. Machine learning comes in steps, not leaps.
But they believe that within five years, deep learning will have taken enough steps forward that computers will, for instance, start playing a much bigger role in medical diagnosis–that they could actually become better than doctors at spotting tumors, blood clots or diseased tissue in MRIs, X-rays or CT scans.
And that could make a big difference in our lives.
Seeing is believing
Here are more ways machine vision is having an impact on our lives:
- Putting your best arm forward: Technology developed at the University of Pittsburgh uses pattern recognition to enable paraplegics to control a robotic arm with their brains.
- Your mouth says yes, but your brain says no: Researchers at Stanford found that using pattern recognition algorithms on MRI scans of brains could help them determine if someone actually had lower back pain or if they were faking it.
- When your moles are ready for their close-ups: Last year a Romanian startup named SkinVision launched an iPhone app that allows people to take a picture of moles on their skin and then have SkinVision’s recognition software identify any irregularities and point out the risk level–without offering an actual diagnosis. The next step is to make it possible for people to send images of their skin directly to their dermatologist.
- Have I got a deal for you: Now under development is a marketing technology called Facedeals. It works like this: Once a camera at a store entrance recognizes you, you’re sent customized in-store deals on your smart phone. And yes, you’d have to opt in first.
- I’d know that seal anywhere: A computerized photo-ID system that uses pattern recognition is helping British scientists track gray seals, which have unique markings on their coats.
Video bonus: While we’re on the subject of artificial intelligence, here’s a robot swarm playing Beethoven, compliments of scientists at Georgia Tech. Bet you didn’t expect to see that today.
December 14, 2012
There are times when I wonder why so many scientists are spending so much time trying to recreate something as fickle and full of fogginess as the human brain.
But who am I kidding? Those dyspeptic moments inevitably pass, as anyone who’s been following this blog knows. Every few months, it seems, I’m back writing about the latest attempt to build machines that can learn to recognize objects or even develop cognitive skills.
And now there’s Spaun.
Staying on task
Its full name is the Semantic Pointer Architecture Unified Network, but Spaun sounds way more epic. It’s the latest version of a techno brain, the creation of a Canadian research team at the University of Waterloo.
So what makes Spaun different from a mindbogglingly smart artificial brain like IBM’s Watson? Put simply, Watson is designed to work like a supremely powerful search engine, digging through an enormous amount of data at breakneck speed and using complex algorithms to derive an answer. It doesn’t really care about how the process works; it’s mainly about mastering information retrieval.
But Spaun tries to actually mimic the human brain’s behavior and does so by performing a series of tasks, all different from each other. It’s a computer model that can not only recognize numbers with its virtual eye and remember them, but also can manipulate a robotic arm to write them down.
Spaun’s “brain” is divided into two parts, loosely based on our cerebral cortex and basal ganglia, and its simulated 2.5 million neurons–our brains have 100 billion–are designed to mimic how researchers think those two parts of the brain interact.
Say, for instance, that its “eye” sees a series of numbers. The artificial neurons take that visual data and route it into the cortex where Spaun uses it to perform a number of different tasks, such as counting, copying the figures, or solving number puzzles.
Soon it will be forgetting birthdays
But there’s been an interesting twist to Spaun’s behavior. As Francie Diep wrote in Tech News Daily, it became more human than its creators expected.
Ask it a question and it doesn’t answer immediately. No, it pauses slightly, about as long as a human might. And if you give Spaun a long list of numbers to remember, it has an easier time recalling the ones it received first and last, but struggles a bit to remember the ones in the middle.
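That U-shaped memory curve–strong at the start of the list, strong at the end, weak in the middle–is known as the serial-position effect, and even a toy model reproduces it. The decay and rehearsal numbers below are invented for illustration; this is not Spaun’s actual mechanism:

```python
# Toy serial-position model: each item's recall strength combines a
# recency trace that decays with time since presentation and a
# rehearsal bonus that favors items heard first.
def recall_strength(position, list_length, decay=0.7, rehearsal=0.5):
    recency = decay ** (list_length - 1 - position)  # strongest for last items
    primacy = rehearsal ** position                  # strongest for first items
    return recency + primacy

strengths = [recall_strength(i, 10) for i in range(10)]
worst = min(range(10), key=strengths.__getitem__)
print(worst)  # → 4, a middle position is hardest to recall
```

First and last items each get one strong source of support; middle items get neither, which is the pattern Spaun’s creators saw emerge from their model.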
“There are some fairly subtle details of human behavior that the model does capture,” says Chris Eliasmith, Spaun’s chief inventor. “It’s definitely not on the same scale. But it gives a flavor of a lot of different things brains can do.”
The fact that Spaun can move from one task to another brings us one step closer to being able to understand how our brains are able to shift so effortlessly from reading a note to memorizing a phone number to telling our hand to open a door.
And that could help scientists equip robots with the ability to be more flexible thinkers, to adjust on the fly. Also, because Spaun operates more like a human brain, researchers could use it to run health experiments that they couldn’t do on humans.
Recently, for instance, Eliasmith ran a test in which he killed off the neurons in a brain model at the same rate that neurons die in people as they age. He wanted to see how the loss of neurons affected the model’s performance on an intelligence test.
One thing Eliasmith hasn’t been able to do is to get Spaun to recognize if it’s doing a good or a bad job. He’s working on it.
Here are a few other recent developments in brain research and artificial intelligence:
- I can’t get this song out of your head: Scientists in Berlin attached electrodes to guitarists playing duets and found that when the pairs had to closely coordinate their playing, their brain activity became synchronized. But when they weren’t coordinated, when one was leading and the other following, their brain activity was distinctly different.
- One day the brain may actually understand itself: A team of MIT neuroscientists has developed a way to monitor how brain cells coordinate with each other to control specific behaviors, such as telling the body to move. Not only could this help them map brain circuits to see how tasks are carried out, but it also may provide insight into how psychiatric diseases develop.
- Deep thinking is so yesterday: The top prize in a recent competition sponsored by pharmaceutical giant Merck went to a team of researchers from the University of Toronto who used a form of artificial intelligence known as deep learning to help discover molecules that could become new drugs.
- So robots will learn how to stare at smart phones?: To teach robots how to function in social situations, scientists at Carnegie Mellon University are tracking groups of people with head-mounted cameras to see when and where their eyes converge in social settings.
- Unfortunately, they keep trying to hide nuts: By using the deceptive behavior of birds and squirrels as a model, researchers at Georgia Tech have been able to develop robots that can trick each other.
Video bonus: Check out a demo of Spaun in action.
June 21, 2012
For all the speech lines we hear about jobs these days, rarely does anyone mention robots.
They do occasionally, but usually it’s saved for the “innovation” speeches. This is understandable. If you’re running for office, it’s better to keep the two ideas separate: jobs are good because they’re, well, jobs, and robots are good because they mean progress. Mix the two together, though, and soon enough people will start asking how you’ll be able to create a lot of jobs if these really smart machines are doing more and more of the work.
No, I’m not going all Luddite on you. I’m in awe of machines and the remarkable things they can now do. But that’s the point. We’re not talking about the technology of the past, which clearly made humans more productive and allowed us to move into better-paying jobs requiring more specialized skills.
Now we’re creating machines that are much more than tools. They’re learning to think and adapt, and technologists such as Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, believe that within five to ten years, machines will be able to surpass the ability of humans to do routine work. As he told The Fiscal Times: “It’s the first time we’ve had this level of technology that allows machines to solve problems on their own, to interact with their environment, to analyze visual images, and to manipulate their environment based on that.”
Do robots know “Kumbaya”?
There are those, of course, who feel that Ford and other techno-downers have the human-robot thing all wrong. Futurist Ray Kurzweil, for one, is convinced that by mid-century, humans and robots will merge in some form. Maybe we’ll be able to live forever in a body of artificial parts. Or our consciousness will live on inside a computer, a kind of humanoid software. Whatever shape it takes, Kurzweil already has a name for it–the singularity.
Kurzweil’s take is that machines are gaining intelligence so quickly that it won’t be that long before they’re considerably more intelligent than humans. And he says we should be encouraged by this, not threatened. Technology will only continue to make our lives better, he contends, in ways we can’t yet imagine.
Five years ago, he likes to point out, who would have thought that hundreds of millions of people around the world would be walking around with devices as powerful as smart phones. Or that almost half a million people could have jobs in the business of making mobile apps.
Still, all of this doesn’t seem to bode well for people who don’t have the skills to play in that world. Earlier this month, Forbes, in an article titled, “Is Your Job Robot-Proof?” noted that, “Today America needs 5 million fewer workers to produce a greater value of goods and services than it did in December 2007 when the recession began.”
And other recent news from the robot front provides more grist for worriers like Ford. Canon just announced that it has begun phasing out human workers in its plants, and that in a few years its cameras will be made solely by robots. Earlier this month DARPA, the R&D arm of the Pentagon, awarded $1.2 million to a Georgia start-up to develop machines that would allow U.S. factories to “produce garments with zero direct labor.” That might allow American clothing factories to actually undercut the costs of cheap labor in China.
Or maybe not. Foxconn, the giant Chinese company known both for manufacturing Apple products and for worker suicides, announced last year that it will create a “robot kingdom” of more than 1 million robots within the next few years.
If you could read my mind
But there’s been at least one recent development that’s more in line with Kurzweil’s vision of robot-human togetherness. Researchers at MIT say they’ve developed an algorithm that will enable robots to work side-by-side with humans. The software apparently will allow robots to learn the preferences of their human partners and anticipate their needs out on the factory floor. And if the machine has to move on to help another worker, it would be able to quickly adapt to him or her.
Julie Shah, head of the MIT research team, put it this way: “It’s an interesting machine-learning human-factors problem. Using this algorithm, we can significantly improve the robot’s understanding of what the person’s next likely actions are.”
Wonder if they’ll be able to smell fear?
Mo’ better machines
Here are more reports on robots rising:
- Snakes on a vein: Scientists at Carnegie Mellon University in Pittsburgh have created tiny, snake-like robots that, armed with cameras, scissors, forceps and sensors, are helping doctors perform surgery on hearts and cancer tumors.
- The barista will not respond to lame attempts at flirtation: A start-up at the University of Texas has installed a coffee kiosk run by a robot barista in the campus’ academic center. Students can order their drink online or on their phone and receive a text when it’s good to go.
- So much for the career in sushi: Yes, the Japanese have been on to the robots-in-restaurants thing for a while. But now food machine manufacturer Suzumo has developed a sushi-making robot that can crank out 2,500 pieces in an hour.
- Don’t even think about pulling my finger: Researchers at the University of Southern California have given robots a sense of touch–one, in fact, that’s actually more sensitive than a human’s finger. Sensors can even tell where and in which direction forces are applied to the robot’s fingertip.
- That’s nice, but it still doesn’t do windows: Roomba, the king of household robots, is going wireless. iRobot announced earlier this week that its new Roomba 790, which retails for a mere $699, will come with a “wireless command center” that, among other things, will allow you to schedule it to clean your house while you’re not home.
Video bonus: From the land that nailed Robot Cute long ago comes i-SODOG, a robot pup that shakes, dances, responds to voice commands and can be trained through your smartphone. Ignore the background din of little awestruck Japanese kids. This is a toy you’ll want to take to work, to dates, to job interviews. How could they not hire you?
March 23, 2012
It’s been a humbling year since the total beatdown of two former Jeopardy champions on national TV by a supercomputer named Watson. Sure, the machine gave an occasional lame answer, but in the land of game shows, we were a conquered species.
Last weekend we had our revenge.
At the American Crossword Puzzle Tournament in Brooklyn, a computer program named Dr. Fill went up against a roomful of puzzle masters and this time the machine proved human. It finished 141st among 600 contestants, disappointing its inventor, Matthew Ginsberg, who thought it would end up in the top 50.
Our glory, however, will likely be fleeting. Ginsberg, an expert in both artificial intelligence and creating crosswords, said Dr. Fill had simply had a bad day–largely because it wasn’t prepared to deal with one puzzle where some words needed to be spelled backwards and another where a few had to be arranged diagonally. It still thinks too logically. But Ginsberg promises to be back and the next Dr. Fill will be wired wiser.
There’s little question, in fact, that the pace of complex and nuanced thinking by machines will only accelerate in the coming decade. Listen to Judea Pearl, one of the pioneers of artificial intelligence, who was interviewed last week after winning the A.M. Turing Award, considered the Nobel Prize of computing.
“I think there will be computers that acquire free will, that can understand and create jokes… There will be computers that can send jokes to the New York Times that will be publishable.”
Pearl, now 75, is still at it. He’s working on what he calls “the calculus of counterfactuals”–sentences based on something that didn’t happen. The goal is to provide machines with the knowledge to think through hypothetical situations, such as “What would have happened if John McCain had been elected president?” And that, he contends, is a big step toward computers gaining autonomy and, one day, developing a kind of morality.
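Pearl’s counterfactual recipe has three steps: abduction (infer the hidden circumstances from what actually happened), action (change the thing you’re asking “what if” about), and prediction (re-run the model). A toy structural model shows the mechanics; the equation and numbers here are invented for illustration, not drawn from Pearl’s work:

```python
# Toy structural causal model: the outcome Y is determined by a choice X
# plus unobserved background circumstances U.
def outcome(x, u):
    return 2 * x + u  # structural equation: Y = 2X + U (made up for the example)

# Observed world: X was 1 and Y turned out to be 5.
x_obs, y_obs = 1, 5

# 1. Abduction: infer the background circumstances consistent with what we saw.
u = y_obs - 2 * x_obs  # U = 3

# 2. Action: intervene and set X to its counterfactual value.
x_cf = 0

# 3. Prediction: re-run the model under the same circumstances.
y_cf = outcome(x_cf, u)
print(y_cf)  # → 3: what Y would have been had X been 0
```

The key move is step 1: the machine carries the inferred circumstances of the actual world into the hypothetical one, which is what lets it reason about something that didn’t happen rather than just averaging over all possibilities.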
“This allows them to communicate to themselves, to take responsibility for one’s actions, a kind of moral sense of behavior,” Pearl said. “These are issues that are interesting–we could build a society of robots that are able to communicate with the notion of morals.”
From the brains of babes
Sounds like a brainy new world, but the key is to teach robots to think in more sophisticated ways–and that doesn’t mean like adult humans. Computers do the task-focused, goal-oriented thing pretty well already. What they need to think like are babies.
More and more AI researchers believe that. As Alison Gopnik, a scientist at the University of California, Berkeley, put it, “Young children are the greatest learning machines in the world.” Not only do they learn a language, but they figure out causal relationships, notice patterns and adapt to a world in which, at first, nothing makes sense.
The big challenge, obviously, is to figure out how babies do those things, break the process down into motivations and reactions and then program them. Only then will machines be able to make connections without being told.
But that may be the toughest puzzle of all to solve. And, sadly, even all those smart babies can’t explain it.
Here’s the latest on what’s happening with artificial intelligence:
- Brad must be so jealous: It needed help with the graphics and sound, but an artificial intelligence program named Angelina has created its own video game from scratch. Says Michael Cook, the London computer scientist who created Angelina: “In theory, there’s nothing to stop an artist from sitting down with Angelina, creating a game every 12 hours, and feeding it into the Apple apps store.”
- Motion slickness: A team of researchers at MIT is developing a system through which drones would use 3D vision to read human body signals so the robot planes can land on aircraft carriers.
- This is a movie waiting to happen: If all this talk about smart robots is making you nervous, University of Louisville computer scientist Roman Yampolskiy is already way ahead of you. He’s advocating the creation of “virtual prisons” to contain AI if it gets too smart. And even with that, he worries that particularly clever artificial intelligence programs will be able to “attack human psyches, bribe, blackmail and brainwash those who come in contact with it.”
- Buried past: A Harvard-MIT team has combined artificial intelligence and satellite photos to identify thousands of places where ancient humans may have lived in settlements.
- Watson makes nice: We’ve come full circle. IBM and the Memorial Sloan-Kettering Cancer Center in New York announced that they will use Watson the supercomputer’s ability to mine massive amounts of data and research to help doctors with cancer diagnosis and treatment options.
Video bonus: Okay, sometimes AI can feel a little creepy. Here’s a clip on Bina 48, the talking head that’s the face of LifeNaut, a project where people have started uploading digital files about themselves (videos, pictures, audio recordings), with the goal of creating a digital clone that can live forever.