A New Interface Lets Monkeys Control Two Virtual Arms With Their Brains Alone

The interface is the first that can control multiple limbs—a technology that marks another step toward full mobility for paralyzed people

A representation of a virtual monkey, whose arms can be manipulated by a real monkey in a new brain-machine interface—the first interface that allows for the control of multiple limbs. Image via Duke Center for Neuroengineering

Brain-machine interfaces were once the stuff of science fiction. But the technology—which enables direct communication between a person or animal’s brain and an external device or another brain—has come a long way in the past decade.

Scientists have developed interfaces that allow paralyzed people to type letters on a screen, let one person move another’s hand with his or her thoughts and even make it possible for two rats to trade thoughts—in this case, the knowledge of how to solve a particular task—when they’re located in labs thousands of miles apart.

Now, a team led by Miguel Nicolelis of Duke University (the scientist behind the rat thought-trading scheme, among other brain-machine interfaces) has created a new setup that allows monkeys to control two virtual arms simply by thinking about moving their real arms. They hope that the technology, revealed in a paper published today in Science Translational Medicine, could someday lead to similar interfaces that allow paralyzed humans to move robotic arms and legs.

Previously, Nicolelis’ team and others had created interfaces that allowed monkeys and humans to move a single arm in a similar fashion, but this is the first technology that lets an animal move multiple limbs simultaneously. “Bimanual movements in our daily activities—from typing on a keyboard to opening a can—are critically important,” Nicolelis said in a press statement. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”

Like the group’s previous interfaces, the new technology relies upon ultra-thin electrodes surgically implanted in the monkeys’ cerebral cortex, the brain region that controls voluntary movement, among other functions. But unlike many other brain-machine interfaces, which use electrodes that monitor activity in just a handful of neurons, Nicolelis’ team recorded activity in nearly 500 brain cells distributed across a range of cortical areas in the two rhesus monkeys that served as test subjects for this study.

Then, over the course of a few weeks, the researchers repeatedly sat the monkeys in front of a monitor that displayed a pair of virtual arms from a first-person perspective. At first, the monkeys controlled the arms with a pair of joysticks, completing a task in which they had to move the arms to cover up moving shapes in order to receive a reward (a taste of juice).
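To make the bimanual requirement concrete, here is a minimal sketch of the kind of reward check such a task implies; the positions, the radius and the function name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def trial_succeeded(hand_positions, target_positions, radius=0.1):
    """Check whether BOTH virtual hands are covering their moving targets.

    hand_positions, target_positions: arrays of shape (2, 2), one 2-D
    position per arm. The bimanual condition means each hand must be
    within `radius` of its own target for the juice reward to be given.
    """
    distances = np.linalg.norm(hand_positions - target_positions, axis=1)
    return bool(np.all(distances < radius))
```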

As this happened, the electrodes recorded the brain activity that accompanied the various arm movements, and algorithms analyzed it to determine which particular patterns of neural activation were linked with which sorts of arm movements: left or right, and forward or back.
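The article doesn’t specify the decoding algorithm, so here is a minimal sketch assuming a simple linear (ridge-regression) decoder trained on toy data; the bin size, neuron count and variable names are all illustrative, and the study’s actual decoder would have been more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for data recorded during joystick control:
# spike_counts: activity of ~500 neurons in short time bins (T bins x N neurons)
# arm_velocity: simultaneous 2-D velocity of each arm
#               (T bins x 4: [left_vx, left_vy, right_vx, right_vy])
T, N = 2000, 500
spike_counts = rng.poisson(lam=3.0, size=(T, N)).astype(float)
arm_velocity = rng.normal(size=(T, 4))

# Fit weights W mapping neural activity to movement by ridge regression:
# W = (X^T X + ridge * I)^-1 X^T Y
ridge = 1.0
X, Y = spike_counts, arm_velocity
W = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)

# At run time, each new bin of spike counts yields predicted velocities
# for both virtual arms at once.
new_bin = rng.poisson(lam=3.0, size=(1, N)).astype(float)
predicted_velocity = new_bin @ W  # shape (1, 4): both arms decoded together
```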

Eventually, once the algorithm could accurately predict the monkeys’ intended arm movements from their brain patterns, the setup was altered so that the joysticks no longer controlled the virtual arms; the monkeys’ thoughts, as recorded by the electrodes, took over instead. From the monkeys’ perspective, nothing had changed: the joysticks were still placed in front of them, and control was based on brain patterns (specifically, imagining their own arms moving) that they were producing anyway.
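In code, that switch amounts to nothing more than changing which signal drives the display, as in this sketch continuing the hypothetical decoder above (the function and flag names are assumptions):

```python
def control_step(joystick_velocity, spike_bin, W, brain_control):
    """Return the velocity command sent to the virtual arms for one time bin.

    Under joystick control the input passes straight through; under brain
    control the same kind of command is decoded from neural activity, so
    from the monkey's point of view nothing about the task changes.
    """
    if brain_control:
        return spike_bin @ W      # decoded velocities for both arms
    return joystick_velocity      # joystick input passed through unchanged
```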

Within two weeks, though, both monkeys realized they didn’t need to actually move their hands and manipulate the joysticks to move the virtual arms; they only had to think about doing so. Over time, they got better and better at controlling the virtual arms through the brain-machine interface, eventually doing it just as effectively as they had with the joysticks.

Future advances in this sort of interface could be enormously valuable for people who’ve lost control of their own limbs to paralysis or other causes. As high-tech bionic limbs continue to develop, interfaces like this one could eventually be how their users operate them on a daily basis. A person with a spinal cord injury, for example, could learn to imagine moving two arms so effectively that an algorithm could interpret his or her brain patterns and move two robotic arms in the desired way.

Brain-machine interfaces could also someday serve a much broader population: users of smartphones, computers and other consumer technology. Companies have already developed headsets that monitor your brainwaves so that you can move a character around in a video game merely by thinking about it, essentially using your brain as a joystick. Eventually, some engineers envision, brain-machine interfaces could enable us to manipulate tablets and control wearable technology such as Google Glass without saying a word or touching a screen.
