December 3, 2013
Scientific equipment that’s left unattended in the field can provide all sorts of interesting information. It can, for instance, snap photographs of exotic and shy wild animals, or analyze the noises coming from an ecosystem to identify the species living there.
But often, leaving valuable instruments scattered outside risks interference or destruction from an unavoidable force of nature: humans. Data isn’t regularly collected or published on the topic, but it’s well known among field scientists that leaving equipment in highly-trafficked areas leaves it vulnerable to petty vandalism and theft.
This unfortunate trend was recently on the mind of Holger Goerlitz, a scientist at Germany’s Max Planck Institute for Ornithology who studies animal behavior (specifically, how bats use echolocation within their environment) and leaves his field equipment unattended to do so. “As part of a project, I wanted to deploy automated equipment that would sit in the field for several nights,” he says. “So my colleagues and I were concerned about how to protect this equipment. We considered using chains and video cameras. And then suddenly we had this random idea: Let’s see what would be a good label to attach to the equipment to reduce vandalism.”
His team’s results, based on 60 pieces of fake scientific equipment scattered in four different Munich parks and published last week in Methods in Ecology and Evolution, may surprise you. They found that friendly labels that gave information about the project and the researcher led to about 40 percent fewer instances of equipment interference—either theft, vandalism or park visitors simply moving or trying to open up the equipment—as compared to neutral labels or ones that threatened surveillance and punishment.
They found this out by distributing boxes that looked vaguely like scientific instruments (above)—basically, plastic boxes with car antennas and fake cameras attached—for one week in each park, right next to a footpath, and checking for interference with the objects daily. All of the boxes had a similar warning label identifying them as property of the Max Planck Institute and giving Goerlitz's contact information, but the tone of the messages differed slightly.
A third of the boxes had neutral messages (translated to English, they read “Part of an experiment—Please do not touch”), while another third had threatening warnings (“Part of an experiment—Every theft will be reported to the police! GPS monitored!”) and the remaining third had a friendly, personalized note (“Part of my thesis—Please do not touch—Please call me if you have any questions and would like to know more”). The friendly label also featured a picture of a juvenile squirrel, to give passersby an idea of the researcher’s work.
The researchers checked the boxes daily, and had even placed pebbles inside so they could determine if the boxes had been picked up and moved. They also counted how many were pried open, vandalized or stolen.
Apparently, at least in these German city parks, the impulse to steal or vandalize is one that can be dislodged by just a bit of information about the potential victim, presented in a friendly way. Simply knowing that the device was important to one particular researcher—likely a student, given the word “thesis”—led visitors to interfere with the equipment roughly 40 percent less often. Over the course of the month-long experiment, 39 instances of interference occurred with the friendly-labeled boxes, compared to 59 with neutral labels and 67 with threatening labels.
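A quick back-of-the-envelope check of those reported counts shows where the "roughly 40 percent fewer" figure comes from (the counts are from the article; the script is just a sketch of the arithmetic):

```python
# Interference counts over the month-long experiment, as reported above.
counts = {"friendly": 39, "neutral": 59, "threatening": 67}

# Relative reduction of the friendly label versus each of the other two.
for label in ("neutral", "threatening"):
    reduction = (counts[label] - counts["friendly"]) / counts[label]
    print(f"friendly vs. {label}: {reduction:.0%} fewer incidents")
```

The friendly label cut interference by about a third relative to the neutral label and by about 42 percent relative to the threatening one, which is where the roughly-40-percent figure lands.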
Although the researchers were heartened by the finding, they were surprised, particularly that the threatening labels were the least successful. They imagine that the friendly label worked primarily by establishing a personal connection, perhaps aided by the squirrel photo, but the threatening label’s ineffectiveness is a mystery. “We don’t know why this was the case,” Goerlitz says. “It could be that people didn’t believe the threatening label, or that they thought, ‘oh, there’s a GPS device inside, this could be valuable.’” Notably, it was the only label that included the word “theft”; simply planting the idea in people’s minds may have nudged some of them toward acting on it.
But although there’s obviously a lot more work to be done—this was conducted with a small sample of people over a short time period in one particular German city—this finding about friendly labels is a positive and potentially helpful one. Given the chance, people can be persuaded to help science succeed, so long as they’re treated respectfully and informed about what’s going on. Despite the enormous amount of money spent annually on scientific equipment, very little research has been done in this area to date; further work testing whether the result holds across different cultures, for instance, could help scientists around the world protect their surveying instruments.
Goerlitz, for one, is ready to start using this finding to better safeguard his own equipment that monitors bat echolocation. “In my labels, I’ll try to be informative and friendly to people,” he says. “I think if you expose people to what you’re doing, they’ll be much more supportive of it.”
November 25, 2013
Once upon a time, scientists thought that the human brain was a rigid, predictable organ, not tremendously different from the lungs or liver. Based on a person’s genetics, it developed in a predetermined way, endowing an individual with a particular level of learning capabilities, problem-solving abilities and baseline intelligence.
Now, though, as part of emerging research into brain plasticity, neuroscientists are recognizing that the brain is a responsive, constantly evolving organ that can change at both the cellular and large-scale levels due to environmental influences and experiences. Much of this research is hopeful: It’s shown how in people with impaired vision, for instance, areas of the brain normally devoted to processing sights can be repurposed to analyze sound.
Over the past few months, though, a series of studies have emphasized that the brain can change for the worse as well as for the better. A child’s brain, not surprisingly, is especially vulnerable to such effects—and this research has shown that growing up in difficult circumstances dictated by poverty can do damage to a child’s cognitive skills that lasts a lifetime.
An October study by researchers from the University of Michigan, for instance, used fMRI (functional magnetic resonance imaging)—which detects blood flow in various areas of the brain as a reflection of brain activity—to study the regulation of emotions in young adults who were part of a long-term study on poverty. They compared a participant’s family income at age 9 (based on survey data collected at the time) with his or her current neural activity in different brain regions, and found that those who grew up in poverty showed increased activity in the amygdala (believed to be involved in anxiety, fear and emotional disorders) and decreased activity in the prefrontal cortex (which limits the influence of the amygdala, putting long-term decision making over impulse) when the participants were shown emotionally-upsetting images.
It’s impossible to know for sure, but the researchers suspect that a range of chronic stresses that can accompany growing up in poverty—things like crowding, noise, violence, family turmoil or separation—impact the development of the brain in childhood and adolescence, potentially explaining this correlation.
Another October study, meanwhile, took a more basic approach, examining the relationship between nurturing during childhood and the growth of brain tissue in children between the ages of six and 12. In it, Washington University in St. Louis researchers found that among the 145 children studied, those whose parents had poor nurturing skills showed slowed growth in white matter, grey matter and several different areas of the brain involved with learning skills and coping with stress. Based on the differing growth rates between children who resembled each other in terms of other key factors, it seemed as though the experience of growing up with less-nurturing adults effectively set back their mental development a year or two. And impoverished parents, they found, were more likely to have poor nurturing skills.
Sure, attempting to objectively evaluate the parenting styles of the adults in this study might be a bit heavy-handed, but the study identified chronic stresses experienced by the children as a key element as well: Children who grew up in poverty but had fewer stressful life events (as part of a larger program, they’d gone through annual assessments from the age of three onward) demonstrated smaller reductions in neural development.
Others have even looked into very specific behavioral effects of poverty. A recent Northwestern University study found that children with lower socioeconomic status tended to have less efficient auditory processing abilities—that is, the area of their brains responsible for processing sound showed more response to distracting noise and less activity in response to a speaker’s voice than in control participants. This might be an effect, the researchers say, of the known correlation between low income and the amount of noise exposure in urban populations.
Of course, most of these studies are limited by their observational nature: they establish correlation, not causation. Ethics aside, it’s impossible to actively alter a person’s childhood circumstances in a controlled manner and then check the results, so researchers are forced to observe what happens in the real world and draw conclusions. Additionally, in most of these cases, it’s unknown whether the effects are temporary or permanent—whether children exposed to poverty are permanently left behind their peers, or whether they’re able to catch up if given the chance.
But the fact that correlations between poverty and altered mental function under stress have been repeatedly observed across a range of study designs, circumstances and research groups makes it likely that these effects aren’t aberrations. Additionally, even if they are temporary effects that can be resolved by changing a child’s environment, there’s other recent research that dishearteningly reveals a neurological mechanism that helps to perpetuate poverty, by making it difficult for parents to make choices that change these circumstances.
An August study in Science found that being preoccupied with the all-consuming concerns of poverty—struggling to pay medical bills, for instance—taxes the brain, leaving less extra bandwidth to solve complex cognitive problems and harming long-term decision making ability. In a pair of study groups (shoppers in a New Jersey mall and sugar cane farmers in rural India), simply getting the participants thinking about economic problems (asking them what they’d do if they had to pay $1500 to repair their car, for instance) caused them to perform more poorly on tests that measure IQ and impulse control than otherwise.
The bandwidth problem they identified is temporary, not permanent, but it does explain why the difficult decisions that might allow someone to get ahead are harder to make for a person immersed in poverty. It also highlights yet another stressor for parents seeking to ensure that their children escape poverty—they might be inadvertently contributing to an environment that keeps their children from rising above their circumstances.
November 22, 2013
One afternoon in October 2005, neuroscientist James Fallon was looking at brain scans of serial killers. As part of a research project at UC Irvine, he was sifting through thousands of PET scans to find anatomical patterns in the brain that correlated with psychopathic tendencies in the real world.
“I was looking at many scans, scans of murderers mixed in with schizophrenics, depressives and other, normal brains,” he says. “Out of serendipity, I was also doing a study on Alzheimer’s and as part of that, had brain scans from me and everyone in my family right on my desk.”
“I got to the bottom of the stack, and saw this scan that was obviously pathological,” he says, noting that it showed low activity in certain areas of the frontal and temporal lobes linked to empathy, morality and self-control. Knowing that it belonged to a member of his family, Fallon checked his lab’s PET machine for an error (it was working perfectly fine) and then decided he simply had to break the blinding that prevented him from knowing whose brain was pictured. When he looked up the code, he was greeted by an unsettling revelation: the psychopathic brain pictured in the scan was his own.
Many of us would hide this discovery and never tell a soul, out of fear or embarrassment of being labeled a psychopath. Perhaps because boldness and disinhibition are noted psychopathic tendencies, Fallon has gone all in, in the opposite direction, telling the world about his finding in a TED Talk, an NPR interview and now a new book published last month, The Psychopath Inside. In it, Fallon seeks to reconcile how he—a happily married family man—could demonstrate the same anatomical patterns that marked the minds of serial killers.
“I’ve never killed anybody, or raped anyone,” he says. “So the first thing I thought was that maybe my hypothesis was wrong, and that these brain areas are not reflective of psychopathy or murderous behavior.”
But when he underwent a series of genetic tests, he got more bad news. “I had all these high-risk alleles for aggression, violence and low empathy,” he says, such as a variant of the MAO-A gene that has been linked with aggressive behavior. Eventually, based on further neurological and behavioral research into psychopathy, he decided he was indeed a psychopath—just a relatively good kind, what he and others call a “pro-social psychopath,” someone who has difficulty feeling true empathy for others but still keeps his behavior roughly within socially-acceptable bounds.
It wasn’t entirely a shock to Fallon, as he’d always been aware that he was someone especially motivated by power and manipulating others, he says. Additionally, his family line included seven alleged murderers, including Lizzie Borden, infamously accused of killing her father and stepmother in 1892.
But the fact that a person with the genes and brain of a psychopath could end up a non-violent, stable and successful scientist made Fallon reconsider the ambiguity of the term. Psychopathy, after all, doesn’t appear as a formal diagnosis in the Diagnostic and Statistical Manual of Mental Disorders in part because it encompasses such a wide range of symptoms. Not all psychopaths kill; some, like Fallon, exhibit other sorts of psychopathic behavior.
“I’m obnoxiously competitive. I won’t let my grandchildren win games. I’m kind of an asshole, and I do jerky things that piss people off,” he says. “But while I’m aggressive, my aggression is sublimated. I’d rather beat someone in an argument than beat them up.”
Why has Fallon been able to temper his behavior, while other people with similar genetics and brains turn violent and end up in prison? Fallon was once a self-proclaimed genetic determinist, but his views on the influence of genes on behavior have evolved. He now believes that his childhood helped prevent him from heading down a scarier path.
“I was loved, and that protected me,” he says. Partly as a result of a series of miscarriages that preceded his birth, he was given an especially heavy amount of attention from his parents, and he thinks that played a key role.
This corresponds to recent research: His particular allele for a serotonin transporter protein present in the brain, for example, is believed to put him at higher risk for psychopathic tendencies. But further analysis has shown that it can affect the development of the ventromedial prefrontal cortex (the area with characteristically low activity in psychopaths) in complex ways: It can open up the region to be more significantly affected by environmental influences, and so a positive (or negative) childhood is especially pivotal in determining behavioral outcomes.
Of course, there’s also a third ingredient, in addition to genetics and environment: free will. “Since finding all this out and looking into it, I’ve made an effort to try to change my behavior,” Fallon says. “I’ve more consciously been doing things that are considered ‘the right thing to do,’ and thinking more about other people’s feelings.”
But he added, “At the same time, I’m not doing this because I’m suddenly nice, I’m doing it because of pride—because I want to show to everyone and myself that I can pull it off.”
November 18, 2013
Among us mortals who forget where we just left our keys and what we ate for lunch yesterday, there are a handful of remarkable people with hyperthymesia—also known as highly superior autobiographical memory (HSAM)—who can remember astounding details about every day of their life going back to childhood.
Given any particular date in the last few decades, Brad Williams can remember both what he did that day and what significant world events occurred. Bob Petrella can recall the date he met every one of his friends and acquaintances. Jill Price can remember the exact day that any episode of any TV show she’s ever seen first aired—including, in some cases, that day’s weather.
But some of the same scientists who first documented this condition in 2006 have some news that might bring the hyperthymestic back to Earth: In recent experiments, they showed that even these people with extraordinary memory can be tricked into recalling events that didn’t happen.
“Researchers have previously found that memories can be distorted in every group they’ve looked at: in the young, the old, those with high intelligence and those with low intelligence,” says Lawrence Patihis, a psychologist at UC Irvine and the lead author of the new study, published today in Proceedings of the National Academy of Sciences. “And when we looked at people with HSAM, in all measures, we found memory distortions too.”
To test whether these people could be fooled into recalling false or distorted memories, the researchers recruited 20 people with hyperthymesia and compared their performance in a series of standard memory implantation tests—which attempt to trick the participant into recalling false details or events that didn’t happen—to that of 38 people with normal memory. These sorts of tests have frequently been used to argue for reducing the reliance on witness testimony in the criminal justice system, as they show that people frequently misremember events and details because of leading questions.
In the first test, the participants were shown a series of 15 related words one by one on a screen (light, shade, bulb, table and others, for instance). Then, afterward, they were quizzed on whether they’d seen specific words, including a key “lure” word that was related to the rest but not actually shown (in this example, “lamp”). Most non-hyperthymestic people taking the test would report seeing the lure word even though it never appeared, simply because they’d associate it with the words actually shown.
But what about hyperthymestic people? In the experiment, both the control group and the participants with hyperthymesia said they saw the lure 70 percent of the time.
The second test involved a slideshow of photographs that depicted a crime. For example, the pictures showed a man bumping into a woman, knocking everything out of her bag, and pretending to help her clean her things up while putting her wallet in his jacket pocket. Forty minutes later, the participants read a text narrative that described the same event, but introduced six specific errors—such as the man putting the wallet in his pants pocket instead of his jacket.
Afterward, the participants were given multiple choice questions on what they saw in the original photos. Both groups made errors, swayed by the text they read after seeing the photos, but the people with hyperthymesia actually made about 73 percent more errors than the control group, perhaps indicating a heavier reliance on textual cues.
The results of the third test were perhaps the most jarring. Scattered among dozens of irrelevant facts about recent history that were merely intended to distract, the participants were specifically told that, on 9/11, someone had captured footage of United Flight 93 crashing in Pennsylvania. Then, on a written questionnaire, they were asked if they’d seen that “well-publicized footage.” Twenty-nine percent of the control group and 20 percent of the hyperthymestic participants reported that they’d seen the footage—an impossibility, because no such video exists.
For Patihis, the results of the experiments are unsurprising, and serve as a reminder that it’s foolish to rely on witness testimony to put someone behind bars, even if they claim to have remarkable memory. “This study is a rather simple way of showing—to lawyers, to the police, to juries—that all people are likely susceptible to misinformation,” he says.
But the study also serves as a broader window into the memory processes of people with hyperthymesia and how they might—or might not—differ from the rest of us. “It’s been a puzzle as to how anyone can remember so much information, so the advantage of these memory distortion techniques is that you can get a glimpse into how memory works,” Patihis says.
One of this study’s authors, Aurora LePort, has previously conducted neurological research into the same people with HSAM tested in this study, and found structural differences in their brains, including more robust white matter in areas specifically linked to autobiographical memory—that is, recall of events that happened to oneself—rather than areas tied to broader applications of memory. The fact that these people can’t perform any better on memory distortion tests similarly suggests a core difference between autobiographical memory (at which they obviously excel) and memory as a whole.
November 1, 2013
For the first few decades of his career, Emory neuroscientist Gregory Berns studied the human mind. Using fMRI technology, which tracks the flow of blood to different areas of the brain, he sought to find correlations between people’s internal mental patterns and their real-world behaviors, decisions and preferences.
Then, in 2011, he took on a new object of neuroscientific study: Canis lupus familiaris, otherwise known as the domesticated dog. Instead of merely studying canine behavior, as has been done for years, he and his colleagues began scrutinizing the internal architecture and patterns of dogs’ brains, using the same tools they rely on to better understand the brains of humans.
“I’ve always been a dog person, and when my dog died, a pug named Newton, it planted a seed in my mind,” says Berns, who published a new book on his recent work, How Dogs Love Us, last week. “It got me wondering about how dogs view their relationship with us—if he had loved me the same way I had loved him.”
Just looking inside the canine brain, however, posed a formidable challenge: Getting an accurate fMRI reading means that the subject has to stay almost perfectly still, moving less than a millimeter from one moment to the next. Using anesthesia or restraining the dogs would ruin the experiments, producing an image of an unconscious or anxious dog instead of a comfortable, alert one.
To solve the problem, Berns recruited dogs from the local community—starting with a dog he adopted after Newton died—and gradually trained them to climb up a series of steps onto a table, rest their head on a pad inside the fMRI’s inner tunnel and sit still for 30 seconds at a time as the machine does its work. To deal with the device’s noise (which can surpass 95 decibels, equivalent to the sound of a jackhammer 50 feet away), they taped earmuffs to the dogs’ heads and piped in ambient noise over loudspeakers, so instead of the machine’s sound beginning abruptly, it gradually arrived over background noises.
In total, they’ve successfully trained about a dozen dogs to voluntarily participate in their studies. The research is still in its preliminary stages, but as Berns’ team begins to scratch the surface of the canine brain, they’re finding something surprising—in several ways, its activity mirrors that of the human brain to a much greater extent than expected.
As part of their first paper published on the work in 2012, they trained dogs to recognize two different hand signals: one that meant the animal would be given a piece of hot dog imminently, and one that meant no hot dog. As they hypothesized, the first signal triggered elevated activity in an area called the caudate nucleus, which is rich in receptors for dopamine (a neurotransmitter involved in the sensation of pleasure). In humans—and in dogs, the research indicated—caudate activity is related to the desire to have something that causes pleasure, and the satisfaction involved in obtaining it.
Subsequent work revealed more unexpected findings. As part of a second experiment, they had dogs sit in the scanner and exposed them to smells of humans (from either their owners or strangers) and other dogs (from either dogs they lived with or unfamiliar dogs). “We wanted to understand how dogs recognize other people and dogs in their households,” Berns says. Again, they saw increased activity in the caudate, but only as a result of one of the scents. “In this case, the reward system only seems to activate in response to the smell of a familiar human, which is pretty amazing,” he says.
To further probe how the dogs’ brain activity correlates with the actions of humans they know well, they put the dogs in the fMRI and had their owners leave the room, then walk back in. This, too, triggered activation in the caudate.
Berns interprets these results as indications that, in some ways, the mental processes of dogs may not be so different from those of humans. They’re close enough, he suggests, that we can safely describe them with words we don’t often apply to animals: the mental activity represents emotions, and perhaps even constitutes love. “At some fundamental level, we believe the dogs are experiencing emotions something like we do,” Berns says.
He admits that the idea is controversial. But, he points out, the research suggests that the human brain and canine brain aren’t as radically different as we might have imagined.
“Obviously, dog brains are much smaller, and they don’t have as much cortex as we do, but some of the core areas around the brainstem—the basal ganglia, which the caudate nucleus is part of—look very much like those in humans,” he says. Dogs might not have the hardware necessary for complex thoughts and higher-level reasoning, the thinking goes, but they do have the relevant structures for basic emotions.
This also makes sense from an evolutionary perspective: We evolved the heavily folded cortex necessary for high-level thinking after we diverged from all other animal species, but areas like the basal ganglia developed beforehand, so it follows that our ability to feel emotions produced by those areas existed way back in our evolutionary history, in ancestors that we share with many other mammals, including dogs.
Dog lovers might find these ideas obvious, but Berns’ work has attracted a fair amount of criticism. One of the biggest complaints is against his use of words like emotion and love for dogs—their attachment to us is simply a result of conditioning, some say, entirely based on the desire for food, rather than the deeper emotional connections we feel for other humans.
But Berns hopes to respond with future fMRI work, which will compare brain activity in dogs being fed by automated mechanisms with that of dogs being fed by humans. He hopes to show that dogs do develop qualitatively different relationships with humans, underscoring the strength of those attachments.
He took his ideas to what some might call a rather extreme conclusion earlier this month in the New York Times, in an op-ed he penned with a provocative headline: Dogs Are People, Too. If animals truly are capable of emotions we normally consider characteristically human, he argued, they should no longer be treated as mere objects, or property, but instead be given some of the rights we associate with personhood—namely, a respect for their preferences and well-being that would lead to the abolition of things like puppy mills and dog racing.
There’s obviously a long way to go—both in terms of scientific evidence and policy changes—before dogs are treated anything like people. But Berns cites a recent Supreme Court decision that invoked neuroscientific evidence (specifically, the finding that the juvenile brain is less developed than a mature adult’s, and thus should not be subject to the same punishments) as an indication that our laws will inevitably follow the science. The next step, then, is for him and his colleagues to keep peering into the minds of dogs, finding out how deeply the mental similarities truly go.