November 25, 2013
Once upon a time, scientists thought that the human brain was a rigid, predictable organ, not tremendously different from the lungs or liver. Based on a person’s genetics, it developed in a predetermined way, endowing an individual with a particular level of learning capabilities, problem-solving abilities and baseline intelligence.
Now, though, as part of emerging research into brain plasticity, neuroscientists are recognizing that the brain is a responsive, constantly evolving organ that can change at both the cellular and large-scale levels due to environmental influences and experiences. Much of this research is hopeful: It’s shown how in people with impaired vision, for instance, areas of the brain normally devoted to processing sights can be repurposed to analyze sound.
Over the past few months, though, a series of studies has emphasized that the brain can change for the worse, as well as for the better. A child’s brain, not surprisingly, is especially vulnerable to such effects—and this research has shown that growing up in the difficult circumstances dictated by poverty can damage a child’s cognitive skills in ways that last a lifetime.
An October study by researchers from the University of Michigan, for instance, used fMRI (functional magnetic resonance imaging), which detects blood flow in various areas of the brain as a reflection of brain activity, to study the regulation of emotions in young adults who were part of a long-term study on poverty. The researchers compared each participant’s family income at age 9 (based on survey data collected at the time) with his or her current neural activity in different brain regions. When shown emotionally upsetting images, participants who grew up in poverty showed increased activity in the amygdala (believed to be involved in anxiety, fear and emotional disorders) and decreased activity in the prefrontal cortex (which limits the influence of the amygdala, putting long-term decision making over impulse).
It’s impossible to know for sure, but the researchers suspect that a range of chronic stresses that can accompany growing up in poverty—things like crowding, noise, violence, family turmoil or separation—impact the development of the brain in childhood and adolescence, potentially explaining this correlation.
Another October study, meanwhile, took a more basic approach, examining the relationship between nurturing during childhood and the growth of brain tissue in children between the ages of six and 12. In it, Washington University in St. Louis researchers found that among the 145 children studied, those whose parents had poor nurturing skills showed slowed growth in white matter, gray matter and the volumes of several brain areas involved in learning and coping with stress. Based on the differing growth rates between children who otherwise resembled each other on key factors, growing up with less nurturing adults appeared to set back a child’s mental development by a year or two. And impoverished parents, the researchers found, were more likely to have poor nurturing skills.
Sure, attempting to objectively evaluate the parenting styles of the adults in this study might be a bit heavy-handed, but the study also identified chronic stress experienced by the children as a key element: Children who grew up in poverty but had fewer stressful life events (tracked through annual assessments from the age of three onward, as part of a larger program) showed smaller reductions in neural development.
Others have looked into even more specific behavioral effects of poverty. A recent Northwestern University study found that children with lower socioeconomic status tended to have less efficient auditory processing—that is, the area of their brains responsible for processing sound showed more response to distracting noise, and less activity in response to a speaker’s voice, than that of control participants. This might be an effect, the researchers say, of the known correlation between low income and the amount of noise exposure in urban populations.
Of course, most of these studies are limited by their observational nature: They establish correlations, rather than causation—ethics aside, it’s impossible to actively alter a person’s childhood circumstances in a controlled manner and then check the results, so researchers are forced to observe what happens in the real world and draw conclusions. Additionally, in most of these cases, it’s unknown whether the effects are temporary or permanent—whether children exposed to poverty are permanently left behind their peers, or whether they’re able to catch up if given the chance.
But the fact that correlations between poverty and altered mental function under stress have been repeatedly observed across a range of study designs, circumstances and research groups makes it likely that these effects aren’t aberrations. Additionally, even if they are temporary effects that can be resolved by changing a child’s environment, other recent research dishearteningly reveals a neurological mechanism that helps perpetuate poverty, by making it difficult for parents to make choices that could change these circumstances.
An August study in Science found that being preoccupied with the all-consuming concerns of poverty—struggling to pay medical bills, for instance—taxes the brain, leaving less mental bandwidth for solving complex cognitive problems and harming long-term decision-making ability. In a pair of study groups (shoppers in a New Jersey mall and sugarcane farmers in rural India), simply getting the participants thinking about economic problems (asking what they’d do if they had to pay $1,500 to repair their car, for instance) caused them to perform more poorly on tests of IQ and impulse control than they otherwise would.
The bandwidth problem they identified is temporary, not permanent, but it helps explain why the difficult decisions that might allow someone to get ahead are harder to make for a person immersed in poverty. It also highlights yet another stressor for parents seeking to ensure that their children escape poverty—they might be inadvertently contributing to an environment that keeps their children from rising above their circumstances.
November 22, 2013
One afternoon in October 2005, neuroscientist James Fallon was looking at brain scans of serial killers. As part of a research project at UC Irvine, he was sifting through thousands of PET scans to find anatomical patterns in the brain that correlated with psychopathic tendencies in the real world.
“I was looking at many scans, scans of murderers mixed in with schizophrenics, depressives and other, normal brains,” he says. “Out of serendipity, I was also doing a study on Alzheimer’s and as part of that, had brain scans from me and everyone in my family right on my desk.”
“I got to the bottom of the stack, and saw this scan that was obviously pathological,” he says, noting that it showed low activity in certain areas of the frontal and temporal lobes linked to empathy, morality and self-control. Knowing that it belonged to a member of his family, Fallon checked his lab’s PET machine for an error (it was working perfectly fine) and then decided he simply had to break the blinding that prevented him from knowing whose brain was pictured. When he looked up the code, he was greeted by an unsettling revelation: the psychopathic brain pictured in the scan was his own.
Many of us would hide this discovery and never tell a soul, out of fear or embarrassment of being labeled a psychopath. Perhaps because boldness and disinhibition are noted psychopathic tendencies, Fallon has done exactly the opposite, telling the world about his finding in a TED Talk, an NPR interview and now a new book published last month, The Psychopath Inside. In it, Fallon seeks to reconcile how he—a happily married family man—could demonstrate the same anatomical patterns that marked the minds of serial killers.
“I’ve never killed anybody, or raped anyone,” he says. “So the first thing I thought was that maybe my hypothesis was wrong, and that these brain areas are not reflective of psychopathy or murderous behavior.”
But when he underwent a series of genetic tests, he got more bad news. “I had all these high-risk alleles for aggression, violence and low empathy,” he says, such as a variant of the MAO-A gene that has been linked with aggressive behavior. Eventually, based on further neurological and behavioral research into psychopathy, he decided he was indeed a psychopath—just a relatively good kind, what he and others call a “pro-social psychopath,” someone who has difficulty feeling true empathy for others but still keeps his behavior roughly within socially-acceptable bounds.
It wasn’t entirely a shock to Fallon, as he’d always been aware that he was someone especially motivated by power and manipulating others, he says. Additionally, his family line included seven alleged murderers, including Lizzie Borden, infamously accused of killing her father and stepmother in 1892.
But the fact that a person with the genes and brain of a psychopath could end up a non-violent, stable and successful scientist made Fallon reconsider the ambiguity of the term. Psychopathy, after all, doesn’t appear as a formal diagnosis in the Diagnostic and Statistical Manual of Mental Disorders in part because it encompasses such a wide range of symptoms. Not all psychopaths kill; some, like Fallon, exhibit other sorts of psychopathic behavior.
“I’m obnoxiously competitive. I won’t let my grandchildren win games. I’m kind of an asshole, and I do jerky things that piss people off,” he says. “But while I’m aggressive, my aggression is sublimated. I’d rather beat someone in an argument than beat them up.”
Why has Fallon been able to temper his behavior, while other people with similar genetics and brains turn violent and end up in prison? Fallon was once a self-proclaimed genetic determinist, but his views on the influence of genes on behavior have evolved. He now believes that his childhood helped prevent him from heading down a scarier path.
“I was loved, and that protected me,” he says. Partly as a result of a series of miscarriages that preceded his birth, he was given an especially heavy amount of attention from his parents, and he thinks that played a key role.
This squares with recent research: His particular allele for a serotonin transporter protein in the brain, for example, is believed to put him at higher risk for psychopathic tendencies. But further analysis has shown that the allele can affect the development of the ventromedial prefrontal cortex (the area with characteristically low activity in psychopaths) in complex ways: It can leave the region more susceptible to environmental influences, making a positive (or negative) childhood especially pivotal in determining behavioral outcomes.
Of course, there’s also a third ingredient, in addition to genetics and environment: free will. “Since finding all this out and looking into it, I’ve made an effort to try to change my behavior,” Fallon says. “I’ve more consciously been doing things that are considered ‘the right thing to do,’ and thinking more about other people’s feelings.”
But he added, “At the same time, I’m not doing this because I’m suddenly nice, I’m doing it because of pride—because I want to show to everyone and myself that I can pull it off.”
November 20, 2013
About a month ago, I suffered my first-ever concussion, when I was (accidentally) kicked in the head playing ultimate frisbee. Over the next few weeks, I dutifully followed medical instructions to avoid intense physical activity. For a little while, I noticed a bit of mental fogginess—I had trouble remembering words and staying focused—but eventually, these symptoms faded away, and I now feel essentially the same as before.
Except that, it turns out, if doctors were to look inside my head using a type of brain scanning technology called diffusion MRI, there’s a good chance they’d notice lingering abnormalities in the gray matter of my left prefrontal cortex. These abnormalities, in fact, could persist up to four months after the injury, even after my behavioral symptoms are long gone. This news, from a study published today in the journal Neurology, underlines just how much more prolonged and complex the healing process is, even after a mild concussion, than previously thought.
“These results suggest that there are potentially two different modes of recovery for concussion, with the memory, thinking and behavioral symptoms improving more quickly than the physiological injuries in the brain,” Andrew R. Mayer, a neuroscientist at the University of New Mexico and lead author of the study, explained in a press statement issued with the paper.
The abnormalities that Mayer’s team detected, they say, are so subtle that they don’t show up on standard MRI or CT scans. Instead, the researchers found them using diffusion MRI, which measures the movement of molecules (mostly water) through different areas of the brain, reflecting the tissue’s underlying architecture and structure.
Mayer and colleagues scanned 26 people who’d suffered mild concussions twice: 14 days after the injuries, and again four months later. They also gave the group behavioral and memory tests at both points, then compared all the results with those of 26 healthy participants.
In the initial round, the people with concussions performed slightly worse than the healthy participants on tests that measure memory and attention, consistent with prior findings on concussions. Using the diffusion MRI, the researchers also found structural changes in the prefrontal cortex of both hemispheres of the subjects with recent concussions.
Four months later, the behavioral tests showed that the gap between the two groups had significantly narrowed, and the concussion patients’ self-reported symptoms were less significant too. But interestingly, when they averaged the scans of all 26 people, the neurological changes were still detectable in the left hemisphere of their brains.
What were these abnormalities? Specifically, the patients’ gray matter—the squishy outer layer of brain tissue in the cortex—showed ten percent more fractional anisotropy (FA) than the controls’. This value is a measure of how likely water molecules in a given area are to travel in one direction, along the same axis, rather than scattering in all directions. It’s believed to reflect the density and thickness of neurons: the thicker and denser these brain cells are, the more likely water molecules are to flow in the direction of the cells’ fibers.
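For readers curious about the math: FA is a number between 0 and 1 computed from the three eigenvalues of each voxel's diffusion tensor. The short Python sketch below implements the standard formula; it's purely illustrative, not the study's actual analysis pipeline.

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the diffusion tensor's eigenvalues:
    0 means water diffuses equally in all directions (isotropic);
    values near 1 mean diffusion is confined to a single axis."""
    ev = np.array([l1, l2, l3], dtype=float)
    denom = np.sqrt((ev ** 2).sum())
    if denom == 0:
        return 0.0
    spread = np.sqrt(((ev - ev.mean()) ** 2).sum())
    return float(np.sqrt(1.5) * spread / denom)

print(fractional_anisotropy(1.0, 0.95, 0.9))  # ~0.05: nearly isotropic voxel
print(fractional_anisotropy(1.7, 0.2, 0.2))   # ~0.87: strongly directional fibers
```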
In other words, in this one particular area of the brain, people who’d suffered concussions four months earlier may have denser, thicker neurons than before. But it’s hard to say what these abnormalities reflect, or whether they’re even a bad thing. As I found during my semi-obsessive post-concussion research, there are bigger gaps in scientists’ understanding of the brain than of any other part of our bodies, and knowledge of the healing process after a concussion is no exception.
The scientists speculate that the increased FA could be a lingering effect of edema (the accumulation of fluid within the brain as a result of concussion) or gliosis (a change in the shape of the brain’s structural cells, rather than its neurons).
But it’s even possible that this increased FA could be a sign of healing. A 2012 study found that in people who’d suffered mild concussions, higher FA scores right after the injury were correlated with fewer post-concussive symptoms, such as memory loss, a year later. Similarly, a study published this past summer found a correlation between low FA scores and the incidence of severe symptoms right after a concussion. Interestingly, the researchers noted similar correlations in studies of Alzheimer’s—people with the disease tend to demonstrate lower FA scores in the same areas of the brain as those with the most severe concussions, underscoring the link to memory performance.
If that’s the case, then the thicker, denser neurons in the brain of people with concussions might be something like the tough scabs that form after your skin gets burned, scabs that linger long after the pain has dissipated. As Mayer points out, during the recovery process after a burn “reported symptoms like pain are greatly reduced before the body is finished healing, when the tissue scabs.” In a similar way, the symptoms of a concussion—memory loss and difficulty maintaining attention, for instance—may disappear after a few weeks, while the nerve tissue continues forming its own type of scab four months later.
It’s possible that this scab, though, might be vulnerable. Scientific research is increasingly revealing just how devastating the impact of repeated concussions—the type suffered by football players—can be in the long term. “These findings may have important implications about when it is truly safe to resume physical activities that could produce a second concussion, potentially further injuring an already vulnerable brain,” Mayer said. The fact that the brain’s healing process is more prolonged than previously assumed could help explain why returning to the field a few weeks after a concussion and experiencing another is so dangerous.
November 8, 2013
In 1854, in response to a devastating cholera epidemic that was sweeping through London, British doctor John Snow introduced an idea that would revolutionize the field of public health: the epidemiological map. By recording instances of cholera in different neighborhoods of the city and plotting them on a map based on patients’ residences, he discovered that a single contaminated water pump was responsible for a large share of the infections.
The map persuaded him—and, eventually, the public authorities—that the miasma theory of disease (which claimed that diseases spread via noxious gases) was false, and that the germ theory (which correctly claimed that microorganisms were to blame) was true. The authorities removed the handle of the pump responsible for the outbreak, marking a paradigm shift that permanently changed how we deal with infectious disease and sanitation.
The mapping technology is quite different, as is the disease, but there’s a certain similarity between Snow’s map and a new project conducted by a group of researchers led by Henry Kautz of the University of Rochester. By creating algorithms that can spot flu trends and make predictions based on keywords in publicly available geotagged tweets, they’re taking a new approach to studying the transmission of disease—one that could change the way we study and track the movement of diseases in society.
“We can think of people as sensors that are looking at the world around them and then reporting what they are seeing and experiencing on social media,” Kautz explains. “This allows us to do detailed measurements on a population scale, and doesn’t require active user participation.”
In other words, when we tweet that we’ve just been laid low by a painful cough and a fever, we’re unwittingly providing rich data for an enormous public health experiment, information that researchers can use to track the movement of diseases like flu in high resolution and real time.
Kautz’s project, called SocialHealth, has made use of tweets and other sorts of social media to track a range of public health issues. Recently, the team began using tweets to monitor instances of food poisoning at New York City restaurants: They logged everyone who had posted geotagged tweets from a restaurant, then followed those users’ tweets for the next 72 hours, checking for mentions of vomiting, diarrhea, abdominal pain, fever or chills. In doing so, they detected 480 likely instances of food poisoning.
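The logic the team describes can be sketched in a few lines of Python. Everything here (the data structures, the function name, the simple keyword check over a 72-hour window) is an illustrative reconstruction under stated assumptions, not SocialHealth's actual code.

```python
from datetime import timedelta

# Symptom keywords drawn from the article; the real system is richer.
SYMPTOM_KEYWORDS = {"vomiting", "diarrhea", "abdominal pain",
                    "fever", "chills"}
FOLLOW_WINDOW = timedelta(hours=72)

def likely_food_poisoning(restaurant_visits, tweets_by_user):
    """restaurant_visits: list of (user, visit_time) pairs, built from
    geotagged tweets posted at a restaurant (visit_time is a datetime).
    tweets_by_user: dict mapping user -> list of (time, text) tweets.
    Flags a visit when the user mentions a symptom within 72 hours."""
    cases = []
    for user, visit_time in restaurant_visits:
        for tweet_time, text in tweets_by_user.get(user, []):
            in_window = visit_time < tweet_time <= visit_time + FOLLOW_WINDOW
            if in_window and any(k in text.lower() for k in SYMPTOM_KEYWORDS):
                cases.append((user, visit_time))
                break  # count each visit at most once
    return cases
```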
But as the season changes, it’s their work tracking the influenza virus that’s most eye-opening. Google Flu Trends has similarly sought to use Google searches to track the movement of flu, but that model greatly overestimated last year’s outbreak, perhaps because media coverage of flu prompted people to start making flu-related queries. Twitter analysis represents a new dataset with a few qualities—a higher geographic resolution and the ability to capture the movement of a user over time—that could yield better predictions.
To start their flu-tracking project [PDF], the SocialHealth researchers looked specifically at New York, collecting around 16 million geotagged public tweets per month from 600,000 users for three months’ time. Below is a time-lapse of one New York Twitter day, with different colors representing different frequencies of tweets at that location (blue and green mean fewer tweets, orange and red mean more):
To make use of all this data, his team developed an algorithm that determines whether each tweet represents a report of flu-like symptoms. Previously, other researchers had done this simply by searching for keywords in tweets (“sick,” for example), but his team found that this approach leads to false positives: Many more users tweet that they’re sick of homework than that they’re feeling sick.
To account for this, his team’s algorithm looks for three words in a row (instead of one), and considers how often each particular sequence is indicative of illness, based on a set of tweets they’d manually labeled. The phrase “sick of flu,” for instance, is strongly correlated with illness, whereas “sick and tired” is less so. Some particular words—headache, fever, coughing—are strongly linked with illness no matter what three-word sequence they’re part of.
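Here's a toy version of that idea in Python: score each tweet by its most illness-indicative three-word phrase, with per-phrase probabilities learned from a hand-labeled set. The tiny training set and the function names below are made up for illustration; the team's real classifier was trained on far more data.

```python
from collections import Counter

def trigrams(text):
    """Break a tweet into overlapping three-word sequences."""
    words = text.lower().split()
    return [" ".join(words[i:i + 3]) for i in range(len(words) - 2)]

def train(labeled_tweets):
    """labeled_tweets: (text, is_flu_report) pairs, labeled by hand.
    Returns each trigram's observed probability of marking real illness."""
    sick, total = Counter(), Counter()
    for text, is_flu in labeled_tweets:
        for tg in trigrams(text):
            total[tg] += 1
            sick[tg] += int(is_flu)
    return {tg: sick[tg] / n for tg, n in total.items()}

def flu_score(text, weights):
    """Score a tweet by its single most illness-indicative trigram."""
    return max((weights.get(tg, 0.0) for tg in trigrams(text)), default=0.0)

weights = train([("i am so sick of homework", False),
                 ("home sick with flu today", True)])
print(flu_score("sick with flu again ugh", weights))  # 1.0: likely a real report
```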
Once these millions of tweets were coded, the researchers could do a few intriguing things with them. For starters, they looked at changes in flu-related tweets over time and compared them with flu levels reported by the CDC, confirming that the tweets accurately captured the overall trend in flu rates. Unlike the CDC data, however, the Twitter signal is available in near real time, rather than a week or two after the fact.
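Validating the signal is straightforward in principle. A sketch like the following shows how weekly tweet counts can be checked against official rates; the numbers here are invented placeholders, not the team's data.

```python
import numpy as np

# Hypothetical weekly totals: flu-labeled tweets vs. CDC-reported
# influenza-like-illness rates for the same weeks (illustrative only).
weekly_flu_tweets = np.array([120, 180, 260, 410, 390, 300, 210])
cdc_ili_rate      = np.array([1.1, 1.6, 2.4, 3.9, 3.7, 2.8, 2.0])

# Pearson correlation: a value near 1 means the tweet counts
# track the official flu trend closely.
r = np.corrcoef(weekly_flu_tweets, cdc_ili_rate)[0, 1]
print(f"correlation with CDC data: {r:.2f}")
```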
But they also went deeper, looking at the interactions between different users—as represented by two users tweeting from the same location (the GPS resolution is about half a city block) within the same hour—to model how likely it is that a healthy person would become sick after coming into contact with someone with the flu. Obviously, two people tweeting from the same block 40 minutes apart didn’t necessarily meet in person, but the odds of their having met are slightly higher than for two random users.
As a result, when you look at a large enough dataset of interactions, a picture of transmission emerges. The researchers found that if a healthy user encounters 40 other users who report themselves as sick with flu symptoms, his or her odds of getting flu symptoms the next day increase from less than one percent to 20 percent. With 60 interactions, that number rises to 50 percent.
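A minimal sketch of how such co-location "encounters" might be counted from geotagged tweets, in Python. The tuple layout, the grid-cell helper and the hour-bucketing are all assumptions for illustration, not the paper's implementation (which matches locations to roughly half a city block).

```python
from collections import defaultdict

def count_sick_encounters(tweets, sick_users, cell_of):
    """tweets: iterable of (user, time, lat, lon), where time is a datetime.
    sick_users: set of users whose tweets were flagged as flu reports.
    cell_of: assumed helper mapping (lat, lon) to a half-block grid cell.
    Returns a dict: healthy user -> number of same-cell, same-hour
    encounters with sick users."""
    buckets = defaultdict(set)
    for user, time, lat, lon in tweets:
        # Truncating to the clock hour is a rough stand-in for
        # "within the same hour"; boundary-straddling pairs are missed.
        hour = time.replace(minute=0, second=0, microsecond=0)
        buckets[(cell_of(lat, lon), hour)].add(user)

    encounters = defaultdict(int)
    for users in buckets.values():
        n_sick = sum(1 for u in users if u in sick_users)
        for u in users:
            if u not in sick_users:
                encounters[u] += n_sick
    return encounters
```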
The team also looked at interactions on Twitter itself, isolating pairs of users who follow each other and calling them “friendships.” Even though many Twitter relationships exist only on the Web, some correspond to real-life interactions, and the team found that a user with ten friends who report themselves as sick is 28 percent more likely to become sick the next day. In total, using both of these types of interactions, their algorithm was able to predict whether a healthy person would get sick (and tweet about it) with 90 percent accuracy.
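Once each user-day is reduced to a couple of numbers (co-location encounters with sick users, and sick friendships), predicting next-day illness becomes a standard supervised-learning problem. The sketch below uses scikit-learn's logistic regression on stand-in data; the paper's actual model and feature set are richer, so treat this only as a shape of the approach.

```python
from sklearn.linear_model import LogisticRegression

# Each row: [co-location encounters with sick users, number of sick
# Twitter friends]; each label: did the user report flu-like symptoms
# the next day? These six examples are stand-ins, not real data.
X = [[0, 0], [2, 0], [5, 1], [40, 2], [60, 3], [15, 10]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Estimated probability that a user with 40 sick encounters and no
# sick friends reports symptoms tomorrow.
print(model.predict_proba([[40, 0]])[0][1])
```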
We’re still in the early stages of this research, and there are plenty of limitations: Most people still don’t use Twitter (yes, really) and even if they do, they might not tweet about getting sick.
But if this sort of system could be developed further, it’s easy to imagine all sorts of applications. Your smartphone could automatically warn you, for instance, if you’d spent too much time in places occupied by people with the flu, prompting you to head home and get out of the path of infection. An entire city’s residents could even be warned if it were on the verge of an outbreak.
Though we’re more than 150 years removed from John Snow’s disease-mapping breakthrough, it’s clear that there are still aspects of disease transmission we don’t fully understand. Now, as then, mapping the data could help yield the answers.
October 29, 2013
One of the freakiest parts of getting bitten by a tick is the arachnid’s incredible tenacity: If one successfully pierces your skin and you don’t pull it off, it can hang on for days at a time, all the while sucking your blood and swelling in size.
Despite plenty of research into ticks and the diseases they carry, though, scientists have never fully understood the mechanics by which they use their mouths to penetrate skin and attach themselves so thoroughly. To address that, a group of German researchers recently used specialized microscopes and high-speed video cameras to capture a castor bean tick burrowing into a mouse’s bare skin in real time.
Their work, published today in Proceedings of the Royal Society B, produced all sorts of new revelations about the structure and function of the tick’s mouthparts. Perhaps the most harrowing part of the research, though, is the microscopic video they captured, shown at an accelerated speed above.
The team of scientists, led by Dania Richter of Charité Medical School in Berlin, conducted the work by placing five ticks on the ears of lab mice and letting them have their fill of blood. Unbeknownst to the ticks, though, they’d been caught on camera—and by analyzing the footage, along with detailed scanning electron microscope images of the ticks’ mouth appendages, the researchers found that the arachnids’ bites are really a highly specialized two-step process.
To begin, after the tick has climbed aboard a host animal, a pair of sharp structures called chelicerae, which are located at the end of its feeding appendage, alternate in poking downward. As they gradually dig, their barbed ends prevent them from slipping out, and the tick slowly and shallowly lodges itself in the skin, as seen in the first few seconds of the video.
After about 30 or so of these small digging movements, the tick switches to phase two (shown just after the video above zooms in). At this point, it simultaneously flexes both of the telescoping chelicerae, causing them to lengthen, and pushes them apart in what the researchers call “a breaststroke-like motion,” forming a V-shape.
With the tips of the chelicerae anchored in the skin, flexing them outward causes them to penetrate even deeper. When this occurs, the tick’s hypostome—a razor-sharp, even-more-heavily-barbed spear—plunges into the host’s skin and attaches firmly.
The tick’s not done, however: It repeats this same breaststroke five or six times in a row, pushing the hypostome deeper and deeper until it’s fully implanted. With the hypostome firmly in place, the tick begins drawing blood—sucking the fluid up to its mouth through a grooved channel that lies between the chelicerae and hypostome—and, if left uninterrupted, will continue until it’s sated days later.
This new understanding of how ticks accomplish this feat, the researchers say, could help us someday figure out how to prevent transmission of the most feared risk of a tick bite: Lyme disease. Scientists know that the disease is caused by several different species of bacteria that adhere to the inner lining of the tick’s gut and typically make the jump into a human’s bloodstream only after a full day of feeding. Knowing how ticks are able to attach themselves so stubbornly could eventually allow us to determine a means of thwarting their advances, before the Lyme-bearing bacteria have a chance to cross the species barrier.