May 23, 2013
There’s a lot we don’t understand about an itch. Why do itches sometimes pop up for no apparent reason? Why is itching contagious? Why can the very idea of an itch—maybe even the fact that you’re currently reading about itching—cause you to feel the actual physical sensation of one?
Given all this uncertainty, a new discovery reported today in Science should at least scratch the surface of your curiosity and answer a question you’ve been itching to ask (terrible puns intended). A pair of molecular geneticists from the National Institutes of Health, Santosh Mishra and Mark Hoon, isolated a crucial signaling molecule produced by nerve cells that is necessary for passing along the sensation of an itch to the brain.
The pair worked with mice, and started off by examining the neurotransmitter chemicals produced by a type of neuron that runs all the way from the animals’ skin into their spinal columns. These neurons are known to be involved in passing along sensory information about the outer environment, including sensations of heat and pain. They found that one of the neurotransmitters produced by these nerve cells—a chemical called Nppb (natriuretic polypeptide b)—was secreted in excess when the mice were subjected to a range of itch-inducing substances, such as histamine (the natural compound that triggers the itchiness associated with allergies) and chloroquine (a malaria drug that’s notorious for causing itching as a side-effect).
To test whether Nppb played a role in the itching, they genetically engineered some mice so that they failed to produce the chemical. First, they checked whether these engineered mice could still perceive the other types of sensations conveyed by these neurons (pain, movement and heat); the animals behaved just the same as normal mice, indicating that Nppb wasn’t involved in the transmission of those stimuli.
Then, they exposed them once again to the itch-inducing chemicals. The normal mice scratched away, but the genetically engineered mice were another story. “It was amazing to watch,” Mishra said in a press statement. “Nothing happened. The mice wouldn’t scratch.”
Nppb, they determined, plays a key role in passing along the sensation of an itch from these neurons to the brain—a conclusion reinforced when they injected the engineered mice with doses of Nppb and the animals suddenly started scratching just like their normal counterparts.
To investigate just how Nppb relays this message, they zeroed in on a spot in the mice’s spines called the dorsal horn, where sensory information from the skin and muscles enters the spinal cord and is relayed to the brain. In this area, they discovered a high concentration of neurons with a receptor called Npra (natriuretic peptide receptor A) that seemed likely to accept the Nppb molecules secreted when the mice encountered an itch-triggering substance.
Sure enough, when they removed the neurons with the Npra receptor from normal, non-engineered mice that produced Nppb, those mice too stopped scratching when exposed to the substances. This indicates that Nppb carries the itch sensation from the nerves that reach out into the skin to the spinal cord, where it fits into the Npra receptor on spinal nerve cells that then convey the sensation to the brain. But removing these receptors didn’t impact the transmission of pain or touch, indicating that Npra is specifically involved in the itch pathway. This comes as a surprise, as most previous research has indicated that the pain and itch nervous networks are intricately related.
While this chemical pathway explains part of the physical mechanism behind an itch, scientists still don’t fully understand the underlying evolutionary reason for the sensation in the first place. Some have speculated that it serves as a defense measure against insects, parasites and allergens, prompting us to scratch—and, ideally, remove the offending item from our skin—before it causes further damage.
Regardless of the evolutionary reason, our nervous system is similar enough to that of mice that the finding could help us better understand itching patterns in humans—perhaps people who are especially prone to itching naturally produce higher levels of Nppb than those who get bitten by a mosquito and find the itchiness easy to ignore. On a practical level, the discovery could eventually help us develop anti-itch drugs for people with chronic itching ailments, such as allergic reactions or skin conditions like eczema, which affects an estimated 30 million people.
The problem, though, is that Nppb plays several other important roles in the body (it was originally discovered due to its role in the regulation of blood circulation and pressure) so simply creating a drug that disables Nppb is likely to cause disruptive side-effects that go way beyond itching. But looking more closely into the way the Nppb molecule acts as a “start switch” for itching in humans—and perhaps figuring out a way to safely turn the switch off—could potentially provide relief for itchiness caused by all sorts of triggers, because in the mice, at least, the molecule was found to be involved in the whole range of itch-inducing substances the team tested.
May 22, 2013
For most of human history, any baby who suffered a collapsed trachea or bronchus faced a tragic fate: suffocation. These tubes convey air from the mouth to the lungs, and some infants are born with congenitally weakened cartilage surrounding them, a condition known as tracheomalacia. In severe cases, this can lead the trachea or bronchi to collapse completely, blocking the flow of air and causing a newborn to suddenly stop breathing.
To the amazingly wide-ranging list of accomplishments attributed to 3D printing technology, we can now add one more: a custom-made tracheal splint that saved the life of an infant with tracheomalacia and will be safely absorbed into his tissue over the next two years. A team of doctors and engineers from the University of Michigan printed the splint and implanted it into six-week-old Kaiba Gionfriddo last year, and announced the feat in a letter published today in the New England Journal of Medicine.
In December of 2011, Gionfriddo was born with tracheomalacia, a condition that affects roughly 1 in 2,200 American babies. Typically, the weakened cartilage causes some difficulty breathing, but children grow out of it by age 2 or 3 as the trachea naturally strengthens over time. His case, though, was particularly severe, and in February 2012, his parents April and Bryan were out to dinner when they noticed that he had suddenly stopped breathing and was turning blue.
He was rushed to a hospital and kept alive with a ventilator, but doctors said there was a good chance he wouldn’t survive long-term. Several weeks later, a team of Michigan engineers led by Scott Hollister began designing the device, based on prior research in which they’d 3D printed splints and other prostheses but hadn’t implanted them in clinical patients. For this splint, they used a CT scan of Gionfriddo’s trachea and left bronchus to create a 3D digital representation that was then printed, allowing them to produce a splint that perfectly matched his airway’s size and contours.
On February 21, 2012, the splint was surgically sewn around Gionfriddo’s collapsed bronchus; almost immediately, it held open his air passages and allowed him to breathe normally. “It was amazing. As soon as the splint was put in, the lungs started going up and down for the first time,” Glenn Green, the doctor who performed the surgery and helped design the splint, said in a press statement.
Twenty-one days later, Gionfriddo was taken off the ventilator, and he has had no breathing problems in the 14 months since the surgery. In addition to holding open the bronchus, the splint also provides a skeleton upon which natural cartilage tissue can grow, and because it was printed using a biopolymer called polycaprolactone, it will gradually be absorbed into this body tissue over time.
Previously, severe tracheomalacia was treated with extended periods on a ventilator, or with the implantation of mesh tubes around the trachea or bronchus to keep the airway open. By custom-designing the splint based on a CT scan, though, the team created a treatment method that they say is more effective. Additionally, the dissolvable material means Gionfriddo won’t need invasive surgery later to remove the device.
The team has also worked on using this same CT scanning and 3D printing process to produce custom-made ear, nose, skull and bone prostheses that are currently in experimental phases. Other research groups have successfully implanted 3D printed ears, noses and skulls in clinical patients, while last month, an Oxford team figured out how to print microscopic droplets that behave like human tissue.
May 21, 2013
For nearly 150 years, starting in the late 17th century, millions of people living in Ireland subsisted largely off one crop: the potato. Then, in 1845, farmers noticed that their potato plants’ leaves were covered in mysterious dark splotches. When they pulled potatoes from the ground, most were shrunken, mushy and inedible. The blight spread alarmingly quickly, cutting that year’s harvest in half. By 1846, the potato harvest had dropped to one quarter of its pre-blight level.
The disease—along with a political system that required Ireland to export large amounts of corn, dairy and meat to England—led to widespread famine, and nearly all of the few potatoes available were eaten, causing shortages of seed potatoes that ensured starvation would continue for nearly a decade. Ultimately, over one million people died, and another million emigrated to escape the disaster, causing Ireland’s population to fall by roughly 25 percent; the island has still not reached its pre-famine population levels today.
At the time, the science behind the blight was poorly understood, and most believed it was caused by a fungus. During the 20th century, scientists determined that it was actually caused by an oomycete (a fungus-like eukaryote) called Phytophthora infestans. But without DNA from 1840s-era specimens, they couldn’t identify exactly which strain of the organism was responsible.
Now, an international group of scientists has gone back and sampled the DNA of Irish potato leaves preserved in the collections of London’s Kew Gardens since 1847. In doing so, they discovered that a unique, previously unknown strain of P. infestans that they call HERB-1 caused the blight.
The researchers, from the Sainsbury Laboratory in the UK and the Max Planck Institutes in Germany, made the discovery as part of a project, published today in the journal eLife [PDF], that sequenced DNA from 11 different preserved historical samples and 15 modern ones to track the evolution of the pathogen over time.
Currently, P. infestans is distributed worldwide, with the vast majority of specimens belonging to the destructive strain US-1. Most of the other strains of P. infestans occur only in Mexico’s Toluca Valley, where wild potato varieties are indigenous, so scientists long believed that US-1 had been responsible for the 1840s famine.
But when the researchers extracted small pieces of intact DNA from the old dried-out potato leaves, originally collected from Ireland, Great Britain, continental Europe and North America, and compared them with present-day P. infestans specimens, they found that the strain responsible for the famine differed slightly from today’s US-1.
Based on their analysis of the genetic variation between the two strains and the other historical samples, they suggest that sometime in 1842 or 1843, the ancestor of the HERB-1 strain of P. infestans made it out of Mexico to North America and then to Europe, perhaps contained within the potatoes that ships carried as food for their passengers. Soon, it spread across the world, triggering famine in Ireland, and persisting until the 1970s, when it died out and was largely replaced by the US-1 strain. The two strains likely split apart sometime soon after their common ancestor made it out of Mexico.
The study marks the first time the genetics of a plant pathogen have been analyzed by extracting DNA from dried plant samples, opening up the possibility that researchers can study other plant diseases using the historical collections of botanical gardens and herbaria around the world. Better understanding the evolution of plant diseases over time, the team says, could be instrumental in figuring out ways to breed more robust plant varieties that are resistant to today’s pathogens.
May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy‘s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. In a new study published earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (which totaled 1,535,929,521 for the time period), and focused on the subset of tweets with associated geographic data.
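The paper doesn’t publish its processing pipeline, but the core filtering step is simple to picture. Here is a minimal sketch in Python, assuming tweets arrive as newline-delimited JSON in the classic Twitter format, where a geotagged tweet carries a non-null “coordinates” field holding a GeoJSON point (the input file name is hypothetical):

```python
import json

def geotagged_tweets(lines):
    """Yield (longitude, latitude, tweet) for tweets carrying exact coordinates.

    Assumes one tweet per line in the classic Twitter JSON format, where a
    geotagged tweet has a non-null "coordinates" field holding a GeoJSON
    Point: {"type": "Point", "coordinates": [longitude, latitude]}.
    """
    for line in lines:
        try:
            tweet = json.loads(line)
        except ValueError:
            continue  # skip blank keep-alive lines and malformed records
        coords = tweet.get("coordinates")
        if coords and coords.get("type") == "Point":
            lon, lat = coords["coordinates"]
            yield lon, lat, tweet

# Count the geotagged tweets in a locally saved sample (file name hypothetical).
with open("decahose_sample.json") as f:
    print(sum(1 for _ in geotagged_tweets(f)))
```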
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
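The paper describes the composite only at the level of red, blue and white, so the exact recipe below is a plausible construction rather than necessarily the researchers’ own: put tweet density in the red channel, lighting in the blue channel and their overlap in the green channel, so pixels where both layers are strong wash out toward white.

```python
import numpy as np

def composite_map(tweet_density, light_intensity):
    """Blend two same-shaped 2-D arrays into an RGB image (values in [0, 1]).

    Red = tweet density, blue = nighttime lighting, green = their overlap,
    so pixels where both layers are strong wash out toward white, while a
    surplus of one layer stays red (tweets only) or blue (lights only).
    """
    def normalize(layer):
        layer = layer.astype(float)
        peak = layer.max()
        return layer / peak if peak > 0 else layer

    r = normalize(tweet_density)
    b = normalize(light_intensity)
    g = np.minimum(r, b)  # overlap term pushes matched pixels toward white
    return np.dstack([r, g, b])  # shape: (height, width, 3)
```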
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which is represented in 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
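The study doesn’t specify which language classifier it used, but the same sort of tally can be reproduced with an off-the-shelf detector. A rough sketch, using the open-source langdetect package purely as a stand-in for the researchers’ system:

```python
from collections import Counter

from langdetect import detect  # pip install langdetect

def language_shares(texts):
    """Return each detected language's share of the input texts, in percent."""
    counts = Counter()
    for text in texts:
        try:
            counts[detect(text)] += 1
        except Exception:
            counts["unknown"] += 1  # too short or otherwise unclassifiable
    total = sum(counts.values())
    return {lang: 100.0 * n / total for lang, n in counts.items()}

print(language_shares(["the rain in spain", "la lluvia en españa"]))
```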
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance was 1,115 miles for a retweet and 1,118 miles for a reference. But, counterintuitively, there was a positive relationship between the number of times a given pair of users retweeted or referenced each other and the distance between them: pairs with just a handful of interactions, on the whole, tended to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
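The paper doesn’t spell out its distance formula, but the standard way to measure the separation between two geotagged points is the great-circle (haversine) distance, sketched here:

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3958.8  # mean radius of the Earth

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# New York to Los Angeles comes out around 2,450 miles:
print(round(haversine_miles(40.71, -74.01, 34.05, -118.24)))
```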
This indicates that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
May 8, 2013
Throwing a baseball is hard. As xkcd pointed out just yesterday, accurately throwing a strike requires that a pitcher release the ball at an extremely precise moment—doing so more than half a millisecond too early or too late causes it to miss the strike zone entirely. Because it takes far longer (a full five milliseconds) just for our nerve impulses to cover the distance of our arm, this feat requires the brain to send a signal to the hand to release the ball well before the arm has reached its proper throwing position.
The one feat even more difficult than throwing a fastball, though, might be hitting one. There’s a 100 millisecond delay between the moment your eyes see an object and the moment your brain registers it. A 100 mph fastball covers roughly 147 feet per second, so by the time a batter’s brain has actually registered the ball’s location, it has already moved nearly 15 feet closer to the plate.
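Those figures are straightforward to check with a bit of unit conversion (nothing here beyond the speeds and the 100-millisecond delay quoted above):

```python
MPH_TO_FEET_PER_SECOND = 5280 / 3600  # 1 mph = ~1.47 ft/s

def feet_traveled(speed_mph, delay_ms):
    """Distance a pitch covers during a given neural processing delay."""
    return speed_mph * MPH_TO_FEET_PER_SECOND * delay_ms / 1000.0

print(feet_traveled(100, 100))  # ~14.7 ft for a 100 mph fastball in 100 ms
print(feet_traveled(75, 100))   # ~11.0 ft for a 75 mph change-up
```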
How, then, do batters ever manage to make contact with 100 mph fastballs—or, for that matter, 75 mph change-ups?
In a study published today in the journal Neuron, UC Berkeley researchers used fMRI (functional magnetic resonance imaging) to pinpoint the prediction mechanisms in the brain that enable hitters to track pitches (and enable all sorts of people to envision the paths of moving objects in general). They found that the brain effectively “pushes” objects forward along their trajectory from the moment it first sees them, simulating their path based on their direction and speed and allowing us to unconsciously project where they’ll be a moment later.
The research team put participants in an fMRI machine (which measures blood flow to various parts of the brain in real time) and had them watch a screen showing the “flash-drag effect” (below), a visual illusion in which a moving background causes the brain to mistakenly interpret briefly flashed stationary objects as moving. “The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” said Gerrit Maus, the paper’s lead author, in a press statement.
Because the participants’ brains thought these briefly flashing boxes were moving, the researchers hypothesized, the area of their brain responsible for predicting the motion of objects would show increased activity. Similarly, when shown a video where the background didn’t move but the flashing objects actually did, the same motion-prediction mechanism would cause similar neuron activity to occur. In both cases, the V5 region of their visual cortex showed distinctive activity, suggesting this area is home to the motion-prediction capabilities that allow us to track fast-moving objects.
Previously, in another study, the same team had zeroed in on the V5 region by using transcranial magnetic stimulation (which interferes with brain activity) to disrupt the area and found that participants were less effective at predicting the movement of objects. “Now not only can we see the outcome of prediction in area V5, but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions,” Maus said.
It’s not much of a leap to suppose that this prediction mechanism is more sophisticated in some people than others—which is why most of us would whiff when trying to hit the fastball of a major league pitcher.
A failure in this mechanism might be at work, the researchers say, in people who have motion perception disorders such as akinetopsia, which leaves the ability to see stationary objects completely intact but renders a person essentially blind to anything in motion. Better understanding how neurological activity in the V5 region—along with other areas of the brain—allows us to track and predict movement could, in the long-term, help us develop treatments for these sorts of disorders.