May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can collect billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy‘s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. In a new study published earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (1,535,929,521 tweets for the time period), and focused on the subset with associated geographic data.
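To make that filtering step concrete, here is a minimal sketch of how one might pull the geotagged subset out of a decahose-style feed. It assumes a file with one tweet JSON object per line and relies on the standard GeoJSON "coordinates" field that geotagged tweets carry; the function and filename are illustrative, not the researchers' actual pipeline.

```python
import json

def geotagged_tweets(path):
    """Yield (longitude, latitude, text) for tweets carrying exact coordinates.

    Assumes one tweet JSON object per line. Tweets without a 'coordinates'
    field (the vast majority) are skipped.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            try:
                tweet = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip keep-alive blanks or truncated records
            coords = tweet.get("coordinates")
            if coords and coords.get("type") == "Point":
                lon, lat = coords["coordinates"]  # GeoJSON order: [lon, lat]
                yield lon, lat, tweet.get("text", "")

# Example: count the geotagged subset, as the researchers did.
# n = sum(1 for _ in geotagged_tweets("decahose_sample.jsonl"))
```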
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computer ownership, mobile device use and Twitter adoption are all at their highest—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
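Conceptually, that composite is simple to build. Below is a rough numpy sketch, assuming you already have two same-shaped grids of normalized intensities (one for tweet density, one for nighttime lighting); the names are placeholders, not anything from the study.

```python
import numpy as np

def composite(tweets, lights):
    """Blend two intensity grids (values in [0, 1]) into an RGB image.

    Tweet density drives the red channel and nighttime lighting the blue;
    the green channel is set to their overlap, so pixels where the two
    signals agree wash out toward white while mismatches stay red or blue.
    """
    rgb = np.zeros(tweets.shape + (3,))
    rgb[..., 0] = tweets                      # red: tweet density
    rgb[..., 2] = lights                      # blue: city lights
    rgb[..., 1] = np.minimum(tweets, lights)  # green: shared signal -> white
    return rgb
```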
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which accounted for 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
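The paper relied on an automated language classifier; as a stand-in, a tally like theirs can be sketched with the open-source langdetect library (the library choice is mine, not the study's):

```python
from collections import Counter
from langdetect import detect  # pip install langdetect

def language_shares(texts):
    """Guess each tweet's language and return percent shares, largest first."""
    counts = Counter()
    for text in texts:
        try:
            counts[detect(text)] += 1
        except Exception:  # langdetect errors on texts too short to classify
            continue
    total = sum(counts.values()) or 1
    return {lang: round(100 * n / total, 2) for lang, n in counts.most_common()}
```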
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance for a retweet was 1,115 miles, and 1,118 miles for a reference. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and their distance: Pairs of users with just a handful of interactions, on the whole, were more likely to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
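Averages like those imply computing a great-circle distance for every interacting pair of users. A back-of-envelope version, assuming each user can be reduced to a single latitude/longitude point, might look like this (the helpers are hypothetical, not the paper's code):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def mean_interaction_distance(pairs):
    """Average distance over ((lat, lon), (lat, lon)) tuples, e.g. retweet pairs."""
    dists = [haversine_miles(a[0], a[1], b[0], b[1]) for a, b in pairs]
    return sum(dists) / len(dists)
```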
These numbers suggest that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the accounts with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations: users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and the number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places where Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
May 9, 2013
In September 2009, after decades of speculation, evidence of water on the surface of the Moon was discovered for the first time. Chandrayaan-1, a lunar probe launched by India’s space agency, had created a detailed map of the minerals that make up the Moon’s surface, and analysts determined that, in several places, the characteristics of lunar rocks indicated that they bore as much as 600 million metric tons of water.
In the years since, we’ve seen further evidence of water both on the surface and within the interior of the Moon, locked within the pore space of rocks and perhaps even frozen in ice sheets. All this has gotten space exploration enthusiasts pretty excited, as the presence of frozen water could someday make permanent human habitation of the Moon much more feasible.
For planetary scientists, though, it’s raised a knotty question: How did water arrive on the Moon in the first place?
A new paper published today in Science suggests that, unlikely as it may seem, the Moon’s water originated from the same source as the water that comes out of your tap. Just as many scientists believe the Earth’s entire supply of water was initially delivered via water-bearing meteorites that traveled from the asteroid belt billions of years ago, a new analysis of lunar volcanic rocks brought back during the Apollo missions indicates the Moon’s water has its roots in these same meteorites. But there’s a twist: Before reaching the Moon, this lunar water was first on Earth.
The research team, led by Alberto Saal of Brown University, analyzed the isotopic composition of hydrogen found in water within tiny bubbles of volcanic glass (lava quenched too quickly to crystallize) as well as melt inclusions (blobs of melted material trapped in slowly cooling magma that later solidified) in the Apollo-era rocks, as shown in the image above. Specifically, they looked at the ratio of deuterium (“heavy” hydrogen, whose atoms contain an added neutron) to normal hydrogen atoms.
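For context, geochemists conventionally report such measurements in “delta” notation, expressing a sample’s deuterium-to-hydrogen ratio relative to the VSMOW ocean-water standard. The definition below is the standard one, not a formula reproduced from the paper:

```latex
\delta D \;=\; \left( \frac{(\mathrm{D}/\mathrm{H})_{\text{sample}}}{(\mathrm{D}/\mathrm{H})_{\text{VSMOW}}} - 1 \right) \times 1000 \quad \text{(parts per thousand)}
```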
Scientists have previously found that this ratio in water varies depending on where in the solar system the molecules initially formed: water that originated closer to the Sun contains less deuterium than water formed farther away. The water locked in the lunar glass and melt inclusions was found to have deuterium levels similar to those found in a class of meteorites called carbonaceous chondrites, which scientists believe to be the most unaltered remnants of the nebula from which the solar system formed. Carbonaceous chondrites that fall to Earth originate in the asteroid belt between Mars and Jupiter.
Higher deuterium levels would have suggested that water was first brought on to the Moon by comets—as many scientists have hypothesized—because comets largely come from the Kuiper belt and Oort Cloud, remote regions far beyond Neptune where deuterium is more plentiful. But if the water in these samples represents lunar water as a whole, the findings indicate that the water came from a much closer source—in fact, the same source as the water on Earth.
The simplest explanation for this similarity would be that, when a massive collision between the young Earth and a Mars-sized proto-planet formed the Moon some 4.5 billion years ago, some of our planet’s liquid water was somehow preserved from vaporization and transferred along with the solid material that would become the Moon.
Our current understanding of massive impacts, though, doesn’t allow for this possibility: The heat we believe would be generated by such an enormous collision would theoretically vaporize all lunar water and send it off into space in a gaseous form. But there are a few other scenarios that might explain how water was transferred from our proto-Earth to the Moon in other forms.
One possibility, the researchers speculate, is that the early Moon shared a bit of Earth’s high-temperature atmosphere the instant it formed. Any water that had been locked in the chemical composition of Earth rocks before the impact would have vaporized along with the rock into this shared atmosphere, then coalesced into the solid lunar body, binding the water into the chemical composition of lunar material. Another possibility is that the rocky chunk of Earth that was kicked off to form the Moon retained the water molecules locked inside its chemical composition, and that these were later released by radioactive heating in the Moon’s interior.
Evidence from recent lunar missions suggests that lunar rocks—not just craters at the poles—indeed contain substantial amounts of water, and this new analysis suggests that water originally came from Earth. So the findings will force scientists to rethink models of how the Moon could have formed, given that it clearly didn’t dry out completely.
May 8, 2013
Throwing a baseball is hard. As xkcd pointed out just yesterday, accurately throwing a strike requires that a pitcher release the ball at an extremely precise moment—doing so more than half a millisecond too early or too late causes it to miss the strike zone entirely. Because it takes far longer (a full five milliseconds) just for a nerve impulse to travel the length of the arm, this feat requires the brain to send a signal to the hand to release the ball well before the arm has reached its proper throwing position.
The one feat even more difficult than throwing a fastball, though, might be hitting one. There’s a 100 millisecond delay between the moment your eyes see an object and the moment your brain registers it. As a result, when a batter sees a fastball flying by at 100 mph, it’s already moved nearly 15 additional feet by the time his or her brain has actually registered its location.
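The arithmetic behind that figure is easy to check; here is a quick sketch using the speeds and the 100 millisecond delay quoted above:

```python
MPH_TO_FT_PER_SEC = 5280 / 3600  # 1 mph is about 1.47 ft/s
VISUAL_DELAY_SEC = 0.100         # ~100 ms from the eye to conscious registration

for mph in (75, 100):            # change-up vs. fastball
    ft_traveled = mph * MPH_TO_FT_PER_SEC * VISUAL_DELAY_SEC
    print(f"A {mph} mph pitch moves {ft_traveled:.1f} ft before the brain catches up")
# 100 mph -> ~14.7 ft; 75 mph -> ~11.0 ft
```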
How, then, do batters ever manage to make contact with 100 mph fastballs—or, for that matter, 75 mph change-ups?
In a study published today in the journal Neuron, UC Berkeley researchers used fMRI (functional magnetic resonance imaging) to pinpoint the prediction mechanisms in the brain that enable hitters to track pitches (and enable all sorts of people to envision the paths of moving objects in general). They found that the brain can effectively “push” objects forward along their trajectory from the moment it first sees them, simulating their path based on their direction and speed and allowing us to unconsciously project where they’ll be a moment later.
The research team put participants in an fMRI machine (which measures blood flow to various parts of the brain in real time) and had them watch a screen showing the “flash-drag effect” (below), a visual illusion in which a moving background causes the brain to mistakenly interpret briefly flashed stationary objects as moving. “The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” said Gerrit Maus, the paper’s lead author, in a press statement.
Because the participants’ brains thought these briefly flashing boxes were moving, the researchers hypothesized, the area of their brain responsible for predicting the motion of objects would show increased activity. Similarly, when shown a video where the background didn’t move but the flashing objects actually did, the same motion-prediction mechanism would cause similar neuron activity to occur. In both cases, the V5 region of their visual cortex showed distinctive activity, suggesting this area is home to the motion-prediction capabilities that allow us to track fast-moving objects.
Previously, in another study, the same team had zeroed in on the V5 region by using transcranial magnetic stimulation (which interferes with brain activity) to disrupt the area and found that participants were less effective at predicting the movement of objects. “Now not only can we see the outcome of prediction in area V5, but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions,” Maus said.
It’s not much of a leap to suppose that this prediction mechanism is more sophisticated in some people than others—which is why most of us would whiff when trying to hit the fastball of a major league pitcher.
A failure in this mechanism might be at work, the researchers say, in people who have motion perception disorders such as akinetopsia, which leaves the ability to see stationary objects completely intact but renders a person essentially blind to anything in motion. Better understanding how neurological activity in the V5 region—along with other areas of the brain—allows us to track and predict movement could, in the long-term, help us develop treatments for these sorts of disorders.
May 7, 2013
In the past century or so, football helmets have come a long way, evolving from crude “leatherheads” crafted by shoemakers to plastic-and-rubber hybrids that can be customized to fit a player’s head and have radios built in.
Nevertheless, the sport currently faces a serious and growing problem: brain injuries. Studies have shown that former NFL players are about three times more likely than the general population to die from Alzheimer’s, Parkinson’s and Lou Gehrig’s diseases, a result of the alarming number of concussions they experience over the course of their careers. The NFL has responded by changing its rules to minimize head impacts, instituting stricter guidelines for concussed players returning to games and pouring money into attempts to develop safer helmets.
But some critics argue that, no matter how much research we undertake, there’s simply no way to create a concussion-proof helmet—no technology can stop a fundamentally violent game from inflicting harm. A 2011 study, in fact, found that with many types of impacts, modern helmets were no better than vintage leather ones at protecting players’ heads.
But now, fans who find their desire for a safe game at odds with their love of it can take comfort in a new study, published today in the Journal of Neurosurgery, that concludes otherwise: Compared to “leatherheads,” new helmets are indeed much more effective at protecting the human head. Researchers from Virginia Tech reached this conclusion by using an automated head impact simulation system to test a pair of vintage Hutch H-18 leather helmets from the 1930s against 10 plastic helmets currently in use, and found that, depending on the force of impact, modern helmets reduced concussion risk by anywhere from 45 to 96 percent.
The team used the system to measure four different types of head impacts (on the front, side, rear and top of the head), and dropped the head from a range of heights (12, 24, 36, 48 and 60 inches) with each helmet on to simulate in-game impacts of a variety of intensities. Sensors inside the head were used to measure the force of each type of impact. This same type of testing, developed by the Virginia Tech team, has been used extensively to classify the safety of modern helmets on a 1 through 5-star scale.
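To get a feel for the intensities those drop heights represent, the impact speed of a free-fall drop follows from basic kinematics, v = √(2gh). Here is a back-of-envelope conversion (my own, not part of the published protocol):

```python
from math import sqrt

G_FT_PER_S2 = 32.174  # standard gravity in ft/s^2

for inches in (12, 24, 36, 48, 60):       # the study's drop heights
    h_ft = inches / 12
    v_fps = sqrt(2 * G_FT_PER_S2 * h_ft)  # impact speed, ignoring air resistance
    mph = v_fps * 3600 / 5280
    print(f"{inches:2d} in drop -> impact at {v_fps:.1f} ft/s ({mph:.1f} mph)")
# 12 in -> ~8.0 ft/s (5.5 mph); 60 in -> ~17.9 ft/s (12.2 mph)
```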
The researchers found that there was some difference in the performance of the modern helmets—but, as you’d probably expect just from looking at them, the vintage leather helmets performed significantly more poorly than any of the plastic ones. At the lowest intensity impacts (from a 12 inch drop height), the modern helmets reduced the impact on the head by 59 to 63 percent, and at the medium-intensity impacts (from 36 inches), they provided a 67 to 73 percent reduction. The researchers didn’t even try dropping the head with the leather helmets on from 48 or 60 inches for fear of damaging them.
At the same time, it’s worth noting that the vintage helmets tested were each about 80 years old, so their age might have meant weaker leather fibers than if pristine leatherheads had been tested. Additionally, the leather helmets had presumably taken some beating during their years of use, while the plastic ones were unused before being subjected to the drops, which might have further skewed the results.
Still, both of these factors were also present in the 2011 study that found leather helmets just as effective as modern ones—so what accounts for the fact that this experiment so thoroughly contradicted it? The authors of the new study say that the earlier experimental setup—in which two helmeted heads were smashed together, one wearing a modern helmet and the other either a modern or a leather one—distorted the findings and masked the differences between helmet types. Some of the impact, they say, was actually absorbed by the padding in the modern plastic helmet even when the leather one was being tested.
Of course, given the distressing numbers on the concussions suffered by football players even with the latest helmets, this sort of testing shouldn’t suggest that the goal of designing safer head gear has been achieved. But it should give us a bit of hope in showing that 100 years of helmet design has provided some benefits—and future efforts to create and rigorously test new helmet technologies might be able to cut down on concussions long-term.
May 3, 2013
If you’ve ever noticed a strange, not-entirely-pleasant scent coming from your urine after you eat asparagus, you’re definitely not alone.
Distinguished thinkers as varied as Scottish mathematician and physician John Arbuthnot (who wrote in a 1731 book that “asparagus…affects the urine with a foetid smell”) and Marcel Proust (who wrote how the vegetable “transforms my chamber-pot into a flask of perfume”) have commented on the phenomenon.
Even Benjamin Franklin took note, stating in a 1781 letter to the Royal Academy of Brussels that “A few Stems of Asparagus eaten, shall give our Urine a disagreable Odour” (he was trying to convince the academy “To discover some Drug…that shall render the natural Discharges of Wind from our Bodies, not only inoffensive, but agreable as Perfumes”—a goal that, alas, modern science has still not achieved).
But modern science has, at least, shed some light on why this one particular vegetable has such an unusual and potent impact on the scent of urine. Scientists tell us that the asparagus-urine link all comes down to one chemical: asparagusic acid.
Asparagusic acid, as the name implies, is (to our knowledge) only found in asparagus. When our bodies digest the vegetable, they break down this chemical into a group of related sulfur-containing compounds with long, complicated names (including dimethyl sulfide, dimethyl disulfide, dimethyl sulfoxide and dimethyl sulfone). As with many other substances that include sulfur—such as garlic, skunk spray and odorized natural gas—these sulfur-containing molecules convey a powerful, typically unpleasant scent.
All of these molecules also share another key characteristic: They’re volatile, meaning they have a low enough boiling point to vaporize and enter a gaseous state at room temperature, which allows them to travel from urine into the air and up your nose. Asparagusic acid, on the other hand, isn’t volatile, so asparagus itself doesn’t convey the same rotten smell. But once your body converts asparagusic acid into these volatile, sulfur-bearing compounds, the distinctive aroma can be generated quite quickly—in some cases, it’s been detected in the urine of people who ate asparagus just 15-30 minutes earlier.
Of course, the whole asparagus-urine scent issue is complicated by an entirely separate issue: Some people simply don’t smell anything different when they urinate after eating asparagus. Scientists have long been divided into two camps in explaining this issue. Some believe that, for physiological reasons, these people (who constitute anywhere from 20 to 40 percent of the population) don’t produce the aroma in their urine when they digest asparagus, while others think that they produce the exact same scent, but somehow lack the ability to smell it.
On the whole, the evidence is mixed. Initially, a pair of studies conducted in the 1980s with participants from France and Israel found that everyone produced the characteristic scent, and that a minority of people were simply unable to smell it. People with the ability to detect the scent, though, were able to smell it even in the urine of those who couldn’t smell it, indicating that the differences were rooted in perception, not production.
More recent studies, though, suggest the issue is a bit more complicated. The most recent study, from 2010, found that differences existed between individuals in both the production and detection of the scent.
Overall, scientists now conclude that most of the difference is in perception—that is, if your urine doesn’t seem to smell any different after you eat asparagus, it’s most likely that you simply can’t perceive the sulfurous compounds’ foul odor, though there’s a small chance your body digests asparagus in a way that reduces the concentration of these chemicals in your urine.
It’s still unclear why some people don’t produce the smell, but we do seem to have a clear explanation of why some people don’t perceive it. In 2010, the genetic sequencing company 23andMe conducted a study in which they asked nearly 10,000 customers if they noticed any scent in their urine after eating asparagus, and looked for genetic similarities among those who couldn’t. This peculiarity—which you might consider useful if you eat asparagus frequently—appears to stem from a single genetic mutation, a switched base-pair among a cluster of 50 different genes that code for olfactory receptors.
We’re still waiting for some enterprising team of scientists to attempt gene therapy that converts smellers into non-smellers—but given more pressing targets for genetic medicine, such as blindness and breast cancer, it seems likely that those suffering from asparagus-scented urine will have to wait a while.