May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy‘s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. As published in a new study earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (which totaled 1,535,929,521 for the time period), and simply focused on the tweets with associated geographic data.
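The filtering step the team describes—keeping only the sliver of decahose tweets that carry geographic data—can be sketched in a few lines of Python. The payload layout below follows Twitter’s classic v1.1 JSON (a GeoJSON “coordinates” field, longitude first); the sample tweets are invented for illustration.

```python
# Minimal sketch of the geotag-filtering step: from a decahose-style stream
# of tweet JSON objects, keep only tweets that carry exact GPS coordinates.
# Field names follow Twitter's classic v1.1 payload; data here is invented.

def geotagged(tweets):
    """Yield (lat, lon, lang) for tweets that have exact GPS coordinates."""
    for t in tweets:
        coords = t.get("coordinates")          # GeoJSON Point or None
        if coords and coords.get("type") == "Point":
            lon, lat = coords["coordinates"]   # GeoJSON order is [lon, lat]
            yield lat, lon, t.get("lang", "und")

sample = [
    {"text": "hello", "lang": "en",
     "coordinates": {"type": "Point", "coordinates": [-87.65, 41.85]}},
    {"text": "no location", "lang": "ja", "coordinates": None},
]

print(list(geotagged(sample)))  # only the first tweet survives
```

In the study’s case this filter reduced roughly 1.5 billion sampled tweets to about 47 million with usable coordinates—around 3 percent of the sample.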
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
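One way to realize the red/blue compositing described above can be sketched with NumPy; the toy grids and exact color mapping here are illustrative assumptions, not the paper’s actual processing. Cells where the two normalized signals balance stay white, while an imbalance pulls the cell toward red (tweets) or blue (lights).

```python
import numpy as np

def composite(tweets, lights):
    """Blend two intensity grids into RGB: balanced cells render white,
    tweet-dominant cells red, light-dominant cells blue."""
    t = tweets / tweets.max()
    l = lights / lights.max()
    diff = t - l                          # >0: tweets dominate; <0: lights
    rgb = np.ones(t.shape + (3,))         # start every cell at white
    rgb[..., 1] -= np.abs(diff)           # any imbalance removes green
    rgb[..., 2] -= np.clip(diff, 0, 1)    # tweet-heavy cells lose blue -> red
    rgb[..., 0] -= np.clip(-diff, 0, 1)   # light-heavy cells lose red -> blue
    return np.clip(rgb, 0.0, 1.0)

# Toy 1x3 strip: balanced, tweets-only, lights-only
tweets = np.array([[1.0, 1.0, 0.0]])
lights = np.array([[1.0, 0.0, 1.0]])
print(composite(tweets, lights))  # white, red, blue
```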
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which accounted for 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance for a retweet was 1,115 miles and 1,118 for a reference. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and their distance: Pairs of users with just a handful of interactions, on the whole, were more likely to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
This indicates that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
May 9, 2013
In September 2009, after decades of speculation, evidence of water on the surface of the Moon was discovered for the first time. Chandrayaan-1, a lunar probe launched by India’s space agency, had created a detailed map of the minerals that make up the Moon’s surface, and analysts determined that, in several places, the characteristics of lunar rocks indicated that they bore as much as 600 million metric tonnes of water.
In the years since, we’ve seen further evidence of water both on the surface and within the interior of the Moon, locked within the pore space of rocks and perhaps even frozen in ice sheets. All this has gotten space exploration enthusiasts pretty excited, as the presence of frozen water could someday make permanent human habitation of the Moon much more feasible.
For planetary scientists, though, it’s raised a knotty question: How did water arrive on the Moon in the first place?
A new paper published today in Science suggests that, unlikely as it may seem, the Moon’s water originated from the same source as the water that comes out of your tap. Just as many scientists believe the Earth’s entire supply of water was initially delivered via water-bearing meteorites that traveled from the asteroid belt billions of years ago, a new analysis of lunar volcanic rocks brought back during the Apollo missions indicates the Moon’s water has its roots in these same meteorites. But there’s a twist: Before reaching the Moon, this lunar water was first on Earth.
The research team, led by Alberto Saal of Brown University, analyzed the isotopic composition of hydrogen found in water within tiny bubbles of volcanic glass (supercooled lava) as well as melt inclusions (blobs of melted material trapped in slowly cooling magma that later solidified) in the Apollo-era rocks, as shown in the image above. Specifically, they looked at the ratio of deuterium isotopes (“heavy” hydrogen atoms that contain an added neutron) to normal hydrogen atoms.
Scientists have previously found that, in water, this ratio changes depending on where in the solar system the water molecules initially formed, as water that originated closer to the Sun has less deuterium than water formed further away. The water locked in the lunar glass and melt inclusions was found to have deuterium levels similar to those found in a class of meteorites called carbonaceous chondrites, which scientists believe to be the most unaltered remnants of the nebula from which the solar system formed. Carbonaceous chondrites that fall to Earth originate in the asteroid belt between Mars and Jupiter.
Higher deuterium levels would have suggested that water was first brought on to the Moon by comets—as many scientists have hypothesized—because comets largely come from the Kuiper belt and Oort Cloud, remote regions far beyond Neptune where deuterium is more plentiful. But if the water in these samples represents lunar water as a whole, the findings indicate that the water came from a much closer source—in fact, the same source as the water on Earth.
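The logic of the comparison can be sketched numerically. The D/H ratios below are approximate literature values used purely for illustration (Earth’s oceans sit near 1.56 × 10⁻⁴, chondrites are similar, and measured comets run roughly twice that); they are not the paper’s measurements.

```python
# Illustrative sketch of reservoir matching by deuterium-to-hydrogen ratio.
# Ratios are rough literature values, not figures from the Science paper.

VSMOW = 1.56e-4   # D/H of Earth's ocean water (standard mean ocean water)

reservoirs = {
    "carbonaceous chondrites": 1.4e-4,   # roughly Earth-like
    "Oort-cloud comets":       3.0e-4,   # about twice Earth's ratio
}

def closest_reservoir(sample_dh):
    """Return the reservoir whose D/H ratio best matches the sample."""
    return min(reservoirs, key=lambda r: abs(reservoirs[r] - sample_dh))

print(closest_reservoir(1.6e-4))  # an Earth-like sample matches chondrites
```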
The simplest explanation for this similarity would be a scenario in which, when a massive collision between a young Earth and a Mars-sized proto-planet formed the Moon some 4.5 billion years ago, some of the liquid water on our planet was somehow preserved from vaporization and transferred along with the solid material that would become the Moon.
Our current understanding of massive impacts, though, doesn’t allow for this possibility: The heat we believe would be generated by such an enormous collision would theoretically vaporize all lunar water and send it off into space in a gaseous form. But there are a few other scenarios that might explain how water was transferred from our proto-Earth to the Moon in other forms.
One possibility, the researchers speculate, is that the early Moon shared a bit of Earth’s high-temperature atmosphere the instant it formed: any water locked in the chemical composition of Earth rocks before the impact would have vaporized along with the rock into this shared atmosphere, then coalesced into the solid lunar material, binding the water into its chemical composition. Another possibility is that the rocky chunk of Earth that was kicked off to form the Moon retained the water molecules locked inside its chemical composition, and these were later released by radioactive heating within the Moon’s interior.
Evidence from recent lunar missions suggests that lunar rocks—not just craters at the poles—indeed contain substantial amounts of water, and this new analysis suggests that water originally came from Earth. So the findings will force scientists to rethink models of how the Moon could have formed, given that it clearly didn’t dry out completely.
With their big, glossy black eyes and downy fluff, Weddell seal pups are some of the most adorable newborns in the animal kingdom. But these cute infants are far from helpless bundles of joy. New research published in the journal Marine Mammal Science reveals that Weddell seal pups likely possess the most adult-like brain of any mammal at birth.
Relative to adult seals’ brain proportions, the pups’ brains are the largest known for any newborn mammal to date. The researchers write that this is “remarkable” considering that the pups are quite small at birth compared to many other newborn mammals.
To arrive at these findings, a team of researchers from the Smithsonian Environmental Research Center and the National Museum of Natural History traveled to Antarctica to collect fresh pup specimens. They took advantage of the fact that many pups never make it to adulthood due to stillbirths, abandonment and accidental death, such as being crushed by an adult. The researchers collected 10 dead seal pups (which quickly freeze in the Antarctic temperatures), took a few measurements, then decapitated the pups and shipped the frozen heads back to the Smithsonian. They also tossed a couple of adult Weddell seal heads into the mix: one seal had died from acute toxemia–possibly from its gut being punctured by a fish spine–while the other’s cause of death could not be determined.
Back in the U.S., the researchers partially thawed the skulls in a lab and–like a well picked-over Thanksgiving turkey–manually peeled the tissue off of the baby seal faces. Then, they drilled into the skulls to extract the intact brains. Finally, they put the bones into a tank full of flesh-eating beetles to remove any remaining scraps of meat. Clean skulls and brains in hand, they went about taking measurements, and they also drew upon measurements of some older Weddell seal skull specimens from the museum’s collection.
Remarkably, baby Weddell seal brains are already 70 percent developed at birth, the team found. Compare this to human infants, whose brains are a mere 25 percent of their eventual adult mass. As a Smithsonian statement explains, baby animals born with proportionally larger brains usually live in challenging environments in which they need to act quickly in order to survive. Other animals that share this trait include most marine mammals, zebras and wildebeest.
For Weddell seal pups, large brains likely help with diving under ice sheets and orienting themselves under water at less than three weeks old–an extremely dangerous task for any mammal, newborn or not. The pups must acclimate quickly since Weddell seal mothers abandon their young at about 6 weeks old, meaning they need to be able to completely fend for themselves when that day arrives.
In nature, however, everything comes with a price. The Weddell seal pups may have the biggest, best-developed brains on the block relative to their eventual adult size, but this metabolically taxing organ requires a great deal of energy to maintain. A pup weighing just 65 pounds needs between 30 and 50 grams of glucose per day to survive, and the team estimates that the energy-hungry brain may account for a full 28 grams of that demand.
Luckily for the seal pups, their mothers’ milk is almost exactly matched to the babies’ caloric needs. Weddell seal milk supplies about 39 grams of sugar per day. Female seals, however, lose significant weight while tending to their young, which jeopardizes their own survival. At their mother’s cost, the babies’ brains are allowed to thrive. That is, until the mother decides she’s had enough of nurturing and leaves her pup to survive on its own.
May 8, 2013
Throwing a baseball is hard. As xkcd pointed out just yesterday, accurately throwing a strike requires that a pitcher release the ball at an extremely precise moment—doing so more than half a millisecond too early or too late causes it to miss the strike zone entirely. Because it takes far longer (a full five milliseconds) just for our nerve impulses to travel the length of our arm, this feat requires the brain to send a signal to the hand to release the ball well before the arm has reached its proper throwing position.
The one feat even more difficult than throwing a fastball, though, might be hitting one. There’s a 100 millisecond delay between the moment your eyes see an object and the moment your brain registers it. As a result, when a batter sees a fastball flying by at 100 mph, it’s already moved nearly 15 additional feet by the time his or her brain has actually registered its location.
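The arithmetic behind that lag is simple to check. The 100-millisecond figure is the article’s; the unit conversion is exact:

```python
# How far a pitch travels during the brain's ~100 ms visual processing lag.
# Speeds and the lag figure are from the article; the conversion is exact.

MPH_TO_FTPS = 5280 / 3600          # 1 mph ≈ 1.467 ft/s

def distance_during_lag(speed_mph, lag_s=0.1):
    """Distance (feet) covered at speed_mph during a lag of lag_s seconds."""
    return speed_mph * MPH_TO_FTPS * lag_s

print(round(distance_during_lag(100), 1))  # ~14.7 ft for a 100 mph fastball
print(round(distance_during_lag(75), 1))   # ~11.0 ft for a 75 mph change-up
```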
How, then, do batters ever manage to make contact with 100 mph fastballs—or, for that matter, 75 mph change-ups?
In a study published today in the journal Neuron, UC Berkeley researchers used fMRI (functional magnetic resonance imaging) to pinpoint the prediction mechanisms in the brain that enable hitters to track pitches (and enable all sorts of people to envision the paths of moving objects in general). They found that the brain effectively “pushes” objects forward along their trajectory from the moment it first sees them, simulating their path based on their direction and speed and allowing us to unconsciously project where they’ll be a moment later.
The research team put participants in an fMRI machine (which measures blood flow to various parts of the brain in real time) and had them watch a screen showing the “flash-drag effect” (below), a visual illusion in which a moving background causes the brain to mistakenly interpret briefly flashed stationary objects as moving. “The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” said Gerrit Maus, the paper’s lead author, in a press statement.
Because the participants’ brains thought these briefly flashing boxes were moving, the researchers hypothesized, the area of their brain responsible for predicting the motion of objects would show increased activity. Similarly, when shown a video where the background didn’t move but the flashing objects actually did, the same motion-prediction mechanism would cause similar neuron activity to occur. In both cases, the V5 region of their visual cortex showed distinctive activity, suggesting this area is home to the motion-prediction capabilities that allow us to track fast-moving objects.
Previously, in another study, the same team had zeroed in on the V5 region by using transcranial magnetic stimulation (which interferes with brain activity) to disrupt the area and found that participants were less effective at predicting the movement of objects. “Now not only can we see the outcome of prediction in area V5, but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions,” Maus said.
It’s not much of a leap to suppose that this prediction mechanism is more sophisticated in some people than others—which is why most of us would whiff when trying to hit the fastball of a major league pitcher.
A failure in this mechanism might be at work, the researchers say, in people who have motion perception disorders such as akinetopsia, which leaves the ability to see stationary objects completely intact but renders a person essentially blind to anything in motion. Better understanding how neurological activity in the V5 region—along with other areas of the brain—allows us to track and predict movement could, in the long-term, help us develop treatments for these sorts of disorders.
May 7, 2013
Last month, a trio of engineers debuted an app that allows Icelanders to determine if they’re actually related to a potential date. Why, you ask? Because the entire population of Iceland, roughly 320,000 people, derives from a single family tree, and it’s very possible to bump into a former flame at a family gathering.
The case of Iceland is an extreme one, but the idea that we are all distant cousins, in the scope of human history, is well accepted. A new study, published today in the journal PLOS Biology, explains this degree of relatedness in modern-day Europeans.
The study reveals that just about any two random people from anywhere in Europe, even those living on opposite sides of the continent, share hundreds of genetic ancestors from only 1,000 years ago. In fact, a person living in the United Kingdom shares a chunk of genomic material with someone living in Turkey 20 percent of the time.
Researchers from the University of California, Davis and the University of Southern California studied genomic data for 2,257 Europeans from a massive database of genome-mapped individuals known as the Population Reference Sample. They measured ancestral ties going back 3,000 years by analyzing long segments of genome, passed down from generation to generation, shared by individuals.
Distant relatives share these long blocks of genome because they have both inherited them from common ancestors. First cousins, on average, share about one-eighth of their genome, inherited from a shared set of grandparents. Second cousins share just one-thirty-second, thanks to the same pair of great-grandparents. The researchers detected 1.9 million of these shared DNA sequences within the data pool, and then used their varying lengths to infer how long ago the shared ancestors lived.
These shared chunks of genome become shorter and shorter between more distant relatives because DNA strands undergo recombination, shuffling our genetic makeup around, with each successive generation. For example, a shared block of genome is shorter between second cousins than it is between first cousins. The longer a shared segment, the more recent the common ancestor.
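This halving pattern follows standard pedigree arithmetic: each meiosis halves the chance that a given segment is passed on, and nth cousins are separated by 2(n + 1) meioses through two shared ancestors. A quick sketch of the textbook expectation (not the paper’s measured values):

```python
# Expected fraction of the autosomal genome shared identical-by-descent
# between n-th cousins, using standard pedigree expectations: two shared
# ancestors, 2*(n+1) meioses separating the cousins.

def expected_shared_fraction(cousin_degree):
    """Expected genome fraction shared IBD between n-th cousins."""
    meioses = 2 * (cousin_degree + 1)
    return 2 * 0.5 ** meioses

print(expected_shared_fraction(1))  # first cousins:  0.125   (1/8)
print(expected_shared_fraction(2))  # second cousins: 0.03125 (1/32)
```

Each additional degree of cousinship adds two meioses, so the expected shared fraction drops by a factor of four per generation of remove.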
As we might expect, the numbers of shared genetic ancestors dramatically decrease as geographic distance (in this case, across Europe) increases. This means that people who live near each other are more likely to be related to each other than those who don’t. For example, someone living in England will have a higher degree of relatedness to a fellow Briton than he would with someone from Germany. Researchers found that two modern Europeans living in neighboring populations, for example two adjacent countries, share between two and 12 genetic ancestors from the last 1,500 years.
This pattern can be seen in historically small or more isolated populations too, where fewer possible ancestors exist. Such is the case on the Italian and Iberian peninsulas—areas least affected by Slavic and Hunnic migrations between the fourth and eighth centuries—where people share more ancestors with each other than people in most other regions of Europe. Additionally, those living in Western Europe are somewhat less related to each other than people living in Eastern Europe, a historically tight-knit region in terms of population.
However, some findings deviate from this genealogical norm. The researchers found that people from the United Kingdom shared more recent ancestors with people living in Ireland than with other UK residents. Recent ancestry also tied Germans more closely with Polish people than with other Germans. These instances likely reflect human migration in recent centuries, as smaller populations moved into larger ones.
Although this study looked only at European lineage, the researchers suggest that such patterns probably exist in the rest of the world. In any case, such research brings us closer to learning more about the most recent common ancestor of all modern humans, who, according to mathematical models, might have walked the Earth as recently as roughly 3,500 years ago (PDF). This common ancestor, a product of the intermixing of once-isolated population groups, would have lived much earlier if remote populations had managed to avoid mixing with far-flung explorers, but the new paper’s finding supports the idea that distant populations converged relatively recently compared with the long history of ancient humans.