May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy's War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. In a new study published earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (1,535,929,521 tweets for the period studied), and focused on the subset with associated geographic data.
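As a rough illustration of that filtering step, here is a minimal sketch in Python that reads a decahose-style file of tweet JSON objects (one per line) and keeps only the geotagged ones. The field names follow Twitter's public v1.1 JSON format, and the file name is hypothetical; the researchers' actual pipeline is not described at this level of detail.

```python
import json

def geotagged_tweets(path):
    """Yield (longitude, latitude, language) for each tweet carrying GPS data."""
    with open(path) as stream:
        for line in stream:
            tweet = json.loads(line)
            point = tweet.get("coordinates")  # GeoJSON Point, or None if not geotagged
            if point:
                lon, lat = point["coordinates"]
                yield lon, lat, tweet.get("lang")

# During the study window, only about 3 percent of decahose tweets
# (roughly 46.7 million of 1.54 billion) carried coordinates like this.
for lon, lat, lang in geotagged_tweets("decahose_sample.jsonl"):
    pass  # bin by location, language, etc.
```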
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which appeared in 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
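The paper's language breakdown came from an automated classifier. As an illustration of the general technique (using the open-source langdetect package, which is an assumption here, not the study's actual tool), identifying a text's language takes a single call:

```python
# pip install langdetect
from langdetect import detect

print(detect("It's hard to appreciate how thoroughly Twitter has taken over."))  # 'en'
print(detect("O Twitter tomou conta do mundo de forma impressionante."))         # 'pt'
```

Short, slangy tweets are much harder to classify than clean sentences like these, which is one reason such breakdowns carry some error.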
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance for a retweet was 1,115 miles and 1,118 for a reference. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and their distance: Pairs of users with just a handful of interactions, on the whole, were more likely to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
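Distances like these are typically computed as great-circle distances between users' coordinates. For reference, a standard haversine calculation (a generic sketch, not the paper's code) looks like this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # Earth's mean radius is ~3,958.8 miles

# For example, New York to Chicago:
print(round(haversine_miles(40.71, -74.01, 41.88, -87.63)))  # ~711
```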
This counterintuitive pattern suggests that pairs of users who interact regularly on Twitter often live far apart. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
May 7, 2013
Last month, a trio of engineers debuted an app that allows Icelanders to determine if they’re actually related to a potential date. Why, you ask? Because the entire population of Iceland, roughly 320,000 people, derives from a single family tree, and it’s very possible to bump into a former flame at a family gathering.
The case of Iceland is an extreme one, but the idea that we are all distant cousins, in the scope of human history, is well accepted. A new study, published today in the journal PLOS Biology, explains this degree of relatedness in modern-day Europeans.
The study reveals that just about any two random people from anywhere in Europe, even those living on opposite sides of the continent, share hundreds of genetic ancestors from only 1,000 years ago. In fact, a person living in the United Kingdom shares a stretch of genomic material with a given person living in Turkey about 20 percent of the time.
Researchers from the University of California, Davis and the University of Southern California studied genomic data for 2,257 Europeans from a massive database of genome-mapped individuals known as the Population Reference Sample. They measured ancestral ties going back 3,000 years by analyzing long segments of genome, passed down from generation to generation, shared by individuals.
Distant relatives share these long blocks of genome because they have both inherited them from common ancestors. First cousins share about one-eighth of their genome, inherited from a shared set of grandparents. Second cousins share just one-thirty-second of their genome, thanks to the same pair of great-grandparents. The researchers detected 1.9 million of these shared DNA sequences within the data pool, and then used their varying lengths to infer how long ago the shared ancestors lived.
These shared chunks of genome become shorter and shorter between more distant relatives because DNA strands undergo recombination, shuffling our genetic makeup around, with each successive generation. For example, a shared block of genome is shorter between second cousins than it is between first cousins. The longer a shared segment, the more recent the common ancestor.
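That relationship between segment length and ancestor age follows from simple identity-by-descent arithmetic: each extra degree of cousinship adds two meioses to the path between relatives, quartering the expected shared fraction, while each meiosis introduces roughly one crossover per 100 centimorgans, chopping shared segments into shorter pieces. A back-of-the-envelope sketch of this standard approximation (not the paper's actual statistical model):

```python
def expected_sharing(n):
    """Expected genome sharing between nth cousins, under the standard approximation."""
    meioses = 2 * (n + 1)                # nth cousins are separated by 2(n+1) meioses
    shared_fraction = 2 ** -(2 * n + 1)  # 1/8 for first cousins, 1/32 for second, ...
    avg_segment_cm = 100.0 / meioses     # ~25 cM for first cousins, ~17 cM for second
    return shared_fraction, avg_segment_cm

for n in (1, 2, 3):
    frac, seg = expected_sharing(n)
    print(f"degree-{n} cousins: ~{frac:.4%} of the genome, in segments averaging ~{seg:.0f} cM")
```

Inverting this logic, a long shared segment implies few intervening meioses and therefore a recent common ancestor.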
As we might expect, the numbers of shared genetic ancestors dramatically decrease as geographic distance (in this case, across Europe) increases. This means that people who live near each other are more likely to be related to each other than those who don’t. For example, someone living in England will have a higher degree of relatedness to a fellow Briton than he would with someone from Germany. Researchers found that two modern Europeans living in neighboring populations, for example two adjacent countries, share between two and 12 genetic ancestors from the last 1,500 years.
This pattern can be seen in historically small or more isolated populations too, where fewer possible ancestors exist. Such is the case on the Italian and Iberian peninsulas—areas least affected by Slavic and Hunnic migrations between the fourth and eighth centuries—where people share more ancestors with each other than people in most other regions of Europe. Additionally, those living in Western Europe are somewhat less related to each other than people living in Eastern Europe, a historically tight-knit region in terms of population.
However, some findings deviate from this genealogical norm. The researchers found that people from the United Kingdom shared more recent ancestors with people living in Ireland than with other UK residents. Recent ancestry also tied Germans more closely with Polish people than with other Germans. These instances likely reflect human migration in recent centuries, as smaller populations moved into larger ones.
Although this study looked only at European lineage, the researchers suggest that such patterns probably exist in the rest of the world. In any case, such research into human history brings us closer to identifying the most recent common ancestor of all modern humans, who, according to mathematical models, might have walked the Earth as recently as roughly 3,500 years ago (PDF). This common ancestor, a product of the intermixing of once-isolated population groups, could have lived much earlier if remote populations had kept their members from mating with far-flung explorers, but the recent paper’s findings seem to support the idea that distant populations converged relatively recently when compared to the long history of ancient humans.
April 24, 2013
Who you’re physically attracted to might seem like a frivolous, random preference. In recent years, though, science has told us that our seemingly arbitrary tastes often reflect unconscious choices that are based upon very relevant biological traits.
In general, we find symmetric faces more attractive, likely because they reflect a healthy underlying genome. Women typically prefer men with more distinctively masculine facial features because they indicate high testosterone levels and physical strength, while men prefer women with exaggerated youthful features, possibly because of the evolutionary advantages a male gets when coupling with a younger mate.
Despite all this research into our visual appearances, though, scientists have done relatively little digging into our auditory preferences when it comes to sexual attraction. Why do we find certain people’s voices attractive—and why do we sometimes find other types of voices such a turn-off? Specifically, why do women generally prefer men with deep voices, and men prefer women with higher ones?
At least according to a paper published today in PLOS ONE, the explanation is relatively simple: It’s all about body size. Researchers from University College London found that, at least among a sample of 32 participants, high-pitched female voices were found to be attractive because they indicated the speaker had a small body. Deep male voices, on the other hand, were judged as more attractive because they conveyed that the speaker had a large frame—but were found to be most attractive when tempered by a touch of “breathiness,” suggesting the speaker had a low level of aggression despite his large size.
The group, led by Yi Xu, figured this out by playing recordings of digitally manipulated voices to the participants. The males in the study heard a computer-generated female voice saying phrases such as “I owe you a yo-yo” in which the voice was manipulated with a number of digital alterations in terms of pitch, formant (the particular peaks and valleys in a sound’s frequency spectrum) and other qualities.
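To get a concrete feel for this kind of manipulation, here is a minimal pitch-shifting sketch using the open-source librosa library. This is an illustration only: the study used its own synthesis pipeline, the file name is hypothetical, and formant manipulation requires more specialized tools than shown here.

```python
import librosa
import soundfile as sf

# Load a hypothetical recording of the test phrase.
y, sr = librosa.load("i_owe_you_a_yoyo.wav", sr=None)

# Shift pitch up a few semitones to suggest a smaller speaker,
# and down a few semitones to suggest a larger one.
smaller_body = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)
larger_body = librosa.effects.pitch_shift(y, sr=sr, n_steps=-3)

sf.write("smaller_body.wav", smaller_body, sr)
sf.write("larger_body.wav", larger_body, sr)
```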
The specific manipulations either conveyed a smaller body size or a larger one, based upon previous research that matched various voice qualities with different body sizes in humans. When asked to rate each voice’s attractiveness on a 1 to 5 scale, the men preferred the voices that suggested a smaller female. Past a certain point, though, higher voices were judged as no more attractive than slightly deeper ones. Listen to the most and least attractive (both admittedly creepy) voices below:
The female participants’ voice preferences were similar, but slightly more nuanced. On the whole, they preferred deeper voices, which signaled a large body size, but another trait was also crucial: “breathiness.” The researchers hypothesized that this breathiness effectively takes the edge off a voice, making a man with a presumed large frame seem less aggressive and angry. They also polled the participants on whether they thought the simulated voices sounded angry or happy, and the breathy deep male voices were generally perceived as much happier and less angry than the less breathy (i.e. “pressed”) deep ones. Listen to the most and least attractive male voices below:
Beyond explaining the popularity of Barry White, the researchers say these findings correspond to much of what we know about voice preferences in the rest of the animal kingdom. Birds and other mammals, it turns out, have long been known to advertise their physical characteristics via the sound qualities in their mating calls.
All this points to an obvious question, though: Why would males prefer smaller females, and females prefer larger males in the first place? The researchers don’t attempt to address this question, but this duality reflects the sexual dimorphism present in most animal species. These differences generally result from sexual selection giving incentive to different mating strategies—so in this case, our voice preferences suggest that women benefit, in evolutionary terms, by mating with larger, but less aggressive men, while males benefit from mating with smaller females.
At the same time, what we commonly consider attractive varies dramatically over time and location—for example, dozens of prehistoric “Venus figurines,” discovered all over the world, portray extremely voluptuous female figures. So, if we tested the preferences of all humans throughout history, we might find a less obvious trend. This preference for small-voiced females and big-voiced males, then, might simply be an artifact of our contemporary cultural concepts of “attractiveness,” rather than a deep-seated evolutionary choice after all.
April 16, 2013
After a baby orangutan is born, it’ll spend the first two years of its life completely dependent on its mother—maintaining direct physical contact with her for at least the first four months—and breastfeeding for up to five years in total. During that time, it will likely never meet its father. Polar bears are also born helpless, surviving on their mothers’ milk through the harsh Arctic winter, but polar bear fathers provide no parenting, and have even been known to eat their cubs on occasion if they get the chance.
Both of these facts reflect a pattern common across the animal kingdom: In most species, mothers are inherently much more involved in parenting than fathers, and evolution has driven them to develop parenting instincts that are absent in their male counterparts.
A new experiment, though, suggests that contrary to conventional wisdom, one animal species remains a pretty significant exception to this rule: humans. It’s often believed that nobody can recognize a baby’s cry as accurately as his or her mother, but a study published today in Nature Communications by a team of French scientists led by Erik Gustafsson of the University of Saint-Etienne found that fathers can do it equally well—if they spend as much time with their offspring as mothers do.
The study involved 29 babies from France and the Democratic Republic of Congo, all less than half a year old, along with each of their mothers and 27 of their fathers (2 could not be located for the study). The researchers recorded the cries these infants made while being bathed, and then played them back to their parents (along with the cries of other babies) later on. To this non-parenting bystander, the cries (published along with the paper) generally seem pretty similar—like the one below, they all sound, well, like a quintessential baby’s cry:
In one of those astounding feats of parenthood, though, the parents did far better than chance at identifying which of the seemingly identical cries belonged to their child from the sound alone. Each parent heard a random sequence of 30 different cries (24 from 8 other babies, and 6 from their own), and on average, they correctly identified 5.4 of their baby’s cries while making 4.1 false positives (incorrectly identifying another infant’s cry as their child’s). Although having this skill doesn’t necessarily indicate that a parent provides expert care, it does reflect a remarkably well-attuned connection between parent and infant.
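Those averages translate into a simple better-than-chance check, using only the numbers reported above: a parent guessing randomly would flag their own baby's cries and other babies' cries at the same rate, so the gap between the two rates is the signal.

```python
own_cries, other_cries = 6, 24   # each parent heard 6 of their own baby's cries and 24 others
hits, false_alarms = 5.4, 4.1    # average counts reported in the study

hit_rate = hits / own_cries                    # ~0.90: parents caught 90% of their baby's cries
false_alarm_rate = false_alarms / other_cries  # ~0.17: and mislabeled 17% of the other cries

print(f"hit rate: {hit_rate:.0%}, false-alarm rate: {false_alarm_rate:.0%}")
```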
When the researchers split the data along gender lines, they found something interesting. The factor that best predicted how well a parent identified their child’s cries was the amount of time that parent spent with the baby, regardless of whether they were the mother or the father.
Of the 14 fathers who spent an average of 4 or more hours a day with their babies, 13 correctly identified 98% of their total cries (and the outlier still got 90% right). The 29 mothers who spent a comparable amount of time with their children (that is, all the mothers in the study) got the same 98% correct. The remaining 13 fathers who spent less than 4 hours a day with their kids, though, were only able to identify 75% of the cries correctly.
The finding might not seem particularly surprising—of course the parents who spend the most time with their children will be best at identifying the nuances of their cries—but it cuts against the grain of previous research on this topic, which found that mothers seemed to be naturally better than fathers at identifying their own infants’ cries. (People often make the same assumption, the researchers say—in an informal survey they took of 531 students at the University of Saint-Etienne, 43% felt mothers were better, and the rest thought fathers and mothers were equally good at identifying their baby’s cries, while none felt fathers were better.) But previous studies didn’t take into account the amount of time parents typically spent with their children on a daily basis.
The results suggest that experience and learning may be more critical to good parenting than innate skills. Far from being inherently disadvantaged in recognizing their babies’ cries, males who spent lots of time parenting turned out to be just as good as females at the task—so in terms of this particular skill, at least, parenting is less an inherent talent than one to be practiced and developed. This also implies that a baby’s primary caregivers—whether grandparents, aunts, uncles or people unrelated to the child—may develop the same ability to distinguish the cries of the child in their care from those of other children.
Of course, while the findings don’t depict any innate asymmetry in parenting skills between the sexes, they do reveal an enormous asymmetry in the behavior of parents regardless of their continent, predicated on traditional gender roles. Every mother participating in the study spent enough time with their kids to develop the skill tested, while just about half of the fathers did—and two fathers couldn’t even be located to participate in the study in the first place.
Fathers, in other words, can come to know their babies’ cries as well as mothers do, but only if they make the necessary investment of time, and this study indicates that most don’t. Though its sample size was extremely limited, broader data sets show the same pattern. According to the most recent Pew Research data on parenting, the average American mother spends 14 hours per week on child care duties, compared to just 7 hours for the average father—so while men can develop the ability to know their babies just as well as women, most fathers out there probably haven’t so far.
April 8, 2013
If you suffered from a medical ailment in the year 1900, your treatment options were varied: You could take everything from Dr. Tutt’s Liver Pills to Hollister’s Golden Nugget Tablets, Dr. Sawen’s Magic Nerving Pills or Dr. Comfort’s Candy-Covered Cathartic Compound.
Of course, their titles notwithstanding, the creators of these pills weren’t always doctors, and the medicines certainly hadn’t gone through the controlled, randomized trials we have today to ensure safety—they could contain ingredients that were ineffective or, worse, toxic. In many cases, their proprietors might not even have known what they were putting into these so-called “snake oil” medicines (a term that likely stemmed from the sale of actual snake oil to supposedly treat joint pain).
But now, at least, we do. Mark Benvenuto, a chemist at University of Detroit Mercy, recently led a research group that chemically analyzed several dozen patent medicines dating to the late 1800s and early 1900s from the Henry Ford Museum’s collections. Their findings, which they presented yesterday at the annual meeting of the American Chemical Society in Atlanta, were that many of the pills, powders and ointments tested contained beneficial ingredients like calcium and zinc—but that others contained toxins such as lead, mercury and arsenic.
“Back in the day, this was a very trial-and-error kind of field,” Benvenuto said in an interview. “The stuff that we think of as dangerous now, though it was dangerous, was as cutting-edge as they had at the time.”
The researchers figured out what was in the historical medicines via a pair of methods. For the solid pills and powders, they used X-ray fluorescence, in which a substance is bombarded with X-rays and the secondary X-rays it emits reveal the material’s elemental composition. For the liquid ointments, they used nuclear magnetic resonance testing, which relies on the radio-frequency signals that a material’s atomic nuclei emit when placed in a strong magnetic field.
The findings, Benvenuto says, will provide extra context for visitors to the Ford Museum, helping them better understand this era of medical quackery. “You can look at Dr. J.J. Gallop’s Vegetable Family Pills and find out what’s supposed to be in them from the box, and what they cost from some old newspaper that’s archived, but you can’t tell what’s really in them without testing,” he said.
Though some medicines intentionally misled customers about their contents and made outlandish claims, the presence of mercury in, say, Dr. F. G. Johnson’s French Female Pills doesn’t necessarily indicate that Mr. Johnson was a quack, Benvenuto said. Mercury was long used as the primary treatment for syphilis, as it kills the spirochete bacteria that cause the disease, though it can also harm the patient. (Lewis and Clark, among others, used mercury to treat the sexually transmitted infection, and archaeologists have even pinpointed some of the camping spots of their Corps of Discovery Expedition by finding traces of mercury in the soil.)
In an era before rigorously controlled trials, putting what was commonly believed to be a safe cure into a medicine and simply selling it to people was considered normal practice, and it may indeed have led to progress in medicine. “Nowadays, we start by seeing if a drug can kill certain kinds of cells, then we’ll try it in mice, then dogs, then humans,” Benvenuto said. “Obviously, we have a better system now, but I think this type of medicine was the first step in the road to where we are now. Compared to folk cures, it was a first step at being logical.”