October 17, 2013
In recent years, scientists have discovered that chimpanzees, our closest relatives, are capable of all sorts of human-like behaviors that go far beyond tool use.
They self-medicate, eating roughage to clear their intestines of parasites. Baby chimps use human-like gestures to convey their needs to adults. Studies suggest even that chimps have a seemingly innate sense of fairness and go through mid-life crises.
Now, new research indicates that chimps’ vocalized communications are a bit closer in nature to our own spoken languages as well. A new study published in PLOS ONE shows that, when chimps warn each other about impending danger, the noises they make are much more than the instinctive expression of fear—they’re intentionally produced, exclusively in the presence of other chimps, and cease when these other chimps are safe from danger.
This might not sound like much, but linguists consider intentionality a key hallmark of language. Those who argue that apes aren’t capable of language—and that the apes who’ve been trained in sign language are merely engaging in rote memorization, not true language acquisition—point to a lack of intentionality as one of the reasons why. So the study shows that, in their natural environment, chimps do use vocalizations in a way more similar to language than previously thought.
The researchers, led by Anne Marijke Schel of the University of York, studied a community of 73 chimps that lives in Uganda’s Budongo Forest Reserve. To simulate danger, they used the skin of a dead African rock python—one of the chimps’ natural predators—to create a fake python, with fishing line attached to its head so they could make it move realistically.
Over the course of nearly a year in the field, they repeatedly placed this artificial predator in the forest with a camera rolling, waiting for unsuspecting chimps—sometimes alone, sometimes with other chimps—to come upon it so they could closely study their response. Typically, when the chimps saw the snake, they were startled, and made one of two different vocalizations, which the researchers identified as ‘huus’ (softer calls, with less alarm) or ‘waas’ (louder, more alarmed calls).
When the researchers analyzed the specific responses, they found that when other chimps were around, the startled chimps were much more likely to make the ‘waas’ rather than ‘huus.’ Moreover, the chimps clearly observed the location of other chimps and whether they were paying attention, and kept sounding the alarm until the others had fled and were safe from danger. The length of time they sounded the alarm, meanwhile, wasn’t linked with their own distance from the snake, further supporting the idea that the call was an intentional warning to others.
The researchers also took note of the pre-existing relationships among chimps (within the social hierarchy, some are closer than others) and found that closer relationships were more likely to trigger alarms. “It was particularly striking when new individuals who had not seen the snake yet, arrived in the area,” Schel said in a press statement. “If a chimpanzee who had actually seen the snake enjoyed a close friendship with this arriving individual, they would give alarm calls, warning their friend of the danger. It really seemed the chimpanzees directed their alarm calls at specific individuals.”
The authors argue that these characteristics—specifically, the fact that alternate vocalizations were employed in different circumstances, that they were made with the attention of the audience in mind and that they were goal-directed, continuing until the other chimps had been successfully warned and fled to safety—show that the noises are more than reflections of instinctive fear. Rather, they’re a tactical, intentional form of communication.
This observation, the authors say, may also tell us something about the evolution of human language. Gestural theories on the origin of language contend that spoken language evolved from hand gestures, and cite the fact that non-human primates (a model for primitive hominids) exclusively use gestures for true communication, merely making vocalizations based on engrained instinct, rather than calculated intention.
But this discovery of intentional warnings in chimps seems to upend that idea, suggesting that primitive hominids too were able to communicate via both vocalizations and gestures. This indicates, the researchers say, that spoken language may have evolved from multiple different sources, both gestures and vocal calls.
September 23, 2013
Most people think of history as a series of stories—tales of one army unexpectedly defeating another, or a politician making a memorable speech, or an upstart overthrowing a sitting monarch.
Peter Turchin of the University of Connecticut sees things rather differently. Formally trained as an ecologist, he sees history as a series of equations. Specifically, he wants to bring the types of mathematical models used in fields such as wildlife ecology to bear on population trends in a different species: humans.
In a paper published with colleagues today in the Proceedings of the National Academy of Sciences, he presents a mathematical model (shown on the left of the video above) that correlates well with historical data (shown on the right) on the development and spread of large-scale, complex societies (represented as red territories on the green area studied). The simulation runs from 1500 B.C.E. to 1500 C.E.—so it encompasses the growth of societies like Mesopotamia, ancient Egypt and the like—and replicates historical trends with 65 percent accuracy.
This might not sound like a perfect accounting of human history, but that’s not really the goal. Turchin simply wants to apply mathematical analysis to the field of history so that researchers can determine which factors are most influential in affecting the spread of human states and populations, just as ecologists have done when analyzing wildlife population dynamics. Essentially, he wants to answer a simple question: Why did complex societies develop and spread in some areas but not others?
In this study, Turchin’s team found that conflict between societies and the development of military technology as a result of war were the most important elements that predicted which states would develop and expand over the map—with those factors taken away, the model deteriorated, describing actual history with only 16 percent accuracy.
Turchin began thinking about applying math to history in general about 15 years ago. “I always enjoyed history, but I realized then that it was the last major discipline which was not mathematized,” he explains. “But mathematical approaches—modeling, statistics, etc.—are an inherent part of any real science.”
In bringing these sorts of tools into the arena of world history and developing a mathematical model, his team was inspired by a theory called cultural multilevel selection, which predicts that competition between different groups is the main driver of the evolution of large-scale, complex societies. To build that into the model, they divided all of Africa and Eurasia into gridded squares which were each categorized by a few environmental variables (the type of habitat, elevation, and whether it had agriculture in 1500 B.C.E.). They then “seeded” military technology in squares adjacent to the grasslands of central Asia, because the domestication of horses—the dominant military technology of the age—likely arose there initially.
Over time, the model allowed for domesticated horses to spread between adjacent squares. It also simulated conflict between various entities, allowing squares to take over nearby squares, determining victory based on the area each entity controlled, and thus growing the sizes of empires. After plugging in these variables, they let the model simulate 3,000 years of human history, then compared its results to actual data, gleaned from a variety of historical atlases.
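The core loop the paper describes resembles a simple cellular automaton: technology diffuses between neighboring cells, and conflict lets larger polities absorb smaller ones. The toy sketch below illustrates that structure only; the grid size, probabilities and one-edge seeding rule are invented for illustration and are not Turchin’s actual parameters.

```python
import random

random.seed(42)

SIZE = 10          # toy grid: 10x10 cells (the real model gridded Africa and Eurasia)
STEPS = 100        # simulated time steps (the real model covered 3,000 years)
SPREAD_PROB = 0.1  # chance per step that military tech diffuses to a neighbor
ATTACK_PROB = 0.05 # chance per step that a cell with tech attacks a neighbor

# Each cell tracks whether it has military technology and which "polity" owns it.
tech = [[False] * SIZE for _ in range(SIZE)]
polity = [[i * SIZE + j for j in range(SIZE)] for i in range(SIZE)]

# "Seed" military technology along one edge, standing in for the steppe frontier
# where horse domestication arose.
for i in range(SIZE):
    tech[i][0] = True

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < SIZE and 0 <= nj < SIZE:
            yield ni, nj

def polity_size(p):
    # Area controlled by polity p; the model decides victories by size.
    return sum(row.count(p) for row in polity)

for _ in range(STEPS):
    for i in range(SIZE):
        for j in range(SIZE):
            for ni, nj in neighbors(i, j):
                # Military technology diffuses between adjacent cells.
                if tech[i][j] and random.random() < SPREAD_PROB:
                    tech[ni][nj] = True
                # Conflict: the larger polity absorbs the cell, so empires
                # snowball once warfare begins.
                if tech[i][j] and random.random() < ATTACK_PROB:
                    if polity_size(polity[i][j]) >= polity_size(polity[ni][nj]):
                        polity[ni][nj] = polity[i][j]

largest = max(polity_size(p) for row in polity for p in row)
print(f"largest polity controls {largest} of {SIZE * SIZE} cells")
```

Even this crude version reproduces the qualitative pattern the study relies on: large territorial units emerge near where military technology was seeded, and removing the conflict step leaves the map fragmented.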
Although it’s not perfect, the accuracy of their model—predicting the development and spread of empires in nearly all the right places—surprised even the researchers. “To tell the truth, the success of this enterprise exceeded my wildest expectations,” Turchin says. “Who would have thought that a simple model could explain 65% of variance in a large historical database?”
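For readers unfamiliar with the “variance explained” metric, the toy calculation below shows how such a figure is computed from model predictions and historical observations. The cell values are made up; the real comparison involved thousands of grid cells over 3,000 years.

```python
# Hypothetical presence/absence of large polities in ten grid cells
# (1 = an empire held the cell, 0 = not), model vs. historical atlas.
historical = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
predicted  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # model misses the last cell

mean_h = sum(historical) / len(historical)
ss_tot = sum((h - mean_h) ** 2 for h in historical)          # total variance
ss_res = sum((h - p) ** 2 for h, p in zip(historical, predicted))  # model error
r_squared = 1 - ss_res / ss_tot  # fraction of variance explained
print(round(r_squared, 2))
```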
So why would conflict between societies prove to be such a crucial variable in predicting where empires would form? “To evolve to a large size, societies need special institutions that are necessary for holding them together,” Turchin proposes. “But such institutions have large internal costs, and without constant competition from other societies, they collapse. Only constant competition ensures that ultrasocial norms and institutions will persist and spread.”
The model shows that agriculture is a necessary but not sufficient precondition for a complex society, he says—these states can’t form without farming, but the persistent presence of competition and warfare is necessary to forge farming societies into durable, large-scale empires. Conventional analyses of history could come to this same conclusion, but they wouldn’t be able to demonstrate it in the same mathematically-based way. Using this approach, on the other hand, Turchin’s group could remove the influence of warfare and see the model’s accuracy in describing real historical data plummet.
Of course, there are limitations to viewing history through math—humans are more complicated than numbers. “Differences in culture, environmental factors and thousands of other variables not included in the model all have effect,” Turchin says. “A simple general model should not be able to capture actual history in all its glorious complexity.”
Still, the model is a unique and valuable tool. Going forward, Turchin’s team wants to develop it further—adding more nuance (such as accounting for the quality of agricultural productivity, rather than merely toggling whether farming exists in a given area) to improve on that 65 percent accuracy. Additionally, they’d like to expand the model, applying it to more recent world history and also to pre-Columbian North America, if they can find relevant historical data.
Based on his experiences so far, Turchin thinks they’ll be successful in developing a model that better reflects the rise and fall of civilizations. “It turns out that there is a lot of quantitative data in history,” he says. “You just have to be creative in looking for it.”
August 21, 2013
As the inane car insurance commercials suggest, ancient humans were smarter than we give them credit for. They created some of the same words we still use today. They even brewed beer.
Now evidence suggests that they had some culinary flair as well. A new analysis of food residue encrusted on millennia-old pottery shards collected from sites in Germany and Denmark shows that prehistoric humans used the spice mustard seed to season the plant and animal staples that made up the bulk of their diet.
As part of the new study, published today in PLOS ONE, researchers from the UK’s University of York and elsewhere chemically analyzed the residue on ancient pieces of pottery held in the collections of a trio of museums: the Kalundborg and Holbæk Museums in Denmark, along with the Schleswig-Holstein Museum in Germany. The artifacts, between 5,750 and 6,100 years old, were originally excavated from three different sites in those two countries, dating to an era during which people in the area were in the midst of transitioning from hunter-gatherer societies toward settled, agriculture-based ones.
When analyzing the food gunk encrusted on the pottery, the team looked specifically at phytoliths, microscopic granules of silica that plants produce and store in their cells after absorbing silicic acid from the soil. Different plants produce slightly different types of phytoliths, so by closely examining them, the scientists were able to figure out which sorts of plants had been cooked in the pottery.
They found that the residue from the insides of the pots had much larger quantities of phytoliths than the outsides, confirming that the granules were indicative of cooking use. When they compared the size and shape of the phytoliths to databases of hundreds of modern plant phytoliths, they most closely matched that of mustard seed. The team also found oil residue from both land animals and marine life, and other plant residues that come from starchier plants—suggesting that these prehistoric people were cooking fish, meat and plants in the pots and seasoning them with the mustard seed.
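The matching step amounts to a nearest-neighbor comparison of measurements against a reference database. The sketch below uses invented two-dimensional measurements for a handful of plants; the real databases catalog hundreds of modern species and many more shape descriptors.

```python
# Hypothetical phytolith measurements (length, width in micrometers) for a
# tiny stand-in reference database of modern plants. All values are invented.
reference = {
    "mustard seed": (11.0, 7.5),
    "wheat": (30.0, 12.0),
    "barley": (25.0, 10.0),
}

def closest_match(sample):
    """Return the reference plant nearest to the sample in measurement space."""
    def dist(plant):
        length, width = reference[plant]
        # Euclidean distance between the sample and the reference phytolith.
        return ((sample[0] - length) ** 2 + (sample[1] - width) ** 2) ** 0.5
    return min(reference, key=dist)

print(closest_match((10.4, 7.1)))
```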
For the scientists, the most surprising aspect of the find is the pots’ age. Until now, the oldest clear evidence for spice use was the discovery of residue from ginger and turmeric in 4,500-year-old cooking pots linked to the Harappa culture, in Northern India. But the new find shows that humans were using spices more than 1,000 years earlier.
In Northern Europe, this was a time soon after domestic animals, such as goats and cattle, were introduced, dramatically remaking these societies’ lifestyles. Still, at this point, crops were not known to have been domesticated—these people were still centuries away from the fully settled agricultural societies that would eventually dominate.
Previously, experts thought that the use of plants in cooking during this era was solely motivated by a need for calories. But the presence of mustard seed, which provides essentially no caloric or nutritional value, indicates that these prehistoric people valued taste as much as we do.
August 8, 2013
If you’re an active Redditor, you might spend time lamenting the fact that some of your most clever, insightful comments get so few upvotes, and the lamest comments of other users sometimes seem to arbitrarily rise to the top.
As it turns out, a trio of researchers—Lev Muchnik of the Hebrew University of Jerusalem, Sinan Aral of MIT and Sean J. Taylor of NYU—recently decided to apply one of the basic tools of science to investigate this phenomenon: the randomized controlled experiment. And by teaming with a social news site to randomly vote on thousands of comments and closely track how they did afterward, the researchers proved that the very first vote a particular item receives—and not just its intrinsic merit—has an outsized influence on its overall fate.
Their new study, published today in Science, relied upon research they did between December 2010 and May 2011. In the paper, they say that it was conducted on “a social news aggregation web site similar to Digg.com and Reddit.com,” but they don’t disclose which particular site it was, because they say the site’s administrators are nervous about the risk to user privacy.
Nevertheless, they describe a bunch of features (the ability to submit links, make comments, vote up or down each post and comment, and a front page populated by the most popular posts) that are a core part of Reddit, and they even use screenshots of Reddit to illustrate them. Suffice it to say that if they didn’t do the experiment using Reddit, they did so with an extremely similar site.
During that five-month window, they analyzed 101,281 comments on the site—all posted by normal, unwitting users—as part of the experiment. The comments were randomly assigned into one of three different groups: those that would receive the “positive treatment” (automatically getting one upvote right after being posted), the “negative treatment” (automatically getting a downvote instead) or the control (simply being passed along with no artificial vote).
The vast majority of the comments (95,290) were simply part of the control group—users of the site interacted with these comments with no outside influence from the researchers. The researchers then split the remaining comments between positive and negative at roughly the same ratio that upvotes and downvotes occur naturally on the site: 4,049 comments got the positive treatment, receiving an automatic upvote that had nothing to do with their content, while 1,942 comments got an arbitrary downvote instead.
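The assignment procedure itself is straightforward randomization, which can be sketched in a few lines. The comment IDs here are placeholders; only the group sizes come from the paper.

```python
import random

random.seed(0)

# Stand-in IDs for the 101,281 comments in the experiment.
comments = list(range(101281))

# Treatment sizes from the paper: most comments serve as controls, and the
# treated remainder is split at roughly the site's natural up/down ratio.
N_UP, N_DOWN = 4049, 1942

random.shuffle(comments)
positive = set(comments[:N_UP])               # auto-upvoted right after posting
negative = set(comments[N_UP:N_UP + N_DOWN])  # auto-downvoted instead
control = set(comments[N_UP + N_DOWN:])       # left untouched

print(len(positive), len(negative), len(control))
```

Because assignment is random, any later difference in the groups’ ratings can be attributed to that single artificial first vote rather than to comment quality.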
The researchers had a hunch that when the site’s users voted on the comments, they were significantly influenced—whether consciously or not—by the votes that had come beforehand. The very first vote, in that case, would be particularly pivotal, because it’d be the only vote the second voter would see. By influencing that voter one way or another, it could potentially influence the third voter, and then the fourth, with cascading effects that influence thousands of votes and produce what the researchers call “herding effects.”
When they analyzed the overall performance of the comments included in the experiment, as represented by the 308,515 subsequent ratings they got in total, their hunch was confirmed: Getting an upvote at the start made the second vote 32 percent more likely to be positive, as compared to the control. The effect was also passed down the line to subsequent voters in much the way the researchers expected, as at the end of the five months, those in the “positive treatment” group had an overall rating (calculated by subtracting the number of downvotes from number of upvotes) 25 percent higher than those in the control group.
Interestingly, though, when applied to the “negative treatment,” the phenomenon seemed to be reversed: Comments that got an arbitrary downvote were actually more likely to receive an upvote from the second voter. The researchers speculate that this reflects users’ desire to “correct” unfair downvotes given to a comment that didn’t obviously deserve them.
The experimenters also analyzed the data based on which of the site’s topic areas (i.e., subreddits) the comment fell within: business, culture and society, politics, IT, fun, economics and general news. Comments in the politics, culture and society, and business areas exhibited the greatest herding effects, suggesting that voting in these topic areas was subject to being significantly yet arbitrarily influenced by the votes that came beforehand, rather than by the content of the comment.
It’s easy to imagine how the findings—basically, that our judgement of something is heavily skewed by our knowledge of how others have already judged it—apply to all sorts of situations that go beyond Reddit, both in real life and online. Previous work has already shown that the comments on a Facebook profile picture can influence how attractive we deem it, and if a news article posted on Facebook garners a lot of “likes,” aren’t we more likely to read it? Politicians, meanwhile, have long known that creating the impression of popularity can often be just as important, in an election, as articulating specific positions that merit support.
But does the desire to correct downvotes reveal something inherently optimistic about our society—that we don’t want to watch something undeservedly crash and burn? Does the herding effect of upvotes mean that if we’re not ourselves successful, we’d like to be on the peripheries of successes, regardless of how deserving that success may be?
For the Redditors, the study proves something they probably already suspected, but alas, have no control over anyway: Getting that first upvote can make all the difference.
August 1, 2013
Climate change isn’t just affecting the natural world. Researchers have long understood that rising levels of greenhouse gas emissions will also have cascading ramifications on the dynamics of human society, whether by forcing refugees to flee from newly flood-prone areas or arid regions, by causing spikes in the prices of food crops, or by reducing the productivity of livelihoods based on fishing or grazing in certain regions.
Recently, studies and journalistic investigations have focused on one particularly chilling potential social consequence of climate change: an increased frequency of armed conflicts around the world. By studying the link between various climatic factors and rates of historical violence, researchers have speculated that the climate trends we’ll experience over the next century—hotter overall temperatures, more erratic rainfall patterns and a rising sea level—could make conflict and war more common in the future.
Now, in the most comprehensive analysis of the work on climate change and armed conflict to date, a team from UC Berkeley and elsewhere has found that these climate trends are indeed likely to significantly increase the incidence of armed conflict overall. Their paper, published today in Science, aggregated data from 60 studies linking climate variables to incidences of violence, spanning 8000 B.C.E. to the present and covering all major regions of the globe. For example, one of the source papers focused on temperature changes and violent crime in the U.S. from 1952 to 2009, while another looked at the number of conflicts in Europe per decade from 1400 to 1999 as a function of precipitation.
Cross-comparing these studies with the same statistical methods revealed patterns that, when projected into the future, suggest that by 2050 we could see 50 percent more instances of intergroup conflict due to the effects of climate change.
The team, led by Solomon Hsiang, specifically looked at the historical relationship between climatic factors (temperature and rainfall fluctuations) and the incidence of all sorts of conflicts detailed in their source studies, which they grouped into the categories of personal crime (murder, domestic violence, rape and assault), intergroup violence (civil wars, ethnic violence and riots) and institutional breakdowns (collapses of governing bodies or even of entire civilizations such as the Mayan empire). They examined this relationship on a variety of spatial scales, ranging from countries to regions to even warmer areas within a large building or stadium, and on varying time scales, from months to years to centuries in duration.
To standardize data from many different climates and regions, the researchers calculated the number of standard deviations away from baseline averages that temperatures and rainfall rates shifted in the areas studied by the previous papers, based on the time periods covered. A standard deviation is a statistical tool used to examine how data is clustered about an average—the more standard deviations away from the average you go, the more the observation in question is an outlier.
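In code, that standardization is a z-score computation: each observation is expressed in units of its own region’s variability. A minimal sketch, with invented annual temperature readings standing in for one region’s climate record:

```python
from statistics import mean, stdev

# Hypothetical annual average temperatures (°C) for one region; the study
# standardized each source dataset this way so regions could be compared.
temps = [14.2, 14.5, 13.9, 14.1, 15.8, 14.3, 14.0, 14.4, 13.8, 14.6]

mu = mean(temps)
sigma = stdev(temps)

# A z-score counts how many standard deviations an observation sits from
# the region's own baseline; |z| > 1 flags the unusually hot (or cold)
# periods the study links to elevated conflict.
z_scores = [(t - mu) / sigma for t in temps]
outliers = [t for t, z in zip(temps, z_scores) if abs(z) > 1]
print(outliers)
```

Because the deviations are measured against each region’s own baseline, a modest absolute shift can count as a large anomaly in a climate that is normally stable.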
They found that when temperatures or precipitation patterns in an area strayed from the norm, all three types of violence tended to increase, with intergroup conflict in particular surging the most during hotter periods. Specifically, a region that experienced a period of warming that fell beyond one standard deviation of average conditions saw 4 percent more personal crime and 14 percent more intergroup conflict over the period studied. In other words, assuming the variables fall in a bell curve around average conditions, life became more violent for the roughly 32 percent of regions that significantly deviated from average temperatures and precipitation rates.
This level of deviation, to put it into perspective, is equivalent to a country in Africa going through an entire year of temperatures averaging 0.6°F warmer than usual or to a county in the U.S. experiencing average temperatures of 5°F warmer than normal in a given month. “These are moderate changes, but they have a sizable impact on societies,” explained Marshall Burke, the study’s co-lead author and a doctoral candidate at Berkeley’s Department of Agricultural and Resource Economics.
Extrapolating to the future, these rates mean that if the entire planet went through an average of 3.6°F of warming by 2050—an optimistic limit set at the 2009 Copenhagen conference—we’d see personal crime rise by 16 percent and intergroup conflicts surge by 50 percent. The distribution of violence wouldn’t be equal, either, as climate models indicate that some areas will be hit with warming periods that fall outside two, three or even four standard deviations of the norm (and thus experience more conflict), as shown in the map below:
But what characteristics of these climate changes—heat and erratic rainfall—cause people or institutions to become violent? The mechanisms that link climate trends with violence are varied and, in many cases, unclear.
Statistics show that in cities, hotter temperatures lead to more arrests for violent crimes, and some researchers believe our basic physiological stress response to heat leads us to blame someone or something for our discomfort—but it’s unclear whether the data represent causation or correlation. On a broader level, it’s believed that reductions in agricultural productivity—especially in largely agrarian societies—can drive intergroup conflict, as can extreme weather events and reductions in resources such as potable water (due to erratic rainfall) and arable land (due to sea level rise). All of these factors are likely to come into play as the climate changes.
Of course, there are a few caveats to the finding. For one, the researchers are extrapolating from historical data, so it’s possible that even though humans have previously become more violent as temperatures increased, we could behave differently in the future. Additionally, these hypotheses can’t be rigorously tested in a lab, so it’s impossible to entirely rule out all confounding factors and establish that the climate trends cause more conflict, rather than coincidentally occurring at the same time.
The researchers, though, say that they conducted the most rigorous analysis possible. The fact that the climate-violence relationship was consistently found among a wide range of time periods, cultures and regions, they argue, indicates that there is a substantial link between the two.
If warmer temperatures and erratic precipitation really do drive violence, what can we do? The researchers say that we need to engage in research to better understand the mechanisms by which this occurs—so that eventually, just as we’ll build infrastructure to anticipate and defend against the brunt of climate change’s most dire effects, we can also create innovative social institutions and policies that might minimize violence in a warming world.