November 12, 2013
In the summer of 2010, husband-and-wife paleobiologist team Z. Jack Tseng and Juan Liu traveled to the Zanda Basin in western Tibet with a group of colleagues. The remote area, a week’s drive from Beijing and near the border of Pakistan and China, is “basically badlands everywhere, with deeply cut valleys throughout,” Tseng says.
To explore the valleys, the team drove up dirt trail after dirt trail before coming upon a dense patch of fossils sticking out of the ground halfway up a hill. “In the little concentration of fossils, there were lots of limb bones from antelopes and horses obscuring everything else,” says Tseng, who was then a graduate student at USC and is now with the American Museum of Natural History. “It wasn’t until we started lifting things up, one by one, that we saw the top of a skull, and we thought, from the shape, that it looked something like a cat.”
After a few years of analysis, Tseng’s team has determined that the skull doesn’t belong to just any old cat. As they’ve documented in a study published today in Proceedings of the Royal Society B, the skull and six associated fossilized jawbone fragments are the first evidence of a newly discovered species, which they’ve called Panthera blytheae. The discovery represents the oldest “big cat” (a group that includes large predatory cats like lions, jaguars, tigers and leopards) ever found, by a wide margin.
The sediments that make up the basin as a whole range from 6 million to 400,000 years in age, so the group dated the fossil by analyzing the age of the particular rock layers it was buried in. This involved using techniques of magnetostratigraphy, in which scientists analyze the magnetic orientation of the rocks and compare it to known reversals of the Earth’s magnetic field. This method can only provide rough estimates for an item’s age, but it revealed that the skull is between 4.10 and 5.95 million years old. Previously, the oldest known big cat fossils—a number of tooth fragments found in Tanzania—were 3.6 million years old.
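The matching logic behind magnetostratigraphy can be sketched as a simple pattern search: the sequence of normal and reversed polarities measured up a rock column is slid along a reference polarity timescale until it lines up, which brackets the age of each layer. Here is a minimal illustration in Python with invented chron ages, not the actual Zanda Basin measurements:

```python
# Minimal sketch of magnetostratigraphic dating. The reference chrons and
# ages below are hypothetical, not the real geomagnetic polarity timescale
# or the Zanda Basin data.

# Reference timescale: (polarity, start_age_Ma, end_age_Ma), youngest first.
REFERENCE = [
    ("N", 0.0, 0.8),
    ("R", 0.8, 2.6),
    ("N", 2.6, 3.6),
    ("R", 3.6, 4.2),
    ("N", 4.2, 6.0),
]

def match_section(section):
    """Slide a measured polarity sequence (youngest to oldest) along the
    reference column; return the (start, end) age bracket of each layer
    at the first matching position, or None if no match."""
    ref_pattern = [polarity for polarity, _, _ in REFERENCE]
    for i in range(len(ref_pattern) - len(section) + 1):
        if ref_pattern[i:i + len(section)] == section:
            return [REFERENCE[i + j][1:] for j in range(len(section))]
    return None

# A measured reversed-normal-reversed section; the fossil sits in the
# bottom (oldest) layer, so its age bracket is the last tuple.
ages = match_section(["R", "N", "R"])
print(ages)  # [(0.8, 2.6), (2.6, 3.6), (3.6, 4.2)]
```

In practice the match is constrained by fossils and other dating evidence, since a short polarity pattern can fit a long reference column in more than one place.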
The new find fills a gap in the evolutionary record of big cats. By analyzing the DNA of living species, scientists had previously estimated that big cats split from the Felinae subfamily—which includes smaller wild cats like cougars and lynxes, along with domestic cats—about 6.37 million years ago. The very existence of P. blytheae confirms that the split happened before this big cat roamed.
But how much earlier? The find could suggest, Tseng says, that big cats branched off from smaller cats much farther back than thought. By comparing the skull’s characteristics with fossils from other extinct big cats, the anatomy of living cat species, and DNA samples taken from both living cats and a few recently extinct, Ice Age-era species (known as cave lions), the researchers assembled a new evolutionary family tree for all big cats. Using known rates of anatomical change over time and the observed anatomy of P. blytheae, they projected backwards and estimated that the earliest big cats likely branched off from the Felinae subfamily between 10 and 11 million years ago.
The new fossil also solves a geological mystery. Previously, using DNA analysis of all living big cats and mapping the fossils excavated from various sites around the world, researchers had determined it was most likely that their common ancestor lived in Asia. The oldest known specimens, however, were found in Africa. The new species provides the first direct evidence that central Asia was indeed the big cats’ ancestral home, at least as far back as the fossil record currently goes.
From the fragmented fossils, it’s hard to know much about the extinct species’ behavior and lifestyle, but the researchers were able to make some basic extrapolations from the skull’s anatomy. “It’s not a huge cat, like a lion or a tiger, but closer to a leopard,” Tseng says. The creature’s habitat was likely similar to the current Tibetan plateau, so Tseng speculates that, like the snow leopards that currently live in the area, this species did not hunt on the open plains but rather among cliffs and valleys. Tooth wear patterns also suggest similarities with current snow leopards—the rear teeth, likely used for cutting soft tissue, remain sharp, whereas the front teeth are heavily worn, perhaps reflecting their use in prying open carcasses and picking meat off bones.
Tseng says that he and colleagues plan to return to the area to search for more fossils that could help enlighten us on the evolutionary history of big cats. “The gap still isn’t completely filled yet,” he says. “We need to find older big cats to put the picture together.”
November 8, 2013
In 1854, in response to a devastating cholera epidemic that was sweeping through London, British doctor John Snow introduced an idea that would revolutionize the field of public health: the epidemiological map. By recording instances of cholera in different neighborhoods of the city and plotting them on a map based on patients’ residences, he discovered that a single contaminated water pump was responsible for a great deal of the infections.
The map persuaded him—and, eventually, the public authorities—that the miasma theory of disease (which claimed that diseases spread via noxious gases) was false, and that the germ theory (which correctly claimed that microorganisms were to blame) was true. They put a lock on the handle of the pump responsible for the outbreak, signaling a paradigm shift that permanently changed how we deal with infectious diseases and sanitation.
The mapping technology is quite different, as is the disease, but there’s a certain similarity between Snow’s map and a new project conducted by a group of researchers led by Henry Kautz of the University of Rochester. By creating algorithms that can spot flu trends and make predictions based on keywords in publicly available geotagged tweets, they’re taking a new approach to studying the transmission of disease—one that could change the way we study and track the movement of diseases in society.
“We can think of people as sensors that are looking at the world around them and then reporting what they are seeing and experiencing on social media,” Kautz explains. “This allows us to do detailed measurements on a population scale, and doesn’t require active user participation.”
In other words, when we tweet that we’ve just been laid low by a painful cough and a fever, we’re unwittingly providing rich data for an enormous public health experiment, information that researchers can use to track the movement of diseases like flu in high resolution and real time.
Kautz’s project, called SocialHealth, has used tweets and other sorts of social media to track a range of public health issues. Recently, the team began using tweets to monitor instances of food poisoning at New York City restaurants: they logged everyone who had posted geotagged tweets from a restaurant, then followed their tweets for the next 72 hours, checking for mentions of vomiting, diarrhea, abdominal pain, fever or chills. In doing so, they detected 480 likely instances of food poisoning.
But as the season changes, it’s their work tracking the influenza virus that’s most eye-opening. Google Flu Trends has similarly sought to use Google searches to track the movement of flu, but the model greatly overestimated last year’s outbreak, perhaps because media coverage of flu prompted people to start making flu-related queries. Twitter analysis represents a new dataset with a few qualities—a higher geographic resolution and the ability to capture the movement of a user over time—that could yield better predictions.
To start their flu-tracking project [PDF], the SocialHealth researchers looked specifically at New York, collecting around 16 million geotagged public tweets per month from 600,000 users for three months’ time. Below is a time-lapse of one New York Twitter day, with different colors representing different frequencies of tweets at that location (blue and green mean fewer tweets, orange and red mean more):
To make use of all this data, Kautz’s team developed an algorithm that determines whether each tweet represents a report of flu-like symptoms. Previously, other researchers had done this simply by searching for keywords in tweets (“sick,” for example), but his team found that the approach leads to false positives: many more users tweet that they’re sick of homework than that they’re feeling sick.
To account for this, his team’s algorithm looks for three words in a row (instead of one), and considers how often the particular sequence is indicative of an illness, based on a set of tweets they’d manually labelled. The phrase “sick of flu,” for instance, is strongly correlated with illness, whereas “sick and tired” is less so. Some particular words—headache, fever, coughing—are strongly linked with illness no matter what three-word sequence they’re part of.
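That weighting scheme can be sketched roughly as follows; the phrases and weights here are invented for illustration and are not the values the team learned from its labelled tweets:

```python
# Rough sketch of trigram-based symptom detection. The weights below are
# invented for illustration; they are not the values the researchers
# learned from their hand-labelled tweets.

# How strongly each three-word sequence indicates actual illness.
TRIGRAM_WEIGHTS = {
    ("sick", "of", "flu"): 0.9,
    ("sick", "and", "tired"): 0.1,
    ("sick", "of", "homework"): 0.0,
}

# Words treated as strong illness signals regardless of context.
STRONG_KEYWORDS = {"headache", "fever", "coughing"}

def illness_score(tweet):
    """Score a tweet by its most illness-indicative three-word sequence,
    with strong keywords counting no matter what trigram they sit in."""
    words = tweet.lower().split()
    if any(word in STRONG_KEYWORDS for word in words):
        return 1.0
    trigrams = zip(words, words[1:], words[2:])
    return max((TRIGRAM_WEIGHTS.get(t, 0.0) for t in trigrams), default=0.0)

print(illness_score("so sick of flu season"))       # 0.9
print(illness_score("sick of homework tonight"))    # 0.0
print(illness_score("woke up with a fever again"))  # 1.0
```

A real classifier would learn these weights from the labelled training set rather than hard-coding them, but the scoring logic is the same.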
Once these millions of tweets were coded, the researchers could do a few intriguing things with them. For starters, they looked at changes in flu-related tweets over time and compared them with levels of flu as reported by the CDC, confirming that the tweets accurately captured the overall trend in flu rates. Unlike CDC data, however, the Twitter signal is available in near real time, rather than a week or two after the fact.
But they also went deeper, looking at the interactions between different users—as represented by two users tweeting from the same location (the GPS resolution is about half a city block) within the same hour—to model how likely it is that a healthy person would become sick after coming into contact with someone with the flu. Obviously, two people tweeting from the same block 40 minutes apart didn’t necessarily meet in person, but the odds of them having met are slightly higher than two random users.
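A minimal version of that co-location rule might bucket geotagged tweets by hour and by a coarse spatial grid cell, then pair up users who land in the same bucket; the cell size and time window below are illustrative stand-ins rather than the study's exact thresholds:

```python
# Sketch of co-location "interactions": two users count as having possibly
# met if they tweet from roughly the same half-block within the same hour.
# The grid size is an illustrative stand-in for the paper's thresholds.
from collections import defaultdict
from itertools import combinations

CELL = 0.0005  # degrees, very roughly half a city block

def interactions(tweets):
    """tweets: list of (user, hour, lat, lon) tuples.
    Returns the set of user pairs seen in the same grid cell and hour.
    (A fuller version would also check neighboring cells, since grid
    rounding can split two nearby users into different cells.)"""
    buckets = defaultdict(set)
    for user, hour, lat, lon in tweets:
        key = (hour, round(lat / CELL), round(lon / CELL))
        buckets[key].add(user)
    pairs = set()
    for users in buckets.values():
        pairs.update(combinations(sorted(users), 2))
    return pairs

tweets = [
    ("alice", 14, 40.74120, -73.9897),
    ("bob",   14, 40.74121, -73.9896),  # same block, same hour as alice
    ("carol", 16, 40.74120, -73.9897),  # same block, different hour
]
print(interactions(tweets))  # {('alice', 'bob')}
```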
As a result, when you look at a large enough dataset of interactions, a picture of transmission emerges. They found that if a healthy user encounters 40 other users who report themselves as sick with flu symptoms, his or her odds of getting flu symptoms the next day increase from less than one percent to 20 percent. With 60 interactions, that number rises to 50 percent.
The team also looked at interactions on Twitter itself, isolating pairs of users who follow each other and calling them “friendships.” Even though many Twitter relationships exist only on the Web, some correspond to real-life interactions, and they found that a user with ten friends who report themselves as sick is 28 percent more likely to become sick the next day. In total, using both of these types of interactions, their algorithm was able to predict whether a healthy person would get sick (and tweet about it) with 90 percent accuracy.
We’re still in the early stages of this research, and there are plenty of limitations: Most people still don’t use Twitter (yes, really) and even if they do, they might not tweet about getting sick.
But if this sort of system could be developed further, it’s easy to imagine all sorts of applications. Your smartphone could automatically warn you, for instance, if you’d spent too much time in the places occupied by people with the flu, prompting you to go home to stop putting yourself in the path of infection. An entire city’s residents could even be warned if it were on the verge of an outbreak.
Despite the 150 years we’re removed from John Snow’s disease-mapping breakthrough, it’s clear that there are still aspects of disease information we don’t fully understand. Now, as then, mapping the data could help yield the answers.
November 7, 2013
A lot has changed in the last 165 million years. A cluster of islands coalesced to form Europe, the Atlantic Ocean opened up and India collided with the rest of Asia as the world’s tectonic plates gradually drifted to form the continents we know today. Dinosaurs largely went extinct, felled mostly by a mass extinction event some 65 million years ago that was likely caused by an enormous meteorite impact. Eventually, apes came down from the trees, evolved into humans, and migrated out of Africa to cover the planet.
During all this time, though, one thing has stayed exactly the same: The anatomical structure of froghopper genitals.
We know that from a remarkable new fossil excavated from the Jiulongshan Formation in northeastern China and described in the journal PLOS ONE. The 165-million-year-old fossil, analyzed by researchers at Capital Normal University in China, captures two of the small insects in the act of copulation—the oldest record of insect sex ever discovered. And though the insects themselves belong to a particular species that went extinct long ago, their genital structures appear to be identical to those of the modern-day froghoppers that are distributed all around the world.
There are only 33 known fossils that show insects mating worldwide, and the previous oldest—which contains fossilized midges in the act of copulation—dates to 135 million years ago. These sorts of fossils are so rare because of the nature of fossilization: an organism is buried in dirt or sediment sometime after its death and its skeletal remains are slowly replaced by minerals or converted into new compounds over time.
So, for the act of copulation to be fossilized, it means that two insects would have to die while mating, then fall to the ground locked in the mating position. The excavation site, in Ningcheng County, may help explain how this happened: Most of the fossils there were created as the result of a volcanic eruption, which appears to have knocked all sorts of creatures into a lake, so it’s quite possible that the insects died, their mating bodies stuck to the plant stem they were perched on, then the entire plant settled to the lake bottom with them on it.
The researchers are sure that the fossil depicts the act of mating, rather than two insects lying next to each other, because of their inspection of the insects’ reproductive parts. The fossil shows the male’s aedeagus—roughly, the insect equivalent of a penis—inserted into the female’s bursa copulatrix, the organ that receives sperm for fertilization.
Although the creatures differ slightly from known froghoppers, the anatomical structure of their genitals—even down to the number of segments in the male’s aedeagus—is the same as those of modern-day froghoppers. For that reason, the researchers named the new species Anthoscytina perpetua: “From the Latin perpet, eternal love, in reference to this everlasting copulation,” they write in the paper.
One thing that may have changed, though, is the position these insects assume while they’re mating. From the fossil, it seems that ancient froghoppers faced each other, belly-to-belly, in order to mate. Modern froghoppers usually do it rather differently, lying side by side on their stomachs, with the male’s aedeagus twisting underneath to enter the female:
November 6, 2013
Brain-machine interfaces were once the stuff of science fiction. But the technology—which enables direct communication between a person or animal’s brain and an external device or another brain—has come a long way in the past decade.
Scientists have developed interfaces that allow paralyzed people to type letters on a screen, let one person move another’s hand with his or her thoughts and even make it possible for two rats to trade thoughts—in this case, the knowledge of how to solve a particular task—when they’re located in labs thousands of miles apart.
Now, a team led by Miguel Nicolelis of Duke University (the scientist behind the rat thought-trading scheme, among other brain-machine interfaces) has created a new setup that allows monkeys to control two virtual arms simply by thinking about moving their real arms. They hope that the technology, revealed in a paper published today in Science Translational Medicine, could someday lead to similar interfaces that allow paralyzed humans to move robotic arms and legs.
Previously, Nicolelis’ team and others had created interfaces that allowed monkeys and humans to move a single arm in a similar fashion, but this is the first technology that lets an animal move multiple limbs simultaneously. “Bimanual movements in our daily activities—from typing on a keyboard to opening a can—are critically important,” Nicolelis said in a press statement. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”
Like the group’s previous interfaces, the new technology relies upon ultra-thin electrodes surgically embedded in the monkeys’ cerebral cortex, a brain region that controls voluntary movements, among other functions. But unlike many other brain-machine interfaces, which use electrodes that monitor brain activity in just a handful of neurons, Nicolelis’ team recorded activity in nearly 500 brain cells distributed over a range of cortex areas in the two rhesus monkeys that served as test subjects for the study.
Then, over the course of a few weeks, they repeatedly set the monkeys in front of a monitor, where the animals saw a pair of virtual arms from a first-person perspective. Initially, the monkeys controlled each of the arms with joysticks, completing a task in which they had to move the arms to cover up moving shapes to receive a reward (a taste of juice).
As this happened, the electrodes recorded the brain activity in the monkeys that correlated with the various arm movements, and algorithms analyzed it to determine which particular patterns in neuron activation were linked with what sorts of arm movements—left or right, and forward or back.
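At its simplest, that decoding step resembles a regression problem: learn a mapping from a vector of firing rates to the arm velocities recorded during joystick control, then reuse the mapping once the joysticks are disconnected. Below is a toy linear version on fully simulated data; the actual study recorded hundreds of neurons and used more sophisticated filters, so this is only a sketch of the idea:

```python
# Toy linear decoder: map simulated neural firing rates to 2D velocities
# for two arms (4 outputs). All data here is synthetic; the real decoders
# in this line of work are considerably more sophisticated.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 50, 2000

# Hypothetical linear "tuning" of each neuron to movement (unknown to decoder).
true_map = rng.normal(size=(n_neurons, 4))

# Arm velocities: [left_x, left_y, right_x, right_y] per time step.
velocities = rng.normal(size=(n_samples, 4))
rates = velocities @ true_map.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# "Training": fit the decoder while the joysticks are driving the arms.
decoder, *_ = np.linalg.lstsq(rates, velocities, rcond=None)

# "Brain control": predict intended velocities from firing rates alone.
predicted = rates @ decoder
corr = np.corrcoef(predicted.ravel(), velocities.ravel())[0, 1]
print(round(corr, 2))  # close to 1.0 on this low-noise simulation
```

The same fitted `decoder` matrix can keep translating firing rates into movement commands after the joysticks are removed, which is the essence of the switch from joystick control to brain control described below.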
Eventually, once the algorithm could accurately predict the monkey’s intended arm movement based upon the brain patterns, the setup was altered so that the joysticks no longer controlled the virtual arms—the monkeys’ thoughts, as recorded by the electrodes, were in control instead. From the monkeys’ perspective, nothing had changed, as the joysticks were still put out in front of them, and the control was based on brain patterns (specifically, imagining their own arms moving) that they were producing anyway.
Within two weeks, though, both monkeys realized they didn’t need to actually move their hands and manipulate the joysticks to move the virtual arms—they only had to think about doing so. Over time, they got better and better at controlling the virtual arms through this brain-machine interface, eventually doing it just as effectively as they’d moved the joysticks.
Future advances in this sort of interface could be enormously valuable for people who’ve lost control of their own limbs, due to paralysis or other causes. As high-tech bionic limbs continue to develop, these types of interfaces could eventually become the standard way of operating them on a daily basis. A person with a spinal cord injury, for example, could learn how to effectively imagine moving two arms so that an algorithm could interpret his or her brain patterns to move two robotic arms in the desired way.
But brain-machine interfaces could someday serve a much broader population, too: users of smartphones, computers and other consumer technology. Already, companies have developed headsets that monitor your brainwaves so that you can move a character around in a video game merely by thinking about it, essentially using your brain as a joystick. Eventually, some engineers envision that brain-machine interfaces could enable us to manipulate tablets and control wearable technology such as Google Glass without saying a word or touching a screen.
November 1, 2013
For the first few decades of his career, Emory neuroscientist Gregory Berns studied the human mind. Using fMRI technology, which tracks the flow of blood to different areas of the brain, he sought to find correlations between people’s internal mental patterns and their real-world behaviors, decisions and preferences.
Then, in 2011, he took on a new object of neuroscientific study: Canis lupus familiaris, otherwise known as the domesticated dog. Instead of merely studying canine behavior, as has been done for years, he and his colleagues began scrutinizing the internal architecture and patterns of dogs’ brains, using the same tools they rely on to better understand the brains of humans.
“I’ve always been a dog person, and when my dog died, a pug named Newton, it planted a seed in my mind,” says Berns, who published a new book on his recent work, How Dogs Love Us, last week. “It got me wondering about how dogs view their relationship with us—if he had loved me the same way I had loved him.”
Just looking inside the canine brain, however, posed a formidable challenge: getting an accurate fMRI reading means that the subject has to stay almost perfectly still, moving less than a millimeter from one moment to the next. Using anesthesia or restraining the dogs would ruin the experiments, producing an image of an unconscious or anxious dog instead of a comfortable, alert one.
To solve the problem, Berns recruited dogs from the local community—starting with a dog he adopted after Newton died—and gradually trained them to climb up a series of steps onto a table, rest their head on a pad inside the fMRI’s inner tunnel and sit still for 30 seconds at a time as the machine does its work. To deal with the device’s noise (which can surpass 95 decibels, equivalent to the sound of a jackhammer 50 feet away), they taped earmuffs to the dogs’ heads and piped in ambient noise over loudspeakers, so instead of the machine’s sound beginning abruptly, it gradually arrived over background noises.
In total, they’ve successfully trained about a dozen dogs to voluntarily participate in their studies. The research is still in its preliminary stages, but as Berns’ team begins to scratch the surface of the canine brain, they’re finding something surprising—in several ways, its activity mirrors that of the human brain to a much greater extent than expected.
As part of their first paper published on the work in 2012, they trained dogs to recognize two different hand signals: one that meant the animal would be given a piece of hot dog imminently, and one that meant no hot dog. As they hypothesized, the first signal triggered elevated activity in an area called the caudate nucleus, which is rich in receptors for dopamine (a neurotransmitter involved in the sensation of pleasure). In humans—and in dogs, the research indicated—caudate activity is related to the desire to have something that causes pleasure, and the satisfaction involved in obtaining it.
Subsequent work revealed more unexpected findings. As part of a second experiment, they had dogs sit in the scanner and exposed them to smells of humans (from either their owners or strangers) and other dogs (from either dogs they lived with or unfamiliar dogs). “We wanted to understand how dogs recognize other people and dogs in their households,” Berns says. Again, they saw increased activity in the caudate, but only as a result of one of the scents. “In this case, the reward system only seems to activate in response to the smell of a familiar human, which is pretty amazing,” he says.
To further probe how the dogs’ brain activity correlates with the actions of humans they know well, they put the dogs in the fMRI and had their owners leave the room, then walk back in. This, too, triggered activation in the caudate.
Berns interprets these results as indications that, in some ways, the mental processes of dogs may not be so different from those of humans. They’re close enough, he suggests, that we can safely describe them with words we don’t often apply to animals: the mental activity represents emotions, and perhaps even constitutes love. “At some fundamental level, we believe the dogs are experiencing emotions something like we do,” Berns says.
He admits that the idea is controversial. But, he points out, the research suggests that the human brain and canine brain aren’t as radically different as we might have imagined.
“Obviously, dog brains are much smaller, and they don’t have as much cortex as we do, but some of the core areas around the brainstem—the basal ganglia, which the caudate nucleus is part of—look very much like those in humans,” he says. Dogs might not have the hardware necessary for complex thoughts and higher-level reasoning, the thinking goes, but they do have the relevant structures for basic emotions.
This also makes sense from an evolutionary perspective: We evolved the heavily folded cortex necessary for high-level thinking after we diverged from all other animal species, but areas like the basal ganglia developed beforehand, so it follows that our ability to feel emotions produced by those areas existed way back in our evolutionary history, in ancestors that we share with many other mammals, including dogs.
Dog lovers might find these ideas obvious, but Berns’ work has attracted a fair amount of criticism. One of the biggest complaints is against his use of words like emotion and love for dogs—their attachment to us is simply a result of conditioning, some say, entirely based on the desire for food, rather than the deeper emotional connections we feel for other humans.
But Berns hopes to respond with future fMRI work, which will compare brain activity in dogs being fed by automated mechanisms with that of dogs being fed by humans. He hopes to show that dogs do develop qualitatively different relationships with humans, underscoring the strength of those attachments.
He took his ideas to what some might call a rather extreme conclusion earlier this month in the New York Times, in an op-ed he penned with a provocative headline: Dogs Are People, Too. If animals truly are capable of emotions we normally consider characteristically human, he argued, they should no longer be treated as mere objects, or property, but instead be given some of the rights we associate with personhood—namely, a respect for their preferences and well-being that would lead to the abolition of things like puppy mills and dog racing.
There’s obviously a long way to go—both in terms of scientific evidence and policy changes—before dogs are treated anything like people. But Berns cites a recent Supreme Court decision that invoked neuroscientific evidence (specifically, the finding that the juvenile brain is less developed than a mature adult’s, and thus should not be subject to the same punishments) as an indication that our laws will inevitably follow the science. The next step, then, is for him and his colleagues to keep peering into the minds of dogs, finding out how deeply the mental similarities truly go.