November 8, 2013
In 1854, in response to a devastating cholera epidemic that was sweeping through London, British doctor John Snow introduced an idea that would revolutionize the field of public health: the epidemiological map. By recording instances of cholera in different neighborhoods of the city and plotting them on a map based on patients’ residences, he discovered that a single contaminated water pump was responsible for a great deal of the infections.
The map persuaded him—and, eventually, the public authorities—that the miasma theory of disease (which claimed that diseases spread via noxious gases) was false, and that the germ theory (which correctly blamed microorganisms) was true. They put a lock on the handle of the pump responsible for the outbreak, marking a paradigm shift that permanently changed how we deal with sanitation and infectious disease.
The mapping technology is quite different, as is the disease, but there’s a certain similarity between Snow’s map and a new project conducted by a group of researchers led by Henry Kautz of the University of Rochester. By creating algorithms that can spot flu trends and make predictions based on keywords in publicly available geotagged tweets, they’re taking a new approach to studying the transmission of disease—one that could change the way we study and track the movement of diseases in society.
“We can think of people as sensors that are looking at the world around them and then reporting what they are seeing and experiencing on social media,” Kautz explains. “This allows us to do detailed measurements on a population scale, and doesn’t require active user participation.”
In other words, when we tweet that we’ve just been laid low by a painful cough and a fever, we’re unwittingly providing rich data for an enormous public health experiment, information that researchers can use to track the movement of diseases like flu in high resolution and real time.
Kautz’s project, called SocialHealth, has used tweets and other sorts of social media to track a range of public health issues. Recently, the team began using tweets to monitor instances of food poisoning at New York City restaurants: they logged everyone who had posted geotagged tweets from a restaurant, then followed those users’ tweets for the next 72 hours, checking for mentions of vomiting, diarrhea, abdominal pain, fever or chills. In doing so, they detected 480 likely instances of food poisoning.
But as the season changes, it’s their work tracking the influenza virus that’s most eye-opening. Google Flu Trends has similarly sought to use Google searches to track the movement of flu, but that model greatly overestimated last year’s outbreak, perhaps because media coverage of flu prompted people to start making flu-related queries. Twitter analysis represents a new dataset with a few attractive qualities—a higher geographic resolution and the ability to capture the movement of a user over time—that could yield better predictions.
To start their flu-tracking project [PDF], the SocialHealth researchers looked specifically at New York, collecting around 16 million geotagged public tweets per month from 600,000 users for three months’ time. Below is a time-lapse of one New York Twitter day, with different colors representing different frequencies of tweets at that location (blue and green mean fewer tweets, orange and red mean more):
To make use of all this data, his team developed an algorithm that determines whether each tweet represents a report of flu-like symptoms. Previously, other researchers had done this simply by searching for keywords in tweets (“sick,” for example), but his team found that this approach leads to false positives: many more users tweet that they’re sick of homework than that they’re feeling sick.
To account for this, his team’s algorithm looks for three words in a row (instead of one), and considers how often the particular sequence is indicative of an illness, based on a set of tweets they’d manually labelled. The phrase “sick of flu,” for instance, is strongly correlated with illness, whereas “sick and tired” is less so. Some particular words—headache, fever, coughing—are strongly linked with illness no matter what three-word sequence they’re part of.
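The scoring idea can be sketched roughly like this. This is a minimal, hypothetical version, not the team’s actual algorithm—the tiny hand-labeled set below merely stands in for their manually labeled training tweets:

```python
# Sketch: score tweets by how often their three-word sequences
# appeared in tweets manually labeled as reporting illness.
from collections import Counter

def trigrams(text):
    words = text.lower().split()
    return [tuple(words[i:i + 3]) for i in range(len(words) - 2)]

# Toy hand-labeled training set (illustrative only).
labeled = [
    ("so sick of flu right now", True),
    ("sick and tired of homework", False),
    ("i have a fever and headache", True),
    ("sick of waiting for the bus", False),
]

sick_counts, total_counts = Counter(), Counter()
for text, is_sick in labeled:
    for tg in trigrams(text):
        total_counts[tg] += 1
        if is_sick:
            sick_counts[tg] += 1

def illness_score(tweet):
    """Average fraction of labeled tweets in which each of this
    tweet's trigrams signaled illness; 0.0 if no trigram is known."""
    known = [tg for tg in trigrams(tweet) if tg in total_counts]
    if not known:
        return 0.0
    return sum(sick_counts[tg] / total_counts[tg] for tg in known) / len(known)
```

A real system would also weight the strongly illness-linked single words the researchers mention (headache, fever, coughing), and would be trained on far more labeled examples.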
Once these millions of tweets were coded, the researchers could do a few intriguing things with them. For starters, they looked at changes in flu-related tweets over time and compared them with flu levels reported by the CDC, confirming that the tweets accurately captured the overall trend in flu rates. But unlike CDC data, which lags a week or two behind the fact, the Twitter signal is available in near real time.
But they also went deeper, looking at the interactions between different users—as represented by two users tweeting from the same location (the GPS resolution is about half a city block) within the same hour—to model how likely it is that a healthy person would become sick after coming into contact with someone with the flu. Obviously, two people tweeting from the same block 40 minutes apart didn’t necessarily meet in person, but the odds of them having met are slightly higher than two random users.
As a result, when you look at a large enough dataset of interactions, a picture of transmission emerges. The researchers found that if a healthy user encounters 40 other users who report themselves as sick with flu symptoms, his or her odds of getting flu symptoms the next day increase from less than one percent to 20 percent. With 60 interactions, that number rises to 50 percent.
The team also looked at interactions on Twitter itself, isolating pairs of users who follow each other and calling them “friendships.” Even though many Twitter relationships exist only on the Web, some correspond to real-life interactions, and they found that a user who has ten friends who report themselves as sick is 28 percent more likely to become sick the next day. In total, using both of these types of interactions, their algorithm was able to predict whether a healthy person would get sick (and tweet about it) with 90 percent accuracy.
We’re still in the early stages of this research, and there are plenty of limitations: Most people still don’t use Twitter (yes, really) and even if they do, they might not tweet about getting sick.
But if this sort of system could be developed further, it’s easy to imagine all sorts of applications. Your smartphone could automatically warn you, for instance, if you’d spent too much time in the places occupied by people with the flu, prompting you to go home to stop putting yourself in the path of infection. An entire city’s residents could even be warned if it were on the verge of an outbreak.
Despite the 150 years we’re removed from John Snow’s disease-mapping breakthrough, it’s clear that there are still aspects of disease information we don’t fully understand. Now, as then, mapping the data could help yield the answers.
November 6, 2013
Brain-machine interfaces were once the stuff of science fiction. But the technology—which enables direct communication between a person or animal’s brain and an external device or another brain—has come a long way in the past decade.
Scientists have developed interfaces that allow paralyzed people to type letters on a screen, let one person move another’s hand with his or her thoughts and even make it possible for two rats to trade thoughts—in this case, the knowledge of how to solve a particular task—when they’re located in labs thousands of miles apart.
Now, a team led by Miguel Nicolelis of Duke University (the scientist behind the rat thought-trading scheme, among other brain-machine interfaces) has created a new setup that allows monkeys to control two virtual arms simply by thinking about moving their real arms. They hope that the technology, revealed in a paper published today in Science Translational Medicine, could someday lead to similar interfaces that allow paralyzed humans to move robotic arms and legs.
Previously, Nicolelis’ team and others had created interfaces that allowed monkeys and humans to move a single arm in a similar fashion, but this is the first technology that lets an animal move multiple limbs simultaneously. “Bimanual movements in our daily activities—from typing on a keyboard to opening a can—are critically important,” Nicolelis said in a press statement. “Future brain-machine interfaces aimed at restoring mobility in humans will have to incorporate multiple limbs to greatly benefit severely paralyzed patients.”
Like the group’s previous interfaces, the new technology relies upon ultra-thin electrodes surgically implanted in the monkeys’ cerebral cortex, a region of the brain that controls voluntary movement, among other functions. But unlike many other brain-machine interfaces, which use electrodes that monitor activity in just a handful of neurons, Nicolelis’ team recorded activity in nearly 500 brain cells distributed over a range of cortical areas in the two rhesus monkeys that served as test subjects for this study.
Then, over the course of a few weeks, they repeatedly set the monkeys in front of a monitor, where they saw a pair of virtual arms from a first-person perspective. Initially, they controlled each of the arms with joysticks, and completed a task in which they had to move the arms to cover up moving shapes to receive a reward (a taste of juice).
As this happened, the electrodes recorded the brain activity in the monkeys that correlated with the various arm movements, and algorithms analyzed it to determine which particular patterns in neuron activation were linked with what sorts of arm movements—left or right, and forward or back.
Eventually, once the algorithm could accurately predict the monkeys’ intended arm movements based upon the brain patterns, the setup was altered so that the joysticks no longer controlled the virtual arms—the monkeys’ thoughts, as recorded by the electrodes, were in control instead. From the monkeys’ perspective, nothing had changed: the joysticks were still placed in front of them, and the control was based on brain patterns (specifically, imagining their own arms moving) that they were producing anyway.
Within two weeks, though, both monkeys realized they didn’t need to actually move their hands and manipulate the joysticks to move the virtual arms—they only had to think about doing so. Over time, they got better and better at controlling the virtual arms through this brain-machine interface, eventually doing it just as effectively as they had with the joysticks.
Future advances in this sort of interface could be enormously valuable for people who’ve lost control of their own limbs, due to paralysis or other causes. As high-tech bionic limbs continue to develop, these types of interfaces could eventually be the way they’ll be used on a daily basis. A person with a spinal cord injury, for example, could learn how to effectively imagine moving two arms so that an algorithm could interpret his or her brain patterns to move two robotic arms in the desired way.
But brain-machine interfaces could also someday serve a much broader population, too: users of smartphones, computers and other consumer technology. Already, companies have developed headsets that monitor your brainwaves so that you can move a character around in a video game merely by thinking about it, essentially using your brain as a joystick. Eventually, some engineers envision that brain-machine interfaces could enable us to manipulate tablets and control wearable technology such as Google Glass without saying a word or touching a screen.
October 31, 2013
For the past 10 years, the European Space Agency’s Mars Express probe has flown around and around the red planet, orbiting it more than 12,500 times in total.
All the while, it’s been collecting detailed topographic data on Mars’ surface with a suite of remote sensing instruments, including high-resolution cameras, radar-sensing devices and spectrometers that can detect the minerals present on the planet by analyzing the spectrum of infrared light they emit.
This video, released earlier this week by the ESA, gives you a look at some of the probe’s most dramatic views to date. The ESA built this simulated flyover from computer graphics based on real-world data, so this clip is (currently) the closest you can possibly get to flying over Mars’ surface yourself.
October 8, 2013
In 2011, Felix Liechti and his colleagues at the Swiss Ornithological Institute attached electronic tags that log movement to six alpine swifts. The small birds—each weighs less than a quarter of a pound—spend the summer breeding in Europe, then migrate to Africa for the winter, thousands of miles away.
“We wanted to learn about energy demands during migration. We expected to see how often they fly, how often they stop, that sort of thing,” he said.
But a year later, when three of the birds returned to the same breeding site and the scientists removed their tags to collect the data, the electronic tags revealed something unexpected. “When we looked at the data, we were totally blown away,” Liechti said. “During their non-breeding period in Africa, they were always in the air.”
For more than 200 straight days, as revealed by his team’s study published today in Nature Communications, the birds stayed aloft over West Africa. The tags only collect data every four minutes, so it’s impossible to rule out the chance that the birds touched down occasionally between these intervals—but every single one of the data points collected for more than six months in a row indicated that, at the time, they were either actively flying or at least gliding in the air.
Ornithologists had previously speculated that the closely related common swift was capable of staying in flight for extremely long periods of time, but this is the first time anyone has collected hard data. The new finding was, in part, enabled by developments in technology—this was the first time that this particular kind of sensor, developed at Bern University, was attached to birds for research.
Its tiny size allowed the researchers to attach it to relatively small birds without interfering with their free movement. The tags collected data solely on acceleration, the pitch of the bird’s body (the angle of its body relative to the ground) and the light hitting the bird at any given time. From the light data, the scientists were able to infer latitude, based on the timing of sunrise and sunset.
By comparing the acceleration and pitch data to that of birds under direct observation, Liechti and his colleagues could match particular data patterns with different types of movement—flying (with flapping wings), passively gliding in the air and resting on the ground. “They stayed in the air for the entire time they spent south of the Sahara, day and night,” he said. “Sometimes they just glide for a few minutes, so there’s no movement, but the pitch of the body indicates that they’re still gliding in the air.”
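That matching step might look something like the following toy rule-based classifier. The thresholds, names and units here are illustrative assumptions, not values from the study:

```python
def classify_movement(accel_variance, pitch_degrees,
                      flap_threshold=0.5, glide_pitch=5.0):
    """Toy classifier for one four-minute sample (thresholds are made up)."""
    if accel_variance > flap_threshold:
        return "flapping"   # rhythmic wingbeats produce high acceleration variance
    if pitch_degrees > glide_pitch:
        return "gliding"    # little movement, but the body is angled for a glide
    return "resting"        # low movement and a flat body posture
```

The study’s key observation maps cleanly onto this scheme: south of the Sahara, no sample classified as “resting.”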
It’s still a mystery how the birds are able to physiologically accomplish this feat. The diet aspect is relatively straightforward—they largely feed on airborne insects—but until now, opinions differed over the question of whether birds could sleep while aloft. Sleep patterns in birds are fundamentally different than in mammals, and the difficulty of studying the brainwaves of migrating birds makes it very hard to fully understand how they rest while in motion. But the fact that these swifts never touch down for such a long time indicates that they’re able to rest in midair.
There’s also the deeper (and perhaps more confounding) question of why the birds would bother staying aloft for their entire time in Africa. At this point, it’s pure speculation, but Liechti suggests that diet could play a role. “We observed that the further north they go, the more they stay on the ground at night,” he said. “Additionally, the further north you go, the less insects there are in the air—so it might be related.” He also proposes that staying in air could reduce the risk of predation or perhaps the chance of catching a disease.
Perhaps most exciting is the fact that this finding came after just the first time the new, ultra-lightweight movement sensor was used in avian research. Tagging other sorts of birds that are too small for conventional sensors might tell us similarly surprising things about their movement or migrations habits. “It’s fascinating,” Liechti said, “and it opens up a whole new window for us into these species.”
October 1, 2013
Last Tuesday, a 7.7-magnitude earthquake hit Pakistan, causing widespread destruction, the creation of a new island off the country’s coastline and at least 515 deaths.
Of course, there’s nothing we can do to prevent such disasters—earthquakes result from the shifting and collision of enormous, continent-scale tectonic plates over which we have no control. If we know a massive quake is about to strike, though, there may be measures we can take to better protect ourselves.
But how could we possibly know when a quake is about to hit? Seismologists are extremely good at characterizing the overall hazards that those living in fault zones face, but they’re far from being able to predict exactly when an earthquake will strike—and they may never be.
Undeterred, several different teams of scientists are hatching plans for a new kind of solution. And the key to their success may be the smartphone in your pocket.
Their idea takes advantage of the fact that most new smartphones include a tiny chip called an accelerometer. These chips measure the movement of the phone in three directions (up-down, left-right, and backward-forward) to customize your experience as you use the phone—for example, rotating the display if you turn the device.
As it happens, seismometers (the large, expensive instruments used by geologists to detect and measure earthquakes) do essentially the same thing, albeit with much more accuracy. Still, the tiny accelerometers we already carry around with us all the time could allow scientists to gather much more real-time data than is currently available—there are countless times more smartphones than seismometers, they’re much cheaper and they’re already deployed in a wide range of locations—if they can actually measure earthquake movement with sufficient precision.
Recently, Antonino D’Alessandro and Giuseppe D’Anna, a pair of seismologists at Italy’s Istituto Nazionale di Geofisica e Vulcanologia, set out to resolve this question. To assess the accelerometers—specifically, the LIS331DLH MEMS accelerometer used in iPhones—the duo placed five iPhones on a vibrating table in a variety of positions (flat, angled on top of a wedge-shaped piece, and vertical) and compared the data the phones recorded against a professional-quality earthquake sensor used as a reference.
Their results, published Sunday in the Bulletin of the Seismological Society of America, showed that the iPhone accelerometers performed even better than they expected. “When we compared the signals, we were pleasantly surprised by the result—the recordings were virtually identical,” D’Alessandro says. “An accelerometer that costs a few dollars was able to record acceleration with high fidelity, very similar to a professional accelerometer that costs a few thousand.”
There are some limitations: the iPhone accelerometers aren’t as sensitive to weak vibrations, so during the tests, they were only able to record movements that correspond to earthquakes of magnitude 5 or higher. But ”these limits will be overcome in the near future,” says D’Alessandro. “Because these chips are widely used in laptops, game controllers and mobile phones, research into improving them is going on around the world.”
The next step would be developing software to allow normal users to harness these accelerometers’ capabilities, turning their smartphones into mobile earthquake sensing systems. Last December, Berkeley researchers announced plans to develop an app that would allow users to donate their accelerometer data to earthquake research. Stanford’s Quake-Catcher Network and Caltech’s Community Seismic Network—both of which use small purpose-built seismometers that are distributed to volunteers and plugged into their computers—could serve as a model for this sort of network.
Once in place, the network would be able to gather a huge amount of data from thousands of geographically dispersed users, allowing researchers to see how quakes move with finer resolution. If enough phones are on this network, emergency workers may be able to quickly gauge where they could most efficiently devote their time after a quake hits.
But how do you go from documenting earthquakes to warning people about when dangerous shaking will occur? As The Atlantic points out, the key is that earthquakes are actually comprised of two types of waves that ripple through the earth: P-waves, which arrive first and are difficult for humans to sense, and S-waves, which typically come a few seconds later and cause the majority of the physical damage.
If we had software installed on our phones that automatically detected strong P-waves and sounded an alarm, we might have a few scant seconds to take cover before the S-waves hit (officials recommend dropping to the ground, huddling under a stable table or desk and getting away from windows and doors). It’s not much, but in some cases, just a few crucial seconds of warning could make all the difference.
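One standard way such software could flag a sudden onset of shaking is a short-term/long-term average (STA/LTA) trigger, a classic technique in seismology. A minimal sketch follows; the window sizes and threshold are chosen purely for illustration, not tuned for real seismic data:

```python
def sta_lta_trigger(samples, short_win=5, long_win=50, threshold=4.0):
    """Return the first sample index where the short-term average energy
    exceeds `threshold` times the long-term average, or None if it never does."""
    energy = [s * s for s in samples]
    for i in range(long_win, len(samples)):
        sta = sum(energy[i - short_win:i]) / short_win   # recent energy
        lta = sum(energy[i - long_win:i]) / long_win     # background energy
        if lta > 0 and sta / lta > threshold:
            return i
    return None
```

On steady background noise the ratio stays near one and nothing fires; a sudden jump in amplitude pushes the short-term average up long before the long-term average catches up, triggering within a few samples of onset. A production detector would use rolling sums rather than recomputing each window.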