May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy‘s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. As published in a new study earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (which totaled 1,535,929,521 for the time period), and simply focused on the tweets with associated geographic data.
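The first step in such an analysis is simply separating the small minority of geotagged tweets from the rest of the stream. As a rough sketch (not the authors’ actual pipeline), the filter below assumes tweets arrive in the classic Twitter JSON format, where a geotagged tweet carries a non-null "coordinates" field holding a GeoJSON point as [longitude, latitude]:

```python
import json

def geotagged_only(tweet_lines):
    """Yield (longitude, latitude) for tweets that carry exact coordinates.

    Assumes one tweet per line in classic Twitter JSON, where geotagged
    tweets have a non-null "coordinates" field containing a GeoJSON
    Point as [longitude, latitude].
    """
    for line in tweet_lines:
        tweet = json.loads(line)
        point = tweet.get("coordinates")
        if point and point.get("type") == "Point":
            lon, lat = point["coordinates"]
            yield lon, lat

# Two hand-made example tweets: only the first is geotagged.
sample = [
    '{"text": "hello", "coordinates": {"type": "Point", "coordinates": [-87.6, 41.9]}}',
    '{"text": "world", "coordinates": null}',
]
print(list(geotagged_only(sample)))  # [(-87.6, 41.9)]
```

Applied to the full decahose, a filter like this is what reduces the 1.5 billion captured tweets to the roughly 46.7 million mappable ones.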
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which appears in 38.25 percent of all tweets. After that come Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance for a retweet was 1,115 miles and 1,118 for a reference. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and their distance: Pairs of users with just a handful of interactions, on the whole, were more likely to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
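Distances like these between pairs of geotagged users are conventionally computed as great-circle distances from latitude/longitude coordinates. A minimal sketch using the standard haversine formula (the paper doesn’t specify its exact method, so this is illustrative):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # Earth radius ≈ 3,958.8 miles

# Example: a New York user retweeting a Los Angeles user
# spans roughly 2,400-2,500 miles.
print(round(haversine_miles(40.71, -74.01, 34.05, -118.24)))
```

Averaging this quantity over every retweet or reference pair in the dataset yields figures like the 1,115-mile average reported in the study.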
This indicates that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
May 2, 2013
In the wealthy world, improving the energy system generally means increasing the central supply of reliable, inexpensive and environmentally-friendly power and distributing it through the power grid. Across most of the planet, though, simply providing new energy sources to the millions who are without electricity and depend on burning wood or kerosene for heat and light would open up new opportunities.
With that in mind, engineers and designers have recently created a range of innovative devices that can increase the supply of safe, cheap energy on a user-by-user basis, bypassing the years it takes to extend the power grid to remote places and the resources needed to increase a country’s energy production capacity. Here are a few of the most promising technologies.
1. VOTO: Millions of people around the world use charcoal and wood-fueled stoves on a daily basis. VOTO (above), developed by the company Point Source Power, converts the energy these fires release as heat into electricity, which can power a handheld light, charge a phone or even charge a spare battery. The company initially designed VOTO for backpackers and campers in wealthy countries so they can charge their devices during trips, but is also trying to find a way to make it accessible to residents of the developing world for daily use.
2. Window Socket: This is perhaps the simplest solar charger in existence: Just stick it to a sunny window for 5 to 8 hours with the built-in suction cup, and the solar panels on the back will store about 10 hours’ worth of electricity that can be used with any device. If there’s no window available, a user can simply leave it on any sunny surface, including the ground. Once it’s fully charged, it can be removed and taken anywhere—inside a building, in a bag or in a vehicle. The designers, Kyuho Song and Boa Oh of Yanko Design, created it to resemble a normal wall outlet as closely as possible, so it can be used intuitively without any special instructions.
3. The Berkeley-Darfur Stove: In the past few years, a number of health researchers have come to the same conclusion: that providing a safe, energy-efficient wood-burning cookstove to millions of people in the developing world can directly improve health (by reducing smoke inhalation), aid the environment (by reducing the amount of wood needed for fuel) and alleviate poverty (by reducing the time devoted to gathering wood every day).
Many projects have pursued this goal, but Potential Energy, a nonprofit dedicated to adapting and scaling technologies to help improve lives in the developing world, is the furthest along, having distributed more than 25,000 of their Berkeley-Darfur Stoves in Darfur and Ethiopia. Their stove’s design achieves these aims with features such as a tapered wind collar, a small fire box opening, nonaligned air vents that reduce the amount of wind allowed to stoke or snuff the fire (which wastes fuel) and ridges that hold the pot at the optimal distance from the fire for fuel efficiency.
4. GravityLight: Along with wood-burning stoves, the kerosene-burning lamps that provide light throughout the developing world have recently become a target for replacement for one of the same reasons: The fumes generated by burning kerosene in close quarters are a major health problem. A seemingly simple solution is GravityLight, developed by the research initiative deciwatt.org.
To power the device, a user fills an included bag with about 20 pounds of rock or dirt, attaches it to the cord hanging down from the device and lifts it upward. The potential energy stored in that lifting motion is then gradually converted to electricity by the GravityLight, which slowly lets the bag descend over the course of about 30 minutes and powers a light or other electrical device during that time. It’s currently priced at about $10, and because it has no running costs, the development team estimates that, compared with the cost of kerosene, the investment will pay for itself in about 3 months.
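The physics here is simple enough to check on the back of an envelope. The article gives only the weight (about 20 pounds) and run time (about 30 minutes); the drop height below is an assumed figure, since the product’s actual cord length isn’t stated:

```python
# Back-of-envelope estimate of the energy a GravityLight stores.
mass_kg = 20 * 0.4536   # 20 pounds converted to kilograms
height_m = 1.8          # assumed drop height of the bag (not from the article)
g = 9.81                # gravitational acceleration, m/s^2

energy_j = mass_kg * g * height_m   # gravitational potential energy, E = mgh
run_time_s = 30 * 60                # 30 minutes of light
avg_power_w = energy_j / run_time_s

print(f"{energy_j:.0f} J stored, ~{avg_power_w * 1000:.0f} mW average power")
```

Under these assumptions the lamp runs on roughly a tenth of a watt, which fits the name of the initiative behind it, deciwatt.org.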
5. SOCCKET: Soccer—known simply as football in nearly every English-speaking country besides the U.S.—is easily the most popular sport in the world. The newest product of Uncharted Play, a for-profit social enterprise, seeks to take advantage of the millions of people already playing the sport to replace kerosene lamps with electric light generated in a much different manner. Their ball uses an internal kinetically-powered pendulum to generate and store electricity. After about 30 minutes of play, the ball stores enough energy to power an attachable LED lamp for 3 hours. Development of the product was funded via Kickstarter, and the first ones will ship in the next few weeks. A percentage of all retail sales will go to providing SOCCKETs to schools in the developing world.
April 26, 2013
Your breath says a lot about you. Recent research has found that the chemicals present in each person’s breath can provide a unique “breathprint” that differs from person to person, while other scientists have worked on breathalyzer-like tests that can indicate the presence of a bacterial infection inside someone’s body.
In the decades since the 1960s, though, when the first electronic breathalyzer for blood alcohol content was developed, research has led to relatively little advancement in the use of chemical breath analysis for law enforcement purposes. Police have long been able to instantly test a person’s level of alcoholic intoxication by the side of a road, but testing for other drugs has required blood or saliva—substances that are more invasive to acquire and that typically have to be sent to a crime lab for processing. Both factors make it difficult to figure out who’s under the influence at, say, the scene of a car accident right after it occurs.
But new research suggests that the status quo might be changing in a hurry. A study published yesterday in the Journal of Breath Research reveals that scientists can now use breath analysis to test for the presence of 12 different drugs in the body, including cocaine, marijuana and amphetamines. Previous work has shown that such technology can reliably test for several of these drugs, and this new study is the first time the drugs alprazolam (commercially known as Xanax, useful for treating anxiety disorders) and benzoylecgonine (a topical pain killer) have been detected. Members of the research group, from Sweden’s Karolinska Institutet, have already created a commercially-available breath testing system, called SensAbues—and it’s easy to speculate that law enforcement across the U.S. (and around the world) would love to get their hands on such technology as soon as possible.
The research team, led by Olof Beck, conducted the new study by testing the breath of 46 individuals who were checked into a drug addiction emergency clinic, had taken drugs about 24 hours earlier and volunteered to participate in the study. Each participant exhaled about 20 deep breaths into the SensAbues filter (which takes 2-3 minutes), and the solid and liquid microparticles suspended in their breath were trapped on a disk for analysis.
These tiny quantities of microparticles reflect the substances in a person’s bloodstream, because small amounts of the molecules from our blood diffuse into the fluid that lines our lungs’ bronchioles and then into our breath. By isolating these particles and analyzing them with liquid chromatography and mass spectrometry, the research team was able to determine the drugs present in each person’s body with a decent level of accuracy.
They compared the results to blood and urine samples taken from each of the participants, as well as their own reports of what drugs they’d taken in the previous 24 hours, and on the whole, the tests performed pretty well—although some progress clearly still needs to be made. All 46 people had reported taking one of the 12 detectable illegal substances, and drugs were detected in the breath of 40 of them (87 percent). Most of the particular drugs detected matched with self-reports and blood tests, but 23 percent of the time, the breath tests also indicated the presence of a drug that hadn’t actually been taken. This level of accuracy was higher than previous studies the team has done, as they’ve slowly refined the system to cut down on false positives and improve the detection rate.
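The headline detection figure follows directly from the study’s counts, as a quick sanity check shows:

```python
# Reproducing the study's headline detection rate: drugs were found in
# the breath of 40 of the 46 participants who reported recent drug use.
participants = 46
positives = 40

detection_rate = positives / participants
print(f"detection rate: {detection_rate:.0%}")  # 87%
```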
Currently, using the SensAbues system would only allow officials to collect a sample and send it elsewhere for analysis. But the researchers say that advances in the cost and portability of chemical analysis could eventually allow for the same sort of roadside breath testing for drugs that we currently have for alcohol.
Another scientific hurdle is data: Unlike for alcohol, we still don’t know what a given quantity of drug molecules detected on someone’s breath means in terms of how much of the drug is actually in their bloodstream (although an accurate detection of any illegal substance might be all law enforcement officials may be after). We also don’t know how long traces of these drugs remain on a person’s breath, and how quickly they degrade.
If scientists are able to make some progress in figuring this all out, though—and if they can make the testing procedure more accurate—roadside drug tests could become a routine part of law enforcement protocol.
April 25, 2013
Google, as many researchers know well, is more than a search engine—it’s a remarkably comprehensive barometer of public opinion and the state of the world at any given time. By using Google Trends, which tracks the frequency particular search terms are entered into Google over time, scientists have found seasonal patterns, for example, in searches for information about mental illnesses and detected a link between searching behavior and a country’s GDP.
A number of people have also had the idea to use these trends to try achieving a more basic desire: making money. Several studies in recent years have looked at the number of times investors searched for particular stock names and symbols and created relatively successful investing strategies based on this data.
A new study published today in Scientific Reports by a team of British researchers, though, harnesses Google Trends data to produce investing strategies in a more nuanced way. Instead of looking at the frequency that the names of stocks or companies were searched, they analyzed a broad range of 98 commonly used words—everything from “unemployment” to “marriage” to “car” to “water”—and simulated investing strategies based on week-by-week changes in the frequencies of each of these words as search terms by American internet users.
The changes in the frequency of some of these words, it turns out, are very useful predictors of whether the market as a whole—in this case, the Dow Jones Industrial Average—will go down or up (the Dow is a broad index commonly considered a benchmark of the overall performance of the U.S. stock market).
The strategy was relatively straightforward: The system tracked whether a word such as “debt” increased in search frequency or decreased in search frequency from one week to the next. If the term was suddenly searched much less frequently, the investment simulation bought all the stocks of the Dow on the first Monday afterward, then sold all the stocks one week later, essentially betting that the overall market would rise in value.
If a term such as “debt” was suddenly searched much more frequently, the simulation did the opposite: It took a “short” position in the Dow, selling all its stocks on the first Monday and then buying them all back a week later. The concept of a “short” position might seem confusing, but the basic thing to remember is that it’s the exact opposite of conventionally buying a stock—if you hold a “short” position, you make money when the stock goes down in price, and lose money when it goes up. So for any given term, the system predicted that more frequent searches meant the market as a whole would decline, and less frequent searches meant it would rise.
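The long/short rule just described can be sketched in a few lines. The weekly search volumes and index prices below are invented for illustration; the actual study used weekly Google Trends data for each term and real Dow closing prices:

```python
# A minimal sketch of the Trends-based trading rule, on made-up numbers.
# "volume" holds weekly search frequency for one term (e.g. "debt");
# "price" holds the Dow's Monday close for the matching weeks.

def simulate(volume, price):
    """Return the cumulative return of the long/short strategy.

    Each week: if search volume fell versus the prior week, go long for
    one week (profit when the index rises); if it rose, go short
    (profit when the index falls).
    """
    value = 1.0
    for week in range(1, len(price) - 1):
        weekly_return = (price[week + 1] - price[week]) / price[week]
        if volume[week] < volume[week - 1]:
            value *= 1 + weekly_return   # long position
        else:
            value *= 1 - weekly_return   # short position
    return value - 1.0

volume = [50, 60, 55, 70, 65]
price = [100.0, 98.0, 101.0, 99.0, 103.0]
print(f"{simulate(volume, price):+.1%}")
```

Run over 2004–2011 with real data for the word “debt,” a rule of exactly this shape is what produced the study’s 326 percent return.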
During the period of time studied (2004-2011), making investment choices based on a few of these words in particular would have yielded overall profits several times higher than a conservative investment strategy of simply buying and holding the stocks of the Dow for the entire time. For example, basing a strategy solely on the search frequency of the word “debt,” which turned out to be the single most profitable term in the study, would have generated a profit of 326% over the seven years studied—compared to a profit of just 16% if you owned all the stocks of the Dow for the whole period.
So if you systematically bought a “short” position in the market every time the word “debt” suddenly started getting searched more often, you’d have made a ton of money over the seven years studied. But what about other words? The system simulated how this strategy would have performed for each of 98 words chosen, listed in the chart at right from most useful at predicting the movement of the markets (debt) to least useful (ring). As seen in the chart, for some of these terms the frequency that we type them into Google seems to serve as a very effective early-warning system for declines in the market.
Stock market declines typically reflect investors’ overall belief that, at any given time, it’s better to sell stock than buy it, and they often happen suddenly, when investors move in a herd to a new position—so the researchers speculate that rises in the search frequency of these terms convey a nascent feeling of concern about the market, before it’s expressed via actual transactions. All these searches might also reflect countless investors in an information-gathering phase, seeking to find out as much as they possibly can about an industry or a stock before selling it.
Even beyond the practical investment strategies that this type of analysis might generate, simply looking through the words provides a striking—and oftentimes confusing—window into the collective American psyche. It’s seemingly obvious why a sudden increase in the number of people searching for the word “debt” might signal overall negative feelings about the market, and would likely precede a drop in stock values, and why “fun” might precede increases in the market. But why do searches for the words “color” and “restaurant” predict declines nearly as accurately as “debt”? Why do “labor” and “train” also predict stock market rises?
April 23, 2013
If, while watching WALL-E, your heart broke just a little bit when you saw the title character desperately travel across outer space in search of true love, it doesn’t mean you’re crazy. Sure, WALL-E is a robot. But its cute, anthropomorphized look and all too human desire to end its loneliness made us subconsciously forget that it is not human.
The ability to forget that key point wasn’t just a matter of clever storytelling. New research shows that, at least in a small sample of people tested, the same neural patterns that occur when we feel empathy for a human onscreen are present in our brains when we see a robot onscreen.
A group of researchers from the University of Duisburg-Essen in Germany used fMRI (functional magnetic resonance imaging) to come to the finding, tracking blood flow in the brains of 14 study participants while they were shown videos of humans, robots and inanimate objects being treated either affectionately or harshly. The researchers, who will present their findings at the June International Communication Association conference in London, found that when participants were shown videos of a robot (a product called Pleo, which resembles a dinosaur) being petted, tickled and fed, areas in their limbic structures—a region of the brain believed to be involved in emotional responses—activated. When they were shown videos of a human getting a massage, the same sorts of neural activity occurred.
The same pattern also occurred when the participants were shown videos of the robots and humans being treated harshly—shaken, dropped or suffocated with a plastic bag—but with a twist. Interestingly, their fMRI results showed levels of limbic activity much greater when they saw humans treated poorly than when they saw the robots. This correlated with the responses on surveys that the participants took after watching the videos, on which they reported some empathy for the robots, but more for the humans.
The results suggest that the reason we feel empathy for robots like WALL-E is that, when we see them treated in a certain manner, it triggers the same sort of neural activity as seeing a human treated that way. In a sense, our mind interprets the robot to be human-like in a way that it doesn’t for, say, a rock. On the other hand, one possible explanation for why, despite this pattern, they still arouse less empathy than humans when being treated harshly is that we interpret them as something slightly less than human—something more like a pet.
Of course, this explanation comes with an important caveat: correlation vs. causation. We don’t know for sure that these neurological patterns cause empathy, per se, just that they reliably occur at the same time. (Further, we can’t say for sure that this effect is unique to robots—stuffed animals and dolls might engender the same feelings of empathy.)
Even if the patterns only correlate with empathy, though, they could be an effective objective measure of how much empathy people feel when observing various types of robots—and research into that area has practical implications that go far beyond Hollywood. One of the main areas, the scientists say, is in the engineering of robots that engage with humans on a frequent and long-term basis.
“One goal of current robotics research is to develop robotic companions that establish a long-term relationship with a human user, because robot companions can be useful and beneficial tools. They could assist elderly people in daily tasks and enable them to live longer autonomously in their homes, help disabled people in their environments, or keep patients engaged during the rehabilitation process,” Astrid Rosenthal-von der Pütten, the study’s lead author, said in a press statement. “A common problem is that a new technology is exciting at the beginning, but this effect wears off especially when it comes to tasks like boring and repetitive exercise in rehabilitation. The development and implementation of uniquely humanlike abilities in robots like theory of mind, emotion and empathy is considered to have the potential to solve this dilemma.”
In one previous long-term study, two out of six elderly participants appeared to develop emotional attachments to a companion robot—giving it a name, speaking to it and at times even smiling at it—while the other four did not. Further exploring our empathy for robots and figuring out just which of their characteristics (whether physical, such as having a human-like face, or behavioral, such as smiling or walking on two legs) lead more people to feel for them could help engineers design robotic devices that elicit more empathy over the long term—and devices that people can readily connect with on an emotional level might make more effective rehab coaches and home companions.