May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service has an estimated 554 million users—a number that amounts to nearly 8 percent of all the humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy‘s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. In a new study published earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (1,535,929,521 tweets for the time period), and focused on just those tweets with associated geographic data.
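The collection step boils down to filtering that decahose sample for tweets that carry coordinates. Here is a minimal sketch of such a filter, assuming each tweet arrives as one JSON object per line in the format Twitter’s streaming API used at the time (an optional GeoJSON "coordinates" field plus a "lang" tag); the function and input names are illustrative, not the researchers’ actual code.

```python
import json

def geotagged_tweets(lines):
    """Yield (longitude, latitude, lang, text) for tweets that carry exact coordinates.

    `lines` is any iterable of raw JSON strings, e.g. a saved decahose capture.
    Tweets without a "coordinates" object are skipped, mirroring the study's
    restriction to tweets with geographic data attached.
    """
    for line in lines:
        try:
            tweet = json.loads(line)
        except json.JSONDecodeError:
            continue
        coords = tweet.get("coordinates")
        if not coords:
            continue
        lon, lat = coords["coordinates"]  # GeoJSON order: [longitude, latitude]
        yield lon, lat, tweet.get("lang"), tweet.get("text", "")
```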
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
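One way to read that composite is as a per-pixel blend of two normalized grids: tweet density drives the red channel, nighttime lighting drives the blue channel, and their shared intensity fills in green so pixels that are strong in both wash out to white. The sketch below shows such a blend; it is a plausible reconstruction of the effect, not necessarily how the authors rendered their map.

```python
import numpy as np

def composite(tweet_density: np.ndarray, light_intensity: np.ndarray) -> np.ndarray:
    """Blend two same-shape grids into an RGB image.

    Pixels dominated by tweets come out red, pixels dominated by nighttime
    lighting come out blue, and pixels where the two agree wash out to white.
    """
    # Normalize each grid to [0, 1]; log scaling tames the heavy skew of raw counts.
    r = np.log1p(tweet_density)
    b = np.log1p(light_intensity)
    if r.max() > 0:
        r /= r.max()
    if b.max() > 0:
        b /= b.max()
    g = np.minimum(r, b)          # shared intensity pushes matching pixels toward white
    return np.dstack([r, g, b])   # height x width x 3, ready for matplotlib's imshow
```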
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which appears in 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
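Once each tweet carries a language tag (assumed here to be attached already, whether by Twitter itself or by an automated detector like the one the researchers used), computing those shares is a simple counting pass. The helper below is a sketch that builds on the tuples from the earlier filtering example.

```python
from collections import Counter

def language_shares(tweets):
    """Return each language's share of the sample as a percentage, largest first.

    `tweets` is an iterable of (lon, lat, lang, text) tuples such as those
    yielded by geotagged_tweets above; tweets with no language tag are skipped.
    """
    counts = Counter(lang for _lon, _lat, lang, _text in tweets if lang)
    total = sum(counts.values()) or 1  # avoid dividing by zero on an empty sample
    return {lang: 100.0 * n / total for lang, n in counts.most_common()}
```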
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, a retweet spanned 1,115 miles and a reference 1,118 miles. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and the distance between them: pairs of users with just a handful of interactions were, on the whole, more likely to be closer together (500 to 600 miles apart) than those with dozens of retweets and references between them.
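Those figures are presumably great-circle distances between the two users’ geotagged coordinates. The paper doesn’t spell out the exact calculation, but the standard haversine formula, sketched here in miles, is the usual way to compute such distances.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3959.0

def great_circle_miles(lat1, lon1, lat2, lon2):
    """Haversine distance in miles between two points given in decimal degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(h))

# New York to Chicago comes out at roughly 710 miles
print(round(great_circle_miles(40.71, -74.01, 41.88, -87.63)))
```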
That pattern suggests that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they still represent just 0.3 percent of all the tweets sent during the study period, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular among demographers, computer scientists and other researchers.
June 29, 2012
Imagine that a debilitating illness, stroke or accident has left you entirely paralyzed. You’re fully conscious but unable to move or even communicate with those around you. People in this condition—known as Locked-in Syndrome—suffer greatly, locked in their own minds, appearing superficially to be in a persistent vegetative state despite a full inner life.
A new device, described in a paper published yesterday in the journal Current Biology, may offer hope to those locked-in: a new use of fMRI technology to read minds. The experimental setup allows individuals to “type” 27 symbols (26 letters and a space) without saying a word or moving a muscle, but rather by simply engaging in different thought patterns. The system could someday provide a practical means of daily communication for those who are unable to move.
According to Scientific American, the lead author of the study, Bettina Sorger of Maastricht University in the Netherlands, first started thinking about communicating with paralyzed patients after an experience she had about 10 years ago while working as a nurse. A patient who was recovering from anesthesia in the intensive care unit and seemed incapable of movement or speech suddenly tried to choke her. Then, a week later, he ran into Sorger while fully conscious and promptly apologized. She was stunned to realize that although he had little control over his movements while partially anesthetized, he was fully conscious and could even remember his actions a week later. Perhaps, she thought, there could be some way to enable patients in such a situation to communicate via mental activity alone.
Now Sorger is a researcher in neurocognition, and she and her colleagues have created a proof-of-concept device that could someday be used for those either temporarily or permanently paralyzed to achieve this goal. In the study, six healthy adults learned how to answer questions by mentally “typing” individual letters on a computer screen.
The participants first underwent one hour of training to learn how to pick various letters by distinct thought patterns. While lying inside a functional magnetic resonance imaging (fMRI) machine—which precisely measures activity in different parts of the brain by detecting the amount of blood flow—they stared at a table that included all 26 letters and a symbol for space. The letters were arranged in three rows, and each row was associated with a different type of mental task: motor imagery (such as tracing a shape in the mind), mental calculation (such as performing a multiplication problem), or inner speech (such as silently reciting a piece of text). Additionally, different columns of letters were illuminated on the screen at different times for different durations, in a consistent sequence.
To select a letter, participants waited for that letter’s column to light up, and then performed the specific type of mental task associated with that letter’s row for as long as the letter stayed lit. For example, to select the letter ‘L’ in the graph below, the participant would wait through the 10-second onset delay until the column containing ‘C’, ‘L’ and ‘U’ lit up, then perform a mental calculation for a full 10 seconds, until that column dimmed. If the participant continued the mental task for 20 seconds instead of 10, an ‘M’ would be detected.
Because the fMRI machinery is capable of distinguishing among the mental activity patterns that fit each of the three tasks, and can also track when and for exactly how long the task was mentally performed, the system could use both of these parameters to figure out which letter the participant meant to select. The participants painstakingly “typed” out letters to answer a variety of questions, such as “what is your name?” and “what movie did you see last?”
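In other words, the type of mental task picks the row, while the timing of the brain response (when it begins and how long it lasts) picks the column, so decoding reduces to a lookup keyed on those three values. The toy decoder below illustrates the idea; the timings and letter placements are patterned on the article’s ‘L’ versus ‘M’ example and are assumptions, not the study’s published grid.

```python
# Row is chosen by the kind of mental task; column by when the activity starts
# and how long it is sustained. Entries here are illustrative only.
CODEBOOK = {
    ("motor_imagery",      10, 10): "C",
    ("mental_calculation", 10, 10): "L",
    ("inner_speech",       10, 10): "U",
    ("mental_calculation", 10, 20): "M",
    # ... the remaining (task, onset, duration) combinations would cover
    #     the rest of the 26 letters plus the space symbol
}

def decode_letter(task: str, onset_s: int, duration_s: int) -> str:
    """Map one classified fMRI response to the letter it encodes."""
    return CODEBOOK.get((task, onset_s, duration_s), "?")

print(decode_letter("mental_calculation", 10, 10))  # -> L
print(decode_letter("mental_calculation", 10, 20))  # -> M
```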
The system accurately determined the first letter of each response a respectable 82 percent of the time. But with the help of context-dependent text recognition software—the same type that enables your smartphone to figure out you meant to type “great” instead of “grear”—the first letter was correctly identified 95 percent of the time once the second letter was taken into account, and 100 percent of the time once the third was typed.
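That correction step works much like a phone keyboard’s autocorrect: once a couple of letters are in, only a few dictionary words remain plausible, so an early misread can be overruled. The snippet below is only a toy illustration of that idea, with a tiny hand-picked word list, and is not the software the researchers actually used.

```python
DICTIONARY = {"great", "green", "greet", "bread", "break"}  # toy word list

def autocorrect(decoded: str) -> str:
    """Return the dictionary word that differs from `decoded` in the fewest positions."""
    def mismatches(word: str) -> int:
        return sum(a != b for a, b in zip(word, decoded)) + abs(len(word) - len(decoded))
    return min(DICTIONARY, key=mismatches)

print(autocorrect("grear"))  # -> great
```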
Although the system required a bulky fMRI machine and was performed with healthy participants, it’s easy to imagine how this could eventually be adapted to be used for those unable to speak or move. The grid could even be changed based on the context, with, say, pictures of food choices or room temperatures presented instead of letters. With practice, the process of choosing letters could likely be accelerated, too, and entire commonly used words could even be selected instead of letters.
This new system joins a number of other mind-reading interface devices developed in recent years to give locked-in patients the chance to communicate. As these systems are improved and refined, we might see them in hospitals such as the one Sorger once worked in to let these patients literally share their thoughts. “Even if one person benefits,” she says, “I would be very happy.”
August 11, 2011
In the new movie Rise of the Planet of the Apes, the leader of the ape revolution can talk. In the real world, apes can’t speak; they have thinner tongues and a higher larynx, or voice box, than people, making it hard for them to pronounce vowel sounds. But that doesn’t necessarily mean they don’t have the capacity for language—sign language, after all, doesn’t require any vocalization.
Over the years, researchers have succeeded—and failed—in teaching apes to use language. Here’s a look at some of the more famous “talking” apes.
Viki: Viki, a chimpanzee, came closest to being a real talking ape. In the late 1940s and early 1950s, Keith and Catherine Hayes of the Yerkes Laboratories of Primate Biology, then located in Orange Park, Florida, adopted Viki and raised her at home as if she were a human baby. With the Hayeses moving her lips for her, Viki learned to utter “mama.” Eventually, with much difficulty, she managed to say three other words—papa, cup and up—on her own. Viki’s tenure as a talking ape didn’t last long; she died at the age of seven of viral meningitis.
Washoe: In the 1960s, psychologists Allen and Beatrix Gardner of the University of Nevada, Reno recognized that chimpanzees naturally gesture a lot and thought chimps would be well suited for sign language. In 1966, they started working with Washoe. Later, psychologists Roger and Deborah Fouts, now retired from Central Washington University, continued the work. By the end of Washoe’s life in 2007, she knew about 250 signs and could put different signs together to make simple combinations like “Gimmie Sweet” and “You Me Go Out Hurry.” Washoe’s adopted son Loulis also learned to sign—by watching his mother. He was the first ape to learn signs from other apes, not humans. For more on Washoe’s life, read Roger Fouts’ Next of Kin.
Nim: After the success with Washoe, psychologist Herbert Terrace of Columbia University decided to replicate the project. At first, Nim—full name Nim Chimpsky, named after linguist Noam Chomsky who thought language was unique to humans—was raised in a human household. (Washoe had been treated like a person too but had her own trailer.) Later, Nim was removed from the family and his language lessons moved to a lab on Columbia’s campus. In the end, Terrace concluded Nim never really learned language; he had merely been trained to imitate his teachers to get rewards. The sad story of Nim’s life after the project ended is told in the new documentary Project Nim.
Chantek: Chimpanzees are not the only talking apes. In 1978, anthropologist Lyn Miles of the University of Tennessee at Chattanooga began studying an orangutan named Chantek. During eight years of study, Chantek learned 150 signs. He also showed signs of being self-aware: he could recognize himself in a mirror. Today, you can visit Chantek at Zoo Atlanta, his home since 1997.
Koko: Koko the gorilla is probably best known for her love of kittens and Mr. Rogers (and maybe less well-known for her encounter with Captain James T. Kirk). Koko’s sign-language training began in 1972 with then-graduate student Francine (Penny) Patterson of Stanford University. According to the Gorilla Foundation, Koko knows 1,000 signs and understands spoken English. It also claims the gorilla has an IQ somewhere between 70 and 95 (the average human IQ is 100). Critics, however, remain skeptical of some of Koko’s reported abilities, given the lack of recent scientific publications supporting the claims.
Kanzi: Kanzi, a bonobo, doesn’t use sign language; he uses different combinations of lexigrams, or symbols, to communicate. In the early 1980s, psychologist Sue Savage-Rumbaugh, then of Georgia State University, was trying to teach Kanzi’s mom, Matata, to use the lexigrams; instead, Kanzi was the one who mastered the symbols. Kanzi understands spoken English and knows close to 400 symbols. When he “speaks,” his lexigram usage follows rules of grammar and syntax, according to researchers at the Great Ape Trust in Iowa, where Kanzi now resides. Kanzi is also an accomplished stone-tool maker.