December 3, 2013
Scientific equipment that’s left unattended in the field can provide all sorts of interesting information. It can, for instance, snap photographs of exotic and shy wild animals, or analyze the noises coming from an ecosystem to identify the species living there.
But often, leaving valuable instruments scattered outside risks interference or destruction from an unavoidable force of nature: humans. Data isn’t regularly collected or published on the topic, but it’s well known among field scientists that equipment left in highly trafficked areas is vulnerable to petty vandalism and theft.
This unfortunate trend was recently on the mind of Holger Goerlitz, a scientist at Germany’s Max Planck Institute for Ornithology who studies animal behavior (specifically, how bats use echolocation within their environment) and leaves his field equipment unattended to do so. “As part of a project, I wanted to deploy automated equipment that would sit in the field for several nights,” he says. “So my colleagues and I were concerned about how to protect this equipment. We considered using chains and video cameras. And then suddenly we had this random idea: Let’s see what would be a good label to attach to the equipment to reduce vandalism.”
His team’s results, based on 60 pieces of fake scientific equipment scattered in four different Munich parks and published last week in Methods in Ecology and Evolution, may surprise you. They found that friendly labels that gave information about the project and the researcher led to about 40 percent fewer instances of equipment interference—either theft, vandalism or park visitors simply moving or trying to open up the equipment—as compared to neutral labels or ones that threatened surveillance and punishment.
They found this out by distributing boxes that looked vaguely like scientific instruments (above)—basically, plastic boxes with car antennas and fake cameras attached—for one week in each park, right next to a footpath, and checking for interference with the objects daily. All of the boxes had a similar warning label identifying them as property of the Max Planck Institute and giving Goerlitz’s contact information, but the tone of the messages differed slightly.
A third of the boxes had neutral messages (translated to English, they read “Part of an experiment—Please do not touch”), while another third had threatening warnings (“Part of an experiment—Every theft will be reported to the police! GPS monitored!”) and the remaining third had a friendly, personalized note (“Part of my thesis—Please do not touch—Please call me if you have any questions and would like to know more.”). The friendly label also had a picture of a juvenile squirrel, to give passersby an idea of the researcher’s work:
The researchers checked the boxes daily, and had even placed pebbles inside so they could determine if the boxes had been picked up and moved. They also counted how many were pried open, vandalized or stolen.
Apparently, at least in these German city parks, the impulse to steal or vandalize is one that can be dislodged by just a bit of information about the potential victim, presented in a friendly way. Simply knowing that the device was important to one particular researcher—likely a student, given the word “thesis”—led visitors to interfere with the equipment roughly 40 percent less often. Over the course of the month-long experiment, 39 instances of interference occurred with boxes bearing a friendly label, compared to 59 with a neutral label and 67 with a threatening label.
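For the curious, the roughly 40 percent figure follows directly from those counts. Here is a quick sanity check of the arithmetic in Python (comparing against the average of the two other conditions is one reasonable reading of the figure, not necessarily the paper’s exact definition):

```python
friendly, neutral, threatening = 39, 59, 67

# Reduction relative to the average of the two non-friendly conditions.
baseline = (neutral + threatening) / 2   # 63.0
reduction = 1 - friendly / baseline      # about 0.38, i.e. "roughly 40 percent fewer"
print(round(reduction * 100))            # → 38
```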
Although the researchers were heartened by the finding, they were also surprised, particularly because the threatening labels were the least successful. They imagine that the friendly label worked primarily by establishing a personal connection, perhaps aided by the squirrel photo, but the threatening label’s ineffectiveness is a mystery. “We don’t know why this was the case,” Goerlitz says. “It could be that people didn’t believe the threatening label, or that they thought, ‘oh, there’s a GPS device inside, this could be valuable.’” The fact that it was the only label that included the word “theft” might suggest that simply planting the idea in people’s minds nudged some of them to act on it.
But, although there’s obviously a lot more work to be done—this was conducted with a small sample of people over a short time period in one particular German city—this finding about friendly labels is a positive and potentially helpful one. If presented with the chance, people can be influenced to help science succeed if they’re treated in a respectful way and informed about what’s going on. Despite the enormous amount of money spent annually on scientific equipment, very little research has actually been done in this area to date, and more work to see how this might apply to people in different cultures, for instance, may aid scientists around the world in their efforts to protect their surveying instruments.
Goerlitz, for one, is ready to start using this finding to better safeguard his own equipment that monitors bat echolocation. “In my labels, I’ll try to be informative and friendly to people,” he says. “I think if you expose people to what you’re doing, they’ll be much more supportive of it.”
June 6, 2013
Thirteen years after the release of On the Origin of Species, Charles Darwin published another report on the evolution of mankind. In the 1872 book The Expression of the Emotions in Man and Animals, the naturalist argued that people from different cultures exhibit any given emotion through the same facial expression. This hypothesis didn’t quite pan out—last year, researchers poked a hole in the idea by showing that the expression of emotions such as anger, happiness and fear wasn’t universal (PDF). Nonetheless, certain basic things—such as the urge to cry out in pain, an increase in blood pressure when feeling anger, even shrugging when we don’t understand something—cross cultures.
A new study, published today in the journal Frontiers in Psychology, compares such involuntary responses, but with an added twist: Some observable behaviors aren’t only universal to the human species, but to our closest relatives too—chimpanzees and bonobos.
Using video analysis, a team of UCLA researchers found that human, chimpanzee and bonobo babies make similar gestures when interacting with caregivers. Members of all three species reach with their arms and hands for objects or people, and point with their fingers or heads. They also raise their arms up, a motion indicating that they want to be picked up, in the same manner. Such gestures, which seemed to be innate in all three species, precede and eventually lead to the development of language in humans, the researchers say.
To pick up on these behaviors, the team studied three babies of differing species through videos taken over a number of months. The child stars of these videos included a chimpanzee named Panpanzee, a bonobo called Panbanisha and a human girl, identified as GN. The apes were raised together at the Georgia State University Language Research Center in Atlanta, where researchers study language and cognitive processes in chimps, monkeys and humans. There, Panpanzee and Panbanisha were taught to communicate with their human caregivers using gestures, noises and lexigrams, abstract symbols that represent words. The human child grew up in her family’s home, where her parents facilitated her learning.
Researchers filmed the child’s development for seven months, starting when she was 11 months old, while the apes were taped from 12 months of age to 26 months. In the early stages of the study, the observed gestures were of a communicative nature: all three infants engaged in the behavior with the intention of conveying their emotions and needs. They made eye contact with their caregivers, added non-verbal vocalizations to their movements or exerted physical effort to elicit a response.
By the second half of the experiment, the production of communicative symbols—visual ones for the apes, vocal ones for the human—increased. As she grew older, the human child began using more spoken words, while the chimpanzee and bonobo learned and used more lexigrams. Eventually, the child began speaking to convey what she felt, rather than only gesturing. The apes, on the other hand, continued to rely on gestures. The study calls this divergence in behavior “the first indication of a distinctive human pathway to language.”
The researchers speculate that the matching behaviors can be traced to the last shared ancestor of humans, chimps and bonobos, who lived between four and seven million years ago. That ancestor probably exhibited the same early gestures, which all three species then inherited. When the species diverged, humans built on this communicative capacity by eventually graduating to speech.
Hints of this can be seen in how the human child paired her gestures with non-speech vocalizations, the precursors to words, far more than the apes did. It’s this successful combination of gestures and words that may have led to the birth of human language.
May 10, 2013
It’s hard to appreciate just how quickly and thoroughly Twitter has taken over the world. Just seven years ago, in 2006, it was an idea sketched out on a pad of paper. Now, the service is used by an estimated 554 million users—a number that amounts to nearly 8 percent of all humans on the planet—and an estimated 170 billion tweets have been sent, with that number climbing by roughly 58 million every single day.
All these tweets provide an invaluable source of news, entertainment, conversation and connection between people. But for scientists, they’re also valuable as something rather different: raw data.
Because Twitter features an open API (which allows for tweets to be downloaded as raw, analyzable data) and many tweets are geotagged, researchers can use billions of these tweets and analyze them by location to learn more about the geography of humans across the planet. Last fall, as part of the Global Twitter Heartbeat, a University of Illinois team analyzed the language and location of over a billion tweets from across the U.S. to create sophisticated maps of things like positive and negative emotions expressed during Hurricane Sandy, or support for Barack Obama or Mitt Romney during the Presidential election.
As Joshua Keating noted on Foreign Policy’s War of Ideas blog, members of the same group, led by Kalev Leetaru, have recently gone one step further. In a new study published earlier this week in the online journal First Monday, they analyzed the locations and languages of 46,672,798 tweets posted between October 23 and November 30 of last year to create a stunning portrait of human activity around the planet, shown at the top of the post. They made use of the Twitter decahose, a data stream that captures a random 10 percent of all tweets worldwide at any given time (which totaled 1,535,929,521 for the time period), and focused on the tweets with associated geographic data.
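Conceptually, that filtering step is straightforward. Here is a minimal sketch, assuming each tweet arrives as a JSON object carrying the `coordinates` and `lang` fields the Twitter streaming API exposed at the time (the field names are assumptions about the feed, not code from the study):

```python
import json
from collections import Counter

def geotagged(raw_tweets):
    """Keep only tweets that carry an exact GeoJSON point (a small minority of the stream)."""
    for raw in raw_tweets:
        tweet = json.loads(raw)
        if tweet.get("coordinates"):  # e.g. {"type": "Point", "coordinates": [lon, lat]}
            yield tweet

def language_counts(raw_tweets):
    """Tally geotagged tweets by their machine-detected language code."""
    return Counter(t.get("lang", "und") for t in geotagged(raw_tweets))

sample = [
    '{"coordinates": {"type": "Point", "coordinates": [-87.6, 41.9]}, "lang": "en"}',
    '{"coordinates": null, "lang": "ja"}',
    '{"coordinates": {"type": "Point", "coordinates": [139.7, 35.7]}, "lang": "ja"}',
]
print(language_counts(sample))  # the tweet without coordinates is dropped
```

On a real decahose feed the same two functions would simply be pointed at the stream instead of the `sample` list.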
As the researchers note, the geographic density of tweets in many regions—especially in the Western world, where computers, mobile devices, and Twitter are all used at peak levels—closely matches rates of electrification and lighting use. As a result, the maps of tweets (such as the detail view of the continental U.S., below) end up looking a lot like satellite images of artificial light at night.
As a test to see how well tweets matched artificial light use, they created the composite map below, in which tweets are shown as red dots and nighttime lighting is shown as blue. Areas where they correspond in frequency (and effectively cancel each other out) are shown as white, and areas where one outweighs the other remain red or blue. Many areas end up looking pretty white, with some key exceptions: Iran and China, where Twitter is banned, are noticeably blue, while many countries with relatively low electrification rates (but where Twitter is still popular) appear as red.
The project got even more interesting when the researchers used an automated system to break down tweets by language. The most common language on Twitter is English, which accounted for 38.25 percent of all tweets. After that came Japanese (11.84 percent), Spanish (11.37 percent), Indonesian (8.84 percent), Norwegian (7.74 percent) and Portuguese (5.58 percent).
The team constructed a map of all tweets written in the 26 most popular languages, with each represented by a different color, below:
While most countries’ tweets are dominated by their official languages, many are revealed to include tweets in a variety of other languages. Look closely enough, and you’ll see a rainbow of colors subtly popping out from the grey dots (English tweets) that blanket the U.S.:
Among other analyses, the research team even looked at the geography of retweeting and referencing—the average distance between a user and someone he or she retweets, as well as the average distance between that user and someone he or she simply references in a tweet. On average, the distance for a retweet was 1,115 miles and 1,118 for a reference. But, counterintuitively, there was a positive relationship between the number of times a given user retweeted or referenced another user and their distance: Pairs of users with just a handful of interactions, on the whole, were more likely to be closer together (500-600 miles apart) than those with dozens of retweets and references between them.
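Distances like these are computed from the coordinates attached to each tweet, typically with the standard haversine great-circle formula. A self-contained sketch (the formula and Earth radius are textbook values; applying it to user pairs this way is an illustration, not the paper’s code):

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

# New York to Los Angeles comes out to roughly 2,450 miles, which puts the
# study's ~1,100-mile average retweet distance in perspective.
print(round(haversine_miles(40.71, -74.01, 34.05, -118.24)))
```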
This indicates that users who live far apart are more likely to use Twitter to interact on a regular basis. One explanation might be that the entities with the most followers—and thus the most references and retweets—are often celebrities, organizations or corporations, users that people are familiar with but don’t actually have a personal relationship with. A global map of retweets between users is below:
The paper went into even more detail on other data associated with tweets: the ratio between mainstream news coverage and number of tweets in a country (Europe and the U.S. get disproportionate media coverage, while Latin America and Indonesia are overlooked), the places Twitter has added the most users recently (the Middle East and Spain) and the places where users have, on average, the most followers (South America and the West Coast).
There are a few caveats to all this data. For one, though the tweets analyzed number in the tens of millions, they are still just 0.3 percent of all tweets sent, so they might not adequately represent all Twitter patterns, especially if users who enable geotagging behave differently than others. Additionally, in the fast-changing world of Twitter, some trends might have already changed significantly since last fall. But as Twitter continues to grow and as more data become available, it stands to reason that this sort of analysis will only become more popular for demographers, computer scientists and other researchers.
June 29, 2012
Imagine that a debilitating illness, stroke or accident has left you entirely paralyzed. You’re fully conscious but unable to move or even communicate with those around you. People in this condition—known as Locked-in Syndrome—suffer greatly, locked in their own minds, appearing superficially to be in a persistent vegetative state despite a full inner life.
A new device, described in a paper published yesterday in the journal Current Biology, may offer hope to those locked-in: a new use of fMRI technology to read minds. The experimental setup allows individuals to “type” 27 symbols (26 letters and a space) without saying a word or moving a muscle, but rather by simply engaging in different thought patterns. The system could someday provide a practical means of daily communication for those who are unable to move.
According to Scientific American, the lead author of the study, Bettina Sorger of Maastricht University in the Netherlands, first started thinking about communicating with paralyzed patients after an experience she had about 10 years ago while working as a nurse. A patient who was recovering from anesthesia in the intensive care unit and seemed incapable of movement or speech suddenly tried to choke her. Then, a week later, he ran into Sorger while fully conscious and promptly apologized. She was stunned to realize that although he had little control over his movements while partially anesthetized, he was fully conscious and could even remember his actions a week later. Perhaps there could be some way to enable patients in such a situation to communicate via mental activity alone, she thought.
Now Sorger is a researcher in neurocognition, and she and her colleagues have created a proof-of-concept device that could someday be used for those either temporarily or permanently paralyzed to achieve this goal. In the study, six healthy adults learned how to answer questions by mentally “typing” individual letters on a computer screen.
The participants first underwent one hour of training to learn how to pick various letters by distinct thought patterns. While lying inside a functional magnetic resonance imaging (fMRI) machine—which precisely measures activity in different parts of the brain by detecting the amount of blood flow—they stared at a table that included all 26 letters and a symbol for space. The letters were arranged in three rows, and each row was associated with a different type of mental task: motor imagery (such as tracing a shape in the mind), mental calculation (such as performing a multiplication problem), or inner speech (such as silently reciting a piece of text). Additionally, different columns of letters were illuminated on the screen at different times for different durations, in a consistent sequence.
To select a letter, participants waited for that letter’s column to light up, and then performed the specific type of mental task associated with that letter’s row for as long as the letter stayed lit. For example, to select the letter ‘L’ in the graph below, the participant would wait through the 10-second onset delay until the column lit up, then would perform a mental calculation for a full 10 seconds, until the column with ‘C’, ‘L’ and ‘U’ dimmed. If they continued the mental task for 20 seconds instead of 10, an ‘M’ would be detected.
Because the fMRI machinery is capable of distinguishing among the mental activity patterns that fit each of the three tasks, and can also track when and for exactly how long the task was mentally performed, the system could use both of these parameters to figure out which letter the participant meant to select. The participants painstakingly “typed” out letters to answer a variety of questions, such as “what is your name?” and “what movie did you see last?”
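The selection scheme can be made concrete with a short sketch. The grid layout, slot length and decoding function below are hypothetical stand-ins for the study’s actual design, but they capture the row-by-task, column-by-timing logic:

```python
# A hypothetical 3x9 letter grid: each row corresponds to one mental task and
# each column to one illumination slot (the study's real layout may differ).
GRID = [
    "ABCDEFGHI",   # row 0: motor imagery
    "JKLMNOPQR",   # row 1: mental calculation
    "STUVWXYZ ",   # row 2: inner speech (last cell is the space symbol)
]
SLOT_SECONDS = 10  # assumed length of one illumination slot

def decode_letter(task_row, onset_s, duration_s):
    """Recover a letter from the decoded task (the row) and the timing (the column).

    onset_s is when the mental task began relative to the trial start;
    sustaining the task one slot longer shifts the selection one column right.
    """
    column = onset_s // SLOT_SECONDS + duration_s // SLOT_SECONDS - 1
    return GRID[task_row][int(column)]

# A task decoded as "mental calculation" starting at 10 s and held for 10 s:
print(decode_letter(1, 10, 10))  # → K
```

Note how holding the same task for 20 seconds instead of 10 would shift the selection one column over, just as the ‘L’-versus-‘M’ example in the study describes.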
The system was able to accurately determine the first letter of each response a respectable 82 percent of the time. But an innovative use of context-dependent text recognition software—the same type that enables your smartphone to figure out you meant to type “great” instead of “grear”—meant that the first letter was correctly identified 95 percent of the time once the second letter was taken into account, and 100 percent of the time once the third one was typed.
Although the system required a bulky fMRI machine and was performed with healthy participants, it’s easy to imagine how this could eventually be adapted to be used for those unable to speak or move. The grid could even be changed based on the context, with, say, pictures of food choices or room temperatures presented instead of letters. With practice, the process of choosing letters could likely be accelerated, too, and entire commonly used words could even be selected instead of letters.
This new system joins a number of other mind-reading interface devices developed in recent years to give locked-in patients the chance to communicate. As these systems are improved and refined, we might see them in hospitals such as the one Sorger once worked in to let these patients literally share their thoughts. “Even if one person benefits,” she says, “I would be very happy.”
August 11, 2011
In the new movie Rise of the Planet of the Apes, the leader of the ape revolution can talk. In the real world, apes can’t speak; they have thinner tongues and a higher larynx, or voice box, than people, making it hard for them to pronounce vowel sounds. But that doesn’t necessarily mean they don’t have the capacity for language—sign language, after all, doesn’t require any vocalization.
Over the years, researchers have succeeded—and failed—in teaching apes to use language. Here’s a look at some of the more famous “talking” apes.
Viki: Viki, a chimpanzee, came closest to being a real talking ape. In the late 1940s and early 1950s, Keith and Catherine Hayes of the Yerkes Laboratories of Primate Biology, then located in Orange Park, Florida, adopted Viki and raised her at home as if she were a human baby. With the Hayeses moving her lips for her, Viki learned to utter “mama.” Eventually, with much difficulty, she managed to say three other words—papa, cup and up—on her own. Viki’s tenure as a talking ape didn’t last long; she died at the age of seven of viral meningitis.
Washoe: In the 1960s, psychologists Allen and Beatrix Gardner of the University of Nevada, Reno recognized that chimpanzees naturally gesture a lot and thought chimps would be well suited for sign language. In 1966, they started working with Washoe. Later, psychologists Roger and Deborah Fouts, now retired from Central Washington University, continued the work. By the end of Washoe’s life in 2007, she knew about 250 signs and could put different signs together to make simple combinations like “Gimmie Sweet” and “You Me Go Out Hurry.” Washoe’s adopted son Loulis also learned to sign—by watching his mother. He was the first ape to learn signs from other apes, not humans. For more on Washoe’s life, read Roger Fouts’ Next of Kin.
Nim: After the success with Washoe, psychologist Herbert Terrace of Columbia University decided to replicate the project. At first, Nim—full name Nim Chimpsky, named after linguist Noam Chomsky who thought language was unique to humans—was raised in a human household. (Washoe had been treated like a person too but had her own trailer.) Later, Nim was removed from the family and his language lessons moved to a lab on Columbia’s campus. In the end, Terrace concluded Nim never really learned language; he had merely been trained to imitate his teachers to get rewards. The sad story of Nim’s life after the project ended is told in the new documentary Project Nim.
Chantek: Chimpanzees are not the only talking apes. In 1978, anthropologist Lyn Miles of the University of Tennessee at Chattanooga began studying an orangutan named Chantek. During eight years of study, Chantek learned 150 signs. He also showed signs of being self-aware: he could recognize himself in a mirror. Today, you can visit Chantek at Zoo Atlanta, his home since 1997.
Koko: Koko the gorilla is probably best known for her love of kittens and Mr. Rogers (and maybe less well-known for her encounter with Captain James T. Kirk). Koko’s sign-language training began in 1972 with then-graduate student Francine (Penny) Patterson of Stanford University. According to the Gorilla Foundation, Koko knows 1,000 signs and understands spoken English. It also claims the gorilla has an IQ somewhere between 70 and 95 (the average human IQ is 100). Critics, however, remain skeptical about some of Koko’s supposed abilities, given the lack of recent scientific publications supporting the claims (PDF).
Kanzi: Kanzi, a bonobo, doesn’t use sign language; he uses different combinations of lexigrams, or symbols, to communicate. In the early 1980s, psychologist Sue Savage-Rumbaugh, then of Georgia State University, was trying to teach Kanzi’s mom, Matata, to use the lexigrams; instead, Kanzi was the one who mastered the symbols. Kanzi understands spoken English and knows close to 400 symbols. When he “speaks,” his lexigram usage follows rules of grammar and syntax, according to researchers at the Great Ape Trust in Iowa, where Kanzi now resides. Kanzi is also an accomplished stone-tool maker.