April 26, 2013
Your breath says a lot about you. Recent research has found that the chemicals present in each person’s breath can provide a unique “breathprint” that differs from person to person, while other scientists have worked on breathalyzer-like tests that can indicate the presence of a bacterial infection inside someone’s body.
In the decades since the 1960s, though, when the first electronic breathalyzer for blood alcohol content was developed, research has led to relatively little advancement in the use of chemical breath analysis for law enforcement purposes. Police have long been able to instantly test a person’s level of alcoholic intoxication by the side of a road, but testing for other drugs has required blood or saliva—substances that are more invasive to acquire and that typically have to be sent to a crime lab for processing. Both factors make it difficult to figure out who’s under the influence at, say, the scene of a car accident right after it occurs.
But new research suggests that the status quo might be changing in a hurry. A study published yesterday in the Journal of Breath Research reveals that scientists can now use breath analysis to test for the presence of 12 different drugs in the body, including cocaine, marijuana and amphetamines. Previous work has shown that such technology can reliably test for several of these drugs, and this new study is the first to detect the drugs alprazolam (commercially known as Xanax, useful for treating anxiety disorders) and benzoylecgonine (the main metabolite of cocaine, also used in some topical pain relievers). Members of the research group, from Sweden’s Karolinska Institutet, have already created a commercially available breath testing system, called SensAbues—and it’s easy to speculate that law enforcement across the U.S. (and around the world) would love to get their hands on such technology as soon as possible.
The research team, led by Olof Beck, conducted the new study by testing the breath of 46 individuals who were checked into a drug addiction emergency clinic, had taken drugs about 24 hours earlier and volunteered to participate in the study. Each participant exhaled about 20 deep breaths into the SensAbues filter (which takes 2-3 minutes), and the solid and liquid microparticles suspended in their breath were trapped on a disk for analysis.
These tiny quantities of microparticles reflect the substances in a person’s bloodstream, because small amounts of the molecules from our blood diffuse into the fluid that lines our lungs’ bronchioles and then into our breath. By isolating these particles and analyzing them with liquid chromatography and mass spectrometry, the research team was able to determine the drugs present in each person’s body with a decent level of accuracy.
They compared the results to blood and urine samples taken from each of the participants, as well as their own reports of what drugs they’d taken in the previous 24 hours, and on the whole, the tests performed pretty well—although some progress clearly still needs to be made. All 46 people had reported taking one of the 12 detectable illegal substances, and drugs were detected in the breath of 40 of them (87 percent). Most of the particular drugs detected matched with self-reports and blood tests, but 23 percent of the time, the breath tests also indicated the presence of a drug that hadn’t actually been taken. This level of accuracy was higher than previous studies the team has done, as they’ve slowly refined the system to cut down on false positives and improve the detection rate.
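The study’s headline figures amount to familiar classification arithmetic. As a minimal sketch, using only the two numbers reported above (the 40-of-46 detection rate and the 23 percent false-positive figure):

```python
# Figures reported in the study: drugs were detected in the breath
# of 40 of the 46 participants, all of whom had recently taken drugs.
detected, total = 40, 46

# In classification terms, this is the test's sensitivity (true-positive rate).
sensitivity = detected / total
print(f"detection rate: {sensitivity:.0%}")  # prints "detection rate: 87%"

# The study separately reports that 23 percent of the time the breath
# test flagged a drug that hadn't actually been taken (a false positive).
false_positive_rate = 0.23
print(f"false-positive rate: {false_positive_rate:.0%}")
```

Refining the system, as the team has been doing across successive studies, means pushing the first number up and the second down.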
Currently, using the SensAbues system would only allow officials to collect a sample and send it elsewhere for analysis. But the researchers say that advances in the cost and portability of chemical analysis could eventually allow for the same sort of roadside breath testing for drugs that we currently have for alcohol.
Another scientific hurdle is data: Unlike for alcohol, we still don’t know what a given quantity of drug molecules detected on someone’s breath means in terms of how much of the drug is actually in their bloodstream (although an accurate detection of any illegal substance might be all that law enforcement officials are after). We also don’t know how long traces of these drugs remain on a person’s breath, or how quickly they degrade.
If scientists are able to make some progress in figuring this all out, though—and if they can make the testing procedure more accurate—roadside drug tests could become a routine part of law enforcement protocol.
April 25, 2013
Google, as many researchers know well, is more than a search engine—it’s a remarkably comprehensive barometer of public opinion and the state of the world at any given time. By using Google Trends, which tracks how frequently particular search terms are entered into Google over time, scientists have found seasonal patterns, for example, in searches for information about mental illnesses and detected a link between searching behavior and a country’s GDP.
A number of people have also had the idea of using these trends to pursue a more basic desire: making money. Several studies in recent years have looked at the number of times investors searched for particular stock names and symbols and created relatively successful investing strategies based on this data.
A new study published today in Scientific Reports by a team of British researchers, though, harnesses Google Trends data to produce investing strategies in a more nuanced way. Instead of looking at the frequency that the names of stocks or companies were searched, they analyzed a broad range of 98 commonly used words—everything from “unemployment” to “marriage” to “car” to “water”—and simulated investing strategies based on week-by-week changes in the frequencies of each of these words as search terms by American internet users.
The changes in the frequency of some of these words, it turns out, are very useful predictors of whether the market as a whole—in this case, the Dow Jones Industrial Average—will go down or up (the Dow is a broad index commonly considered a benchmark of the overall performance of the U.S. stock market).
The strategy was relatively straightforward: The system tracked whether a word such as “debt” increased in search frequency or decreased in search frequency from one week to the next. If the term was suddenly searched much less frequently, the investment simulation bought all the stocks of the Dow on the first Monday afterward, then sold all the stocks one week later, essentially betting that the overall market would rise in value.
If a term such as “debt” was suddenly searched much more frequently, the simulation did the opposite: It bought a “short” position in the Dow, selling all its stocks on the first Monday and then buying them all a week later. The concept of a “short” position like this might seem a bit confusing to some, but the basic thing to remember is that it’s the exact opposite of conventionally buying a stock—if you have a “short” position, you make money when the stock goes down in price, and lose money when it goes up. So for any given term, the system predicted that more frequent searches meant the market as a whole would decline, and less frequent searches meant it would rise.
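The long/short rule above can be sketched as a tiny backtest. This is a conceptual illustration only, not the authors’ code: the weekly search volumes and Dow closes below are invented numbers, and reproducing the study would require real Google Trends data and Dow prices for 2004–2011.

```python
# Toy backtest of the Google Trends strategy described above:
# if searches for a term (e.g. "debt") fell week-over-week, go long
# the Dow for one week; if they rose, go short for one week.
# All numbers here are invented for illustration.

searches = [50, 48, 55, 53, 60, 52]      # weekly search volume (fake)
dow      = [100, 100, 103, 100, 104, 99] # weekly Dow closes (fake)

value = 1.0  # start with one unit of capital
for week in range(1, len(searches) - 1):
    weekly_return = dow[week + 1] / dow[week] - 1
    if searches[week] < searches[week - 1]:
        value *= 1 + weekly_return   # long: profit if the Dow rises
    else:
        value *= 1 - weekly_return   # short: profit if the Dow falls

print(f"final portfolio value: {value:.4f}")
```

With these made-up series the signal happens to call every week correctly, so the simulated portfolio ends above its starting value; on real data the profit, of course, depends entirely on how predictive the chosen term actually is.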
During the period of time studied (2004-2011), making investment choices based on a few of these words in particular would have yielded overall profits several times higher than a conservative investment strategy of simply buying and holding the stocks of the Dow for the entire time. For example, basing a strategy solely on the search frequency of the word “debt,” which turned out to be the single most profitable term in the study, would have generated a profit of 326% over the seven years studied—compared to a profit of just 16% if you owned all the stocks of the Dow for the whole period.
So if you systematically bought a “short” position in the market every time the word “debt” suddenly started getting searched more often, you’d have made a ton of money over the seven years studied. But what about other words? The system simulated how this strategy would have performed for each of the 98 words chosen, listed in the chart at right from most useful at predicting the movement of the markets (debt) to least useful (ring). As seen in the chart, for some of these terms the frequency with which we type them into Google seems to serve as a very effective early-warning system for declines in the market.
Stock market declines typically reflect investors’ overall belief that, at any given time, it’s better to sell stock than buy it, and they often happen suddenly, when investors move in a herd to a new position. The researchers therefore speculate that rises in a term’s search frequency convey a nascent feeling of concern about the market before it’s expressed via actual transactions. All these searches might also reflect countless investors in an information-gathering phase, seeking to find out as much as they possibly can about an industry or a stock before selling it.
Even beyond the practical investment strategies that this type of analysis might generate, simply looking through the words provides a striking—and oftentimes confusing—window into the collective American psyche. It’s seemingly obvious why a sudden increase in the number of people searching for the word “debt” might signal overall negative feelings about the market, and would likely precede a drop in stock values, and why “fun” might precede increases in the market. But why do searches for the words “color” and “restaurant” predict declines nearly as accurately as “debt”? Why do “labor” and “train” also predict stock market rises?
April 24, 2013
Who you’re physically attracted to might seem like a frivolous, random preference. In recent years, though, science has told us that our seemingly arbitrary tastes often reflect unconscious choices that are based upon very relevant biological traits.
In general, we find symmetric faces more attractive, likely because they reflect a healthy underlying genome. Women typically prefer men with more distinctively masculine facial features because they indicate high testosterone levels and physical strength, while men prefer women with exaggerated youthful features, possibly because of the evolutionary advantages a male gets when coupling with a younger mate.
Despite all this research into our visual appearances, though, scientists have done relatively little digging into our auditory preferences when it comes to sexual attraction. Why do we find certain people’s voices attractive—and why do we sometimes find other types of voices such a turn-off? Specifically, why do women generally prefer men with deep voices, and men prefer women with higher ones?
At least according to a paper published today in PLOS ONE, the explanation is relatively simple: It’s all about body size. Researchers from University College London found that, at least among a sample of 32 participants, high-pitched female voices were judged attractive because they indicated the speaker had a small body. Deep male voices, on the other hand, were judged as more attractive because they conveyed that the speaker had a large frame—but were found to be most attractive when tempered by a touch of “breathiness,” suggesting the speaker had a low level of aggression despite his large size.
The group, led by Yi Xu, figured this out by playing recordings of digitally manipulated voices to the participants. The males in the study heard a computer-generated female voice saying phrases such as “I owe you a yo-yo,” with the voice digitally altered in pitch, formant (the particular peaks and valleys in a sound’s frequency spectrum) and other qualities.
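The kind of digital voice manipulation described can be illustrated with a few lines of signal processing. The sketch below is not the study’s method (the authors altered pitch and formants independently, whereas simple resampling, as here, shifts both together, along with duration); it is just a minimal demonstration that resampling a waveform raises its dominant frequency by a chosen factor. The 220 Hz tone and 1.25x factor are arbitrary illustrative choices.

```python
import numpy as np

# Synthesize one second of a 220 Hz tone (a stand-in for a voice's
# fundamental frequency), then resample it 1.25x faster. Playing the
# samples back faster shifts the pitch up by the same factor, to 275 Hz.
sr = 8000                         # sample rate in Hz
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)

factor = 1.25                     # speed-up / pitch-shift factor
positions = np.arange(0, len(tone), factor)
shifted = np.interp(positions, np.arange(len(tone)), tone)

# Find the dominant frequency of the shifted signal with an FFT.
spectrum = np.abs(np.fft.rfft(shifted))
freqs = np.fft.rfftfreq(len(shifted), 1 / sr)
peak = freqs[spectrum.argmax()]
print(f"dominant frequency: {peak:.1f} Hz")  # expect ~275 Hz
```

Production-quality voice manipulation of the sort used in perception studies relies on vocoder techniques that can move pitch and formants separately, but the principle of reshaping a recorded waveform digitally is the same.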
The specific manipulations either conveyed a smaller body size or a larger one, based upon previous research that matched various voice qualities with different body sizes in humans. When asked to rate the voice’s attractiveness on a 1 to 5 scale, the men preferred the voices that suggested a smaller female. Past a certain point, though, higher voices were judged as no more attractive than slightly deeper ones. Listen to the most and least attractive (both admittedly creepy) voices below:
The female participants’ voice preferences were similar, but slightly more nuanced. On the whole, they preferred deeper voices, which signaled a large body size, but another trait was also crucial: “breathiness.” The researchers hypothesized that this breathiness effectively takes the edge off a voice, making a man with a presumed large frame seem less aggressive and angry. They also polled the participants on whether they thought the simulated voices sounded angry or happy, and the breathy deep male voices were generally perceived as much happier and less angry than the less breathy (i.e. “pressed”) deep ones. Listen to the most and least attractive male voices below:
Beyond explaining the popularity of Barry White, the researchers say these findings correspond to much of what we know about voice preferences in the rest of the animal kingdom. Birds and other mammals, it turns out, have long been known to advertise their physical characteristics via the sound qualities in their mating calls.
All this points to an obvious question, though: Why would males prefer smaller females, and females prefer larger males in the first place? The researchers don’t attempt to address this question, but this duality reflects the sexual dimorphism present in most animal species. These differences generally result from sexual selection giving incentive to different mating strategies—so in this case, our voice preferences suggest that women benefit, in evolutionary terms, by mating with larger, but less aggressive men, while males benefit from mating with smaller females.
At the same time, what we commonly consider attractive varies dramatically over time and location—for example, dozens of prehistoric “Venus figurines,” discovered all over the world, portray extremely voluptuous female figures. So, if we tested the preferences of all humans throughout history, we might find a less obvious trend. This preference for small-voiced females and big-voiced males, then, might simply be an artifact of our contemporary cultural concepts of “attractiveness,” rather than a deep-seated evolutionary choice after all.
April 23, 2013
If, while watching WALL-E, your heart broke just a little bit when you saw the title character desperately travel across outer space in search of true love, it doesn’t mean you’re crazy. Sure, WALL-E is a robot. But its cute, anthropomorphized look and all too human desire to end its loneliness made us subconsciously forget that it is not human.
The ability to forget that key point wasn’t just a matter of clever storytelling. New research shows that, at least in a small sample of people tested, the same neural patterns that occur when we feel empathy for a human onscreen are present in our brains when we see a robot onscreen.
A group of researchers from the University of Duisburg-Essen in Germany used fMRI (functional magnetic resonance imaging) to come to the finding, tracking blood flow in the brains of 14 study participants as they were shown videos of humans, robots and inanimate objects being treated either affectionately or harshly. The researchers, who will present their findings at the June International Communication Association conference in London, found that when participants were shown videos of a robot (a product called Pleo, which resembles a dinosaur) being petted, tickled and fed, areas in their limbic structures—a region of the brain believed to be involved in emotional responses—activated. When they were shown videos of a human getting a massage, the same sorts of neural activity occurred.
The same pattern also occurred when the participants were shown videos of the robots and humans being treated harshly—shaken, dropped or suffocated with a plastic bag—but with a twist. Interestingly, their fMRI results showed levels of limbic activity much greater when they saw humans treated poorly than when they saw the robots. This correlated with the responses on surveys that the participants took after watching the videos, on which they reported some empathy for the robots, but more for the humans.
The results suggest that the reason we feel empathy for robots like WALL-E is that, when we see them treated in a certain manner, it triggers the same sort of neural activity as seeing a human treated that way. In a sense, our mind interprets the robot to be human-like in a way that it doesn’t for, say, a rock. On the other hand, one possible explanation for why, despite this pattern, they still arouse less empathy than humans when being treated harshly is that we interpret them as something slightly less than human—something more like a pet.
Of course, this explanation comes with an important caveat: correlation vs. causation. We don’t know for sure that these neurological patterns cause empathy, per se, just that they reliably occur at the same time. (Further, we can’t say for sure that this effect is unique to robots—stuffed animals and dolls might engender the same feelings of empathy.)
Even if the patterns only correlate with empathy, though, they could be an effective objective measure of how much empathy people feel when observing various types of robots—and research into that area has practical implications that go far beyond Hollywood. One of the main areas, the scientists say, is in the engineering of robots that engage with humans on a frequent and long-term basis.
“One goal of current robotics research is to develop robotic companions that establish a long-term relationship with a human user, because robot companions can be useful and beneficial tools. They could assist elderly people in daily tasks and enable them to live longer autonomously in their homes, help disabled people in their environments, or keep patients engaged during the rehabilitation process,” Astrid Rosenthal-von der Pütten, the study’s lead author, said in a press statement. “A common problem is that a new technology is exciting at the beginning, but this effect wears off especially when it comes to tasks like boring and repetitive exercise in rehabilitation. The development and implementation of uniquely humanlike abilities in robots like theory of mind, emotion and empathy is considered to have the potential to solve this dilemma.”
In one previous long-term study, two out of six elderly participants appeared to develop emotional attachments with a companion robot—giving it a name, speaking to it and at times even smiling at it—while the other four did not. Further exploring our empathy for robots and figuring out just which of their characteristics (whether physical, such as having a human-like face, or behavioral, such as smiling or walking on two legs) lead more people to feel for them could help engineers design robotic devices that elicit more empathy over the long term—and devices that people can readily connect with on an emotional level might make more effective rehab coaches and home companions.
April 22, 2013
Over the past few decades, researchers have developed biofuels derived from a remarkable variety of organisms—soybeans, corn, algae, rice and even fungi. Whether synthesized into ethanol or biodiesel, though, all of these fuels suffer from the same limitation: They have to be refined and blended with heavy amounts of conventional, petroleum-based fuels to run in existing engines.
Though this is far from the only current problem with biofuels, a new approach by researchers from the University of Exeter in the UK appears to solve at least this particular issue in one fell swoop. As they write today in an article in Proceedings of the National Academy of Sciences, the team has genetically engineered E. coli bacteria to produce molecules that are interchangeable with the ones in diesel fuels already sold commercially. The products of these bacteria, if generated on a large scale, could theoretically go directly into the millions of car and truck engines currently running on diesel worldwide—without the need to be blended with petroleum-based diesel.
The group, led by John Love, accomplished the feat by mixing and matching genes from several different bacteria species and inserting them into the E. coli used in the experiment. These genes each code for particular enzymes, so when the genes are inserted into the E. coli, the bacteria gain the ability to synthesize these enzymes. As a result, they also gain the ability to perform the same metabolic reactions that those enzymes carry out in each of the donor bacteria species.
By carefully selecting and combining metabolic reactions, the researchers built an artificial chemical pathway piece-by-piece. Through this pathway, the genetically modified E. coli growing and reproducing in a petri dish filled with a high-fat broth were able to absorb fat molecules, convert them into hydrocarbons and excrete them as a waste product.
Hydrocarbons are the basis for all petroleum-based fuels, and the particular molecules they engineered the E. coli to produce are the same ones present in commercial diesel fuels. So far, they’ve only produced tiny quantities of this bacterial biodiesel, but if they were able to grow these bacteria on a massive scale and extract their hydrocarbon products, they’d have a ready-made diesel fuel. Of course, it remains to be seen whether fuel produced in this way will be able to compete in terms of cost with conventional diesel.
Additionally, energy never comes from thin air—and the energy contained within this bacterial fuel mostly originates in the broth of fatty acids that the bacteria are grown on. As a result, depending on the source of these fatty acids, this new fuel could be subject to some of the same criticisms leveled at biofuels currently in production.
For one, there’s the argument that converting food (whether corn, soybeans or other crops) into fuel causes ripple effects in the global food market, increasing the volatility of food prices, as a UN study from last year found. Additionally, if the goal of developing new fuels is to fight climate change, many biofuels fall dramatically short, despite their environmentally-friendly image. Using ethanol made from corn (the most widely used biofuel in the U.S.), for example, is likely no better than burning conventional gasoline in terms of carbon emissions, and may actually be worse, due to all the energy that goes into growing the crop and processing it into fuel.
Whether this new bacteria-derived diesel suffers from these same problems largely depends upon what sort of fatty acid source is eventually used to grow the bacteria on a commercial scale—whether it would be synthesized from a potential food crop (say, corn or soy oil), or whether it could come from a presently overlooked energy source. But the new approach already has one major advantage: The refining steps that other biofuels require before they can be used in engines themselves consume energy and generate carbon emissions. By skipping these steps, the new bacterial biodiesel could be an energy-efficient fuel choice from the start.