January 14, 2013
Beneath massive communication towers, fallen bird bodies pile up like confetti. They collide with the steel structures—which can reach heights twice that of the Empire State Building—or fly into the miles of cables radiating around the beacons. Each year, nearly 7 million birds lose their lives to these web-like traps of wire and metal—27 times more birds than were killed in the infamous 1989 Exxon Valdez spill.
The killing season peaks during the time nocturnal migratory birds make their way between Canada and the U.S. Flying in the darkness, they spot the tower lights, become disoriented and begin circling the beams. After a storm, when natural navigational cues like the stars or moon are obscured, mortalities are particularly high.
While the magnitude of casualties is worrying, until now researchers did not know whether the avian victims were species of conservation concern or just common sparrows. Research recently published in the journal Biological Conservation, however, confirms scientists’ fears. Members of thirteen threatened North American species succumb each year to the towers. The fallen birds represent between 1 and 9 percent of those species’ total population numbers.
“Certain species of birds, including many already in decline, are killed at communication towers in far greater proportions than their abundance would suggest,” said lead author Travis Longcore, the science director of the Urban Wildlands Group and an associate professor of research at the Spatial Science Institute at the University of Southern California, in an email. “And it’s not just these thirteen species we have to worry about—they’re just the ones being killed at the highest rates,” he continued. “Many more species of concern are killed at lower rates, too.”
To figure out mortality by species and regions, Longcore and his co-authors constructed a database of species deaths based on verifiable, available records. Then, they calculated the mean proportion of each species killed and compared those statistics with overall mortality rates for each species’ total population in the U.S. and Canada.
All in all, they found, 97 percent of the birds being killed are passerines, or songbirds. Among the threatened birds that are dying are the Yellow Rail, with 2,200 annual mortalities, representing 8.9 percent of the species’ total population; the Golden-winged Warbler, with 5,300 annual deaths, representing 2.5 percent of the population; and the Swainson’s Warbler, with 7,500 annual deaths, representing 8.9 percent of the population. Other species, though not currently of conservation concern, still suffer formidable losses. Red-eyed Vireos, for example, relinquish 581,000 lives to communication towers each year, and around 499,000 Ovenbirds die this way, too.
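The arithmetic behind these percentages is straightforward to check. In the sketch below, the annual death counts come from the figures above, while the total-population numbers are back-calculated for illustration and are not values quoted from the paper:

```python
# Back-of-the-envelope check of tower mortality as a share of population.
# Annual deaths are from the study as reported above; the total-population
# figures are illustrative back-calculations, not numbers from the paper.
deaths = {
    "Yellow Rail": 2_200,
    "Golden-winged Warbler": 5_300,
    "Swainson's Warbler": 7_500,
}
population = {
    "Yellow Rail": 24_700,             # roughly 2,200 / 0.089
    "Golden-winged Warbler": 212_000,  # roughly 5,300 / 0.025
    "Swainson's Warbler": 84_300,      # roughly 7,500 / 0.089
}

for species, killed in deaths.items():
    pct = 100 * killed / population[species]
    print(f"{species}: {killed:,} deaths/yr = {pct:.1f}% of population")
```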
Last year, the same team found that around 1,000 of the towers, used for television and radio broadcast, are responsible for 70 percent of the bird deaths. Those 1,000 towers, the team noted, stand 900 feet or higher, representing the largest of North America’s 70,000-odd communication towers included in the original study. In their follow-up study, they identified the deadliest sites, which are in Texas, Louisiana, Florida and the Midwest. The findings are no surprise; the Southeastern coastal plain and the Midwest regions contain the highest concentrations of the tallest towers on the continent.
The Migratory Bird Treaty Act of 1918 makes it illegal to kill migratory birds in the U.S., so the researchers hope their findings may be used to better regulate communication towers. Eliminating the steady-glow red lights from the towers and replacing them with blinking lights—the same fix adopted by the Federal Aviation Administration—may reduce bird mortality by 50 to 70 percent.
The study also carries another lesson, Longcore said. Simply counting up the total number of birds killed by wind turbines, cats, windows, pesticides or communication towers across the country and then making crude comparisons between mortality sources can be misleading, he pointed out. The most impactful data—the types of species killed, and where, and when and how—often lurk beneath those surface figures. “Simple estimates of total ‘bird’ mortality are insufficient; it matters which species are being killed,” he said. “Each mortality source may be significant, but for different species and in different places.”
January 9, 2013
El Niño, the climate pattern that increases Pacific Ocean surface temperatures every three to seven years, has long been known to pummel the Sierra Nevada with snow, limit Peruvian anchovy fishermen’s harvest and bless the Hawaiian Islands with dry, beach-friendly weather. Scientists have yet to reach a consensus, though, on whether the effects of El Niño have grown more extreme in recent decades as climate change has intensified. But now, new research released last week, sponsored by the National Science Foundation and published in Science, strengthens the link between El Niño activity and climate change.
During an El Niño season (the next one has been delayed, but is expected to begin later this year) the force of trade winds in the western and central Pacific diminishes or even reverses, causing a spike in surface water temperatures. As the slackened winds allow–or the reversed winds slowly push–the warmer water east across the ocean, rainfall follows it.
El Niño and its cold-water counterpart La Niña, which occurs between El Niño episodes when the regular trade winds intensify their westward push, have global ramifications. Wildfires in Australia and famines in India have been associated with the climate pattern. The cycle of El Niño and La Niña also appears to have intensified in recent years. Searching for reasons why, scientists debated a link with climate change as long ago as 1997, when researchers at the National Center for Atmospheric Research published a study titled “El Niño and Climate Change.” They couldn’t identify a clear connection, but they believed there was an unidentified force at work–one that required further investigation. “[A]t least part of what is happening… can not be accounted for solely by natural variability,” they wrote.
A year later, experts at the Nevada-based Western Regional Climate Center, which disseminates climate data and conducts research, also contemplated whether global warming was goosing El Niño. They were more overtly suspicious of a linkage, but again, lacked specific evidence. In a post on the center’s website, they noted:
It is plausible that a warmer earth would produce more and stronger El Niños. There is some evidence that the earth has warmed over the past two decades, and there is no doubt that El Niño has been much more frequent in that time. If the evidence of a warming earth is taken at face value (not universally accepted), there still remains a wide spectrum of opinions on whether we are seeing a manifestation of human modification of global climate, or whether the natural climate system would be exhibiting this behavior anyway.
In the new study, conducted by the Georgia Institute of Technology and the Scripps Institution of Oceanography, scientists traveled to the central tropical Pacific, where the variations in El Niño-driven temperature and precipitation patterns are most acute. Studying the region’s coral gave them a window into the historical effects of El Niño.
They extracted core samples from large coral rocks that had been pushed by storm activity onto Christmas (Kiritimati) and Fanning Islands, tiny spits of land within Kiribati’s Northern Line Islands. Using radioactive dating, they ascertained the ages of 17 samples, each of which spanned 20 to 80 years in time, allowing them to create a patchwork timeline covering 7,000 years.
Then they looked at the ratio of oxygen isotopes within the coral skeletons as a way of measuring variations in weather patterns. Since temperature and rainfall affect isotope ratios, they were able to glean the environmental conditions present during each phase of the corals’ lifespans. Dips and surges in rain and sea surface temperatures left an imprint in the coral samples, and in their analysis, scientists found significantly more intense and variable El Niño activity in the 20th century than in most other periods represented.
“The level of [El Niño] variability we see in the 20th century is not unprecedented,” said the study’s lead author, Georgia Institute of Technology’s Kim Cobb in a statement, noting a similarly severe period in the 17th century. “But the 20th century does stand out, statistically, as being higher than the fossil coral baseline.”
The researchers reluctantly went a step further to connect the increase in El Niño activity to climate change: “We kind of answered the question, is El Niño changing with respect to recent natural variability?” said Cobb. “The answer is yes, tentatively so.” Yet despite the bounty of new data, researchers say they would need to go back even further in time to make a more definitive linkage between climate change and El Niño activity.
They were less ambiguous about the impact of the study on future climate change research. The new data will help other scientists investigate past climate change events in both paleoclimate records and model simulations, Cobb said. “Prior to this publication, we had a smattering of coral records from this period of interest,” she explained. “We now have tripled the amount of fossil coral data available to investigate these important questions.”
January 8, 2013
Despite a lack of evidence in the scientific literature, we’ve seen aspartame—the calorie-free artificial sweetener found in diet drinks—blamed for a variety of health problems, everything from multiple sclerosis to migraine headaches. But here’s a new one: clinical depression?
In a preliminary release of a study to be published by Honglei Chen and colleagues from the National Institutes of Health, a survey of 263,925 adults nationwide indicated that consumption of sweetened drinks—especially diet sodas—was associated with an increased chance of a depression diagnosis. The authors, who will present their work at the American Academy of Neurology’s annual meeting in March, released only a summary of their study today.
To come to the findings, the researchers combined old data with new. They began by examining a survey originally conducted in 1995 and 1996 in which adults between the ages of 50 and 71 recorded their daily soda, tea, fruit punch and coffee consumption. Then, for this study, they returned to the same survey participants more than a decade later and asked if they had been diagnosed with depression in the years since 2000.
They found that those who drank four or more cans of sweetened drinks a day (whether soda, diet soda or fruit punch) had a significantly higher chance of being among the 11,311 study participants who were later diagnosed with clinical depression than those who didn’t. For sodas as a whole, there was a 30 percent greater chance of depression, but diet sodas carried a further 22 percent increase as compared to regular ones. Interestingly, regular coffee consumption was associated with a 10 percent lower chance of depression.
Does this mean you should stop drinking diet Coke and start chugging coffee immediately? Probably not. This type of suggested link between two seemingly unrelated factors is an ideal time to bring up the difference between causation and correlation. Do the ingredients in both diet sodas and normally-sweetened drinks trigger changes in brain chemistry that lead to depression? Or are people with the tendency to become depressed simply more likely to drink these beverages in the first place?
Without the full paper, it’s hard to know for sure—we don’t know if the study’s authors controlled for all relevant factors, making sure to compare study participants who were alike in all ways except for their beverage consumption. As a result, a third, unrelated factor may cause people to both drink more soda and become depressed more frequently. Since the study is backward-looking, it’s especially hard to rule this out: The researchers can’t go back to 1996 and make sure to ask the participants every potentially relevant question to ensure that all potentially important factors have been taken into consideration.
Additionally, the fact that an association was found for both regular and diet sodas makes a causative link seem less likely. For that to be the case, either both sugar and aspartame must trigger depression, but at different frequencies, or a third ingredient in both sodas is responsible, but is somehow modulated by the presence of the natural or artificial sweetener.
So, what’s the simplest explanation? Those who like to drink a lot of diet soda are more likely to already be at risk of developing depression. And people who like to drink a lot of coffee are already less likely to be among this group. Perhaps, then, your underlying preference for how you get your energy buzz—whether through coffee or sweet drinks—may reflect something about your mental state.
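The confounding scenario described above can be made concrete with a small simulation. Everything here is hypothetical: a hidden risk factor raises both the chance of drinking soda and the chance of depression, soda itself has no causal effect at all, and yet soda drinkers still show a higher depression rate:

```python
import random

random.seed(0)

# Hypothetical simulation: a hidden risk factor raises BOTH the chance of
# drinking lots of soda AND the chance of depression. Soda never enters the
# depression probability, yet drinkers still show more depression.
n = 100_000
depressed_counts = [0, 0]  # depressed among [non-drinkers, drinkers]
totals = [0, 0]

for _ in range(n):
    at_risk = random.random() < 0.2                            # hidden confounder
    drinks_soda = random.random() < (0.6 if at_risk else 0.3)  # risk -> more soda
    depressed = random.random() < (0.15 if at_risk else 0.05)  # depends only on risk
    totals[drinks_soda] += 1
    depressed_counts[drinks_soda] += depressed

rate_soda = depressed_counts[1] / totals[1]
rate_none = depressed_counts[0] / totals[0]
print(f"depression rate, soda drinkers: {rate_soda:.3f}")
print(f"depression rate, non-drinkers:  {rate_none:.3f}")
# Drinkers come out noticeably higher -- a pure confounding artifact.
```

A study that could measure the hidden factor and control for it would see the apparent soda effect vanish; without that measurement, the correlation is indistinguishable from a causal link.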
November 14, 2012
More than 40 countries around the world force cigarette companies to print graphic images of things like decaying teeth, open-heart surgeries and cancer patients on their packs, in an effort to discourage smoking by directly linking cigarettes with their most gruesome effects. The United States, however, is not one of these countries: The U.S. Food and Drug Administration unveiled graphic designs in November 2010, but repeated lawsuits by the tobacco industry have delayed implementation of the new warnings.
If and when the labels do hit, the images could go a long way towards continuing the decline in smoking rates across the country. That’s because, as new research demonstrates, seeing these images every time a person reaches for a pack is a more effective deterrent than a text-only warning. The research also indicates that the graphic warnings are especially powerful in discouraging low-health literacy populations from smoking—the one group in which smoking rates have remained stubbornly high over the past few decades.
The study, published yesterday in the American Journal of Preventive Medicine [PDF], was conducted by James Thrasher of the University of South Carolina and colleagues. A control group of 207 smokers saw text-only warning labels, while 774 smokers evaluated nine different graphic labels, both images proposed by the FDA and a selection of others currently used in foreign countries.
The smokers were asked to judge each label on a scale of one to ten for credibility, relevance and effectiveness. The results were unequivocal: The text-only warnings’ average ratings were mostly in the fives and sixes, while simpler text messages combined with striking graphics scored in the sevens and eights across the board.
These differences were especially large for the group the researchers called low-health literacy smokers–people with less education who are less likely to be knowledgeable about the risks of smoking. This group gave much higher ratings for credibility, in particular, to the labels that showed them the health problems that arise from smoking, rather than text labels that merely told them. “The present study provided the first direct test of the hypothesis that pictorial health warning labels work better than text-only labels among people with low health literacy,” Thrasher said in a statement.
Among the labels with images, the study compared three different types: graphic (those that directly showed body parts damaged by smoking), human suffering (those that showed someone in a hospital bed, for example) and symbolic (more abstract images, such as a gravestone). Perhaps unsurprisingly, the first category was consistently rated as the most effective in discouraging smoking. It seems nothing scares someone away from taking another puff more powerfully than a picture of what their teeth, lungs or throat will look like after a lifetime of smoking.
Thrasher feels that these types of findings should be taken into account when agencies such as the FDA design cigarette warning labels, to be sure they reach all demographics. “The FDA should consider implementing warning labels with more graphic imagery in order to maximize the impact of warnings across different populations of adult smokers, including more disadvantaged smokers,” Thrasher said.
October 1, 2012
Gallup’s annual poll of which professions are the most trustworthy doesn’t ask about scientists, but it’s safe to say that at the very least they’d rank far higher than the used car salespeople and members of Congress at the bottom.
At the same time, among the thousands of people globally who practice science and publish their results, some minority likely yield to the temptation to massage data to achieve attention-getting (and funding-friendly) results. In recent years, it has become politically useful for some to seize upon this possibility and allege deliberate scientific fraud. (Charges that man-made climate change is a widespread scientific conspiracy have only become more common since the so-called Climategate scandal of 2009, despite several investigations that have failed to find any evidence of fraud or scientific misconduct.)
But how often do scientists actually lie about their data? In other words, how much should we trust them?
The answer, at least according to a study published today in the Proceedings of the National Academy of Sciences, is that on the whole, scientists are a pretty honest group. In the paper, medical researchers from the University of Washington and elsewhere found that of the more than 25 million biomedical research-related articles published in the National Institutes of Health PubMed database that date back to the 1940s, 2,047 were retracted at some point since their publication. That’s less than 0.01 percent of all the papers in the database.
The researchers broke their results down further, attempting to attribute each retraction to a type of cause. By their accounting, 21.3 percent were due to honest error, such as unintentional misinterpretation of data. Meanwhile, 67.4 percent of the retractions could be attributed to some sort of misconduct, including fraud or fabrication (43.4 percent), plagiarism (9.8 percent) and duplicate publication (14.2 percent). When compared with the articles retracted before 1975, those retracted afterward were ten times more likely to be fraudulent, as opposed to an honest mistake.
The overall modest rate of fraud could explain why the authors of the blog Retraction Watch, which documents retracted papers, have encountered opposition. Some say that directing attention towards isolated cases of dishonesty disproportionately increases public mistrust in science as a whole. “The argument goes something like this,” they wrote in May in Lab Times. “Scientific fraud is rare, so focusing on misconduct gives a distorted picture of research that will only give ammunition to critics, who want to cast doubt on subjects such as climate change and vaccine safety.”
One response might be that we don’t actually know how rare fraud is, despite the 0.01 percent retraction figure from this new PNAS study. As the study’s authors note, in many cases an article might be suspect but a journal doesn’t have enough proof to actually retract it. In 2005, for example, The Lancet “expressed concern” about the results of a study that found a correlation between a Mediterranean diet and a reduced risk of heart disease, but they didn’t ultimately retract the paper.
Moreover, we have no way of knowing how many suspect data sets never even come to light. A fabricated data set might not prove replicable by other researchers, but in many cases, it’s doubtful this would prompt them to allege dishonesty. Historically, many cases of scientific fraud are exposed only by internal whistle-blowers.
Recent events, though, indicate that we might be entering an age in which scientific discoveries actually help us detect fraud, or at least some types of it. This past July, social psychologist Uri Simonsohn of the University of Pennsylvania garnered headlines by using an innovative statistical analysis to detect fabricated data in the work of social psychologist Dirk Smeesters, who had written a paper finding a positive effect for color on consumer behavior.
Simonsohn’s technique is complex but relies upon the fact that people are notoriously bad at faking sets of data characterized by the same sort of randomness that occurs in real events. Simonsohn told Nature that “The basic idea is to see if the data are too close to the theoretical prediction, or if multiple estimates are too similar to each other.”
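A toy version of that idea, not Simonsohn's actual algorithm, can be sketched as follows: simulate how much the means of several honest samples should scatter, then flag reported means that cluster far more tightly than sampling variation allows. All numbers below are invented for illustration:

```python
import random
import statistics

random.seed(1)

def spread(means):
    """Standard deviation across a set of group means."""
    return statistics.pstdev(means)

def suspicion_p(reported_means, n, sd, trials=10_000):
    """Fraction of honest simulations whose group means cluster at least as
    tightly as the reported ones. A tiny value means the reported means are
    'too similar' to be plausible sampling outcomes. (Toy illustration of
    the idea only, not Simonsohn's published method.)"""
    observed = spread(reported_means)
    grand_mean = statistics.mean(reported_means)
    hits = 0
    for _ in range(trials):
        sim_means = [
            statistics.mean(random.gauss(grand_mean, sd) for _ in range(n))
            for _ in reported_means
        ]
        if spread(sim_means) <= observed:
            hits += 1
    return hits / trials

# Three "reported" condition means, each from a sample of n=15 with sd=10.
honest = [49.1, 52.8, 47.6]    # plausible scatter for this n and sd
too_neat = [50.1, 50.0, 50.2]  # suspiciously identical

print(suspicion_p(honest, n=15, sd=10))    # unremarkable
print(suspicion_p(too_neat, n=15, sd=10))  # very small -> red flag
```

The standard error of each mean here is about 2.6, so three honest means almost never land within 0.2 of one another; the simulation makes that intuition quantitative.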
Soon after Smeesters’ resignation, Simonsohn made his algorithm public, encouraging researchers to publish their raw data and others to put it to the test. He hopes that the real possibility that any researchers tempted to manipulate their data could be caught will act as a powerful deterrent. This, in theory, would not only decrease the amount of fraud but would also increase the trust we can put in the products of science as a whole.