August 23, 2013
This distressing situation nonetheless presents scientists with an opportunity. Because climate change is so widespread, it can be studied by examining a tremendous range of data. Many of these data are collected from satellite images, extracted by analyzing ice cores or found by sifting through atmospheric temperature records. But some come from rather more unorthodox sources. In no particular order, here’s our rundown of 5 unusual ways scientists are currently studying the changing climate:
1. Fossilized Urine
The hyrax—a small, herbivorous mammal native to Africa and the Middle East—has a pair of uncommon habits. The animals tend to inhabit the same cracks in rock for generations, and they also like to urinate in the exact same spot, over and over and over again. Because their urine contains traces of leaves, grasses and pollen, the layers of dried urine that build up and fossilize over thousands of years have given a team of scientists (led by Brian Chase of Montpellier University) a rare look at ancient plant biodiversity and how it’s been affected by broader changes in climate.
Further, the nitrogen in the urine—an element that’s long been important to those who utilize the scientific properties of pee—along with its carbon content tells an important story as layer after layer of the desiccated substance, called hyraceum, is analyzed. In drier times, plants are forced to incorporate heavier isotopes of these elements into their tissues, so urine layers containing an abundance of heavy isotopes indicate that the hyraxes relieved themselves after ingesting relatively parched plants. Stacked layers of the excretions thus allow scientists to track humidity through time.
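The logic of reading a stacked record like this can be sketched in a few lines of code. The delta-15N values below are hypothetical, illustrative numbers (not data from Chase’s team), and the normalization is a deliberately simple stand-in for the calibration real paleoclimate work requires:

```python
# Sketch of inferring a relative aridity record from heavy-isotope
# enrichment in stacked hyraceum layers. Values are hypothetical.

layers = [
    # (depth_cm, delta_15N_permil) -- deeper layers are older
    (0, 4.1),
    (10, 6.8),
    (20, 9.2),
    (30, 5.5),
]

# Higher heavy-isotope enrichment implies the plants the hyraxes ate grew
# under drier conditions, so rank each layer against the record's range.
values = [d15n for _, d15n in layers]
lo, hi = min(values), max(values)

for depth, d15n in layers:
    aridity = (d15n - lo) / (hi - lo)  # 0 = wettest layer, 1 = driest
    print(f"depth {depth:2d} cm: d15N = {d15n:.1f} permil, "
          f"relative aridity = {aridity:.2f}")
```

The result is exactly the kind of humidity-through-time curve the paragraph above describes, with depth standing in for age.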
“Once we have found a good layer of solid urine, we dig out samples and remove them for study,” Chase told The Guardian in an article about his unusual work. “We are taking the piss, quite literally—and it is proving to be a highly effective way to study how climate changes have affected local environments.” His team’s most valuable data set? One particular pile of fossilized urine that has been accreting for an estimated 55,000 years.
2. Old Naval Logbooks
Few people care more about the weather than sailors. Old Weather, a citizen science project, hopes to take advantage of that fact to better understand the daily weather of 100 years ago. As part of the project, anyone can create an account and manually transcribe the daily logbooks of 18th and 19th century vessels that sailed the Arctic and elsewhere.
The work is still in its beginning stages: So far, 26,717 pages of records from 17 different ships have been transcribed, with roughly 100,000 pages to go. Eventually, once enough data has been transcribed, scientists from around the world who are coordinating the project will use these ultra-detailed weather reports to paint a fuller picture of how microvariations in Arctic weather correspond with long-term climate trends.
Although there’s no pay offered, there’s the satisfaction of adding to our record on climate variations over the past few centuries. Plus, transcribe enough and you’ll get promoted from “cadet” to “lieutenant” to “captain.” Not bad for a modern day scrivener.
3. Satellite Speeds
Not long ago, a group of scientists who study how the atmosphere behaves at high altitudes noticed something strange about several satellites in orbit: They were consistently moving faster than calculations indicated they should. When they tried to figure out why, they discovered that the thermosphere—the uppermost layer of the atmosphere, starting roughly 50 miles up, through which many satellites glide—was slowly losing its thickness over time. Because the layer, made up of sparsely distributed gas molecules, was losing its bulk, the satellites were colliding with fewer molecules as they orbited and thus experienced less drag.
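The density-drag relationship at work here is the standard drag equation, F = ½ρCdAv². A minimal sketch, using hypothetical round-number densities (not the study’s measurements) and a generic satellite, shows how a thinner thermosphere directly translates into less drag:

```python
# Sketch of why lower thermospheric density means less satellite drag,
# via the standard drag equation. Density values are illustrative only.

def drag_force(rho, v, cd=2.2, area=1.0):
    """Aerodynamic drag (newtons) on a body with drag coefficient cd and
    cross-sectional area `area` (m^2), moving at speed v (m/s) through
    gas of density rho (kg/m^3)."""
    return 0.5 * rho * cd * area * v**2

v_orbital = 7_700.0   # m/s, a typical low-Earth-orbit speed
rho_then = 4e-12      # kg/m^3, assumed earlier thermospheric density
rho_now = 3e-12       # kg/m^3, assumed lower present-day density

f_then = drag_force(rho_then, v_orbital)
f_now = drag_force(rho_now, v_orbital)
print(f"drag then: {f_then:.2e} N, drag now: {f_now:.2e} N")
print(f"reduction: {1 - f_now / f_then:.0%}")  # 25% less drag
```

Since drag scales linearly with density, a 25 percent drop in density cuts drag by the same fraction, which is why the speed anomaly was measurable at all.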
Why, though, was the thermosphere undergoing such change? It turned out that higher levels of carbon dioxide emitted at the surface were gradually drifting upwards into the thermosphere. At that altitude, the gas actually cools things down, because it absorbs energy from collisions with oxygen molecules and emits that stored energy into space as infrared radiation.
For years, scientists had assumed the carbon dioxide released from burning fossil fuels didn’t reach higher than about 20 miles above the Earth’s surface, but this research—the first to measure the concentrations of the gas this high up—showed that climate change can even affect our uppermost atmospheric layers. The group plans to look back and see how historical changes in satellite speeds might reflect carbon dioxide levels in the past. They will also continue to track satellite speeds and levels of carbon dioxide in the thermosphere to see how our orbital calculations might have to take climate change into account in the future.
4. Dog Sleds
Unlike many sorts of climate data, information on sea ice thickness can’t be directly collected by satellites—scientists instead infer thickness from satellite measurements of the ice’s height above sea level and a rough approximation of the ice’s density. True measurements of sea ice thickness must be taken manually, with sensors that send electromagnetic fields through the ice and pick up signals from the water below it—the fainter the signals, the thicker the ice. So our knowledge of real ice thicknesses is limited to the locations researchers have actually visited.
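The satellite-based inference mentioned above rests on simple buoyancy. A minimal sketch, using rough textbook densities (real retrievals also correct for snow cover and varying ice density), shows how a small measured height above the water translates into a much larger total thickness:

```python
# Sketch of estimating sea ice thickness from its height above sea level
# (its "freeboard") via Archimedes' principle. Densities are approximate.

RHO_WATER = 1025.0  # kg/m^3, seawater
RHO_ICE = 917.0     # kg/m^3, sea ice

def thickness_from_freeboard(freeboard_m):
    """Floating ice displaces its own weight of water:
    rho_ice * h = rho_water * (h - freeboard),
    which rearranges to the expression below."""
    return freeboard_m * RHO_WATER / (RHO_WATER - RHO_ICE)

# A 20 cm freeboard implies roughly 1.9 m of total thickness; the ~9.5x
# multiplier means small errors in the satellite measurement or the
# assumed density blow up, which is why direct measurements matter.
print(f"{thickness_from_freeboard(0.20):.2f} m")
```

That multiplier is the core of the problem the dog-sled sensors solve: ground truth to check a highly leveraged inference.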
In 2008, when Scottish researcher Jeremy Wilkinson first traveled to Greenland to collect such measurements on ice thickness, his team interviewed dozens of local Inuit people who spoke about the difficulties thinner sea ice posed for their traditional mode of transportation, the dog sled. Soon afterward, Wilkinson got an idea. “We saw the large number of dog teams that were on the ice every day and the vast distances they covered. Then came the light bulb moment—why don’t we put sensors on these sleds?” he told NBC in 2011, when the idea was finally implemented.
Since then, his team has attached the sensors to sleds owned by a few dozen volunteers. As the Inuit glide over the sea ice, the instruments measure the ice’s thickness every second. The team has now deployed the sled-mounted sensors in each of the last three years, and the information collected not only helps scientists gauge the accuracy of thicknesses derived from orbiting satellites, but also helps climate scientists better understand how sea ice responds locally to warmer temperatures as seasons and years change.
5. Narwhal-Mounted Sensors
Narwhals are renowned for their ability to dive to extreme depths: They’ve been measured going as far as 5,800 feet down, among the deepest dives of any marine mammal. Starting in 2006, NOAA researchers have used this ability to their advantage, by strapping sensors that measure temperature and depth to the animals and using the data to track Arctic water temperatures over time.
The strategy gives scientists access to areas of the Arctic ocean that are normally covered by ice during the winter—because the narwhals’ dives, which can last as long as 25 minutes, often take them under areas of the water that are frozen on top—and is much less expensive than equipping a full icebreaker ship and crew to take measurements. Before narwhals were enlisted, temperatures of the Arctic waters at remote depths were inferred from long-term historical averages. The unorthodox method has helped NOAA document how those historical averages have underrepresented the extent to which Arctic waters are warming, particularly in Baffin Bay, the body of water between Greenland and Canada.
August 15, 2013
If the last Fuji apple you grabbed from your grocery store’s produce section was mealier and less flavorful than the Fujis you remember from childhood, you’re not alone. Your memory isn’t at fault, and it’s not as though you’re particularly bad at picking apples, either.
The truth, though, is much more distressing than either of those possibilities. By chemically comparing modern-day Fujis with data from samples tested during the 1970s, a team of Japanese researchers found that today’s apples are less firm and have lower concentrations of a specific acid that contributes to their taste. Their conclusion, published today in the journal Scientific Reports, is that by shifting apple trees’ blooming time earlier in the year and raising temperatures during apple maturation, climate change has slowly but surely changed the taste and texture of the apples we hold so dear.
They started off by testing two types of newly harvested apples: the Fuji—which happens to be the world’s leading apple cultivar—and the Tsugaru. In Japan, apples are taken seriously (the country produces roughly 900,000 tons of apples annually, amounting to 14 pounds per person), and records on these same parameters have been kept for these apples dating back to the 1980s, and in some cases, the 1970s.
When the researchers compared modern-day Fujis and Tsugarus to their predecessors, they found that their firmness and their concentration of malic acid, which corresponds with an apple’s taste intensity, had slowly declined over the decades. Additionally, the modern apples were more susceptible to watercore, a disorder in which water-soaked regions form in the apple’s flesh and break down internally over time. In other words, today’s apples were consistently mealier, less flavorful and more prone to such defects, according to objective measurements such as titrating their juices to determine acid concentration or pressing mechanical plungers into the fruit’s flesh to test firmness.
To see if climate change might have played a role, they analyzed the long-term climate trends in the two regions of Japan where the apples were grown (Nagano and Aomori prefectures) and found that during the 40-year period, temperatures had gradually risen by a total of about 2°C in each location. Records also indicated that, over time, the date on which apple trees in the two regions began to flower steadily crept earlier, by one or two days per decade. The last 70 days before harvest in each locale—i.e., the days during which the apples hung on the trees, ripening in the sun—were also, on average, hotter.
It’s hard to pin the blame entirely on climate change, because the process of growing apples—along with agriculture as a whole—has changed so drastically over the past few decades. A new harvesting technique or machine, for example, could have played a role in the taste decline. But other studies, conducted in closed, controlled chambers, have demonstrated that higher temperatures during the 70-day ripening window can significantly decrease taste and texture. If the case against climate change isn’t airtight, there’s at least strong circumstantial evidence.
And though the way apples taste is certainly a crucial part of modern life, the most distressing part of this whole saga might be the way in which the changes in these apples resemble climate change itself. You might eat hundreds of apples each year, and they might vary widely in quality, taste and texture. Thus, when they slowly, steadily get worse over the course of decades, it’s nearly impossible to discern the change firsthand. In these cases—both apples and climate change itself—there’s really only one option: Look to the data.
July 16, 2013
Imagine you’re a scientist and you want to track the population of an endangered frog species in, say, the Puerto Rican rainforest.
In the old days, you’d have to write a proposal, win a grant, put together a team, trek out into the field and spend a few weeks or months manually collecting and cataloging samples. A few years later, if you wanted to know whether the frog population had recovered or gotten even smaller, you’d have to go through the same process all over again.
A new way of collecting this information, presented today by scientists from the University of Puerto Rico in the journal PeerJ, promises to make this process much easier, faster and more comprehensive. Their idea—a network of widely distributed microphones and web-based audio recognition software, which they call ARBIMON (for Automated Remote Biodiversity Monitoring Network)—could someday make real-time estimates of critical animal population levels available in spots all over the world.
The researchers designed the distributed hardware part of the system to be built from relatively inexpensive, widely available components—such as iPods and car batteries—along with waterproof cases and solar panels, which would enable the microphones, once placed, to last several years. The idea is that a network of such microphones, with one placed roughly every 50 square meters, could act as remote ears listening in on the ecosystem: Every ten minutes, each microphone records one minute of the local ecosystem’s sounds (amounting to 144 recordings per day) and sends it via a radio antenna to a nearby base station.
Each base station will then send the recordings on to a centralized server in Puerto Rico, from where they’ll be made public in near-real time at Arbimon.com. Simultaneously, software will analyze sounds from the recording to pick out the different noises made by different species. Using an existing bank of identified species calls, the software will assign particular sounds to particular birds, frogs and other creatures.
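The sampling schedule above is easy to sanity-check, and it also gives a feel for how much audio each station ships to the server. The audio format below (16-bit mono at 44.1 kHz) is an assumption for illustration, not a detail from the paper:

```python
# Quick check of the schedule described above: one one-minute recording
# every ten minutes, per microphone.

MINUTES_PER_DAY = 24 * 60
recordings_per_day = MINUTES_PER_DAY // 10
print(recordings_per_day)  # 144, matching the figure in the text

# Rough daily audio volume per microphone, assuming (hypothetically)
# uncompressed 16-bit mono audio sampled at 44.1 kHz.
bytes_per_minute = 44_100 * 2 * 60
mb_per_day = recordings_per_day * bytes_per_minute / 1e6
print(f"~{mb_per_day:.0f} MB per microphone per day")
```

At that assumed format, a single station generates on the order of three-quarters of a gigabyte of audio a day, which helps explain both the radio-relay architecture and the need for automated, rather than human, analysis.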
Verified users—perhaps a biologist researching a particular species, or a member of the general public with a background in birding—can contribute to the project by listening to the recordings and verifying whether the software is correctly identifying sounds and matching them to the right species. Over time, input from users will train the software to become more accurate.
Eventually, once the software is trained to identify each call, the researchers say it’ll be able to process more than 100,000 minute-long recordings in less than an hour. As a result, a biologist will be able to access a constant stream of data on the levels of a specific species in spots around the world, or the fluctuating populations of various species in one ecosystem.
Initially, biologists can index certain frequencies of a species’ calls to known populations of that species in each location—for example, 400 coqui chirps per hour means that 10 coquis are in the area. Later on, when the frequency of calls changes, this data can be extrapolated to infer fluctuations in the population present.
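The calibrate-then-extrapolate step described above can be sketched directly from the article’s example figures (400 chirps per hour corresponding to 10 coquis); the linear scaling is the simplest possible model, and real surveys would refine it with repeated ground-truth counts:

```python
# Sketch of indexing call frequency to population, then extrapolating.
# Calibration figures come from the example in the text.

CALIBRATION_CHIRPS_PER_HOUR = 400
CALIBRATION_POPULATION = 10

def estimate_population(chirps_per_hour):
    """Linear extrapolation from the calibrated chirp rate."""
    return chirps_per_hour * CALIBRATION_POPULATION / CALIBRATION_CHIRPS_PER_HOUR

print(estimate_population(400))  # 10.0, by construction
print(estimate_population(600))  # 15.0 -- a rising call rate suggests growth
print(estimate_population(200))  # 5.0  -- a falling rate suggests decline
```

The model’s simplicity is the point: once the software reliably counts calls, the population estimate is nearly free.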
In the published paper, the researchers demonstrated the system’s capability by tracking populations of a number of bird, frog, insect and mammal species in Puerto Rico and Costa Rica over the past few years. At the Puerto Rico research site in the Sabana Seca wetland, the researchers focused on tracking populations of the Plains coqui frog, an endangered amphibian discovered in 2005 with a high-pitched, distinctive chirp. Listen to a clip of the Sabana Seca filled with coqui chirps:
Microphones were first installed there in 2008, and over the subsequent few years, the researchers trained the software to become increasingly accurate at analyzing the various sounds picked up and determining which were the Plains coqui’s chirp. Eventually, scientists charted variations in the chirp’s frequency on both daily and seasonal timescales and were able to match these with surveyed data on changes in the coqui population.
One of the reasons the researchers are most excited about the new system is that it will standardize the audio samples and store them permanently. Fifty years from now, they say, if a conservation biologist wants to look back at how populations of a species have fluctuated over time, he or she can simply access the recordings and have them analyzed. Not only will this help track endangered populations, but it could also pinpoint when invasive species began to dominate certain ecological niches.
The next step, according to the researchers, is installing these microphone setups in all sorts of ecosystems—every place where there’s a species that merits attention.
July 10, 2013
Every time you switch on a light, charge your electronics or heat your home in the winter, you’re relying upon a tremendous network of energy infrastructure that literally stretches across the country: power plants, pipelines, transmission wires and storage facilities.
It can be hard to visualize all this infrastructure and understand how it makes abundant energy available throughout the country. A map, though, can be a beautiful way of seeing a bigger picture—and a new map, released yesterday by the U.S. Energy Information Administration, combines a wide range of data (the locations of different types of power plants, electricity lines, natural gas pipelines, refineries, storage facilities and more) into an elegant, interactive interface that helps you understand how it all fits together. You can also zoom in on your own city or region to see the types of power plants generating electricity nearby.
The map also includes layers of real-time information on storm movement and risks, and with hurricane season set to start, the main intention of making all this data public is to help utility officials and energy analysts better understand the potential impact of storms. But simply playing around with the map can provide interesting insights about the state of our energy infrastructure today.
Here are a few of them, along with the percentage of U.S. electricity generation each power source currently provides:
Fossil Fuels Still Rule (Coal, 37%; Natural Gas, 30%; Petroleum, 1%)
Our capacity to generate renewable energy has certainly grown in recent years, but looking at the map (and the data), one thing is clear: coal (black), natural gas (light blue) and oil-burning (tan) power plants are still the most plentiful forms of electricity generation we have. Coal plants are especially common east of the Mississippi—a relic of the fact that most U.S. coal was once mined in West Virginia, Pennsylvania and Kentucky (PDF), even though the majority now comes from Wyoming’s Powder River Basin. Oil and natural gas plants, meanwhile, are distributed pretty evenly among population centers across the country, with the former slightly more common in the North and East, and the latter a bit more common across the South.
Nuclear Power Could be in Your Backyard (19%)
Although no new nuclear power reactors have been built since 1997, there are still 65 in operation nationally, and most are relatively close to large population centers. More than 16 million people live within 18 miles of one of these plants, the radius that Japanese officials evacuated after the 2011 Fukushima disaster. Despite the potential danger they might pose, though, nuclear plants provide far more electricity than any other non-fossil fuel option—and as a result, they reduce the amount of carbon dioxide emitted by our country as a whole.
Hydroelectric is Crucial (7%)
Hydropower was among the first electricity technologies to be implemented on a wide scale—a power station situated on Niagara Falls began supplying electricity way back in 1881—and it’s still way ahead of the other renewable options. Hydroelectric plants are largely clustered in three areas: New England, the Middle South (partly as a result of the Depression-era Tennessee Valley Authority Project) and the West.
Of all new electricity capacity built from 2008 to 2012, 36.5 percent came from wind, and it shows: Turbines can now be found in most regions of the country with sufficient wind speeds. They’re especially prevalent in the Midwest, where consistent and strong winds blow across the plains year-round. In total, large-scale wind projects have been built in 39 states, with many more in the works. The map above shows turbines (grey) against a background displaying real-time wind speeds, with green arrows indicating the slowest winds, orange the middle speeds and red the fastest.
Compared to wind, another main source of renewable energy—solar power—has grown at a considerably slower rate, mostly because it’s much more expensive. Still, though, several major projects have been built, including the Agua Caliente Solar Project in Arizona, which produces more photovoltaic energy than any other plant globally, and the Solar Energy Generating Systems in California’s Mojave Desert, which is the largest solar thermal energy project (generating electricity by harnessing solar power to produce heat) in the world.
It’s hard to truly appreciate how much natural gas pipeline has been laid in this country until you look at the map and see for yourself. To put it in perspective, there are more than 305,000 miles of pipeline nationally, as compared to about 47,000 miles of interstate highway.
When it’s discussed in the news, the Strategic Petroleum Reserve is mainly described in the abstract, as an emergency supply of oil we could draw on if our supply were disrupted. As a result, many people imagine it as a distributed, perhaps even hypothetical entity. Not true: This supply of nearly 700 million barrels of crude oil is held in four particular storage locations in Louisiana and Texas, near many of the refineries that would process it into fuel.
Of course, these are far from the only insights to be gained from tinkering with the map, packed with more than 20 layers of data on everything from geothermal power to offshore oil platforms to electricity transmission lines. Play around with the map yourself, turning on and off layers of data, and drop us a comment with your most interesting insights below.
July 9, 2013
In extreme Northern Scotland, between the mainland and the Orkney Islands, lies the Pentland Firth, a roughly ten-mile-wide seaway between the North Sea and the Atlantic. Along with seals, porpoises and the occasional killer whale, the Firth is known for its uncommonly strong and fast tides—they’ve been recorded at speeds as high as 18 miles per hour, among the fastest in the world—the result of an enormous quantity of water rushing back and forth through a narrow passage roughly every six hours.
For centuries, these tides have been considered a hazard to sailors and fishing vessels. More recently, though, Scottish officials have pointed out that the Pentland Firth’s powerful tides could present an unexpected benefit: As countries search for new sources of renewable energy, these tides could make Scotland the “Saudi Arabia” of tidal power.
Observers have long speculated about the potential for electricity generation using tidal energy, and though there are still only a handful of tidal power plants completed worldwide, many other projects are nearing construction or have been proposed. Of these, none equals the Pentland Firth in terms of estimated power generation capacity—Scotland has suggested it could provide as much as 10 gigawatts of electricity on average over the course of a day, enough to supply a quarter of the European Union’s daily needs—and as a result, a number of energy companies have recently acquired leases to install turbines in the waterway.
Until now, though, despite the lofty predictions, no scientists had conducted a systematic study to figure out exactly how much energy the Firth might supply. Today, a group from the University of Oxford and elsewhere released the results of their review of the waterway’s total capacity.
Though their numbers might not justify comparing Scotland with the Persian Gulf in terms of overall energy potential, they do suggest that it could certainly be a Saudi Arabia for tidal power, and that the Pentland Firth could play a major role in powering the U.K. Their analysis shows that the seaway could potentially provide an average of 1.9 gigawatts of electricity at any given time, a number that equals about half of Scotland’s electrical consumption.
The analysis, published in the Proceedings of the Royal Society A, modeled the maximum potential electricity generation of a scheme that would involve three rows of underwater tidal turbines, each consisting of hundreds of posts that stretch across the entire passage. These turbines harness the energy in the passing tides in essentially the same way that wind turbines capture the energy in passing gusts of wind—by using the flow of water to spin the turbine, which turns a magnet at its center, thereby generating an electric current. Because water is much denser than air, though, tidal turbines can potentially generate much more power than wind turbines of the same size.
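The density advantage can be made concrete with the standard power equation for a turbine, P = ½ρAv³. The swept area and flow speeds below are illustrative assumptions, not figures from the Oxford study:

```python
# Sketch of why a tidal turbine can out-produce a same-sized wind turbine:
# seawater is roughly 800x denser than air, and available power scales
# linearly with fluid density.

RHO_SEAWATER = 1025.0  # kg/m^3
RHO_AIR = 1.225        # kg/m^3

def available_power_kw(rho, area_m2, speed_m_s):
    """Kinetic power flowing through a turbine's swept area, in kW
    (before the Betz limit and device efficiency are applied)."""
    return 0.5 * rho * area_m2 * speed_m_s**3 / 1000.0

area = 300.0  # m^2 swept area, assumed the same for both devices
tidal = available_power_kw(RHO_SEAWATER, area, 2.5)  # brisk tidal stream
wind = available_power_kw(RHO_AIR, area, 10.0)       # fresh breeze

print(f"tidal: {tidal:.0f} kW, wind: {wind:.0f} kW")  # tidal: 2402 kW, wind: 184 kW
```

Even at a quarter of the wind’s speed, the tidal stream carries an order of magnitude more power through the same area, and tides have the added virtue of being predictable years in advance.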
The researchers looked at the construction of multiple rows of these sorts of turbines, placed in a variety of locations within the Firth. Their models took into account the depth of the water at each given location, observed tidal speeds and heights over the course of each month, and a number of other variables.
Ultimately, the team found that the maximum practical capacity of 1.9 gigawatts would be possible with three rows of turbines, built in the locations mapped below (B, C and D on the map). Because each row slows the tides that pass through it, building more than three would only marginally improve power capacity while still adding to the project’s cost at a constant rate. (A, on the map, is a proposed alternate scheme that would produce a similar level of energy but at a higher cost.)
Of course, there are numerous impediments to constructing tidal turbines on such a huge scale, which would dwarf any current tidal energy project in existence. Some are concerned that tidal turbines could have negative ecological effects, disrupting fish and other wildlife communities; research into just how these sorts of turbines would affect local ecosystems is in its beginning stages. Additionally, in areas like the Pentland Firth, a crucial shipping waterway, turbines would have to be built with gaps large enough for ships to pass through—spacing the authors of this paper took into account when making their calculations.
As of now, the biggest hurdle is price: Without any carbon pollution regulation schemes in place, most renewable sources of energy, including tidal power, just aren’t as cheap as burning coal or other fossil fuels. But many energy companies have already recognized that, long-term, the cost of fossil fuel production will increase—both because of eventual regulations on greenhouse gas emissions and because fossil fuels are becoming increasingly costly to extract—and harnessing the power of the tides could provide a reliable way to meet a portion of our energy demands.