November 29, 2013
A few years ago, University of Maryland PhD student Nathan Jud was routinely examining a batch of ancient plant fossils in the Smithsonian Natural History Museum’s collections when one in particular caught his eye.
“It looked sort of like a little piece of fern, so I tried to remove a bit of the rock that was covering it to get a sense of what type of fern it was,” he says. “But the more of the rock I would lift off the surface, the more fossil I found buried. What I thought had been one little piece of a leaf actually turned out to be two, connected to each other.”
As he labored to carefully flake the rock without defacing the fossil, he noticed a series of curious characteristics that suggested the preserved plant was no ordinary fern: It had a closed network of veins, rather than a series of branching ones that split off from each other without coming back together, and at its tips, there were tiny structures called glandular teeth, used to shed excess water.
“Eventually, I realized this wasn’t a fern at all, but some kind of early flowering plant,” he says. Its features wouldn’t be at all out of the ordinary in a plant growing outside today. The fact that they occur in a fossil from the Early Cretaceous period, though, is remarkable. At somewhere between 125 and 115 million years old, this fossil, described in a paper Jud published today in the American Journal of Botany, is among the oldest flowering plants ever found in North America.
Flowering plants—which reproduce using sexual structures (i.e. flowers) to produce seeds—now dominate the planet, but for the first 300 million years or so of plant existence, beginning around 450 million years ago, the only types of vegetation belonged to older, more primitive groups, such as algae, mosses and ferns, which all reproduce with spores rather than seeds, or gymnosperms, which produce seeds but not flowers.
During the Early Cretaceous, some of the first primitive flowering plants began to evolve. Researchers know that the layer in which this new fossil was found dates to this time period thanks to a few factors: pollen analysis (which considers the chemical makeup of pollen embedded in the surrounding rock) as well as study of the surrounding sediment itself. The same layer has previously produced several other flowering plant fossils of a similar age—together, they’re the oldest ever discovered in North America—but this is the oldest example of a eudicot, a group distinguished by a distinctively shaped pollen structure that today includes roughly 70 percent of flowering plants worldwide.
Compared to the other fossils found in the same layer, this one is especially remarkable for its derived traits, anatomical characteristics that were previously thought to have developed much more recently in flowers. Their existence so long ago suggests that some early plants were actually quite complex.
“When I compared it to living plants, I realized it was remarkably similar to the leaves of a certain group of modern poppies,” Jud says. “I didn’t expect to see a group that seemingly modern in a collection that old.” The fact that these features existed so long ago, both in this plant and other ancient fossils recently excavated in China, tells us that the evolution of flowering plants (which Charles Darwin famously called an “abominable mystery”) did not happen gradually, but instead occurred very rapidly during a narrow time interval in the Early Cretaceous period between when flowering plants first emerged and the date of this fossil.
There’s also a much more recent history of this fossil that’s just as fascinating. Jud did a bit of research and found that it’d been excavated in 1971 by a former Smithsonian curator, Leo Hickey, who went on to Yale and, before his death this past February, worked with Jud to re-analyze the fossil after all these years. Hickey had found it during a dig at the Dutch Gap, in Virginia, in sediments that were exposed over a century earlier by freed slaves who had been forcibly taken from the Roanoke Island Freedmen’s Colony by Union troops and made to dig a canal in August 1864.
While digging, they exposed ancient fossil-filled rocks, and a few decades later, in the 1870s and 1880s, scientists worked there to collect fossils and create some of the Smithsonian’s first fossil collections. Later, Hickey and other researchers returned to collect remaining specimens.
Jud honored this recent history in naming the ancient species that this specimen represents. “Potomac refers to the Potomac Group beds where the fossil was found, capnos is a reference to living poppies that are quite similar to the fossil and apeleutheron is the Greek word for freedmen,” he says. “So the new plant will be named Potomacapnos apeleutheron: roughly, ‘freedmen’s poppy of the Potomac.’”
November 27, 2013
When we think of ants as builders, we normally imagine them digging intricate tunnel networks as part of underground colonies.
But David Hu, Nathan Mlot and a team of other researchers at Georgia Tech are studying a very different type of building behavior specific to one ant species: The ability of Solenopsis invicta to construct bridges, rafts and even temporary shelters using their own bodies as building material.
“Fire ants are capable of building what we call ‘self-aggregations,’” Hu says. “They can build little boats, little houses called bivouacs and even bridges to cross streams by being the building material, linking their bodies together and forming strong networks.”
The ants are now considered an invasive species in 25 states, Asia and Australia, but their unusual behavior is a survival strategy shaped by their native environment: a particular area of wetlands in western Brazil that are flooded frequently. “The ants live underground, but when it begins to flood, they have to gather the colony members, pull them out of the ground and build a floating raft,” Hu says.
When this raft hits land, the ants keep building. To cross small streams during their subsequent migration, they make living bridges that allow the entire colony to scramble to safety. Afterward, using their bodies, they construct a temporary aboveground encampment to provide shelter for the few days it takes to re-dig underground tunnels. All the while, the ants that form the temporary shelter are continuously moving, but still preserving the structure. “It’s really living architecture—it has well-constructed, organized tunnels, brooding rooms,” Hu says. At least for the ants on the inside, this provides protection against hostile weather or predators.
Hu, an engineer, is primarily interested in studying the swarming ants as a novel material with unprecedented characteristics. As part of his group’s recent research, presented yesterday at an annual meeting of the American Physical Society, he and colleagues considered the ants within the context of other “active materials”—substances that can respond to changing conditions, such as self-healing cements that can use the energy in sunlight to expand and fill their own fractures.
“We wanted to characterize what kind of material it is—is it a fluid, or is it a solid, and how does it respond to stress?” he says. “In nature, for instance, these rafts might float down a river and bump into rocks, or raindrops might hit them.”
To test these self-aggregations, Hu’s team used a few techniques, comparing live ant structures to clumped dead ants as a control. Using a rheometer—a device that can precisely measure the stress response and flow of a fluid, and is often applied in industrial situations (such as the development of a new shampoo)—they found that the ants continuously reorganize their structure to maintain stability.
Many materials behave like a solid when stressed by forces moving at high speeds, and like a fluid when stressed by slower ones. Water, for instance, behaves like a fluid when you stick your hand in it, but like a solid when hit by a human body jumping off a diving board—the reason that a belly flop hurts so much.
But the ant structures are a combination of solid and fluid when stressed by forces at all speeds, the researchers found. They actively deform their structure to accommodate a stress (like a fluid) but then bounce back into place afterward (like a solid). Check out what happens when one of their structures is compressed by a petri dish, for instance:
“This makes sense, based on their natural environment,” Hu says. “If they’re floating in a raft down a river, they have no control over where it floats, so if there’s something in the way—say, a twig—you see them respond and flow around the twig, kind of like an amoeba.”
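For readers who want a more concrete sense of that "part fluid, part solid" response, here is a minimal sketch of one of the simplest viscoelastic models, a spring and dashpot in parallel (the Kelvin-Voigt model). It is only a rough illustration of the general idea; the model choice and all parameter values are assumptions made for this example, not numbers taken from the Georgia Tech measurements.

```python
import numpy as np

# Illustrative Kelvin-Voigt viscoelastic sketch: a spring (elastic, "bounces
# back like a solid") in parallel with a dashpot (viscous, "flows like a
# fluid"). Parameter values are invented for illustration only.
E = 1.0       # spring stiffness: pulls the material back toward its original shape
eta = 5.0     # dashpot viscosity: lets the material slowly flow under load
sigma0 = 1.0  # constant stress applied while the "petri dish" presses down

dt = 0.01
t = np.arange(0.0, 20.0, dt)
stress = np.where(t < 10.0, sigma0, 0.0)  # press for 10 s, then release

strain = np.zeros_like(t)
for i in range(1, len(t)):
    # Kelvin-Voigt: sigma = E*strain + eta*d(strain)/dt, stepped with explicit Euler
    rate = (stress[i - 1] - E * strain[i - 1]) / eta
    strain[i] = strain[i - 1] + rate * dt

print(f"strain just before release: {strain[t < 10.0][-1]:.3f}")  # creeps upward under load
print(f"strain 10 s after release:  {strain[-1]:.3f}")            # relaxes back toward zero
```

Under a sustained load the strain creeps upward, as a fluid would flow; once the load is removed it relaxes back toward zero, as an elastic solid would, which is qualitatively the deform-then-recover behavior described above.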
The ants’ sheer resiliency and buoyancy are also remarkable. When the researchers tried to push the floating rafts below the water’s surface, they found the rafts could resist a significant amount of force and float back up:
This is enabled, in part, by the ants’ exoskeletons, which are naturally hydrophobic (i.e. they chemically repel water). When many ants clump together to form a structure, water doesn’t penetrate into the gaps between them, so when they’re forced underwater, the air that remains in these cavities helps them float.
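To see why trapped air matters, here is a small back-of-the-envelope sketch: a clump whose solid material is slightly denser than water will still float once enough air is held in the gaps, because the clump's effective density drops below that of water. The densities and air fractions below are invented for illustration, not measurements of real fire ant rafts.

```python
# Illustrative buoyancy check: a clump floats if its effective density
# (ant mass divided by ant volume plus trapped-air volume) is below water's.
rho_water = 1000.0  # kg/m^3
rho_ant = 1100.0    # kg/m^3, assume an individual ant is slightly denser than water

ant_volume = 1.0    # arbitrary units of solid ant material
for air_fraction in (0.0, 0.2, 0.4):  # fraction of the clump that is trapped air
    total_volume = ant_volume / (1.0 - air_fraction)
    effective_density = (rho_ant * ant_volume) / total_volume
    verdict = "floats" if effective_density < rho_water else "sinks"
    print(f"trapped air {air_fraction:.0%}: effective density "
          f"{effective_density:.0f} kg/m^3 -> {verdict}")
```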
Perhaps the biggest mystery of these ants’ remarkable living structures is how the creatures communicate to build them. Most ant communication is based on trails of pheromones left on the ground, but in such an interconnected form, that type of communication seems unlikely. Microscopic examination reveals that the ants grasp each other using both their jaws and little claws on the end of their legs. Noting this, Hu adds, ”We think they’re communicating through touch, but we really don’t understand it yet.”
November 26, 2013
Seahorses belong to the genus Hippocampus, which gets its name from the Greek words for “horse” and “sea monster.” With their extreme snouts, weirdly coiled bodies and sluggish movements produced by two puny little fins, these oddly shaped fish seem like an example of evolution gone terribly awry. And yet, new research published today in Nature Communications shows that it is precisely the seahorse’s uncanny looks and slow motions that allow it to act as one of the most stealthy predators under the sea.
Seahorses, like their close relatives the pipefish and sea dragons, sustain themselves by feasting on elusive, spastic little crustaceans called copepods. To do this, they use a method called pivot feeding: they sneak up on a copepod and then rapidly strike before the animal can escape, much like a person wielding a bug swatter trying to take out an irritating but otherwise impossible-to-catch fly. But like that lumbering human, the seahorse will only be successful if it is able to get near enough to its prey to strike at very close range. In the water, however, this is an even greater feat than on land because creatures like copepods are extremely sensitive to any slight hydrodynamic change in the currents around them.
So how do those ungainly little guys manage to feed themselves? As it turns out, the seahorse is a more sophisticated predator than appearance might suggest. In fact, it is precisely its looks that make it an ace in the stealth department. To arrive at this surprising conclusion, researchers from the University of Texas at Austin and the University of Minnesota used holographic and particle image velocimetry–fancy ways of visualizing 3D movements and water flow, respectively–to monitor the hunting patterns of dwarf seahorses in the lab.
In dozens of trials, they found that 84 percent of the seahorses’ approaches successfully managed not to sound the copepod’s retreat alarms. The closer the seahorse could get to its unsuspecting prey and the faster it struck, the greater its odds of success, they observed. Once in range of the copepod, seahorses managed to capture those crustaceans 94 percent of the time. Here, you can see that method of attack, in which the seahorse’s giant head looks like a floating bit of marine sludge drifting toward the blissfully ignorant copepod:
The way the seahorse’s movements and morphology–especially its head–interact with the water particles, the researchers found, likely deserves the credit for its exceptional hunting skill. The animal’s arched neck acts like a spring for generating an explosive strike, they describe, while the shape of its snout–a thin tube with the mouth positioned at the very end–allows it to drift through the water while creating minimal disturbance.
To emphasize this pinnacle of engineering, the team compared water disruptions caused by seahorses with those of sticklebacks, a relative of the seahorse but with a more traditional fishy look. Thanks to the shape and contours of the seahorse’s head, that predator produced significantly less fluid deformation in the surrounding water than the stickleback. The poor stickleback possesses neither the morphology nor posture to generate “a hydrodynamically quiet zone where strikes occur,” the authors describe. In other words, while the seahorse may appear a bit odd so far as fishes go, evolution was obviously looking out for that funny but deadly animal’s best interests.
November 25, 2013
Official estimates of U.S. emissions of the greenhouse gas methane may be far too low, according to a report published today by the Proceedings of the National Academy of Sciences. Oil and gas production is contributing much more methane than either the U.S. Environmental Protection Agency (EPA) or the best global survey of the greenhouse gas assumes.
Carbon dioxide tends to get the most attention in climate change discussions because it’s the greenhouse gas most responsible for the changes we’re now seeing on Earth. But methane (CH4) is also a potent heat-trapping gas: pound for pound, it traps about 70 times more heat than carbon dioxide (CO2). However, methane has a shorter atmospheric lifespan, sticking around only for about ten years, compared to a century for CO2.
Like carbon dioxide, methane has been on the rise. Atmospheric concentrations of CH4 have increased from a pre-industrial level of roughly 680 to 715 parts per billion (ppb) to approximately 1,800 ppb today. Determining where all that extra methane is coming from is important for efforts to reduce greenhouse gas emissions and limit future climate change effects.
The EPA currently lists livestock production as the biggest methane contributor, followed by, in order, natural gas production, landfills and coal mining. Methane measurements made from aircraft, however, are calling that order, and the EPA’s methane estimates, into question. The EPA and the Emissions Database for Global Atmospheric Research (EDGAR) both use a “bottom up” method of estimating methane, which depends on taking samples and calculating how much methane comes from known emitters, such as livestock herds and petroleum fields, then adding it all up. The aircraft studies take a “top-down” approach instead, starting with measurements of methane in atmospheric samples.
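To make the difference between the two approaches concrete, here is a minimal, purely illustrative sketch. The emitter categories, counts and emission factors are invented for the example and are not EPA, EDGAR or study figures; only the 1.5 ratio echoes the comparison the new study reports.

```python
# Purely illustrative contrast between a "bottom-up" inventory and a
# "top-down" atmospheric check. Every number below is made up.

# Bottom-up: count known emitters and multiply by per-source emission factors.
emission_factors = {   # hypothetical tonnes of CH4 per source per year
    "cattle": 0.1,
    "gas_wells": 5.0,
    "landfills": 1_000.0,
}
source_counts = {"cattle": 1_000_000, "gas_wells": 20_000, "landfills": 50}

bottom_up = sum(source_counts[s] * emission_factors[s] for s in emission_factors)

# Top-down: start from methane measured in air samples. A real analysis inverts
# an atmospheric transport model; here we simply suppose the observations imply
# total emissions 1.5 times the inventory, the factor reported against the EPA.
top_down = 1.5 * bottom_up

print(f"bottom-up inventory: {bottom_up:,.0f} t CH4/yr")
print(f"top-down estimate:   {top_down:,.0f} t CH4/yr")
print(f"methane unaccounted for by the inventory: {top_down - bottom_up:,.0f} t CH4/yr")
```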
In the new study, Scot M. Miller of Harvard University and colleagues used aircraft-based sampling and a National Oceanic and Atmospheric Administration/Department of Energy air-sampling network to tally 12,694 observations of methane from across the United States in 2007 and 2008. They then used those observations and a computer model to create estimates of monthly methane emissions. The analysis found large differences between their observations and the EPA and EDGAR estimates: The new figures were 1.5 times greater than those of the EPA and 1.7 times those from EDGAR.
Nearly a quarter of the nation’s methane emissions came from just three states—Texas, Oklahoma and Kansas. The estimates for CH4 emissions from these three states were 2.7 times higher than those of EDGAR. “Texas and Oklahoma were among the top five natural gas producing states in the country in 2007,” the researchers note in their paper. The team was able to trace the methane to oil and gas production not simply through coincidences of geography, but also because their observations found propane in the atmosphere above certain areas in these states. Propane is not produced by methane sources such as livestock or landfills–rather, it is released during fossil fuel extraction. Thus, its presence indicates that some fraction of the methane over those regions must come from fossil fuels.
“This is the first study to quantify methane emissions at regional scales within the continental United States with enough spatial resolution to significantly criticize the official inventories,” study co-author Marc L. Fischer, of the University of California Berkeley, said in a statement. “Even if we made emissions from livestock several times higher than inventory estimates would suggest for the southwest, you still don’t get enough to cover what’s actually being observed. That’s why it looks like oil and gas are likely responsible for a large part of the remainder…Cows don’t produce propane; oil and gas does.”
Cow farts aren’t getting off the hook here, and clearly the oil and gas industry is already known to be a big contributor to climate change. But one of the selling points of natural gas has been that it is more climate-friendly–or at least less climate-damaging–than other forms of fossil fuels, such as coal. If producing that natural gas results in more methane emissions than currently assumed, then it might not be such a good choice after all.
Once upon a time, scientists thought that the human brain was a rigid, predictable organ, not tremendously different from the lungs or liver. Based on a person’s genetics, it developed in a predetermined way, endowing an individual with a particular level of learning capabilities, problem-solving abilities and baseline intelligence.
Now, though, as part of emerging research into brain plasticity, neuroscientists are recognizing that the brain is a responsive, constantly evolving organ that can change at both the cellular and large-scale levels due to environmental influences and experiences. Much of this research is hopeful: It’s shown how in people with impaired vision, for instance, areas of the brain normally devoted to processing sights can be repurposed to analyze sound.
Over the past few months, though, a series of studies has emphasized that the brain can change for the worse as well as for the better. A child’s brain, not surprisingly, is especially vulnerable to such effects—and this research has shown that growing up in the difficult circumstances dictated by poverty can do damage to a child’s cognitive skills that lasts a lifetime.
An October study by researchers from the University of Michigan, for instance, used fMRI (functional magnetic resonance imaging)—which detects blood flow in various areas of the brain as a reflection of brain activity—to study the regulation of emotions in young adults who were part of a long-term study on poverty. They compared a participant’s family income at age 9 (based on survey data collected at the time) with his or her current neural activity in different brain regions, and found that those who grew up in poverty showed increased activity in the amygdala (believed to be involved in anxiety, fear and emotional disorders) and decreased activity in the prefrontal cortex (which limits the influence of the amygdala, putting long-term decision making over impulse) when the participants were shown emotionally-upsetting images.
It’s impossible to know for sure, but the researchers suspect that a range of chronic stresses that can accompany growing up in poverty—things like crowding, noise, violence, family turmoil or separation—impact the development of the brain in childhood and adolescence, potentially explaining this correlation.
Another October study, meanwhile, took a more basic approach, examining the relationship between nurturing during childhood and the growth of brain tissue in children between the ages of six and 12. In it, Washington University in St. Louis researchers found that among the 145 children studied, those whose parents had poor nurturing skills showed slowed growth in white matter, grey matter and the volumes of several different areas of the brain involved with learning skills and coping with stress. Based on the differing growth rates between children who otherwise resembled each other in terms of key factors, it seemed as though growing up with less nurturing adults effectively set back their mental development by a year or two. And impoverished parents, they found, were more likely to have poor nurturing skills.
Sure, attempting to objectively evaluate the parenting styles of the adults in this study might be a bit heavy-handed, but the study identified chronic stresses experienced by the children as a key element as well: Children who grew up in poverty but had fewer stressful life events (as part of a larger program, they’d gone through annual assessments from the age of three onward) demonstrated smaller reductions in neural development.
Others have even looked into very specific behavioral effects of poverty. A recent Northwestern University study found that children with lower socioeconomic status tended to have less efficient auditory processing abilities—that is, the area of their brains responsible for processing sound showed more response to distracting noise and less activity in response to a speaker’s voice than in control participants. This might be an effect, the researchers say, of the known correlation between low income and the amount of noise exposure in urban populations.
Of course, most of these studies are limited by the very nature of longitudinal research in that they reveal correlations rather than causation—ethics aside, it’s impossible to actively alter a person’s childhood circumstances in a controlled manner and then check the results, so researchers are forced to observe what happens in the real world and draw conclusions. Additionally, in most of these cases, it’s unknown whether the effects are temporary or permanent—whether children exposed to poverty are permanently left behind their peers, or whether they’re able to catch up if given the chance.
But the fact that correlations between poverty and altered mental function under stress have been repeatedly observed across a range of study designs, circumstances and research groups makes it likely that these effects aren’t aberrations. Additionally, even if they are temporary effects that can be resolved by changing a child’s environment, there’s other recent research that dishearteningly reveals a neurological mechanism that helps to perpetuate poverty, by making it difficult for parents to make choices that change these circumstances.
An August study in Science found that being preoccupied with the all-consuming concerns of poverty—struggling to pay medical bills, for instance—taxes the brain, leaving less extra bandwidth to solve complex cognitive problems and harming long-term decision making ability. In a pair of study groups (shoppers in a New Jersey mall and sugar cane farmers in rural India), simply getting the participants thinking about economic problems (asking them what they’d do if they had to pay $1500 to repair their car, for instance) caused them to perform more poorly on tests that measure IQ and impulse control than otherwise.
The bandwidth problem they identified is temporary, not permanent, but it does explain why making the difficult decisions that might allow someone to get ahead is harder for a person immersed in poverty. It also highlights yet another stressor for parents seeking to ensure that their children escape poverty—they might be inadvertently contributing to an environment that keeps their children from rising above their circumstances.