May 13, 2013
Humans drew the short end of the toothbrush when it comes to our pearly whites’ longevity. Other animals such as reptiles and fish frequently lose and replace their teeth by growing new ones, but people are stuck with the same set of mature adult teeth their entire lives. If they lose a tooth–or all 32–dentures are usually the only option.
Oddly enough, alligators’ deadly chomps may hold a clue for how scientists could coax humans into regrowing teeth. These reptiles belong to the order Crocodilia, whose famously cheerful grins prompted songwriters to warn that you should never smile at a crocodile. To the dismay of Captain Hook and other victims of gator and croc attacks, the large reptiles often regrow their razor teeth multiple times. Researchers think that, given time, technology may advance to the point where we can borrow these reptilian smiles. But first, scientists need to understand just how these animals keep their smiles toothy.
In a paper published this week in the Proceedings of the National Academy of Sciences, an international team of researchers attempted to get at the mechanisms behind the superior tooth-regenerating abilities of one species of Crocodilia–the American alligator–in the hopes of applying the results to humans.
Organs such as hair, scales, nails and teeth “are at the interface between an organism and its external environment and therefore, face constant wear and tear,” the researchers write. But alligators have evolved ways to deal with these challenges. The carnivores can replace any of their 80 teeth up to 50 times throughout their 35- to 75-year lives. Small replacement teeth grow under each mature alligator tooth, ready to spring into action the moment a gator loses a tooth.
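For a sense of scale, here is a quick back-of-envelope tally in Python; the 80-tooth and 50-replacement figures come from the study, and the product is simply an upper bound, not a measured count:

```python
teeth_per_mouth = 80          # an alligator's full set at any one time
replacements_per_tooth = 50   # maximum replacements reported per tooth position

# Upper bound on the number of teeth one alligator could grow in a lifetime
lifetime_teeth = teeth_per_mouth * replacements_per_tooth
print(f"Up to {lifetime_teeth:,} teeth over a 35- to 75-year lifespan")
# -> Up to 4,000 teeth over a 35- to 75-year lifespan
```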
To figure out the molecules and cells responsible for replacement, the researchers used X-rays and small tissue samples from the developing teeth of alligator embryos, hatchlings and 3-year-old juveniles. They also grew tooth cells in the laboratory and created computer models of the process. Alligator teeth appear to cycle continuously, they write, but each replacement actually proceeds through three distinct phases: pre-initiation, initiation and growth.
Once an alligator loses a tooth, these three phases kick off. The dental lamina, a band of tissue associated with the initial stages of tooth formation in many animals, begins to bulge. This triggers stem cells and an array of signaling molecules that direct the process of forming a new tooth.
These results may be applicable to humans’ pearly whites. Alligators’ flesh-chomping teeth are surprisingly similar to well-organized, complex vertebrate teeth such as ours. In humans, a remnant of the dental lamina–the structure crucial to tooth formation–still exists and sometimes wrongly activates and begins forming toothy tumors. If the researchers could better tease out the molecular signaling pathways behind alligator tooth replacement, they reason, they may be able to induce those same chemical instructions in humans to coax the body into forming a new tooth after one gets knocked out in a soccer game or has to be removed after becoming infected.
Alternatively, doctors may be able to shut off the molecules responsible for conditions that cause uncontrolled tooth formation. Individuals suffering from cleidocranial dysplasia syndrome grow many unusually shaped, peg-like teeth, for example, and people with Gardner syndrome also grow supernumerary, or extra, teeth.
While the researchers still need to clarify more molecular details behind alligator tooth growth, this initial study does hint that doctors and dentists may someday be able to selectively bestow patients with the reptiles’ tooth-regenerating abilities.
“Based on our study, it may be possible to identify the regulatory network for tooth cycling,” the researchers conclude. “This knowledge will enable us to either arouse latent stem cells in the human dental lamina remnant to restart a normal renewal process in adults who have lost teeth or stop uncontrolled tooth generation in patients with supernumerary teeth.”
Either way, they note that “Nature is a rich resource from which to learn how to engineer stem cells for application to regenerative medicine.”
May 8, 2013
Throwing a baseball is hard. As xkcd pointed out just yesterday, accurately throwing a strike requires that a pitcher release the ball at an extremely precise moment—doing so more than half a millisecond too early or too late causes it to miss the strike zone entirely. Because it takes far longer (a full five milliseconds) just for our nerve impulses to travel the length of our arm, this feat requires the brain to send a signal to the hand to release the ball well before the arm has reached its proper throwing position.
The one feat even more difficult than throwing a fastball, though, might be hitting one. There’s a 100 millisecond delay between the moment your eyes see an object and the moment your brain registers it. As a result, when a batter sees a fastball flying by at 100 mph, it has already moved nearly 15 additional feet by the time his or her brain actually registers its location.
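The arithmetic behind those numbers is simple unit conversion; here is a minimal sketch (the 100 ms delay and 100 mph speed come from the text above, and the 60.5-foot mound-to-plate distance is standard, though pitchers actually release the ball a few feet closer):

```python
MPH_TO_FTPS = 5280 / 3600              # miles per hour -> feet per second

pitch_speed_ftps = 100 * MPH_TO_FTPS   # ~146.7 ft/s for a 100 mph fastball
visual_delay_s = 0.100                 # ~100 ms from the retina to conscious registration

# Distance the ball covers before the batter's brain registers its position
lag_distance_ft = pitch_speed_ftps * visual_delay_s
print(f"Ball travels ~{lag_distance_ft:.1f} ft during the visual delay")   # ~14.7 ft

# Total flight time from the rubber to the plate (release point is actually closer)
plate_distance_ft = 60.5
flight_time_ms = plate_distance_ft / pitch_speed_ftps * 1000
print(f"Total flight time: ~{flight_time_ms:.0f} ms")                      # ~412 ms
```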
How, then, do batters ever manage to make contact with 100 mph fastballs—or, for that matter, 75 mph change-ups?
In a study published today in the journal Neuron, UC Berkeley researchers used fMRI (functional magnetic resonance imaging) to pinpoint the prediction mechanisms in the brain that enable hitters to track pitches (and enable all sorts of people to envision the paths of moving objects in general). They found that the brain effectively “pushes” objects forward along their trajectory from the moment it first sees them, simulating their path based on their direction and speed and allowing us to unconsciously project where they’ll be a moment later.
The research team put participants in an fMRI machine (which measures blood flow to various parts of the brain in real time) and had them watch a screen showing the “flash-drag effect” (below), a visual illusion in which a moving background causes the brain to mistakenly interpret briefly flashed stationary objects as moving. “The brain interprets the flashes as part of the moving background, and therefore engages its prediction mechanism to compensate for processing delays,” said Gerrit Maus, the paper’s lead author, in a press statement.
Because the participants’ brains thought these briefly flashing boxes were moving, the researchers hypothesized, the area of their brain responsible for predicting the motion of objects would show increased activity. Similarly, when shown a video where the background didn’t move but the flashing objects actually did, the same motion-prediction mechanism would cause similar neuron activity to occur. In both cases, the V5 region of their visual cortex showed distinctive activity, suggesting this area is home to the motion-prediction capabilities that allow us to track fast-moving objects.
Previously, in another study, the same team had zeroed in on the V5 region by using transcranial magnetic stimulation (which interferes with brain activity) to disrupt the area and found that participants were less effective at predicting the movement of objects. “Now not only can we see the outcome of prediction in area V5, but we can also show that it is causally involved in enabling us to see objects accurately in predicted positions,” Maus said.
It’s not much of a leap to suppose that this prediction mechanism is more sophisticated in some people than others—which is why most of us would whiff when trying to hit the fastball of a major league pitcher.
A failure in this mechanism might be at work, the researchers say, in people who have motion perception disorders such as akinetopsia, which leaves the ability to see stationary objects completely intact but renders a person essentially blind to anything in motion. Better understanding how neurological activity in the V5 region—along with other areas of the brain—allows us to track and predict movement could, in the long-term, help us develop treatments for these sorts of disorders.
May 7, 2013
In the past century or so, football helmets have come a long way, evolving from crude “leatherheads” crafted by shoemakers to plastic-and-rubber hybrids that can be customized to fit a player’s head and have radios built in.
Nevertheless, the sport currently faces a serious and growing problem: brain injuries. Studies have shown that former NFL players are about three times more likely to die from Alzheimer’s, Parkinson’s and Lou Gehrig’s diseases than the general population, a result of the alarming number of concussions they experience over the course of their careers. The NFL has responded by changing its rules to minimize head impacts, instituting stricter guidelines for concussed players returning to games and pouring money into attempts to develop safer helmets.
But some critics argue that, no matter how much research we undertake, there’s simply no way to create a concussion-proof helmet—no technology can stop a fundamentally violent game from inflicting harm. A 2011 study, in fact, found that with many types of impacts, modern helmets were no better than vintage leather ones at protecting players’ heads.
But now, fans who find their desire for a safe game at odds with their love of it can take comfort in a new study, published today in the Journal of Neurosurgery, that determines otherwise: Compared to “leatherheads,” new helmets are indeed much more effective at protecting the human head. Researchers from Virginia Tech reached this conclusion by using an automated head impact simulation system to test a pair of vintage Hutch H-18 leather helmets from the 1930s against 10 plastic helmets currently in use, and found that, depending on the force of impact, modern helmets reduced the risk of concussion by anywhere from 45 to 96 percent.
The team used the system to measure four different types of head impacts (on the front, side, rear and top of the head) and dropped the head from a range of heights (12, 24, 36, 48 and 60 inches) with each helmet on to simulate in-game impacts of varying intensities. Sensors inside the head measured the force of each impact. This same type of testing, developed by the Virginia Tech team, has been used extensively to classify the safety of modern helmets on a one- to five-star scale.
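To get an intuition for how drop height maps to impact severity, here is a rough free-fall sketch; this is textbook kinematics for illustration only, not the study’s published data, and the actual star-rating risk model is considerably more involved:

```python
import math

G_IN_PER_S2 = 386.1          # gravitational acceleration, inches per second squared
IN_PER_S_PER_MPH = 17.6      # 1 mph = 17.6 inches per second

# Drop heights used in the tests, in inches
for height_in in [12, 24, 36, 48, 60]:
    # Impact speed of a freely falling head form: v = sqrt(2 * g * h)
    impact_in_per_s = math.sqrt(2 * G_IN_PER_S2 * height_in)
    impact_mph = impact_in_per_s / IN_PER_S_PER_MPH
    print(f"{height_in:2d} in drop -> impact speed ~{impact_mph:4.1f} mph")
# 12 in -> ~5.5 mph ... 60 in -> ~12.2 mph
```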
The researchers found some differences in performance among the modern helmets—but, as you’d probably expect just from looking at them, the vintage leather helmets performed significantly worse than any of the plastic ones. At the lowest-intensity impacts (from a 12-inch drop height), the modern helmets reduced the impact on the head by 59 to 63 percent, and at the medium-intensity impacts (from 36 inches), they provided a 67 to 73 percent reduction. The researchers didn’t even try dropping the head with the leather helmets on from 48 or 60 inches for fear of damaging them.
At the same time, it’s worth noting that the vintage helmets tested were each about 80 years old, so their age might have meant weaker leather fibers than if pristine leatherheads had been tested. Additionally, the leather helmets had presumably taken some beating during their years of use, while the plastic ones were unused before being subjected to the drops, which might have further skewed the results.
Still, both these factors also applied to the 2011 study that found leather helmets to be just as effective as modern ones—so what accounts for the fact that this experiment so thoroughly contradicted it? The authors of this study say that the experimental setup used in the previous one—in which two helmeted heads were smashed together, one wearing a modern helmet and the other wearing either a modern or a leather one—distorted the findings and masked the differences between helmet types. Some of the impact, they say, was absorbed by the padding in the modern plastic helmet even when the leather one was the one being tested.
Of course, given the distressing numbers on the concussions suffered by football players even with the latest helmets, this sort of testing shouldn’t suggest that the goal of designing safer headgear has been achieved. But it should give us a bit of hope in showing that 100 years of helmet design has provided some benefits—and future efforts to create and rigorously test new helmet technologies might be able to cut down on concussions over the long term.
May 3, 2013
Lipstick has seen a fair share of funky ingredients in its long history of more than 6,000 years, from seaweed and beetles to deer fat and modern synthetic chemicals. In recent years, traces of lead have been found in numerous brands of the popular handbag staple, prompting some manufacturers to go the organic route. This week, more dangerous substances joined the roster.
Researchers at the University of California, Berkeley’s School of Public Health tested 32 different types of lipstick and lip gloss commonly found in the brightly lit aisles of grocery and convenience stores. They detected traces of cadmium, chromium, aluminum, manganese and other metals, which are usually found in industrial workplaces, including make-up factories. The report, published in the journal Environmental Health Perspectives, indicated that some of these metals reached potentially health-hazardous levels.
Lipstick is usually ingested little by little as wearers lick or bite their lips throughout the day. On average, the study found, lipstick-clad women consume 24 milligrams of the stuff a day. Those who reapply several times a day take in 87 milligrams.
The researchers estimated risk by comparing consumers’ daily intake of these metals through lip makeup with health guidelines. They report that an average use of some lipsticks and lip glosses results in “excessive exposure” to chromium, and frequent use can lead to overexposure to aluminum, cadmium and manganese.
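The risk estimate described above boils down to a simple ratio: the amount of lipstick ingested per day, times the metal’s concentration in the product, compared against an acceptable daily intake. A minimal sketch follows; only the 24 mg and 87 mg ingestion figures come from the study, and the concentration and guideline values are invented purely for illustration:

```python
def fraction_of_adi(lipstick_mg_per_day, metal_ug_per_g, adi_ug_per_day):
    """Fraction of an acceptable daily intake (ADI) reached through lipstick alone."""
    metal_ug_per_day = (lipstick_mg_per_day / 1000) * metal_ug_per_g  # mg lipstick -> g, then x µg/g
    return metal_ug_per_day / adi_ug_per_day

# Hypothetical concentration and ADI, for illustration only; not values from the paper
average_use = fraction_of_adi(lipstick_mg_per_day=24, metal_ug_per_g=10, adi_ug_per_day=0.9)
heavy_use = fraction_of_adi(lipstick_mg_per_day=87, metal_ug_per_g=10, adi_ug_per_day=0.9)

print(f"Average use: {average_use:.0%} of the ADI")  # ~27% with these assumed values
print(f"Heavy use:   {heavy_use:.0%} of the ADI")    # ~97% with these assumed values
```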
Minor exposure to cadmium, which is used in batteries, can result in flu-like symptoms such as fever, chills and achy muscles. In the worst cases, the metal is linked to cancer, attacking the cardiovascular, respiratory and other systems in the body. Chromium is a carcinogen linked to stomach ulcers and lung cancer, and aluminum can be toxic to the lungs. Long-term exposure to high doses of manganese is associated with problems in the nervous system. There is no safe level of chromium exposure, and federal labor regulations limit industrial workers’ contact with the metal in the workplace. We naturally inhale tiny amounts of aluminum present in the air, and many FDA-approved antacids contain the metal at safe levels.
Despite the presence of these metals in lipstick, there’s no need to abandon lipstick altogether—rather, the authors call for more oversight of cosmetics, for which there are no U.S. industry standards regulating metal content.
After all, cadmium and other metals aren’t intended ingredients in lipstick—they’re contaminants. They seep into lipstick when the machinery or dyes used to create the product contain the metals themselves. As a result, trace amounts are not listed on the tiny stickers on lipstick tubes, so there’s no way to know which brands might be contaminated.
Concern about metals in cosmetics came to the forefront of American media in 2007, when an analysis of 33 popular brands of lipstick by the Campaign for Safe Cosmetics showed that 61 percent of them contained lead. The report eventually led the Food and Drug Administration (FDA), which doesn’t regulate the metal content of cosmetics, to look into the issue, and its results weren’t any more reassuring: the agency found lead in all of the samples it tested, at levels four times higher than in the earlier study, ranging from 0.09 to 3.06 parts per million. According to the Centers for Disease Control and Prevention, there is no safe level of lead exposure for humans.
So we’ve got cadmium, chromium, aluminum, manganese and lead in our lipstick. What else? Today, most lipstick is made with beeswax, which creates a base for pigments, and castor oil, which gives it a shiny, waxy quality. Beeswax has been the base for lipstick for at least 400 years–England’s Queen Elizabeth I popularized a deep lip rouge derived from beeswax and plants.
Lipstick as we know it appeared in 1884 in Paris, wrapped in silk paper and made from beeswax, castor oil and deer tallow, the solid rendered fat of the animal. At the time, lipstick was often colored using carmine dye, which combined aluminum with carminic acid, a chemical that cochineals–tiny cactus-dwelling insects–produce to ward off insect predators.
That early lipstick wasn’t the first attempt at using insects to stain women’s mouths. Cleopatra’s recipe for homemade lipstick called for red pigments drawn out from mashed-up beetles and ants.
But really, any natural substance with color was fair game for cosmetics, regardless of its health effects: Historians believe women first started coloring their lips in ancient Mesopotamia, dotting them with dust from crushed semi-precious jewels—these lovely ancients were eating tiny bits of rock whenever they licked their lips. Ancient Egyptians used lip color too, mixing seaweed, iodine and bromine mannite, a highly toxic plant-derived chemical that sickened its users.
From mannite to heavy metals, humanity’s quest for painted beauty doesn’t seem to have progressed far from toxic roots. The sacrifices we make for fashion!
If you’ve ever noticed a strange, not-entirely-pleasant scent coming from your urine after you eat asparagus, you’re definitely not alone.
Distinguished thinkers as varied as Scottish mathematician and physician John Arbuthnot (who wrote in a 1731 book that “asparagus…affects the urine with a foetid smell”) and Marcel Proust (who wrote how the vegetable “transforms my chamber-pot into a flask of perfume”) have commented on the phenomenon.
Even Benjamin Franklin took note, stating in a 1781 letter to the Royal Academy of Brussels that “A few Stems of Asparagus eaten, shall give our Urine a disagreable Odour” (he was trying to convince the academy “To discover some Drug…that shall render the natural Discharges of Wind from our Bodies, not only inoffensive, but agreable as Perfumes”—a goal that, alas, modern science has still not achieved).
But modern science has, at least, shed some light on why this one particular vegetable has such an unusual and potent impact on the scent of urine. Scientists tell us that the asparagus-urine link all comes down to one chemical: asparagusic acid.
Asparagusic acid, as the name implies, is (to our knowledge) only found in asparagus. When our bodies digest the vegetable, they break down this chemical into a group of related sulfur-containing compounds with long, complicated names (including dimethyl sulfide, dimethyl disulfide, dimethyl sulfoxide and dimethyl sulfone). As with many other substances that include sulfur—such as garlic, skunk spray and odorized natural gas—these sulfur-containing molecules convey a powerful, typically unpleasant scent.
All of these molecules also share another key characteristic: They’re volatile, meaning they have a low enough boiling point to vaporize and enter a gaseous state at room temperature, which allows them to travel from urine into the air and up your nose. Asparagusic acid, on the other hand, isn’t volatile, so asparagus itself doesn’t convey the same rotten smell. But once your body converts asparagusic acid into these volatile, sulfur-bearing compounds, the distinctive aroma can be generated quite quickly—in some cases, it has been detected in the urine of people who ate asparagus just 15 to 30 minutes earlier.
Of course, the whole asparagus-urine scent issue is complicated by a separate question: Some people simply don’t smell anything different when they urinate after eating asparagus. Scientists have long been divided into two camps in explaining this. Some believe that, for physiological reasons, these people (who constitute anywhere from 20 to 40 percent of the population) don’t produce the aroma in their urine when they digest asparagus, while others think they produce the exact same scent but somehow lack the ability to smell it.
On the whole, the evidence is mixed. Initially, a pair of studies conducted in the 1980s with participants from France and Israel found that everyone produced the characteristic scent, and that a minority of people were simply unable to smell it. People with the ability to detect the scent, though, were able to smell it even in the urine of those who couldn’t smell it, indicating that the differences were rooted in perception, not production.
More recent studies, though, suggest the issue is a bit more complicated. The most recent study, from 2010, found that differences existed between individuals in both the production and detection of the scent.
Overall, scientists now conclude that most of the difference is in perception—that is, if your urine doesn’t seem to smell any different after you eat asparagus, it’s likely that you simply can’t perceive the sulfurous compounds’ foul odor, though there’s a small chance your body digests asparagus in a way that reduces the concentration of these chemicals in your urine.
It’s still unclear why some people don’t produce the smell, but we do seem to have a clear explanation of why some people don’t perceive it. In 2010, the genetic sequencing company 23andMe conducted a study in which they asked nearly 10,000 customers if they noticed any scent in their urine after eating asparagus, and looked for genetic similarities among those who couldn’t. This peculiarity—which you might consider useful if you eat asparagus frequently—appears to stem from a single genetic mutation, a switched base-pair among a cluster of 50 different genes that code for olfactory receptors.
We’re still waiting for some enterprising team of scientists to attempt gene therapy to convert smellers into non-smellers—but given other priorities to use genetic modification to cure blindness and breast cancer, it seems likely that those suffering from asparagus-scented urine might have to wait a while.