March 18, 2013
Some scientists investigate the universe’s biggest mysteries, like the Higgs boson, the mysterious particle that endows all other subatomic particles with mass.
Other researchers look into questions that are, well, a bit humbler—like the age-old puzzle of whether roosters simply crow when they see light of any kind, or if they truly know to crow when the morning sun arrives.
Lofty or not, it’s the goal of science to answer all questions that arise from the natural world, from roosters to bosons and everything in between. And a new study by Japanese researchers published today in Current Biology resolves the rooster question once and for all: The birds truly do have an inner circadian rhythm that tells them when to crow.
The research team, from Nagoya University, investigated via a fairly straightforward route: They put several groups of four roosters in a room for weeks at a time, turned the lights off, and left a video camera running. Although roosters can occasionally crow at any time of day, the majority of their crowing was like clockwork, peaking in frequency at time intervals roughly 24 hours apart—the time their bodies knew to be morning based on the sunlight they’d last seen before entering the experiment.
This consistency continued for about 2 weeks, then gradually began to die out. The roosters were left in the room for 4 weeks in total, and during the second half of the experiment, their crowing began occurring less regularly, at any time of day, suggesting that they do need to see the sun on a regular basis for their circadian rhythms to function properly.
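The period-finding step described above can be sketched in a few lines. The timestamps, burst sizes, and the ~23.7-hour free-running period below are invented for illustration (they are not the study's data): simulated crows cluster near a "subjective dawn" that recurs slightly faster than every 24 hours, the times are binned into hourly counts, and autocorrelation finds the lag at which the counts best repeat.

```python
import random

random.seed(0)
PERIOD_H = 23.7   # hypothetical free-running circadian period, in hours
DAYS = 14

# simulate crow times: a burst near each "subjective dawn", plus sporadic crows
crow_times = []
for day in range(DAYS):
    dawn = day * PERIOD_H
    crow_times += [dawn + random.gauss(0, 0.5) for _ in range(20)]   # dawn burst
    crow_times += [random.uniform(day * PERIOD_H, (day + 1) * PERIOD_H)
                   for _ in range(3)]                                # sporadic

# bin the timestamps into hourly counts
n_bins = int(DAYS * PERIOD_H) + 1
counts = [0] * n_bins
for t in crow_times:
    if 0 <= t < n_bins:
        counts[int(t)] += 1

def autocorr(x, lag):
    """Correlation of the count series with itself shifted by `lag` hours."""
    n = len(x) - lag
    mean = sum(x) / len(x)
    num = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n))
    den = sum((v - mean) ** 2 for v in x)
    return num / den if den else 0.0

# search lags between 20 and 28 hours for the strongest self-similarity
best_lag = max(range(20, 29), key=lambda lag: autocorr(counts, lag))
print(best_lag)  # the dominant period, close to the simulated ~23.7 h rhythm
```

The same approach, applied to real crow counts, would show the peak drifting as the free-running rhythm decays over the four weeks.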
In the experiment’s second part, the researchers also subjected the roosters to alternating periods of 12 hours of light and 12 hours of darkness, while using bright flashes of light and the recorded crowing of roosters (since crowing is known to be contagious) to induce crowing at different times of day. When they activated these stimuli at or near the dawn of the roosters’ 12-hour day, crowing rates increased significantly. At other times of day, though, exposing them to sudden flashes of light or playing the sound of crowing had virtually no effect, showing that the underlying circadian cycle played a role in the birds’ response to the stimuli.
Of course, many people who live in close proximity to roosters note that they often crow in response to a random light source turning on, like a car’s headlights, no matter what time of day it is. While this may be true, the experiment shows that the odds of a rooster responding to a car’s headlights depend on how close the current time is to dawn—at some level, the rooster’s body knows whether it should be crowing or not, and responds to artificial stimuli based on this rhythm.
For the research team, all this is merely a prelude to their bigger, more complex questions: Why do roosters have a biological clock that controls crowing in the first place, and how does it work? They see the simple crowing patterns of the rooster as an entry point into better understanding the vocalizations of a range of animals. “We still do not know why a dog says ‘bow-wow’ and a cat says ‘meow,’” Takashi Yoshimura, one of the co-authors, said in a press statement. “We are interested in the mechanism of this genetically controlled behavior and believe that chickens provide an excellent model.”
February 21, 2013
3D printing is big news: During his State of the Union speech, President Obama called for the launch of manufacturing hubs centered around 3D printing, while earlier this week, we saw the birth of one of the most playful applications of the technology yet, the 3Doodler, which lets you draw solid plastic objects in 3 dimensions.
Yesterday, Cornell doctors and engineers presented a rather different use of the technology: a lifelike artificial ear made of living cells, built using 3D printing technology. Their product, described in a paper published in PLOS ONE, is designed to help children born with congenital defects that leave them with underdeveloped outer ears, such as microtia.
The prosthesis—which could replace previously used artificial materials with styrofoam-like textures, or the use of cartilage tissue harvested from a patient’s ribcage—is the result of a multistep process.
First, the researchers make a digital 3D representation of a patient’s ear. For their prototype, they scanned healthy pediatric ears, but theoretically, they might someday be able to scan an intact ear on the other side of a patient’s head—if their microtia has only affected one of their ears—and reverse the digital image, enabling them to create an exact replica of the healthy ear.
Next, they use a 3D printer to produce a solid plastic mold the exact shape of the ear and fill it with a high-density collagen gel, which they describe as having a consistency similar to Jell-O.
After printing, the researchers introduce cartilage cells into the collagen matrix. For the prototype, they used cartilage samples harvested from cows, but they could presumably use cells from cartilage elsewhere on the patient’s own body in practice.
Over the course of a few days in a petri dish filled with nutrients, the cartilage cells reproduce and begin to replace the collagen. Afterward, the ear can be surgically attached to a human and covered with skin, where the cartilage cells continue to replace the collagen.
So far, the team has only implanted the artificial ears underneath the skin on the backs of lab rats. After 3 months attached to the rats, the cartilage cells had replaced all the collagen and filled in the entire ear, and the prosthetic retained its original shape and size.
In a press statement, co-author Jason Spector said that using a patient’s own cells would greatly reduce the chance of the body rejecting the implant after surgery. Lawrence Bonassar, another co-author, noted that in addition to congenital defects, the prosthesis could also be valuable for those who lose their outer ear as a result of cancer or an accident. If used for a child with microtia, the ear won’t grow along with the head over time, so the researchers recommend waiting to implant one of their prostheses until the patient is 5 or 6 years old, when ears have normally grown to more than 80 percent of their adult size.
The biggest advantage of the new technology over existing methods is the fact that the production process is customizable, so it could someday produce remarkably realistic-looking ears for each patient on a rapid timescale. The researchers have actually sped up the process since conducting the experiments included in the study, developing the ability to directly print the ear using the collagen as an “ink” and skip making the mold.
There are still a few problems to tackle, though. Right now, they don’t have the means to harvest and cultivate enough of a pediatric patient’s own cartilage to build an ear, which is why they used samples from cows. Additionally, future tests are needed to prove that surgical implantation is safe for humans. The team says they plan to address these issues and could be working on the first implant of such an ear in a human as soon as 2016.
February 18, 2013
No one knows for sure how many individual pages are on the web, but right now, it’s estimated that there are more than 14 billion.
Recently, though, Hungarian physicist Albert-László Barabási discovered something surprising about this massive number: Like actors in Hollywood connected by Kevin Bacon, from every single one of these pages you can navigate to any other in 19 clicks or less.
Barabási’s discovery, noted yesterday in a paper published in Philosophical Transactions of the Royal Society (though initially made back in 1999), involved a simulated model of the web that he created to better understand its structure. He discovered that of the roughly 1 trillion web documents in existence—the aforementioned 14 billion-plus pages, along with every image, video or other file hosted on every single one of them—the vast majority are poorly connected, linked to perhaps just a few other pages or documents.
Distributed across the entire web, though, are a minority of pages—search engines, indexes and aggregators—that are very highly connected and can be used to move from one area of the web to another. These nodes serve as the “Kevin Bacons” of the web, allowing users to navigate from most areas to most others in less than 19 clicks.
Barabási credits this “small world” of the web to human nature—the fact that we tend to group into communities, whether in real life or the virtual world. The pages of the web aren’t linked randomly, he says: They’re organized in an interconnected hierarchy of organizational themes, including region, country and subject area.
Interestingly, this means that no matter how large the web grows, the same interconnectedness will rule. Barabási analyzed the network looking at a variety of levels—examining anywhere from a tiny slice to the full 1 trillion documents—and found that regardless of scale, the same 19-click-or-less rule applied.
This arrangement, though, reveals cybersecurity risks. Barabási writes that knocking out a relatively small number of the crucial nodes that connect the web could isolate various pages and make it impossible to move from one to another. Of course, these vital nodes are among the most robustly protected parts of the web, but the findings still underline the significance of a few key pages.
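The role of these hubs can be illustrated with a toy breadth-first search over a hypothetical link graph (the page names and links below are made up for illustration; this is not Barabási's model): every poorly connected page reaches every other through the hub in two clicks, and knocking out the hub disconnects them entirely.

```python
from collections import deque

def shortest_clicks(links, start, goal):
    """BFS: fewest link-clicks from one page to another, or None if unreachable."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        page, clicks = frontier.popleft()
        if page == goal:
            return clicks
        for nxt in links.get(page, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, clicks + 1))
    return None

# a toy web: many poorly connected pages, one highly connected "hub"
links = {f"page{i}": ["hub"] for i in range(1, 7)}
links["hub"] = [f"page{i}" for i in range(1, 7)]

print(shortest_clicks(links, "page1", "page6"))   # 2 clicks, via the hub

# knock out the hub: the remaining pages become mutually unreachable
no_hub = {p: [q for q in targets if q != "hub"]
          for p, targets in links.items() if p != "hub"}
print(shortest_clicks(no_hub, "page1", "page6"))  # None
```

Scaled up to a trillion documents, a relatively small set of such hubs is what keeps every page within 19 clicks of every other.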
To get an idea of what this interconnected massive network actually looks like, head over to the Opte Project, an endeavor started by Barrett Lyon in 2003 to create publicly available visualizations of the web. In the map above, for example, red lines represent links between web pages in Asia, green for Europe, the Middle East and Africa, blue for North America, yellow for Latin America and white for unknown IP addresses. Although the most recent visualization is several years old, Lyon reports that he’s currently working on a new version of the project that will be released soon.
January 31, 2013
You may have never seen a zebrafish in person. But take a look at the zebrafish in the short video above and you’ll get to see something previously unknown to science: a visual representation of a thought moving through a living creature’s brain.
A group of scientists from Japan’s National Institute of Genetics announced the mind-boggling achievement in a paper published today in Current Biology. By inserting a gene into a zebrafish larva—often used in research because its entire body is transparent—and using a probe that detects fluorescence, they were able to capture the fish’s mental reaction to a swimming paramecium in real time.
The key to the technology is a special gene known as GCaMP that reacts to the presence of calcium ions by increasing in fluorescence. Since neuron activity in the brain involves rapid increases in concentrations of calcium ions, insertion of the gene causes the particular areas in a zebrafish’s brain that are activated to glow brightly. By using a probe sensitive to fluorescence, the scientists were able to monitor the locations of the fish’s brain that were activated at any given moment—and thus, capture the fish’s thought as it “swam” around the brain.
The particular thought captured in the video above occurred after a paramecium (a single-celled organism that the fish considers a food source) was released into the fish’s environment. The scientists know that the thought is the fish’s direct response to the moving paramecium because, as an initial part of the experiment, they identified the particular neurons in the fish’s brain that respond to movement and direction.
They mapped out the individual neurons responsible for this task by inducing the fish to visually follow a dot moving across a screen and tracking which neurons were activated. Later, when they did the same for the fish as it watched the swimming paramecium, the same areas of the brain lit up, and the activity moved across these areas in the way predicted by the mental maps as a result of the paramecium’s directional movement. For example, when the paramecium moved from right to left, the neuron activity moved from left to right, because of the way the brain’s visual map is reversed when compared to the field of vision.
This isn’t the first time that GCaMP has been inserted into a zebrafish for imaging purposes, but it is the first time that the images have been captured as a real-time video, rather than a static image after the fact. The researchers accomplished this by developing an improved version of GCaMP that is more sensitive to changes in calcium ion concentration and gives off greater levels of fluorescence.
The accomplishment is obviously a marvel in itself, but the scientists involved see it leading to a range of practical applications. If, for example, scientists had the ability to quickly map the parts of the brain affected by a chemical under consideration as a drug, new and effective psychiatric medications could be more easily developed.
They also envision it opening the door to a variety of even more amazing—and perhaps a bit troubling (who, after all, really wants their mind read?)—thought-detecting applications. “In the future, we can interpret an animal’s behavior, including learning and memory, fear, joy, or anger, based on the activity of particular combinations of neurons,” said Koichi Kawakami, one of the paper’s co-authors.
It’s clearly some time away, but this research shows that the concept of reading an animal’s thoughts by analyzing its mental activity might move beyond science fiction to enter the realm of real world science applications.
January 23, 2013
Next time you’re reading about a scientific finding and feeling a bit skeptical, you may want to take a look at the study’s authors. One simple trick could give you a hint on whether the work is fraudulent or not: check whether those authors are male or female.
According to a study published yesterday in mBio, men are significantly more likely to commit scientific misconduct—whether fabrication, falsification or plagiarism—than women. Using data from the U.S. Office of Research Integrity, this study’s authors (a group that includes two men and one woman, whom we’re still trusting, for now) found that out of 215 life science researchers who’ve been caught misbehaving since 1994, 65 percent were male, a fraction that outweighs their overall presence in the field.
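As a rough illustration of what "outweighs their overall presence" means statistically, here is a hedged sketch of a one-sided binomial test. The 50 percent male baseline below is a hypothetical assumption for illustration only; the study itself compared against actual base rates in the life sciences, which differ by career stage.

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of at least k successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n_cases = 215
n_male = round(0.65 * n_cases)   # roughly 140 of the sanctioned researchers

# hypothetical baseline: suppose men made up 50% of the field's workforce
p_value = binom_sf(n_male, n_cases, 0.5)
print(p_value)   # far below 0.05: 65% male is very unlikely under a 50% baseline
```

Even under this generous baseline, the skew would be hard to attribute to chance; against the lower base rates of men at some career stages, the disparity is starker still.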
“A variety of biological, social and cultural explanations have been proposed for these differences,” said lead author Ferric Fang of the University of Washington. “But we can’t really say which of these apply to the specific problem of research misconduct.”
Fang first became interested in the topic of misconduct in 2010, when he discovered that a single researcher had published six fraudulent studies in Infection and Immunity, the journal of which he is editor-in-chief. Afterward, he teamed up with Arturo Casadevall of the Albert Einstein College of Medicine to begin systematically studying the issue of fraud. They’ve since found that the majority of retracted papers are due to fraud and have argued that the intensely competitive nature of academic research engenders abuses.
For this study, they worked with Joan Bennett of Rutgers to break down fraud in terms of gender, as well as the time in a scientist’s career when fraud is most likely. They found that men are not only more likely to lie about their findings but are disproportionately more likely to lie (as compared to women) as they ascend from student to post-doctoral researcher to senior faculty.
Of the 215 scientists found guilty, 32 percent were in faculty positions, compared to just 16 percent who were students and 25 percent who were post-doctoral fellows. It’s often assumed that young trainees are most likely to lie, given the difficulty of climbing the academic pyramid, but this idea doesn’t jibe with the actual data.
“Those numbers are very lopsided when you look at faculty. You can imagine people would take these risks when people are going up the ladder,” said Casadevall, “but once they’ve made it to the rank of ‘faculty,’ presumably the incentive to get ahead would be outweighed by the risk of losing status and employment.”
Apparently, though, rising to the status of faculty only increases the pressure to produce useful research and the temptation to engage in fraud. Another (unwelcome) possibility is that those who commit fraud are more likely to reach senior faculty positions in the first place, and many of them just get exposed later on in their careers.
Whichever the explanation, it’s clear that men do commit fraud more often than women—a finding that shouldn’t really be so surprising, since men are more likely to indulge in all sorts of wrongdoing. This trend also makes the fact that women face a systemic bias in breaking into science all the more frustrating.