April 11, 2013
The link between music and pleasure runs deep. A desire to make and listen to music is found in virtually every culture around the world, and dates back at least 43,000 years, when a Cro-Magnon in Europe pierced holes into a bear bone, creating a flute that has become the world’s oldest surviving musical instrument. There are good reasons music is so universal: Experiments have shown that our brains’ reaction to hearing favorite music most closely resembles what happens after we take psychoactive drugs or eat good food.
Now, neuroscientists are digging deeper into the neural patterns responsible for the fact that we love music so much. A new study published today in Science pinpoints the part of the human brain that makes new music in particular so pleasurable, showing that activity in the nucleus accumbens, located near the center of the brain, is most closely linked to our enjoyment of a new tune. It also reveals that a second area, in the auditory cortex, is involved, encoding preferences based on the music we hear over time and making it more likely that we’ll enjoy new songs that resemble those we’ve already heard.
The research team, led by Valorie Salimpoor of McGill University, came to the finding by using functional MRI (fMRI) technology, which measures blood flow to each part of the brain in real time and is frequently used in neuroscientific inquiry. The neural activity of 19 participants was recorded as they listened to 60 songs they’d never heard before while lying in an fMRI scanner.
To look specifically at the brain activity exhibited while people listened to new music they liked, though—rather than any new music at all—the researchers introduced a twist: Each of the 19 study participants was able to buy any of the songs afterward, with real money. By including this step, the scientists had an objective measure of which songs each participant truly enjoyed and deemed worth purchasing.
They found that when participants listened to songs they ended up buying, neural activity increased most dramatically in the nucleus accumbens, a reward area that releases neurotransmitters that provoke positive feelings—the same type of neural pathway that produces pleasure from food, drugs or sex. Moreover, the participants were allowed to “bid” varying amounts of money on each song ($0.99, $1.29 or $2, with greater amounts increasing the likelihood that they’d get to own it afterward). The more a person bid on a song, the greater the activity in this region when the song was first played, suggesting that the nucleus accumbens is crucial for generating the pleasure people get from hearing new music they like.
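The key quantitative claim here is a dose-response relationship: the more a listener later bid, the stronger the earlier reward signal. As a rough illustration (not the authors’ actual analysis, and with made-up numbers), that kind of claim can be checked with a simple correlation between per-song bids and per-song activity:

```python
# A minimal sketch with hypothetical data, not the study's analysis code:
# test whether higher bids track higher nucleus accumbens activity using
# a Pearson correlation across songs.
from scipy.stats import pearsonr

# Hypothetical per-song values: the bid placed on each song ($0 means the
# participant declined to buy it) and the mean fMRI signal change in the
# nucleus accumbens while that song played.
bids = [0.00, 0.99, 0.99, 1.29, 2.00, 0.00, 1.29, 2.00]
nacc_activity = [0.10, 0.35, 0.30, 0.55, 0.90, 0.05, 0.50, 0.80]

r, p = pearsonr(bids, nacc_activity)
print(f"correlation: r = {r:.2f}, p = {p:.4f}")
```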
Activity in the nucleus accumbens, though, also depended on signals coming from specific areas in the auditory cortex, which integrate sound stimuli and transmit them to the reward area. Previous work has shown that this cortex is involved in value-based judgments and preferences, so the scientists theorize that our personal sense of what sound patterns make for good music is stored in this area.
The neural patterns in this area change over time and are built up through listening habits, which is why different people have different musical preferences. The music you hear over the course of your life influences what new music you’re likely to enjoy most in the future. It’s a bit like Pandora for your brain: Just as the online service uses algorithms to guess what new music you’ll enjoy based on what you already like, this part of the auditory cortex draws upon encoded beliefs about what constitutes “good” music to activate the reward center, the nucleus accumbens.
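To make the analogy concrete, here is a toy sketch of Pandora-style content-based scoring: a new song is rated by how closely its features match a profile built from songs the listener already likes. The feature names are invented for illustration; nothing here comes from the study.

```python
# A toy illustration of the "Pandora for your brain" analogy, not the
# study's model: rate a new song by how closely its features match a
# preference profile learned from past listening.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical features (say, tempo, brightness, rhythmic density), standing
# in for whatever sound patterns the auditory cortex actually encodes.
preference_profile = np.array([0.8, 0.2, 0.6])  # averaged from liked songs
new_song = np.array([0.7, 0.3, 0.5])

print(f"predicted enjoyment: {cosine_similarity(preference_profile, new_song):.2f}")
```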
So, while a love for music is nearly universal across humanity, this research could explain why the kinds of new music that different cultures (and different people) enjoy diverge so wildly—from K-pop to monster ballads to the ritualistic chanting of Tibetan monks.
April 10, 2013
The human brain is one of the most complex objects in the known universe. Packed into just 3 pounds of flesh (on average) is an assembly of roughly 86 billion interconnected neurons, forming countless intricate networks that make up the essence of your personality.
A preserved brain on an examination table, though, conveys none of this complexity: It looks, more or less, like a pile of grey meat, because we can’t see through the outer cells’ membranes to see the individual neurons inside.
This problem is the motivation behind a new technique, developed by a Stanford team led by Kwanghun Chung and Karl Deisseroth, to make preserved brains entirely transparent to light. By doing so, and then using specialized chemical markers that attach to certain kinds of cells, they created a way to see whole brains in all their complex, interconnected splendor, as in a mouse brain in which certain types of neurons have been labeled with a fluorescent green dye.
The scientists say their technique, which was announced in a paper published today in Nature, works for preserved human brains as well as those of mice, and can be applied to many other types of organs as well. The method takes advantage of the fact that organs’ color—and hence the reason they’re not clear—is entirely due to the fat molecules that make up each cell’s membrane.
In a living brain, these molecules preserve the organ’s structural integrity. But in a preserved brain, they obscure the internal structure from view. To address this issue, the researchers infused the experimental mouse brains with a hydrogel, which binds to the functional elements of the cells (proteins and DNA) but not to the fat molecules, and forms a jelly-like mesh that preserves the original structure. Then they cleared away the fat molecules with a detergent, rendering the organ completely transparent.
Producing a fully intact, transparent mouse brain creates all sorts of interesting imaging opportunities. With the fat molecules flushed out, the elements of experimental or clinical interest (neuron networks or genes, for example) are no longer obscured by cell membranes. (In much the same way, zebrafish, with their transparent embryos, are heavily used in many fields of biological research.)
To see these elements clearly, the researchers added colored chemical markers that specifically attach to certain kinds of molecules. Once that is done, scientists can examine the sample with a conventional light microscope, or combine multiple images from digital microscopes to create a 3-D rendering.
As a proof-of-concept, in addition to the mouse brain, the research team performed the procedure on small pieces of a deceased autistic person’s brain that had been in storage for 6 years. With specialized chemical markers, they were able to trace individual neurons across large swaths of tissue. They also found atypical ladder-like neuron structures that have also been seen in the brains of animals with autism-like symptoms.
This sort of detailed analysis had previously been possible only by laboriously examining tiny slices of brain under a microscope and inferring the full three-dimensional picture. Now, interconnections between different parts of the brain can be seen at a much broader scale.
The fact that the technique works on all sorts of tissues could open up many new avenues of research: analysis of an organ’s signaling molecule pathways, clinical diagnosis of disease in a biopsy sample and, of course, a more detailed examination of the neuron relationships and networks that make up the human brain.
April 9, 2013
For thousands of years, humans have shared their beds with blood-sucking parasites. The ancient Greeks complained of bedbugs, as did the Romans. When the lights go off for those suffering from an infestation today, out from under the mattress or behind the headboard creep up to 150,000 of the rice-grain-sized insects (though an average infestation numbers around 100). While bedbugs are one of the few parasites that live closely with humans yet do not transmit a serious disease, they do cause nasty red rashes in some of their victims, not to mention the psychological terror of knowing that your body becomes a buffet for crawling bloodsuckers after dark.
By the 1940s, this age-old parasite had been mostly eradicated from homes and hotels in the developed world. But around 1995, the bedbug tides turned again: Infestations began flaring up with a vengeance. Pest managers and scientists aren’t sure what happened, exactly, but it may have been a combination of people traveling more, and thus increasing their chances of encountering bedbugs in run-down motels or infested apartments; of bedbugs bolstering their resistance to common pesticides; and of people simply letting their guard down against the now-unfamiliar parasites.
Large cities such as New York have particularly suffered from this resurgence. Since 2000, the New York Times has run dozens of articles documenting the ongoing plague of bedbugs, with headlines such as “Even Health Dept. Isn’t Safe from Bedbugs” and “Bringing Your Own Plastic Seat Cover to the Movies.”
As many hapless New Yorkers have found, detecting the stealthy bedbugs is only the first step of what usually turns into a long, desperate eradication battle. Most people have to combine pesticides and non-chemical methods to purge their apartments. In addition to dousing the apartment and its contents in pesticides, this includes throwing away all furniture the bugs are living on (streetside mattresses in NYC with a “BEDBUGS!” warning scrawled across them are not an out-of-the-ordinary sight), physically removing the bodies of poisoned bugs, subjecting the home to extreme heat or cold, or even hiring a bedbug-sniffing dog. Sometimes, after so many sleepless nights and days spent meticulously combing the seams of the mattress or searching behind couch cushions, residents simply throw up their hands, move out and start their lives over.
Recognizing this ongoing problem, researchers are constantly trying to come up with new methods for quickly and efficiently killing the pests. The latest technique, described today in the Journal of the Royal Society Interface, takes a hint from Mother Nature and from history. For years, people in Eastern Europe’s Balkan region have known that kidney bean leaves trap bedbugs, sort of like a natural flypaper. In the past, those suffering from infestations would scatter the leaves on the floor surrounding their bed, then collect the bedbug-laden greenery in the morning and destroy it. In 1943, a group of researchers studied this phenomenon and attributed it to microscopic plant hairs, called trichomes, that grow on the leaves’ surface and entangle the bedbugs’ legs. They wrote up their findings in “The action of bean leaves against the bedbug,” but World War II distracted attention from the paper, and they wound up receiving little recognition for their work.
Rediscovering this forgotten research gem, scientists from the University of California, Irvine, and the University of Kentucky set out to more precisely document how the beans create this natural bedbug trap and, potentially, how it could be used to improve bedbug purging efforts. “We were motivated to identify the essential features of the capture mechanics of bean leaves to guide the design and fabrication of biomimetic surfaces [or synthetic materials that mimic ones found in nature] for bed bug trapping,” they write in their paper.
They used a scanning electron microscope and video to visualize how the trichomes on the leaves stop the bedbugs in their ravenous tracks. Rather than a Velcro-like entanglement, as the 1943 authors had suggested, it seems that the trichomes stab into the insects’ feet like giant thorns, physically impaling the pests.
Knowing this, the researchers wondered if they could improve upon the method as a way to treat bedbug infestations, because leaves themselves dry out and can’t be scaled up to larger sizes. “This physical entrapment is a source of inspiration in the development of new and sustainable [or scalable and chemical-free] methods to control the burgeoning numbers of bed bugs,” they write.
They used fresh bean leaves as templates for microfabricating synthetic surfaces that precisely mimicked them. To do this, they created negative molds of the leaves, then poured in polymers with a material composition similar to that of the living plant’s cell walls.
The team then allowed bedbugs to walk across the synthetic leaves to test their effectiveness against the real deal. The fabricated leaves did snag the bugs, but they didn’t hinder the insects’ movements quite as effectively as the living plants did. The researchers are not deterred by these initial results, though; they plan to keep working on the problem, improving their product by more precisely incorporating the mechanical properties of living trichomes. They optimistically conclude:
With bed bug populations skyrocketing throughout the world, and resistance to pesticides widespread, bioinspired microfabrication techniques have the potential to harness the bed bug-entrapping power of natural leaf surfaces using purely physical means.
April 4, 2013
What can’t a 3D printer build? The number of possible answers to this question has shrunk exponentially in recent years, as the high-tech machines continue to churn out solid object after object from computer designs.
The last few months alone saw countless new products and prototypes spanning an array of industries, from football cleats and pens to steel rocket parts and guns. Last month, the technology helped replace 75 percent of a person’s damaged skull, and this week it restored a man’s face after he lost half of it to cancer four years ago.
Today, a new study suggests 3D-printed material could one day mimic the behavior of cells in human tissue. Graduate student Gabriel Villar and his colleagues at the University of Oxford developed tiny solids that behave as biological tissue would. The delicate material physically resembles brain and fat tissue, and has the consistency of soft rubber.
To create this material, a specially designed 3D printer followed a computer-programmed design and ejected tens of thousands of individual droplets into a specified three-dimensional network, its nozzles moving at various angles to establish the position of each tiny bead. Each droplet measures about one picoliter—that’s one trillionth of a liter—a unit used to gauge the droplets from inkjet printers, whose nozzle technology works in much the same way to consolidate tiny dots of liquid into complete images and words on paper.
The droplets of liquid contained biochemicals found in tissue cells. Coated in lipids—fats and oils—the tiny aqueous compartments stuck together, forming a cohesive and self-supporting shape, with each bead partitioned from its neighbors by a single thin membrane similar to the lipid bilayers that protect our cells.
The shapes that the printed droplets formed remained stable for several weeks. If researchers shook the material slightly, droplets could become displaced, but only temporarily: The engineered tissue quickly sprang back into its original shape, a level of elasticity the researchers say is comparable to that of soft human tissue. The intricate latticework of the network’s lipid bilayers appeared to hold the “cells” together.
In some of the droplet networks, the 3D printer built pores into the lipid membranes. The holes mimicked the protein channels that stud the barriers protecting real cells, filtering molecules important for cell function in and out. The researchers injected into the pores a type of molecule important for cell-to-cell communication, one that delivers signals to numerous cells so that they function together as a group. While the 3D-printed material couldn’t exactly replicate how cells propagate signals, the researchers say the movement of the molecule through defined pathways resembled the electrical communication of neurons in brain tissue.
Water readily permeated the network’s membranes, even when pores were not built into its structure. The droplets swelled and shrank through osmosis, seeking equilibrium between the amount of water they contained and the amount surrounding them. The movement of water was enough to lift the droplets against gravity, pulling and folding them in a way that imitated muscle-like activity in human tissue.
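For readers who want the textbook physics behind that driving force: the pressure that pushes water across such a membrane is commonly approximated by the van ’t Hoff relation, a general result for dilute solutions rather than a formula taken from this paper.

```latex
% van 't Hoff approximation for osmotic pressure in a dilute solution
% (a standard textbook result, not from the study itself): water crosses
% the membrane until this pressure difference is balanced.
\Pi = \Delta c \, R \, T
% \Pi: osmotic pressure; \Delta c: difference in solute concentration
% across the membrane; R: gas constant; T: absolute temperature.
```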
The researchers hope that these droplet networks could be programmed to release drugs following a physiological signal. Printed cells could someday also be integrated into damaged or failing tissue, providing extra scaffolding or even replacing malfunctioning cells, perhaps even supplanting some of the 1.5 million tissue transplants that take place in the United States each year. The potential seems greatest for brain tissue transplants, as medical engineers are currently trying to grow brain cells in the lab to treat progressive diseases like Huntington’s disease, which slowly destroys nerve cells.
Whether it’s growing human tissue or entire ears, 3D printing technology is in full swing in the field of medicine, and countless researchers will no doubt jump on the bandwagon in the coming years.
In today’s science-so-weird-it-absolutely-must-be-science-fiction contest, we have a clear winner: a new study in which a team of scientists uses an MRI machine, a computer model and thousands of images from the internet to figure out what people see as they dream.
Unbelievable as it sounds, researchers from Kyoto, Japan, say that they’ve built something of a dream-reading machine, which learned enough about the neurological patterns of three research participants to predict their sleeptime visualizations with 60 percent accuracy. The study, published today in Science, is believed to be the first in which objective data has been gathered about the contents of a dream.
The seemingly extraordinary idea is built on a straightforward concept: Our brains follow predictable patterns as they react to different kinds of visual stimuli, and over time, a learning algorithm can figure out how to correlate each of these patterns with different classes of visualizations. A 2005 study by one of the researchers accomplished this in a much more primitive way—while subjects were awake—with a learning program correctly using functional MRI readings (which indicate blood flow to various parts of the brain) to determine in which direction a subject was looking.
This study followed the same principle but took it in a much more ambitious direction, seeking to match actual images—not just visual directions—with fMRI readings, and to do so while the subjects were asleep.
The research was done on three participants, each of whom slept in an MRI scanner for a number of 3-hour blocks over the course of 10 days. The participants were also wired with an electroencephalography (EEG) machine, which tracks the overall level of electrical activity in the brain and was used to indicate what stage of sleep they were in.
The deepest, longest dreams occur during REM sleep, which typically begins after a few hours of sleeping. But quick, sporadic hallucinations also occur during stage 1 of non-REM sleep, which starts a few minutes after you drift off, and the researchers sought to track the visualizations during this stage.
As the fMRI monitored blood flow to different parts of the subjects’ brains, the participants drifted off to sleep; then, once the scientists saw that a subject had entered stage 1, they woke him or her and asked for a description of what he or she had been seeing while dreaming. They repeated this process nearly 200 times for each participant.
Afterward, they recorded the 20 most common classes of items seen by each participant (“building,” “person” or “letter,” for example) and searched for photos on the Web that roughly matched the objects. They showed these images to the participants while they were awake, also in the MRI scanner, then compared the readings to the MRI readouts from when the people had seen the same objects in their dreams. This allowed them to isolate the particular brain activity patterns truly associated with seeing a given object from unrelated patterns that simply correlated with being asleep.
They fed all this data—the 20 most common classes of objects each participant had seen in his or her dreams, as represented by thousands of images from the Web, along with the brain activity (from the MRI readouts) that occurred as a result of seeing them—into a learning algorithm capable of improving and refining its model based on the data. When they invited the three sleepers back into the MRI to test the newly refined algorithm, it produced groups of related images (drawn from the thousands on the Web) and selected which of the 20 classes of items it thought the person was most likely seeing, based on his or her MRI readings.
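In outline, this decoding step resembles a standard supervised classifier: train on scans recorded while awake subjects viewed labeled pictures, then apply the model to scans recorded just before an awakening. The sketch below shows the shape of that pipeline with entirely synthetic data, and with a generic logistic regression standing in for the paper’s actual machine-learning model.

```python
# A schematic sketch of the decoding setup, not the Kyoto group's code:
# learn to map fMRI voxel patterns onto object categories from awake-viewing
# trials, then score a pre-awakening scan. All numbers here are synthetic,
# and LogisticRegression is a generic stand-in for the study's algorithm.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_voxels = 500                                  # hypothetical voxels per scan
categories = ["building", "person", "letter"]   # 3 of the 20 classes

# Synthetic awake-viewing scans (rows = trials) with known labels.
X_awake = rng.normal(size=(300, n_voxels))
y_awake = rng.choice(len(categories), size=300)

decoder = LogisticRegression(max_iter=1000).fit(X_awake, y_awake)

# A synthetic pre-awakening scan: the decoder outputs a probability for
# each category, i.e., its guess about the dream's content.
x_sleep = rng.normal(size=(1, n_voxels))
for name, p in zip(categories, decoder.predict_proba(x_sleep)[0]):
    print(f"{name}: {p:.2f}")
```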
When they woke the subjects this time and asked them to describe their dreams, the machine’s predictions turned out to be better than chance, although by no means perfect. To score it, the researchers picked two classes of items—one the dreamer had reported seeing, and one he or she hadn’t—and checked how often the algorithm picked out the correct one.
The algorithm got it right 60 percent of the time, a proportion the researchers say can’t be explained by chance. In particular, it was better at distinguishing visualizations from different categories than different images from the same category—that is, it had a better chance of telling whether a dreamer was seeing a person or a scene, but was less accurate at guessing whether a particular scene was a building or a street.
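That scoring scheme, as described, reduces to a simple pairwise comparison. A minimal sketch, with invented scores rather than anything from the paper, looks like this:

```python
# A minimal sketch of the pairwise scoring described above, with invented
# numbers: for each trial, compare the decoder's score for a category the
# dreamer reported against its score for one he or she did not report.
def pairwise_accuracy(trials):
    """trials: list of (score_for_dreamed_category, score_for_absent_category)."""
    correct = sum(1 for dreamed, absent in trials if dreamed > absent)
    return correct / len(trials)

# Hypothetical decoder scores; chance performance would be 50 percent.
trials = [(0.7, 0.3), (0.4, 0.6), (0.8, 0.5), (0.9, 0.2), (0.5, 0.4)]
print(f"pairwise accuracy: {pairwise_accuracy(trials):.0%}")  # prints 80%
```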
Although it’s only capable of relatively crude predictions, the system demonstrates something surprising: Our dreams might seem like subjective, private experiences, but they produce objective, consistent pieces of data that can be analyzed by others. The researchers say this work could be an initial foray into scientific dream analysis, eventually allowing more sophisticated dream interpretation during deeper stages of sleep.