April 4, 2013
In today’s science-so-weird-it-absolutely-must-be-science-fiction contest, we have a clear winner: a new study in which a team of scientists used an MRI machine, a computer model and thousands of images from the internet to figure out what people see as they dream.
Unbelievable as it sounds, researchers from Kyoto, Japan, say that they’ve built something of a dream-reading machine, which learned enough about the neurological patterns of three research participants to predict their sleeptime visualizations with 60 percent accuracy. The study, published today in Science, is believed to be the first case in which objective data has been gathered about the contents of a dream.
The seemingly extraordinary idea is built from a straightforward concept: that our brains follow predictable patterns as they react to different kinds of visual stimuli, and over time, a learning algorithm can figure out how to correlate each of these patterns with different classes of visualizations. A 2005 study by one of the researchers accomplished this in a much more primitive way—while subjects were awake—with a learning program correctly using functional MRI readings (fMRI indicates blood flow to various parts of the brain) to determine in which direction a subject was looking.
This study followed the same principle but took it in a much more ambitious direction, seeking to match actual images—not just visual directions—with fMRI readings, and do it while the subjects were asleep.
The research was done on three participants, who took turns sleeping in an MRI scanner for a number of three-hour blocks over the course of 10 days. The participants were also wired with an electroencephalography (EEG) machine, which tracks the overall level of electrical activity in the brain and was used to indicate what stage of sleep they were in.
The deepest, longest dreams occur during REM sleep, which typically begins after a few hours of sleeping. But quick, sporadic hallucinations also occur during stage 1 of non-REM sleep, which starts a few minutes after you drift off, and the researchers sought to track the visualizations during this stage.
As the fMRI monitored blood flow to different parts of the subjects’ brains, they drifted off to sleep; then, once the scientists noticed that they’d entered stage 1, they woke them up and asked them to describe what they had been seeing while dreaming. They repeated this process nearly 200 times for each of the participants.
Afterward, they recorded the 20 most common classes of items seen by each participant (“building,” “person” or “letter,” for example) and searched for photos on the Web that roughly matched the objects. They showed these images to the participants while they were awake, also in the MRI scanner, then compared the readings to the MRI readouts from when the people had seen the same objects in their dreams. This allowed them to isolate the particular brain activity patterns truly associated with seeing a given object from unrelated patterns that simply correlated with being asleep.
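In code, the kind of decoder this sets up might look roughly like the sketch below: a classifier trained on scans recorded while participants viewed images awake, then applied to scans taken just before an awakening. The choice of classifier, the feature format and every number here are illustrative assumptions, not the study’s actual model or preprocessing.

```python
# Minimal sketch of a dream decoder, assuming each fMRI scan has already been
# reduced to a feature vector. Multinomial logistic regression and all of the
# synthetic data below are stand-ins for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_samples, n_voxels, n_categories = 2000, 500, 20
awake_scans = np.random.randn(n_samples, n_voxels)                 # scans while viewing web images
viewed_category = np.random.randint(n_categories, size=n_samples)  # e.g. 0 = "building", 1 = "person"

decoder = LogisticRegression(max_iter=1000)
decoder.fit(awake_scans, viewed_category)

# Apply the decoder to a scan recorded just before an awakening and rank the
# 20 object categories by how likely each one is to have appeared in the dream.
sleep_scan = np.random.randn(1, n_voxels)
probabilities = decoder.predict_proba(sleep_scan)[0]
top_categories = np.argsort(probabilities)[::-1][:3]
print("Most likely dream content (category indices):", top_categories)
```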
They fed all this data—the 20 most common types of objects that each participant had seen in their dreams, as represented by thousands of images from the Web, along with the participants’ brain activity (from the MRI readouts) that occurred as a result of seeing them—into a learning algorithm, capable of improving and refining its model based on the data. When they invited the three sleepers back into the MRI to test the newly refined algorithm, it generated videos like the one below, producing groups of related images (taken from thousands on the web) and selecting which of the 20 groups of items (the words at bottom) it thought the person was most likely seeing, based on his or her MRI readings:
When they woke the subjects up this time and asked them to describe their dreams, it turned out that the machine’s predictions were better than chance, although by no means perfect. The researchers picked two classes of items—one the dreamer had reported seeing, and one he or she hadn’t—and checked, in the cases where the algorithm had flagged just one of the two, how often it had flagged the one the dreamer actually saw.
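One simple way to score that kind of two-alternative check is to compare the decoder’s confidence in the two candidate categories on each trial and count how often the reported category wins. The short function below does that on invented numbers; it is a sketch of the general idea, not the researchers’ analysis code.

```python
# Two-alternative scoring sketch: for each awakening, compare the decoder's
# score for a category the dreamer reported against its score for one they
# didn't, and count how often the reported category scores higher.
def pairwise_accuracy(trials):
    """trials: list of (score_for_reported_category, score_for_absent_category) pairs."""
    correct = sum(1 for reported, absent in trials if reported > absent)
    return correct / len(trials)

# Made-up example scores for five awakenings.
example_trials = [(0.62, 0.41), (0.35, 0.48), (0.71, 0.30), (0.55, 0.52), (0.40, 0.66)]
print(pairwise_accuracy(example_trials))  # 0.6 -> three of the five pairs decided correctly
```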
The algorithm got it right 60 percent of the time, a proportion the researchers say can’t be explained by chance. In particular, it was better at distinguishing visualizations from different categories than different images from the same category—that is, it had a better chance of telling whether a dreamer was seeing a person or a scene, but was less accurate at guessing whether a particular scene was a building or a street.
Although it’s only capable of relatively crude predictions, the system demonstrates something surprising: Our dreams might seem like subjective, private experiences, but they produce objective, consistent pieces of data that can be analyzed by others. The researchers say this work could be an initial foray into scientific dream analysis, eventually allowing more sophisticated dream interpretation during deeper stages of sleep.
March 28, 2013
For ants, the pheromone-laden foraging trails they leave behind are like lifelines: they direct the workers toward food hubs discovered earlier and help guide them back home to their nest.
These networks of trails can stretch for hundreds of feet, quite the achievement considering many worker ants are less than half an inch in length. One type of harvester ant can lay down a set of trails that stretch 82 feet from the entrance of its nest. The trails of a wood ant, an insect measuring just five millimeters (that’s one-fifth of an inch), reach 656 feet, with each trail branching out into more pathways at up to 10 spots. The leafcutter ant can build a network that spreads across almost two and a half acres.
Ant species such as these tend to take the shortest path between their colony’s nest and a food source, following branches that stray as little as possible from the direction in which they began their journey. The forks in their network of trails, known as bifurcations, are not symmetrical and don’t branch out into angles of the same size. But do ants use a sophisticated sense of geometry to trace their path, measuring the angles of the roads before picking one?
To learn more, researchers at the New Jersey Institute of Technology (NJIT) and the Research Centre on Animal Cognition in France used miniature robots to replicate the behavior of a colony of Argentine ants on the move, reported today in the journal PLOS Computational Biology. This ant species has extremely poor eyesight and darts around at high speeds, yet it can maneuver through corridor after corridor, from home to food and vice versa.
When no obstacles are around, ants prefer to walk in a straight line without deviating from their course. People are like that too: if we were walking down a street to a restaurant on the same side of the road as we are, we wouldn’t cross to the opposite sidewalk unless something was blocking our way. To give the robots this same tendency, the researchers programmed them to avoid obstacles and to follow light trails, which served as a substitute for pheromone-coated paths.
The 10 tiny robots in this study, called Alices, were then tasked with navigating a maze-like environment roughly 60 to 70 times their size, from a starting point representing a nest entrance to an end point signifying a food source. Two photoreceptors, mimicking ant antennae, detected beams of light. As the robots traveled through the maze, the researchers threw a wrench into the little machines’ plans—at random points in their journey, the robots were triggered to turn, a mechanism meant to further mimic ants’ meandering gaits as they creep along their paths. These random turns rotated the robots at angles no greater than 30 degrees, since real ants are not very efficient at physically making U-turns.
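A stripped-down version of those movement rules might look like the sketch below: wander with small random turns capped at 30 degrees, steer away from obstacles, and steer toward any detected light trail. The specific update rule and numbers are assumptions for illustration, not the controller actually running on the Alice robots.

```python
# Toy model of the ant-robot movement rules: a small random wobble each step,
# obstacle avoidance, and a pull toward light trails standing in for pheromones.
import math
import random

MAX_RANDOM_TURN = math.radians(30)   # random turns capped at 30 degrees

def next_heading(heading, obstacle_ahead, light_direction=None):
    # Meandering gait: add a small random turn to the current heading.
    heading += random.uniform(-MAX_RANDOM_TURN, MAX_RANDOM_TURN)
    if obstacle_ahead:
        # Veer away from the obstacle rather than reversing course.
        heading += math.radians(45) * random.choice([-1, 1])
    elif light_direction is not None:
        # Nudge the heading toward the detected light trail.
        heading += 0.5 * (light_direction - heading)
    return heading

# One simulated step: clear path ahead, light trail about 20 degrees to the left.
print(math.degrees(next_heading(0.0, obstacle_ahead=False, light_direction=math.radians(20))))
```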
In the sped-up video below, the researchers tested the Alices’ navigation skills in a complex network, charging them with choosing the shortest route between their “nest” (on the right) and a “food source” (left). Varying beams of light projected onto the maze changed the robots’ movements inside the network as their photoreceptors kicked into action.
The researchers found that, without any knowledge of the geometry of the maze, the robotic ants behaved exactly as real ants do: they made small random turns but moved in the same general direction. When the robots reached a fork in the road, this tendency led them to choose the path that deviated least from their initial trajectory, even though they weren’t equipped to measure any angles. When they detected a light trail, they turned to follow that path.
The researchers say this means that Argentine ants may not need complex cognitive processes to compute the geometry of various trails. But taking the fork in the road that leads to the shortest route to food greatly increases foraging success for an entire colony. So using pheromones along with an intuitive spatial sense of where food may be keeps ants on the right track; as more ants follow the path to food, pheromones become more concentrated along it, further helping to guide ants that have yet to make the trip. In fact, this method of choosing the correct fork in the road triples the amount of food ants bring back to their nest compared with relying on pheromones alone, says lead author Simon Garnier, a biology professor at NJIT.
“If you have only the pheromones and you don’t have this trick, you’re less efficient because you’re more likely to get the ants trapped into loops,” says Garnier, who runs the institute’s Swarm Lab, which studies insect group behavior. “So they will reinforce their path around the loop, and they’ll just get stuck in this loop and turn and turn forever.”
Such navigation may also help guide ants through underground paths that connect different parts of their nests. Replicating these natural navigation tools allows researchers to better understand the inner workings of collective animal behavior.
March 21, 2013
Designing a robot that can easily move across loose terrain—say, a rover meant to traverse the surface of Mars—poses a unique engineering challenge: Wheels commonly sink into what engineers call “flowable ground” (mixtures of sand, soil, mud and grass).
Given the many biologically inspired innovations in robotics, a team of researchers from Georgia Tech had an idea—to base a design on desert creatures such as zebra-tailed lizards that are able to scramble across a loose, sandy surface without slowing down. Their efforts allowed them to create this small six-legged device, presented in an article published today in Science, which can run across a granular surface in a way uncannily reminiscent of a reptile.
The research team, led by Chen Li, designed the device after studying the locomotion of various creatures and mathematically simulating the performance of different types of legs (varying in number, shape and length) in several distinct environments. They hope their research will spur the development of a field they’ve termed “terradynamics”—just as aerodynamics is concerned with the performance of winged vehicles in air, their field will study the motion of legged vehicles on granular surfaces.
To design their robot, they used these simulations to determine the exact leg lengths, movement speeds and levels of force that would propel devices across a loose surface without causing them to sink in too deeply. They then printed a variety of leg types with a 3D printer, and built robots to test them in the lab.
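As a rough illustration of that kind of design sweep, the sketch below scores combinations of leg parameters with a made-up performance function. The scoring function and parameter values are invented for illustration; the study’s terradynamic simulations model the actual forces exerted by granular material.

```python
# Illustrative sweep over hypothetical leg designs. The scoring function is a
# stand-in that rewards curved limbs and penalizes sinking; real terradynamic
# simulations compute granular forces in detail.
import itertools

leg_lengths_cm = [2, 3, 4]
stride_rates_hz = [1, 2, 3]
curvatures = [0.0, 0.5, 1.0]   # 0 = flat limb, 1 = strongly C-shaped

def simulated_speed(length, rate, curvature):
    sink_penalty = max(0.0, rate * length / 10.0 - curvature)   # deeper sinking hurts speed
    return rate * length * (0.5 + curvature) - sink_penalty

best = max(itertools.product(leg_lengths_cm, stride_rates_hz, curvatures),
           key=lambda params: simulated_speed(*params))
print("Best hypothetical design (length cm, rate Hz, curvature):", best)
```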
One of their most interesting findings is that the same types of design principles apply for locomotion on a variety of granular surfaces, including poppy seeds, glass beads and natural sand. Their simulations and real-world experiments revealed that C-shaped legs generally worked best, but that any type of bow-shaped limbs worked relatively well because they spread out the weight of the device over long (albeit narrow) leg surfaces as the legs come into contact with the ground over the course of a stride.
The applications of this kind of research are broad: This particular robot, the researchers say, could be developed into a useful search-and-rescue or scouting device, while the principles derived from the field of terradynamics could be useful in designing probes to explore other planets in the future. They could also help biologists better understand how life forms here on Earth have evolved to move across our planet’s surface.
March 20, 2013
Most of the research that goes into producing cutting-edge commercial technology goes on in corporate R&D departments, away from the public eye. Every so often, though, some of that work gets published in a scientific journal, giving us a sneak preview of the capabilities we might see in our smart phones and devices in the years to come.
That’s certainly the case with a study in this week’s issue of Nature, in which researchers from Hewlett-Packard detail their new invention: a mini 3D display that can be installed in a millimeter-thick piece of glass and works without special glasses. The system, the researchers say, can project static images or video in a range of colors.
In other words, when you buy a phone (if we’re still calling them “phones”) ten or twenty years from now, chances are good that it will be equipped with a 3D system like this one, letting you see textures and depth as if you were looking into the scene rather than at a device in your hand. The team presents the display’s features in the video below:
The system works, as all 3D displays do, by sending a different image to each of our eyes, exploiting the fact that we see the world in 3D in the first place because each eye receives a slightly different view of our surroundings. But this display’s means of accomplishing the feat—and thereby simulating, to our brain, an image with depth—is different from previous ones.
Glasses-based 3D systems use various filtering mechanisms to show each of our eyes a different view. Some have shutters that rapidly open and close for each eye, synced with alternating images on the screen intended for one eye or the other. A simpler, more common system (the one you’re probably familiar with if you’ve gone to see a 3D movie) involves a pair of glasses with a blue lens and a red one, so that each of two differently colored images on the screen reaches only one eye.
This new display, though, works without glasses, encoding the mechanism in the screen itself. It does this by reflecting light (produced along its edges) with specialized “grated pixels” that project light in several different directions, rather than straight to the eye. When you look at a screen tiled with grated pixels, each of your eyes sees a slightly different image that’s been projected from the screen, creating the illusion of depth no matter where you stand.
But the technology’s real trick is creating this illusion for a relatively broad viewing angle—in this case, one that’s 90 degrees wide. The Nintendo 3DS, by contrast, uses the same pixel-directed technique, but just sends out light in two directions, so it only works for a user located a particular distance from the machine, directly in the center, where the two beams of light intersect (as in part A of the image to the right). Because the 3DS is a gaming device, this isn’t much of a problem, because users typically hold it right in front of them at arm’s length when they play.
The new HP display, though, is intended to someday be part of smartphones and tablets, so the researchers wanted to create a 3D projection that multiple viewers can crowd around and see from a variety of angles. They did this by using grated pixels that can split light in 14 different directions, instead of merely two.
As a result, differently directed beams of light intersect at a number of spots in front of the display, so a user can stand nearly anywhere in front of it and still have each eye see a different image—and thus get the 3D illusion (as in part C of the image). The current technology does still leave some blind spots, but the researchers say they plan to increase the number of light directions from 14 to 64 in the future, further expanding the range of viewing angles the display covers.
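A toy calculation shows why splitting the light into more directions widens the usable viewing zone. The sketch below assumes the beams are spread evenly across a 90-degree fan, which is a simplification of the actual optics; it simply checks whether a viewer’s two eyes would intercept different beams, the condition for seeing a stereo pair.

```python
# Toy geometry check: with N beams spread evenly across a 90-degree fan, do a
# viewer's two eyes catch different beams? Even spacing and the distances used
# here are illustrative assumptions, not HP's actual optical design.
import math

def beam_index(eye_x, eye_z, n_directions=14, fan_degrees=90.0):
    angle = math.degrees(math.atan2(eye_x, eye_z))      # 0 degrees = straight ahead
    step = fan_degrees / (n_directions - 1)             # angular gap between beams
    return round((angle + fan_degrees / 2) / step)      # nearest beam

# Eyes about 6 cm apart, viewer 30 cm from the screen, slightly off-center.
left_eye, right_eye = beam_index(-0.03, 0.30), beam_index(0.03, 0.30)
print("Eyes see different beams:", left_eye != right_eye)  # True -> stereo (3D) view possible
```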
Of course, this is a proof-of-concept, not a technology ready for immediate industrial application, so it’ll probably be a little while before we start to see this sort of 3D display popping up in devices on the market. Still, the technology gives us a hint of what researchers are working on for the future—and suggests that R2-D2’s 3D hologram projector isn’t so far-fetched after all.
March 11, 2013
Odds are, when you decided to “like” a TV show, band, local business or product’s Facebook page, you didn’t imagine that that click would have much consequence. It might show your friends a bit about your interests, and occasionally cause status updates from the page to show up in your news feed.
“Likes,” however, are publicly available for anyone to see on Facebook, even people you haven’t approved as friends. And for a new study published today in the Proceedings of the National Academy of Sciences, a group of researchers created a computer program that can take a user’s “likes” and accurately infer a tremendous range of information about him or her—including age, ethnicity, IQ, political leanings, level of drug use and even sexual orientation.
For the study, the research group—a partnership between the Psychometrics Lab at the University of Cambridge and Microsoft Research Cambridge—analyzed the data of 58,000 American Facebook users who had chosen to supply their profiles and “likes” for analysis through Facebook’s myPersonality app. The researchers fed these “likes” into an algorithm, built for this project in particular, and then compared the model’s predictions on a range of characteristics to what they knew for certain about the users, who had submitted the contents of their Facebook profiles for analysis as well.
For each pair of traits examined—say, Caucasian or African-American, or Democrat or Republican—the researchers picked a pair of users, one belonging to each category, and the algorithm had to blindly pick which user fit which category based solely on their “likes.” It wasn’t perfect at inferring any of the categories, but it was uncannily accurate at predicting many, including some characteristics you probably wouldn’t assume could be guessed from your “likes.”
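One plausible way to set up that kind of test is to represent each user as a binary vector over pages (1 for each “like”), train a linear model to separate the two categories, and then compare the model’s scores for pairs of held-out users. The sketch below uses logistic regression on synthetic data; the encoding, model choice and numbers are all assumptions, not the researchers’ actual pipeline.

```python
# Sketch of trait prediction from "likes": users are binary vectors over pages,
# a linear model separates two categories, and accuracy is scored pairwise.
# The synthetic data and the choice of logistic regression are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_users, n_pages = 200, 50
likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = user "liked" that page
trait = rng.integers(0, 2, size=n_users)               # 0 or 1 for the two categories

model = LogisticRegression(max_iter=1000).fit(likes[:150], trait[:150])
scores = model.predict_proba(likes[150:])[:, 1]         # confidence that a user is in category 1

# Pairwise check: take one held-out user from each category and ask how often
# the category-1 user gets the higher score.
pos, neg = scores[trait[150:] == 1], scores[trait[150:] == 0]
pairwise = np.mean([p > n for p in pos for n in neg])
print("Pairwise accuracy on held-out users:", round(float(pairwise), 2))
```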
It correctly inferred, for example, which user was Caucasian and which was African-American 95 percent of the time, which was a Democrat and which a Republican 88 percent of the time, and which was Christian and which Muslim 82 percent of the time. A breakdown of its accuracy in predicting many of the considered traits (as a reminder, a value of 1 would signify that the model is 100 percent accurate) is below.
For most of the users, this level of accuracy didn’t depend on any obvious “likes” that one might link to the trait in question. For instance, fewer than 5 percent of the users identified as gay had “liked” gay marriage or other related pages.
The algorithm, instead, aggregated tons of seemingly unrelated “likes” to group users into classes that shared predictable similarities. By comparing “likes” to the results of a personality test (also part of the myPersonality app), the researchers found that users who “liked” “Thunderstorms,” “The Colbert Report,” “Science” or “Curly Fries” were all slightly more likely to have a high IQ than those who didn’t. Similarly, male users who “liked” “Mac Cosmetics” or “Wicked The Musical” were slightly more likely to be gay, whereas those who “liked” “Wu-Tang Clan” or “Shaq” were slightly less likely.
Analyzing all of a user’s “likes” enabled the algorithm to create an overall portrait of that person, but its accuracy depended heavily on the number of “likes” each user had. For those at the low end, with 1 to 10 “likes,” the predictions were no better than chance, but for those with 150 to 300 “likes,” the algorithm’s guesses about a user’s traits became markedly more accurate.
The researchers primarily conducted the study to show just how much our publicly available information can tell about us. You might not publicly post your sexual orientation, political views or whether you use drugs, but this sort of program can analyze your “likes” and make pretty accurate guesses regardless.
Although the users had submitted their “likes” and profiles for analysis via a third-party app, Facebook’s default privacy settings mean that your “likes” are public to anyone. Already, Facebook’s own algorithms use these likes to dictate what stories end up in users’ news feeds, and advertisers can access them to determine which are the most effective ads to show you as you browse.