Scientists Figure Out What You See While You’re Dreaming

A learning algorithm, coupled with brain scans, was able to predict the images seen by dreamers with 60 percent accuracy

A learning algorithm, coupled with MRI readings, was able to predict the images seen by dreamers with 60 percent accuracy. Image via Wikimedia Commons/Mark Sebastian

In today’s science-so-weird-it-absolutely-must-be-science-fiction contest, we have a clear winner: a new study in which a team of scientists used an MRI machine, a computer model and thousands of images from the internet to figure out what people see as they dream.

Unbelievable as it sounds, researchers from Kyoto, Japan, say that they’ve built something of a dream-reading machine, which learned enough about the neurological patterns of three research participants to predict their sleeptime visualizations with 60 percent accuracy. The study, published today in Science, is believed to be the first case in which objective data has been culled about the contents of a dream.

The seemingly extraordinary idea is built from a straightforward concept: that our brains follow predictable patterns as they react to different kinds of visual stimuli, and over time, a learning algorithm can figure out how to correlate each of these patterns with different classes of visualizations. A 2005 study by one of the researchers accomplished this in a much more primitive way—while subjects were awake—with a learning program correctly using functional MRI readings (fMRI measures blood flow to different parts of the brain) to determine in which direction a subject was looking.
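To make that correlation idea concrete, here is a minimal sketch of the core technique: a linear classifier trained to map patterns of voxel activity to stimulus classes. The data here is synthetic, and the model choice (a linear support vector machine via scikit-learn) is an illustrative assumption, not necessarily the study's exact pipeline.

```python
# Minimal sketch: decode stimulus class from (synthetic) fMRI voxel patterns.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_classes, trials_per_class, n_voxels = 20, 10, 500

# Fake "fMRI" data: each stimulus class nudges the voxels in its own direction.
class_effects = rng.normal(0, 1, size=(n_classes, n_voxels))
labels = np.repeat(np.arange(n_classes), trials_per_class)
patterns = class_effects[labels] + rng.normal(0, 2, size=(len(labels), n_voxels))

clf = LinearSVC(dual=False)  # a linear decoder over voxel activations
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```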

This study followed the same principle but took it in a much more ambitious direction, seeking to match actual images—not just visual directions—with fMRI readings, and do it while the subjects were asleep.

The research was done on three participants, each of whom took turns sleeping in an MRI scanner for a number of 3-hour blocks over the course of 10 days. The participants were also wired with an electroencephalography (EEG) machine, which tracks the overall level of electrical activity in the brain and was used to indicate what stage of sleep they were in.

The deepest, longest dreams occur during REM sleep, which typically begins after a few hours of sleeping. But quick, sporadic hallucinations also occur during stage 1 of non-REM sleep, which starts a few minutes after you drift off, and the researchers sought to track the visualizations during this stage.

As the fMRI machine monitored blood flow to different parts of their brains, the subjects drifted off to sleep; then, once the scientists noticed that they’d entered stage 1, they woke them up and asked them to describe what they had been seeing while dreaming. They repeated this process nearly 200 times for each of the participants.

Afterward, they recorded the 20 most common classes of items seen by each participant (“building,” “person” or “letter,” for example) and searched for photos on the Web that roughly matched the objects. They showed these images to the participants while they were awake, also in the MRI scanner, then compared the readings to the MRI readouts from when the people had seen the same objects in their dreams. This allowed them to isolate the particular brain activity patterns truly associated with seeing a given object from unrelated patterns that simply correlated with being asleep.

They fed all this data—the 20 most common types of objects that each participant had seen in their dreams, as represented by thousands of images from the Web, along with the participants’ brain activity (from the MRI readouts) that occurred as a result of seeing them—into a learning algorithm, capable of improving and refining its model based on the data. When they invited the three sleepers back into the MRI to test the newly refined algorithm, it generated videos like the one below, producing groups of related images (taken from thousands on the web) and selecting which of the 20 groups of items (the words at bottom) it thought the person was most likely seeing, based on his or her MRI readings:
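The pipeline described above—train on waking responses to web images, then score a new sleep-onset scan against every category—might look roughly like the following sketch. The data, the category names, and the logistic-regression decoder are all illustrative assumptions standing in for the paper's actual method.

```python
# Hedged sketch: train on waking-image scans, then rank categories for a dream scan.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
categories = ["building", "person", "letter"]  # ...up to 20 in the study
n_voxels = 300

# Synthetic waking-session responses: 50 image presentations per category.
prototypes = rng.normal(0, 1, size=(len(categories), n_voxels))
X = np.vstack([p + rng.normal(0, 1.5, size=(50, n_voxels)) for p in prototypes])
y = np.repeat(np.arange(len(categories)), 50)

decoder = LogisticRegression(max_iter=2000).fit(X, y)

# A "dream" scan taken just before awakening: here, a noisy "person" pattern.
dream_scan = prototypes[1] + rng.normal(0, 1.5, size=n_voxels)
probs = decoder.predict_proba(dream_scan.reshape(1, -1))[0]
for name, p in sorted(zip(categories, probs), key=lambda t: -t[1]):
    print(f"{name}: {p:.2f}")
```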

When they woke the subjects up this time and asked them to describe their dreams, it turned out that the machine’s predictions were better than chance, although by no means perfect. The researchers repeatedly picked two classes of items, one the dreamer had reported seeing and one he or she hadn’t, and checked how often the algorithm scored the reported class as the more likely of the two.

The algorithm got it right 60 percent of the time, a proportion the researchers say can’t be explained by chance. In particular, it was better at distinguishing visualizations from different categories than different images from the same category—that is, it had a better chance of telling whether a dreamer was seeing a person or a scene, but was less accurate at guessing whether a particular scene was a building or a street.
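In code, that pairwise check might look like the sketch below. The decoder scores are simulated, and the exact bookkeeping is an assumption based on the description above; the point is only to show why 50 percent, not zero, is the chance baseline for this kind of test.

```python
# Sketch of the pairwise evaluation: reported category vs. an unreported one.
import numpy as np

rng = np.random.default_rng(2)
n_awakenings, n_classes = 200, 20

# Toy decoder outputs: one score per category per awakening, with the
# reported category given a modest bump so accuracy lands above chance.
reported = rng.integers(0, n_classes, size=n_awakenings)
scores = rng.normal(0, 1, size=(n_awakenings, n_classes))
scores[np.arange(n_awakenings), reported] += 0.4

wins = 0
for i in range(n_awakenings):
    # Pick a category the dreamer did NOT report, at random.
    absent = rng.choice(np.delete(np.arange(n_classes), reported[i]))
    if scores[i, reported[i]] > scores[i, absent]:
        wins += 1

print(f"pairwise accuracy: {wins / n_awakenings:.2f}  (chance = 0.50)")
```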

Although it’s only capable of relatively crude predictions, the system demonstrates something surprising: Our dreams might seem like subjective, private experiences, but they produce objective, consistent pieces of data that can be analyzed by others. The researchers say this work could be an initial foray into scientific dream analysis, eventually allowing more sophisticated dream interpretation during deeper stages of sleep.
