Brain imaging reveals the movies in our minds

Scientists at the University of California, Berkeley, have managed to decode and reconstruct dynamic visual experiences processed by the human brain.

Currently, researchers are only able to reconstruct movie clips people have already viewed. 

However, the breakthrough is expected to pave the way for reproducing the movies inside our heads that no one else sees – such as dreams and memories. 

“This is a major leap toward reconstructing internal imagery,” explained Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today in the journal Current Biology. “We are opening a window into the movies in our minds.”

According to Gallant, practical applications of the technology could eventually include a better understanding of what is happening in the minds of those who cannot communicate verbally, such as stroke victims, coma patients and individuals with neurodegenerative diseases. 

It may also lay the groundwork for brain-machine interfaces, so that people with cerebral palsy or paralysis can guide computers with their minds.

Nevertheless, researchers emphasize that the brain imaging technology is “decades” away from being able to read people’s thoughts and intentions, a theme prevalent in numerous dystopian science fiction books.

Yet the achievements of Gallant’s team are particularly impressive, since the scientists actually decoded brain signals generated by moving pictures.

“Our natural visual experience is like watching a movie,” said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant’s lab. 

“In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences.”

Nishimoto and two other research team members served as subjects for the experiment, as the procedure requires volunteers to remain still inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie trailers while an fMRI scanner measured blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or “voxels.”
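
In analysis code, that division usually amounts to flattening each three-dimensional scan into one row of a time-by-voxels matrix. The sketch below is a generic illustration with invented array sizes, not the study’s actual preprocessing:

```python
import numpy as np

# One scan per time point: a 3-D grid of voxels (sizes invented).
T, nx, ny, nz = 600, 32, 32, 16
scans = np.zeros((T, nx, ny, nz), dtype=np.float32)

# Flatten each volume so every voxel becomes one column of a
# (time x voxels) matrix, the form decoding models typically work with.
Y = scans.reshape(T, -1)   # shape (600, 16384)
```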

Brain activity recorded while the subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate the visual patterns in the movies with the corresponding brain activity.
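
At heart, that learning step is a regression: for each voxel, find weights that map the movie’s visual features onto the voxel’s measured response. The published model is built on motion-energy features with a carefully regularized fit; the sketch below only illustrates the idea with ridge regression on synthetic data, and every size and parameter is a placeholder.

```python
import numpy as np

# Illustrative sizes only: T seconds of training movie, F visual
# features per second, V voxels of recorded activity.
T, F, V = 3600, 200, 500
rng = np.random.default_rng(0)
X_train = rng.standard_normal((T, F))   # movie features, second by second
Y_train = rng.standard_normal((T, V))   # measured activity for each voxel

def fit_encoding_model(X, Y, alpha=10.0):
    """Ridge regression: one weight vector per voxel, mapping visual
    patterns in the movie to that voxel's response."""
    # Closed-form solution of  argmin ||XW - Y||^2 + alpha * ||W||^2
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)

W = fit_encoding_model(X_train, Y_train)   # shape (F, V)
```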

Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube video into the computer program so that it could predict the brain activity each one-second clip would most likely evoke in each subject.
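
Run in reverse, the same weights act as a predictor: given the features of any candidate clip, they say what voxel pattern it should evoke, and candidates can be ranked by how well those predictions correlate with what the scanner actually recorded. Continuing the sketch above, with a small synthetic library standing in for the YouTube footage:

```python
# Hypothetical candidate library: features for one second of each of
# 10,000 clips, a stand-in for the 18 million seconds of YouTube video.
library = rng.standard_normal((10_000, F))
observed = rng.standard_normal(V)    # activity evoked by a test clip

predicted = library @ W              # predicted activity for every clip

# Correlate each clip's predicted pattern with the observed pattern.
pred_mean = predicted.mean(axis=1, keepdims=True)
pred_std = predicted.std(axis=1, keepdims=True)
pred_z = (predicted - pred_mean) / pred_std
obs_z = (observed - observed.mean()) / observed.std()
scores = pred_z @ obs_z / V          # Pearson correlation per candidate
ranking = np.argsort(scores)[::-1]   # best-matching clips first
```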

Finally, the 100 clips that the computer program judged most likely to match what the subject had actually seen were merged to produce a blurry yet continuous reconstruction of the original movie.
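
The merging step itself is simple: average the frames of the best-matching candidates, optionally weighted by their scores. That averaging is what gives the published reconstructions their characteristic blur. Continuing the sketch, with invented frame sizes:

```python
# Pretend the 100 best candidates have been fetched as pixel frames
# (count x height x width x RGB; purely synthetic stand-ins here).
top = ranking[:100]
frames = rng.random((100, 96, 128, 3))

# Weight better-matching clips more heavily, then average frame-wise.
weights = scores[top] - scores[top].min() + 1e-6
reconstruction = np.average(frames, axis=0, weights=weights)
```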

Reconstructing movies from brain scans has long been a challenge because the blood flow signals measured by fMRI change much more slowly than the neural signals that encode dynamic information in movies. As a result, most previous attempts to decode brain activity have focused on static images.

“We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals,” Nishimoto added.
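
That two-stage idea can be caricatured in a few lines: a fast neural signal driven by the movie, convolved with a slow hemodynamic response function (HRF). The double-gamma HRF below is a textbook stand-in, not the coupling the study actually fit:

```python
import numpy as np
from math import gamma

def canonical_hrf(dt=1.0, duration=32.0):
    """Textbook double-gamma HRF: blood flow peaks roughly 5 s after
    the neural activity that drives it, then briefly undershoots."""
    t = np.arange(0.0, duration, dt)
    peak = t**5 * np.exp(-t) / gamma(6)
    undershoot = t**15 * np.exp(-t) / gamma(16)
    h = peak - undershoot / 6.0
    return h / h.sum()

# Stage 1 (fast): the neural population response to the movie.
rng = np.random.default_rng(1)
neural = rng.random(300)          # e.g. second-by-second visual drive

# Stage 2 (slow): fMRI sees that signal smeared through the HRF.
bold = np.convolve(neural, canonical_hrf())[: neural.size]
```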

Ultimately, Nishimoto said, scientists want to understand how the brain processes dynamic visual events that are experienced in everyday life.

“We need to know how the brain works in naturalistic conditions… For that, we need to first understand how the brain works while we are watching movies.”