This is pretty cool. Berkeley scientists are starting to be able to reconstruct videos from brain activity.
They are very loose approximations based on existing video footage, but still cool.
The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video.
(In brief, the algorithm processes each of the 18 million clips through a model of each individual brain and identifies the clips whose predicted brain activity most closely matches the measured activity. The clips used to fit the model, those used to test the model, and those used to reconstruct the stimulus were entirely separate.) Brain activity was sampled once per second, and each one-second segment of the viewed movie was reconstructed separately.
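To make the matching step concrete, here is a toy sketch of the idea, not the lab's actual pipeline: a per-subject encoding model predicts voxel activity for each library clip, and clips are ranked by how well their predicted activity correlates with the measured activity. The sizes, the random "features," and the linear model are all placeholder assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1,000 library clips (standing in for the 18 million),
# each summarized by a 50-dim feature vector, and 200 recorded voxels.
n_clips, n_feat, n_voxels = 1000, 50, 200
clip_features = rng.standard_normal((n_clips, n_feat))

# Assumed linear encoding model, fit per subject on separate training data:
# predicted voxel activity = clip features @ weights.
weights = rng.standard_normal((n_feat, n_voxels))
predicted = clip_features @ weights          # shape (n_clips, n_voxels)

# Measured fMRI activity for one 1-second segment of the viewed movie.
measured = rng.standard_normal(n_voxels)

# Score each library clip by the Pearson correlation between its predicted
# activity pattern and the measured pattern.
pred_z = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
meas_z = (measured - measured.mean()) / measured.std()
scores = pred_z @ meas_z / n_voxels          # one correlation per clip

# Keep the best-matching clips; a reconstruction can then be built by
# averaging clips like these for each one-second segment.
top_k = np.argsort(scores)[::-1][:100]
```

The key property is that the measured activity is never compared to the clips directly, only to what the subject's own model predicts each clip would have evoked.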