9/24/2011

Computer Reconstructs Movie Scenes From Brain Scans



Source: Huffington Post
By MALCOLM RITTER
Sep. 22, 2011


NEW YORK -- It sounds like science fiction: While volunteers watched movie clips, a scanner watched their brains. And from their brain activity, a computer made rough reconstructions of what they viewed.

Scientists reported that result Thursday and speculated such an approach might be able to reveal dreams and hallucinations someday.

In the future, it might help stroke victims or others who have no other way to communicate, said Jack Gallant, a neuroscientist at the University of California, Berkeley, and co-author of the paper.

He believes such a technique could eventually reconstruct a dream or other made-up mental movie well enough to be recognizable. But the experiment dealt with scenes being viewed through the eyes at the time of scanning, and it's not clear how much of the approach would apply to scenes generated by the brain instead, he said.

People shouldn't be worried about others secretly eavesdropping on their thoughts in the near future, since the technique requires a person to spend long periods in an MRI machine, he noted.

Another expert said he expected any mind-reading capability would appear only far in the future.

For now, the reconstructed movie clips are only crude representations, loosely mimicking shapes and movement, but not nearly detailed enough to show that a blurry human-like figure represents the actor Steve Martin, for example.

The new work was published online Thursday by the journal Current Biology. It's a step beyond previous work that produced similar results with still images.

The paper reports results from the brain scans of three of the co-authors, who served as the study's subjects because participants had to be motivated enough to lie motionless in an MRI machine for hours and stay alert while staring at a tiny dot, Gallant said. The machine was used for a technique called functional MRI, or fMRI. Unlike ordinary MRI, which reveals anatomy, fMRI shows brain activity.

The first task was to teach the computer how different parts of each subject's brain responded to scenes of moving objects.

Participants stared at a dot to keep their eyes still as movie clips lasting 10 to 20 seconds unfolded in the background. That went on for two hours as the MRI machine tracked activity in their brains.

The study focused on parts of the brain that respond to simple features like shapes and movement, rather than other parts that identify objects. So it was limited to "only the most basic parts of vision," Gallant said.
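In practical terms, that first step amounts to fitting an "encoding model" for each voxel: learning how the measured brain activity relates to simple visual features extracted from the training movies. The published study used a more elaborate motion-energy model; the Python sketch below is only a simplified illustration of the idea, and the ridge-regression fit, function names, and array shapes are assumptions rather than the authors' actual pipeline.

# Simplified sketch of the "encoding model" step: learn, for each voxel,
# how measured fMRI activity relates to simple visual features (shapes,
# motion) extracted from the training movies. The real study used a
# motion-energy model; plain ridge regression stands in for it here.
import numpy as np

def fit_encoding_model(features, responses, alpha=1.0):
    """features: (n_timepoints, n_features) movie features
    responses: (n_timepoints, n_voxels) fMRI signal per voxel
    Returns a weight matrix mapping features -> predicted voxel activity."""
    n_features = features.shape[1]
    # Ridge regression in closed form: W = (X^T X + alpha*I)^-1 X^T Y
    gram = features.T @ features + alpha * np.eye(n_features)
    weights = np.linalg.solve(gram, features.T @ responses)
    return weights  # shape: (n_features, n_voxels)

def predict_responses(features, weights):
    """Predict the brain activity a new clip's features would evoke."""
    return features @ weights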

Next, the question was: Could the computer use that brain activity information to reconstruct what appeared in the movie clips?

To test that, researchers fed the computer 18 million one-second YouTube clips that the participants had never seen. They asked the computer to predict what brain activity each of those clips would evoke.

Then they asked it to reconstruct the movie clips using the best matches it could find between the YouTube scenes and the participants' brain activity.

The reconstructions are blends of the YouTube snippets, which makes them blurry. Some are better than others. If a human appeared in the original clip, a human form generally showed up in the reconstruction. But one clip that showed elephants walking left to right led to a reconstruction that looked like "a shambling mound," Gallant said. The YouTube clips hadn't shown elephants and so "we just had to make do with what we had."
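The matching-and-blending step can be illustrated the same way: rank the candidate YouTube clips by how closely the brain activity their features are predicted to evoke correlates with the activity actually measured, then average the frames of the top matches. Again, this is only a hedged sketch; the study's real decoder used a Bayesian framework, and the correlation ranking, simple frame averaging, and names below are illustrative assumptions.

# Simplified sketch of the "reconstruction" step: rank a library of candidate
# clips by how well their predicted brain activity matches the activity
# actually measured, then average the top matches frame by frame. Blending
# many roughly matching clips is what makes the output blurry.
import numpy as np

def reconstruct(measured_activity, candidate_features, candidate_frames,
                weights, n_best=100):
    """measured_activity: (n_voxels,) fMRI response to the unknown clip
    candidate_features: (n_clips, n_features) features of the library clips
    candidate_frames: (n_clips, height, width) one representative frame per clip
    weights: encoding-model weights, e.g. from fit_encoding_model above
    Returns a blurry 'reconstruction' as the average of the best-matched frames."""
    predicted = candidate_features @ weights  # (n_clips, n_voxels)
    # Correlate each clip's predicted activity with the measured activity.
    pred_z = predicted - predicted.mean(axis=1, keepdims=True)
    pred_z /= predicted.std(axis=1, keepdims=True) + 1e-12
    meas_z = (measured_activity - measured_activity.mean()) / (measured_activity.std() + 1e-12)
    scores = pred_z @ meas_z / measured_activity.size
    best = np.argsort(scores)[::-1][:n_best]
    return candidate_frames[best].mean(axis=0)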

The quality could be improved by better techniques to blend human forms, as well as a bigger storehouse of moving images, he said.

Still, the overall results are "one of the most impressive demonstrations of the scientific knowledge of how the visual system works," said Marcel Just, director of the Center for Cognitive Brain Imaging at Carnegie Mellon University.

"I'd give 50 or 100 dollars to see dreams of mine with that (current level of) quality," said Just, who didn't participate in the new study.

Perhaps the technique could be used someday to provide helpful brain stimulation to people who have trouble processing visual information, he said.

Michael Tarr, co-director of the Center for the Neural Basis of Cognition, a joint venture of Carnegie Mellon and the University of Pittsburgh, called the work a "cool demonstration" of how scientists can use fMRI to study the brain.

"I don't think people should interpret this as a precursor to mind-reading," said Tarr, who didn't participate in the work. "The level of knowledge we'd have to have about the brain before we could even think about seeing whether mind-reading would work is decades, if not centuries, away."


