Source: The Economist
Oct 29 2011
If you think the art of mind-reading is a conjuring trick, think again. Over the past few years, the ability to connect first monkeys and then men to machines in ways that allow brain signals to tell those machines what to do has improved by leaps and bounds. In the latest demonstration of this, just published in the Public Library of Science, Bin He and his colleagues at the University of Minnesota report that their volunteers can successfully fly a helicopter (admittedly a virtual one, on a computer screen) through a three-dimensional digital sky, merely by thinking about it. Signals from electrodes taped to the scalp of such pilots provide enough information for a computer to work out exactly what the pilot wants to do.
That is interesting and useful. Mind-reading of this sort will allow the disabled to lead more normal lives, and the able-bodied to extend their range of possibilities still further. But there is another kind of mind-reading, too: determining, by scanning the brain, what someone is actually thinking about. This sort of mind-reading is less advanced than the machine-controlling type, but it is coming, as three recently published papers make clear. One is an attempt to study dreaming. A second can reconstruct a moving image of what an observer is looking at. And a third can identify the topic someone is pondering.
First, dreams. To study them, Martin Dresler, of the Max Planck Institute of Psychiatry, in Munich, and his colleagues recruited a group of what are known as lucid dreamers. They report their results in this week’s Current Biology.
A lucid dream is one in which the person doing the dreaming is aware that he is dreaming, and can control his actions almost as if he were awake. Most people have lucid dreams occasionally. A few, though, have them often—and some have become good at manipulating the process. Dr Dresler co-opted six self-professed practitioners of the art for his experiment. He asked them to perform, in their dreams, a simple action whose neurological traces in a brain scan are well understood. This action was to clench either their right or their left hand into a fist. The test would be to see if Dr Dresler’s brain scanner could reliably tell the difference.
Once a volunteer had dozed off and begun dreaming, he was to shift his eyes from left to right twice, to show he was ready to begin the experiment. (Unlike other parts of the body, which become limp in the phase of sleep during which dreams occur, the eyes continue to twitch. Indeed, this phase is known as rapid-eye-movement sleep.) After this signal, he clenched his left hand in his dream ten times, and then his right hand. (His real hands, of course, remained motionless.) He indicated the end of each set of clenches by turning his eyes as before. A trial was deemed a success if at least four sets of alternate clenches were performed in this way.
At first, only one participant managed to meet these exacting criteria, though he did so on two occasions. Dr Dresler speculated that the reason was his chosen brain scanner. He was using a functional magnetic-resonance imaging (fMRI) machine. This is the best sort of scanner, but it makes a terrible racket and so is not conducive to dreamy slumber. Replacing fMRI with a slightly less accurate technique called near-infra-red spectroscopy produced two further successful trials involving a different volunteer.
Both techniques were able to see the brain acting to clench a volunteer’s fist in his dream in exactly the way that it does when ordering fist-clenching in reality. This might not seem a big deal, but it is the first time science has proved what was hitherto mere speculation: that the brain, when dreaming, behaves like the brain when awake. In principle, then, it might be possible to “read” dreams as they are happening, and thus perhaps solve one of the great mysteries of biology: what, exactly, is dreaming for?
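To make the idea concrete, here is a minimal sketch, in Python, of how one might test whether scanner readouts from the sensorimotor cortex reliably separate left-hand from right-hand clenches. The data, sizes and names below are invented for illustration; this is not the study's own analysis.

```python
# Hypothetical sketch: can scanner signals tell a dreamed left-hand clench
# from a right-hand one? Assumes a pre-extracted array of sensorimotor-cortex
# activity patterns, one row per clench set, with labels "left"/"right".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sets, n_voxels = 40, 200                        # made-up sizes for illustration
patterns = rng.normal(size=(n_sets, n_voxels))    # stand-in for real scan data
labels = np.array(["left", "right"] * (n_sets // 2))

# Cross-validated accuracy well above 50% would indicate that the imagined
# clenches leave a readable trace, which is what the study reports.
scores = cross_val_score(LogisticRegression(max_iter=1000), patterns, labels, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```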
Though it may seem a stretch to suggest that the mind of a dreamer could be read in this way, it is not. For the second paper of the trio, published in Current Biology in September, shows that it is now possible to make a surprisingly accurate reconstruction, in full motion and glorious Technicolor, of exactly what is passing through an awake person’s mind.
This study was done by Jack Gallant of the University of California, Berkeley. In the name of science, three members of Dr Gallant’s team each endured two sessions of fMRI while watching assorted film trailers. The researchers chose to experiment on themselves, rather than calling for volunteers, because the experiment required them to sit perfectly still in an fMRI machine for long periods. Two hours of being bombarded with excerpts from such treats as the remake of “The Pink Panther”, they decided, would be too brutal a procedure to visit on innocent outsiders.
[Pictures omitted: frames from the trailers alongside their fMRI-based reconstructions. Original caption: "Which will be the critics' choice?"]
Psychodrama
Unlike Dr Dresler, who focused on the sensorimotor cortex, which controls movement, Dr Gallant and his team looked at the visual cortex. Their method depended on the brute power of modern computing. They compared the film trailers frame by frame with fMRI images recorded as those trailers were being watched, and looked for correlations between the two. They then fed their computer 5,000 hours of clips from YouTube, a video-sharing website, and asked it to predict, based on the correlations they had discovered, what the matching fMRI pattern would look like.
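In outline, that first step amounts to fitting what researchers call an encoding model. The sketch below is a loose illustration of the idea rather than the Berkeley group's actual pipeline: a plain linear regression stands in for their model, and every name, size and number is invented.

```python
# Illustrative sketch of the encoding step: fit a linear map from clip
# features to each voxel's response, then predict what fMRI pattern any
# new clip should evoke. All shapes and data are placeholders.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_train_frames, n_features, n_voxels = 500, 64, 300              # made-up sizes
train_features = rng.normal(size=(n_train_frames, n_features))   # stand-in for frame descriptors
train_fmri = rng.normal(size=(n_train_frames, n_voxels))         # recorded responses

encoder = Ridge(alpha=1.0).fit(train_features, train_fmri)       # one linear fit per voxel

# For every clip in a large library (YouTube footage, in the study), predict
# the hypothetical fMRI pattern it would produce in this viewer's visual cortex.
library_features = rng.normal(size=(5000, n_features))
predicted_patterns = encoder.predict(library_features)
```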
Having done that, they each endured a further two hours in the machine, watching a new set of trailers. The computer looked at the reactions of their visual cortices and picked, for each clip, the 100 bits of YouTube footage whose corresponding hypothetical fMRI pattern best matched the real one. It then melded these clips together to produce an estimate of what the real clip looked like. As the pictures above show, the result was often a recognisable simulacrum of the original. It also moved (watch at gallantlab.org) in the same way as the clip it was based on.
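The matching-and-blending step can likewise be sketched in a few lines. Again, the shapes and data below are placeholders, and the predicted patterns would in practice come from an encoding model of the kind sketched above.

```python
# Illustrative sketch of the reconstruction step: given the fMRI pattern an
# unseen clip actually evoked, rank a library of clips by how well their
# predicted patterns match it, and blend the best 100. Data are stand-ins.
import numpy as np

rng = np.random.default_rng(2)
n_library, n_voxels, height, width = 5000, 300, 32, 32
predicted_patterns = rng.normal(size=(n_library, n_voxels))       # per-clip predictions
library_frames = rng.random(size=(n_library, height, width, 3))   # per-clip representative frames
observed_pattern = rng.normal(size=n_voxels)                      # what the viewer's cortex did

# Correlate the observed pattern with every predicted one, take the 100 closest...
corr = np.array([np.corrcoef(observed_pattern, p)[0, 1] for p in predicted_patterns])
top100 = np.argsort(corr)[-100:]

# ...and average their frames to get a rough reconstruction of the clip being watched.
reconstruction = library_frames[top100].mean(axis=0)
```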
The third study, published in August in Frontiers in Human Neuroscience by Francisco Pereira and his colleagues at Princeton University, used a technique similar to Dr Gallant’s to perform an equally impressive trick. Rather than recreating images, Dr Pereira was able to determine what topics people were pondering. To do this, he re-examined data collected during an experiment conducted in 2008, in which nine volunteers had been shown labelled pictures of 60 objects, and then had their brains scanned as they were asked to imagine those same objects.
Dr Pereira divided the data in two. He used half to generate his hypothesis and half to test it. Though his pattern-detection algorithms could not distinguish exactly which objects the volunteers had seen, they managed a task that was only slightly less demanding. They could work out what type of object something was. In other words, they could not distinguish a carrot from a stick of celery, but could say that it was a vegetable.
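A hedged sketch of that split-and-test procedure, with made-up stand-ins for the scans and labels, might look like this:

```python
# Sketch of the split-and-test idea: learn on half the scans, then see whether
# the held-out half can be sorted into broad categories, even if the exact
# object is out of reach. Data and sizes are invented, not the study's own.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_scans, n_voxels = 360, 500                      # e.g. 9 volunteers x 60 objects; sizes illustrative
scans = rng.normal(size=(n_scans, n_voxels))
categories = rng.choice(["animal", "tool", "vegetable", "vehicle"], size=n_scans)

half = n_scans // 2                               # one half to fit the model, the other to test it
clf = LogisticRegression(max_iter=1000).fit(scans[:half], categories[:half])
accuracy = clf.score(scans[half:], categories[half:])
print(f"held-out category accuracy: {accuracy:.2f}")   # chance level here is 0.25
```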
The similarity to Dr Gallant’s study came from the way the categories were established. This was done by pillaging another huge website, Wikipedia, to find out how the names of objects tend to cluster together in the online encyclopedia’s articles. Dr Pereira found that they appear to cluster in similar ways in the brain, and to produce enough shared neural characteristics there for the clustering to be detectable.
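The clustering idea can be illustrated, very loosely, in the same spirit: count where object names turn up together in a body of text, then group names whose profiles look alike. The snippets below are invented placeholders rather than real Wikipedia articles, and the code is a sketch of the general approach, not the paper's method.

```python
# Loose illustration: build an occurrence table of object names across articles,
# then cluster names with similar profiles. Vegetables and tools should separate.
import numpy as np
from sklearn.cluster import KMeans

objects = ["carrot", "celery", "hammer", "screwdriver"]
articles = [
    "carrot and celery are root and stalk vegetables",
    "a hammer and a screwdriver sit in most toolboxes",
    "celery soup sometimes includes carrot",
]

# Rows: objects; columns: articles; entries: does the name appear there?
occurrence = np.array([[name in text for text in articles] for name in objects], dtype=float)

# Objects with similar occurrence profiles land in the same cluster, which is
# the sort of grouping the study then matched against patterns of brain activity.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(occurrence)
print(dict(zip(objects, labels)))
```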
Mind-reading, then, has become a reality. It is crude. The results would not stand up in court—yet. But, as the Franck report said of America’s first atom bomb, the thing does work.