Tuesday, January 4, 2011

The Quest to Read the Human Mind

If a few very smart neuroscientists are right, with enough number crunching and a powerful brain scanner, science can pluck pictures -- and maybe one day even thoughts -- directly from your brain

By Lisa Katayama / Source: Popular Science

It's after dark on a warm Monday night in April, and I'm lying face-up in a 13-ton tube at the Henry H. Wheeler, Jr. Brain Imaging Center at the University of California at Berkeley. The room is dimly lit, and I am alone. A white plastic cage covers my face, and a blue computer screen shines brightly into my eyes.

I'm here because a neuroscientist named Jack Gallant is about to read my mind. He has given me strict instructions not to move; even the slightest twitch could affect the accuracy of what he's about to do. As I stare straight up, I notice an itch on my thigh. Don't scratch it, I tell myself. I try to keep my thoughts blank as the beeping gets faster and the fMRI machine -- the scanner that will detect changes in blood flow in my brain -- powers up.

Gallant assures me that the random thoughts in my head will not affect his results. Today he's just concerned with what I see and how that registers in the visual cortex, a region at the back of the brain that processes what my eyes take in. It doesn't matter that I'm thinking about what to eat for dinner, or that I'm worried about getting a parking ticket on Oxford Street. The only important thing, he says, is for me to keep as still as possible, and soon he'll have enough information to re-create the pictures I've been staring at without ever having seen the images himself.

For the past 10 years, Gallant has been running a neuroscience and psychology lab at Berkeley dedicated to brain imaging and vision research. He's one of a few neuroscientists in the world on the verge of unlocking mind reading through brain-pattern analysis, using magnetic resonance scans and algorithms. By showing me a series of random photographs and evaluating fMRI readings from my primary visual cortex, Gallant says his technique can reconstruct imagery stored in my brain. His current method takes hours of analysis, but his objective is to hone the technology to the point where it can deduce what people are seeing in real time.

If successful, it could influence the way we do just about everything. Mind-reading machines could help doctors understand the inner worlds of people with hallucinations, cognitive disabilities, post-traumatic stress disorder and other impairments. Judges could use them to peer into suspects' minds, having them mentally reenact an event and reading the resulting imagery.

Such machines could also determine whether someone using the insanity defense is faking it, or whether someone claiming self-defense truly feared for his life. On the flip side, the technology raises serious ethical concerns, with critics worrying that it could one day make our private thoughts vulnerable to snoops and hackers.

I ponder all this as I lie motionless in the brain scanner, staring straight ahead while Gallant and two of his lab researchers flash several dozen photographs in front of my eyes, a few seconds at a time. I see sheep grazing in a meadow, a rock formation, a pond and a profile of a guy who looks like Einstein. I'm not actually supposed to be looking at these pictures -- my job is to stare at the white dot in the middle of the screen. "Seeing" doesn't happen entirely in the conscious realm, Gallant explains. The visual cortex works like a camera, automatically absorbing information through the retina and registering the imagery in the brain.

Ten minutes feels like an eternity, but finally the fMRI announces the conclusion of its program with another loud beep. The researchers release me from my restraints and escort me to the control room, where a giant monitor is displaying 30 scanned images of my brain from different angles. I see bunches of white squiggly lines and light gray V shapes inside rows of gray circles. "That's it? That's my brain?" I ask, my head foggy from having tried so hard to stay still. It surprises me that all the goings-on in my mind can be reduced to a bunch of geometric shapes. Gallant tells me that brain activity is basically just a bunch of neurons firing -- an estimated 300 million in the primary visual cortex alone, according to the latest research.

To help make sense of the shapes, the brain scanner divides them up into a grid of three-dimensional, cube-like structures called volume pixels, or voxels. To me, each voxel looks like a random mix of whites, grays and blacks. But to Gallant's computer model, which extracts far more precise data from those shades, the voxels are a meaningful matrix of numbers. By crunching this matrix, it can transform the shapes back into a remarkably accurate rendering of the Einstein look-alike or the grazing sheep. Gallant and his team didn't have time to generate enough scans of my brain to make their algorithm work, but they showed me some convincing results from other volunteers. "It's not perfect," says Shinji Nishimoto, one of Gallant's postdocs, "but we're getting pretty close."
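
In code, that decoding step can be sketched very roughly. The snippet below is a minimal illustration, not Gallant's actual model: the dimensions are invented, and the weight matrix stands in for one that would be learned from many training scans. It just shows the basic move of flattening a scanned volume into a voxel vector and mapping it back to pixel space.

    import numpy as np

    # Invented dimensions: a small scanned volume and a tiny image.
    VOLUME_SHAPE = (10, 10, 10)   # 1,000 voxels
    IMAGE_SHAPE = (32, 32)        # 1,024 pixels

    rng = np.random.default_rng(0)

    # Placeholder for a decoding matrix; in a real pipeline these weights
    # would be learned from many (image, scan) training pairs.
    W = rng.normal(size=(np.prod(IMAGE_SHAPE), np.prod(VOLUME_SHAPE)))

    def decode(volume):
        """Flatten a 3-D scan into a voxel vector and map it to pixels."""
        voxels = volume.reshape(-1)   # the matrix of numbers the model crunches
        pixels = W @ voxels           # linear readout into image space
        return pixels.reshape(IMAGE_SHAPE)

    scan = rng.normal(size=VOLUME_SHAPE)   # placeholder fMRI volume
    print(decode(scan).shape)              # (32, 32)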

As I leave the lab, my thoughts secure in my head, I feel a bit uneasy knowing that they may not stay that way for long. Gallant's "neural decoding" -- a term he prefers to "mind reading" -- is getting faster and more sophisticated all the time. In fact, last October, his lab managed to re-create entire video clips just by analyzing the brain patterns of people watching them. In one example, a reconstructed video of an elephant walking through the desert shows a blotchy Dumbo-shaped mass plodding across the screen. The fine details are lost, but the rendering is nonetheless impressive for having been pulled from someone's brain. And it's not just Gallant who's making progress. Using similar technology, other researchers are unlocking memories and dreams.

Beyond the fuzzy realm of the paranormal, mind reading could simply be a question of having the right tools. "As long as we have good measurements of brain activity and good computational models of the brain," Gallant wrote in a supplement to a paper he published in Nature in 2008, "it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery."

What's on Your Mind?

Remarkably, scientists can predict with near-perfect accuracy the last thing you saw just by analyzing your brain activity. The technique is called neural decoding. To do it, scientists must first scan your brain while you look at thousands of pictures. A computer then analyzes how your brain responds to each image, matching brain activity to various details like shape and color. Over time, the computer establishes a sort of master decoding key that it can later use to identify and reconstruct almost any object you see without the need to analyze the image beforehand.
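
A toy version of that training-and-identification loop might look like the following sketch. Everything here is an assumption for illustration -- random stand-in data, a simple linear model fit with ridge regression -- but it captures the idea of learning a per-voxel "decoding key" from thousands of pictures and then using it to identify a new one.

    import numpy as np

    rng = np.random.default_rng(1)
    n_images, n_features, n_voxels = 2000, 50, 300

    # Stand-ins: features of each training picture (shapes, colors, etc.)
    # and the voxel activity recorded while the subject viewed it.
    features = rng.normal(size=(n_images, n_features))
    activity = rng.normal(size=(n_images, n_voxels))

    # Fit one set of weights per voxel (ridge regression), so that
    # activity ~ features @ weights. These weights are the "decoding key".
    lam = 1.0
    gram = features.T @ features + lam * np.eye(n_features)
    weights = np.linalg.solve(gram, features.T @ activity)

    def predict_activity(image_features):
        """Predict the cortex-wide response to a never-seen picture."""
        return image_features @ weights

    # Identification: the candidate whose predicted response best matches
    # the measured one is the model's guess at what the subject saw.
    candidates = rng.normal(size=(10, n_features))
    measured = predict_activity(candidates[3]) + 0.1 * rng.normal(size=n_voxels)
    scores = [np.corrcoef(predict_activity(c), measured)[0, 1] for c in candidates]
    print(np.argmax(scores))   # 3, ideally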

The Magic of the MRI

Gallant is a slight, wiry man with a horseshoe mustache and a Willy Wonka–esque energy about him. He tends to use friendly, vivid analogies when he talks. "The brain is a Thanksgiving turkey," he said to me last summer during a visit to his bare-bones office at Berkeley. He was drawing furiously on the chalkboard, attempting to explain in simple terms the inner workings of the visual cortex. "The outside of the turkey is the skin, or the brain's cortex. All the giblets inside are subcortical nuclei. This" -- he tapped his chalk on the giant balloon-like cavity at the rear of his "turkey" diagram -- "is the primary visual cortex," the center of our vision system.

The brain employs a complex assembly line to construct the world around us. The primary visual cortex, or V1, connects to a maze of other regions known as V2, V3, and so on. ("Nobody knows exactly how many areas there are up there," Gallant says, a finger to his head.) Each region performs specific vision-related functions, like distinguishing colors, discerning shapes, gauging depth, or sensing motion. When I look at a dog, for instance, I don't just see the shape of a four-legged animal; I recognize that it's the brown-and-white dog I owned as a child, romping in a familiar way in the backyard I grew up in. It might even trigger a memory of playing with him. Each of these aspects of "seeing" would be represented by different patterns in the visual cortex.

The key function of V1 relevant to Gallant's research -- registering visual stimuli -- was discovered in the early 20th century, when soldiers with bullet wounds to the back of the head, and presumably damage to the visual cortex, experienced partial blindness despite having healthy eyes. Animal experiments later confirmed that the location and shape of things we see are replicated in V1. If I were to look at a tree, for instance, the back of the eye would register a representation of an upside-down tree onto V1. But it wasn't until the late 1990s, with a technique called multi-voxel pattern analysis, that scientists were able to pinpoint these representations non-invasively in humans. The technique uses fMRI to map the visual cortex into tiny structures -- voxels -- that correspond to patterns of blood flow. One pattern in the area responsible for shape, for instance, might suggest that a person is looking at a dog, while another pattern in the area responsible for color could suggest that the dog is brown.
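
The pattern-matching idea behind multi-voxel pattern analysis can be caricatured in a few lines. In this sketch the data are fabricated, and a nearest-template rule stands in for the real statistics: average the voxel pattern evoked by each known stimulus, then label a new pattern by whichever average it most resembles.

    import numpy as np

    rng = np.random.default_rng(2)
    n_voxels = 200

    # Toy "templates": the average voxel pattern evoked by each stimulus,
    # as estimated from repeated training scans.
    templates = {
        "dog": rng.normal(size=n_voxels),
        "tree": rng.normal(size=n_voxels),
        "house": rng.normal(size=n_voxels),
    }

    def classify(pattern):
        """Label a voxel pattern by its most correlated template."""
        return max(templates,
                   key=lambda name: np.corrcoef(templates[name], pattern)[0, 1])

    # A noisy new scan of someone looking at a dog.
    new_scan = templates["dog"] + 0.5 * rng.normal(size=n_voxels)
    print(classify(new_scan))   # "dog"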

Gallant's project takes the technique to a new level, using a computer model not only to identify images but also to reconstruct them. On the night of my fMRI session, I met five members of Gallant's lab who, for the past three years, have been wrestling with probability theory to come up with the best algorithms to power the model. When I asked them how exactly they devised the code, Thomas Naselaris, a tall, curly-haired postdoc, wrote a long equation called Bayes' theorem on the blackboard. It's a fundamental tenet of probability theory that calculates how odds change in response to new information, he explained, and it's the key to their technique.

To calculate the probability that someone's brain patterns represent a particular image, the researchers must first prime their special equation with a sizable sampling of data, plugging in 1,750 of the subject's fMRI scans. "For every possible image a person could be looking at, Bayes' theorem tells you the probability that the image is correct," Naselaris says. It's a bit like trying to predict the make of a car concealed beneath a tarp: To come up with an accurate guess, you must first analyze all the available clues -- the shape of the tarp, its size, maybe the type of person who owns the car, possibly the sound of the engine. The more information you have, the better your guess. Likewise, the more data you plug into the equation, the more accurate its predictions.
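
Translated into code, the logic is straightforward even if the real models are not. Here is a minimal sketch, with invented numbers and a tiny candidate set: Bayes' theorem says P(image | scan) is proportional to P(scan | image) times P(image), so the decoder scores every candidate image by how well it predicts the measured scan, weighted by how plausible that image was to begin with.

    import numpy as np

    rng = np.random.default_rng(3)
    n_voxels = 100

    # For each candidate image, the voxel response the trained model
    # predicts for it, plus a prior over how likely each image is.
    predicted = rng.normal(size=(5, n_voxels))
    prior = np.array([0.4, 0.3, 0.15, 0.1, 0.05])

    def posterior(measured, noise=1.0):
        """Bayes' theorem: P(image|scan) is proportional to
        P(scan|image) * P(image)."""
        sq_err = ((predicted - measured) ** 2).sum(axis=1)
        likelihood = np.exp(-sq_err / (2 * noise ** 2))   # Gaussian noise model
        post = likelihood * prior
        return post / post.sum()   # dividing by P(scan) normalizes the odds

    measured = predicted[2] + 0.3 * rng.normal(size=n_voxels)
    print(posterior(measured).round(3))   # mass piles onto image 2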

Dancing Bears

The ability to pluck a picture from someone's brain is an impressive feat, but the far bigger challenge is figuring out the actual thoughts associated with that picture. Gallant would have no way to know, for instance, what I was thinking while I was lying in the scanner. That's because thoughts, unlike pictures, are not neatly recorded at the back of the brain.

So where are they recorded? Tom Mitchell, a computer scientist at Carnegie Mellon University, along with his colleague Marcel Just, is using fMRI and multi-voxel pattern analysis to answer that question. By mapping the brain's response to images, words and emotions, Mitchell believes his lab could be decoding thoughts, not just pictures, within the decade.

To pinpoint where thoughts live in the brain, Mitchell recently put volunteers in an fMRI machine, showed them two objects -- a hammer and a house, for example -- and used software to analyze the voxel patterns triggered in multiple parts of the brain, ultimately determining which object each subject was thinking about. Like Gallant, Mitchell can do this with 90 percent accuracy. "When you think about a hammer, you think about all aspects of it. You might think about swinging it, which would fire neurons in your motor cortex," he says. "You might think about what it looks like, which activates the visual cortex." His team also gathered fMRI data from the amygdala and the anterior cingulate cortex -- areas that correlate with emotions like anger and love -- to map out the brain patterns that form when people hear words such as "love," "justice" and "anxiety."
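
A stripped-down stand-in for that two-object experiment might look like the sketch below. The voxel patterns are fabricated and the classifier is an ordinary logistic regression, not Mitchell's actual pipeline, but the shape of the problem is the same: learn a boundary between "hammer" patterns and "house" patterns, then label a new scan.

    import numpy as np

    rng = np.random.default_rng(4)
    n_trials, n_voxels = 100, 400   # voxels pooled from several brain regions

    # Fabricated voxel patterns for "hammer" (label 0) and "house" (label 1).
    hammer_mean = rng.normal(size=n_voxels)
    house_mean = rng.normal(size=n_voxels)
    X = np.vstack([hammer_mean + rng.normal(size=(n_trials, n_voxels)),
                   house_mean + rng.normal(size=(n_trials, n_voxels))])
    y = np.repeat([0.0, 1.0], n_trials)

    # Logistic regression trained by plain gradient descent.
    w = np.zeros(n_voxels)
    for _ in range(500):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= 0.01 * X.T @ (p - y) / len(y)

    new_scan = house_mean + rng.normal(size=n_voxels)
    print("house" if new_scan @ w > 0 else "hammer")   # "house", usually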

Yukiyasu Kamitani, a computational neuroscientist at the Advanced Telecommunications Research Institute International in Japan, believes he can take the technology even further and decode dreams. This summer, he plans to put sleeping people in an fMRI machine, read their brain signals and, like Gallant, reconstruct the imagery.

Meanwhile, Gallant and Nishimoto are attempting to reproduce movies stored in the brain. After I finished my fMRI scans, Gallant showed me a video clip on his computer featuring psychedelic bears floating in front of mountains. Every few seconds, a new bear zoomed into the foreground and then floated away like a beach ball tossed in the air. Occasionally a colorful cube flew past the bears. Just looking at it made me dizzy. "This is a motion-enhanced movie," Gallant says excitedly. "It makes your visual system go absolutely crazy, so you get lots of blood flow and signals."

Nishimoto, the lab's resident "motion guy," is able to reconstruct from brain scans the colors, location and movement of these bears, generating reproductions of the original video footage. In a similar experiment, he asked a volunteer to watch two hours of movie trailers inside an fMRI machine. A computer then matched the subject's brain patterns to colors and moving shapes in the movie. To build up the computer model's reference library of associations -- to prime it -- the researchers fed it thousands of hours of YouTube videos and asked it to predict how the person's brain would respond to watching them. Then, when the subject watched a new set of videos, the computer was able to match the new brain patterns to images in its library to piece together a reproduction of the original video clip. The reconstructed video captured the general flow of motion, as well as shapes and colors, although it missed fine details such as facial features. The resolution will improve, the researchers say, as more data is added to the computer model. "Whenever I tell anyone we can do this," Gallant says, "they say there's no way."
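
In spirit, that reconstruction step works like the following sketch. The library here is tiny and random where the real one draws on thousands of hours of footage, but the move is the same: score every archived clip by how closely its predicted brain response matches the measured one, then blend the best matches into an output frame.

    import numpy as np

    rng = np.random.default_rng(5)
    n_clips, n_voxels, n_pixels = 1000, 150, 64

    # The library: for each archived clip, the brain response the model
    # predicts for it, and a representative (flattened) frame.
    predicted = rng.normal(size=(n_clips, n_voxels))
    frames = rng.normal(size=(n_clips, n_pixels))

    def reconstruct(measured, k=30):
        """Blend the frames of the k clips whose predicted responses
        best correlate with the measured brain activity."""
        scores = np.array([np.corrcoef(p, measured)[0, 1] for p in predicted])
        best = np.argsort(scores)[-k:]
        return frames[best].mean(axis=0)   # a blurry blend: shapes, not details

    measured = predicted[42] + 0.5 * rng.normal(size=n_voxels)
    print(reconstruct(measured).shape)   # (64,)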

Thinking back to the rat's nest of lines from my own fMRI readings -- all that from looking at a simple black-and-white photo -- I find it a little creepy that our mental processes can be reduced to numbers in this fashion. But then again, so is the notion of a mysterious black box of neurons controlling everything we do and think. "It's all numbers," Gallant says. "The trick is to do good bookkeeping."

Edited by: Lawyer Asad
