People Can Control Mental Activity Using Brain Scans

People were able to calm activity in the amygdala after seeing simple visual or auditory cues that responded to their own mental processes.

People who can "see" their own brain activity can learn to change it after just one or two neurofeedback sessions, new research shows.

People in the study were able to quiet activity in the amygdala, an almond-shaped brain region that processes emotions such as fear, after seeing simple visual or auditory cues that corresponded to the activity level there, according to the study, published in the Sept. 15 issue of the journal Biological Psychiatry. The findings reveal the remarkable plasticity of the brain, the researchers said.

The new technique could one day be used as an inexpensive treatment for people with anxiety, traumatic stress or other mental health conditions, said study co-author Dr. Talma Hendler, a psychiatrist and neuroscientist at the Tel Aviv Center for Brain Functions in Israel.


"I see it as a very good tool for children and for people who we don't want to give medication," Hendler told Live Science.

Past studies have shown that people have tremendous power to shape their own brain activity. For instance, mindfulness meditation, a type of meditation in which people focus on sensations from the body, can help with symptoms of depression, anxiety and even low back pain. And studies show that Buddhist monks with extensive meditation practice are much better at "clearing the mind" than the average person. In other words, control over one's own mind can be learned.

However, most of these attempts to control brain activity are indirect, and they often alter activity across the entire brain.


Hendler and her colleagues wondered whether directly targeting the brain regions tied to a particular condition could be a more effective way of helping people with its symptoms.

In a series of four different experiments with several dozen healthy people, Hendler and her colleagues asked the volunteers to sit inside a functional magnetic resonance imaging (fMRI) machine while simultaneously wearing an electroencephalogram (EEG) hat. The fMRI provided detailed information about which brain regions were active, and the EEG measured activity in the amygdala; together, they allowed the team to pinpoint the precise EEG signature that corresponded to amygdala activation.
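Conceptually, this calibration step boils down to learning a statistical mapping from EEG features to the fMRI-defined amygdala signal. Below is a minimal sketch of that idea, with invented placeholder data and ordinary ridge regression standing in for the study's actual modeling:

```python
# Minimal sketch, not the study's pipeline: regress a simultaneously
# recorded fMRI amygdala time course onto EEG features, so the fitted
# weights act as an EEG "signature" of amygdala activity.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data standing in for a recording session: EEG band-power
# features (channels x frequency bands, flattened) and the amygdala BOLD signal.
n_samples, n_features = 600, 64
eeg_features = rng.normal(size=(n_samples, n_features))
amygdala_bold = rng.normal(size=n_samples)

model = Ridge(alpha=1.0)
scores = cross_val_score(model, eeg_features, amygdala_bold, cv=5, scoring="r2")
model.fit(eeg_features, amygdala_bold)  # weights = the subject's EEG fingerprint
print(f"cross-validated R^2: {scores.mean():.2f}")
```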

Participants were then treated with neurofeedback in one of two ways: In one condition, they listened to a sound, and in the other, they were shown a movie of a person riding a skateboard. What they didn't know was that the loudness of the sound, or the speed of the skateboarder, was actually determined by the electrical activity in their own amygdala; the researchers channeled the measurements coming from the fMRI and EEG into an audible sound or a moving image.


The participants were asked to use "mental strategies" to make either the sound grow quieter or the skateboarder go faster. If they succeeded, what they were really doing was tamping down the activity in their amygdala.

In a control group, participants were asked to do the same thing but were given sham neurofeedback: Unlike in the true treatment group, the speed of the skateboarder and the volume of the sound were unrelated to the amygdala's activity, meaning that when these participants saw the cues change, the change did not reflect their own brain activity.
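As a rough illustration of how both conditions might be wired up (the helper names, scaling and noise here are hypothetical, not the study's software), each cycle of the loop reads the EEG, estimates amygdala activity through the fitted signature, and maps that estimate onto the cues; in the sham condition, the cue is driven by a signal that ignores the participant entirely:

```python
# Illustrative sketch of the closed loop with hypothetical helper names.
# Lower estimated amygdala activity makes the sound quieter and the
# skateboarder faster, matching the task described in the study.
import numpy as np

rng = np.random.default_rng(1)
fingerprint = rng.normal(size=64)  # stands in for the fitted EEG weights

def read_eeg_features():
    """Placeholder for a real-time EEG read; returns one feature vector."""
    return rng.normal(size=64)

def cues(amygdala_level, low=-2.0, high=2.0):
    """Map the amygdala estimate onto cue intensities in [0, 1]."""
    norm = float(np.clip((amygdala_level - low) / (high - low), 0.0, 1.0))
    return {"sound_volume": norm, "skateboard_speed": 1.0 - norm}

for _ in range(5):  # a few cycles of the real feedback loop
    level = fingerprint @ read_eeg_features()  # estimated amygdala activity
    print(cues(level))

# Sham condition (control group): the cue is driven by random noise,
# unrelated to the participant's own amygdala.
print(cues(rng.normal()))
```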

Next, people in both groups were asked to look at happy and sad faces with either matching or conflicting words above them. Past studies have shown that people who are better able to regulate their emotions, compared with people who have had traumatic stress, are quicker to identify a facial expression when the word above the picture conflicts with it, the researchers wrote.
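To make the measure concrete, the task can be scored as the reaction-time cost of incongruent trials relative to congruent ones, as in this toy sketch (all numbers invented for illustration):

```python
# Toy scoring sketch for the face-word conflict task; reaction times are
# fabricated for illustration, not data from the study.
import numpy as np

rng = np.random.default_rng(2)
# Reaction times (seconds): e.g., a happy face under the word "HAPPY"
# (congruent) versus under the word "SAD" (incongruent).
rt_congruent = rng.normal(0.65, 0.08, size=40)
rt_incongruent = rng.normal(0.78, 0.10, size=40)

# A smaller conflict cost indicates better emotion regulation.
conflict_cost = rt_incongruent.mean() - rt_congruent.mean()
print(f"emotional-conflict cost: {conflict_cost * 1000:.0f} ms")
```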


The results showed that, compared with those who received the sham treatment, people who were given cues based on activity in the amygdala were better able to reduce activity in that region of the brain. "It's actually quite amazing that this plasticity takes place after one session or two sessions," Hendler said. Other psychotherapy techniques aimed at treating PTSD or anxiety often take six, eight or 10 sessions, she said. However, she noted that the participants were all healthy. People with traumatic stress could require more sessions to master the method of controlling their mental activity, Hendler said.

What's more, in follow-up experiments, the participants showed a better ability to regulate emotions as measured by the facial-expression-recognition task.


The findings suggest that this type of neurofeedback technique could one day become a cheap and relatively simple way for patients to be treated for anxiety, PTSD or other psychological conditions that are tied to amygdala hyperactivation, Hendler said.

Right now, the treatment requires an EEG cap that calls for gel and wiring, making it unsuitable for home use. But in the future, the team envisions using a wireless, miniature sensor that a patient could use at home, after an initial instructional session with a physician, Hendler said.

However, follow-up studies need to show that this method of targeted brain training works as well as techniques like mindfulness meditation or cognitive behavioral therapy, Hendler said.

"We hope this is a better way to actually modulate specific areas, and bring on some plasticity that is necessary to cure the brain," Hendler said.

Original article on Live Science.


PHOTOS: See the Dreams of an Artificial Brain

It's easy to mistake this photo for some kind of surreal landscape painting, but this image in fact shows off the imagination of Google's advanced image-detection software. Similar to an artist with a blank canvas, Google's software constructed this image out of essentially nothing. This photo began as random noise before software engineers coaxed this pattern out of their machines. How is it possible for software to demonstrate what appears to be an artistic sensibility? It all begins with what is basically an artificial brain.


Artificial neural networks are systems consisting of between 10 and 30 stacked layers of synthetic neurons. In order to train the network, "each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached," the engineers wrote in a blog post detailing their findings. The layers work together to identify an image. The first layer detects the most basic information, such as the outline of the image, and the next layers home in on details about the shapes. The final output layer provides the "answer," or identification of the subject of the image. Shown is Google's image software before and after processing an image of two ibis grazing to detect their outlines.
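As a toy illustration of that layered hand-off, here is a tiny stack of dense layers in plain NumPy, vastly smaller than Google's real networks and with random weights standing in for trained ones:

```python
# Toy sketch of stacked layers, not Google's architecture: each layer
# transforms the previous layer's output until an "answer" emerges.
import numpy as np

rng = np.random.default_rng(3)

def layer(x, n_out):
    """One synthetic-neuron layer: random linear map plus ReLU."""
    w = rng.normal(scale=0.1, size=(x.size, n_out))
    return np.maximum(0.0, x @ w)

x = rng.normal(size=256)       # stand-in for a flattened input image
for n_out in (128, 64, 32):    # early layers: coarse outlines; later: shapes
    x = layer(x, n_out)

logits = x @ rng.normal(scale=0.1, size=(32, 10))  # output layer: class scores
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print("predicted class:", int(probs.argmax()))     # the network's "answer"
```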


Searching for shapes in clouds isn't just a human pastime anymore. Google engineers trained the software to identify patterns by feeding millions of images to the artificial neural network. Give the software constraints, and it will scout out patterns to recognize objects even in photos where the search targets are not present. In this photo, for example, Google's software, like a daydreamer staring at the clouds, finds all kinds of different animals in the sky. This pattern emerged because the neural network was trained primarily on images of animals.


How the machine is trained determines its bias toward recognizing certain objects within an otherwise unfamiliar image. In this photo, a horizon becomes a pagoda; a tree is morphed into a building; and a leaf is identified as a bird after image processing. The objects may have similar outlines to their counterparts, but none of the original objects in the "before" images are part of the software's image vocabulary, so the system improvises.


When the software recognizes an object, it modifies a photo to exaggerate the presence of that known pattern. Even if the software correctly recognizes the animals it has been trained to spot, image detection may be a little overzealous in identifying familiar shapes, particularly after the engineers send the photo back, telling the software to find more of the same, thereby creating a feedback loop. In this photo of a knight, the software appears to recognize the horse, but also renders the faces of other animals on the knight's helmet, globe and saddle, among other places.


Taken a step further, when the same image is run through the network over several cycles, with each output fed back in as the next input, the artificial neural network restructures the image into the shapes and patterns it has been trained to recognize. Again borrowing from an image library heavy on animals, this landscape scene is transformed into a psychedelic dream scene where the clouds are apparently made of dogs.
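A condensed sketch of that loop, in the spirit of (but far simpler than) Google's published DeepDream approach; the pretrained model and the particular layer chosen here are arbitrary stand-ins:

```python
# Hedged sketch of DeepDream-style iteration, not Google's exact code:
# gradient ascent on one layer's activations so the image drifts toward
# patterns the trained network already "knows".
import torch
import torchvision.models as models

model = models.googlenet(weights="DEFAULT").eval()

acts = {}  # capture one intermediate layer's activations via a hook
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(layer=o))

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise

for _ in range(20):               # each cycle feeds the output back in
    model(img)
    loss = acts["layer"].norm()   # "exaggerate whatever this layer sees"
    loss.backward()
    with torch.no_grad():
        img += 0.05 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0.0, 1.0)      # keep pixel values in a valid range
        img.grad.zero_()
```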


At its most extreme, the neural network can transform an image that started as random noise into a recognizable but still somewhat abstract kaleidoscopic expression of objects with which the software is most familiar. Here, the software has detected a seemingly limitless number of arches in what was a random collection of pixels with no coherence whatsoever.


This landscape was generated out of a series of buildings. Google is developing this technology in order to boost its image-recognition software. Future photo services might recognize an object, a location or a face in a photo. The engineers also suggest that the software could one day be a tool for artists that unlocks a new form of creative expression, and may even shed light on the creative process more broadly.
