Women with Alzheimer's Face Faster Decline Than Men
Women are nearly twice as likely as men to develop Alzheimer's.
Women with Alzheimer's may face a swifter mental decline than men with the same condition, but researchers are not sure why, according to a study released this week at a U.S. medical conference.
Some two-thirds of U.S. seniors living with Alzheimer's disease are women, and women are almost twice as likely as men to develop the incurable disease, according to the Alzheimer's Association, which is hosting its annual conference in the U.S. capital.
On Tuesday, researchers from Duke University presented findings from a study of 141 women and 257 men, in their mid-70s, who suffer from Alzheimer's disease.
After studying the group for eight years, they found that women's cognitive abilities declined twice as fast as those of men, judging by mental tests taken each year to gauge memory and other skills.
"Our findings suggest that men and women at risk for Alzheimer's may be having two very different experiences," said lead author Katherine Amy Lin at Duke University Medical Center.
"Our analyses show that women with mild memory impairments deteriorate at much faster rates than men in both cognitive and functional abilities."
The reasons behind the difference remain unclear, and more study is needed to determine if there are gender-specific genetic or environmental risk factors at play.
"Women are disproportionately affected by Alzheimer's, and there is an urgent need to understand if differences in brain structure, disease progression, and biological characteristics contribute to higher prevalence and rates of cognitive decline," said Maria Carrillo, Alzheimer's Association chief scientific officer.
Alzheimer's disease affects some 44 million people around the world, and cases are expected to skyrocket as the global population ages.
It's easy to mistake this photo for some kind of surreal landscape painting, but this image in fact shows off the imagination of Google's advanced image detection software. Like an artist with a blank canvas, Google's software constructed this image from essentially nothing. The photo began as random noise before software engineers coaxed this pattern out of their machines. How is it possible for software to demonstrate what appears to be an artistic sensibility? It all begins with what is basically an artificial brain.
Artificial neural networks are systems consisting of between 10 and 30 stacked layers of synthetic neurons. In order to train the network, "each image is fed into the input layer, which then talks to the next layer, until eventually the 'output' layer is reached," the engineers wrote in a blog post detailing their findings. The layers work together to identify an image. The first layer detects the most basic information, such as the outline of the image. The next layers home in on details about the shapes. The final output layer provides the "answer," or identification of the subject of the image. Shown is Google's image software before and after processing an image of two ibis grazing to detect their outlines.
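To make that layered structure concrete, here is a minimal, illustrative sketch in PyTorch. It is not Google's network; the layer sizes, class count, and names are assumptions chosen only to show how an image passes from an input layer, through stacked intermediate layers, to an output layer that produces the identification.

```python
# A minimal sketch of a stacked-layer image classifier (illustrative only).
import torch
import torch.nn as nn

class TinyImageNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # early layer: edges and outlines
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # later layer: shapes and parts
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # output layer: the "answer"

    def forward(self, x):
        x = self.features(x)        # each layer passes its result to the next
        x = torch.flatten(x, 1)
        return self.classifier(x)   # identification of the image's subject

model = TinyImageNet()
fake_image = torch.randn(1, 3, 32, 32)   # stand-in for an input photo
print(model(fake_image).argmax(dim=1))   # index of the predicted class
```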
Searching for shapes in clouds isn't just a human pastime anymore. Google engineers trained the software to identify patterns by feeding millions of images to the artificial neural network. Give the software constraints, and it will scout out patterns to recognize objects even in photos where the search targets are not present. In this photo, for example, Google's software, like a daydreamer staring at the clouds, finds all kinds of different animals in the sky. This pattern emerged because the neural network was trained primarily on images of animals.
How the machine is trained will determine its bias in terms of recognizing certain objects within an otherwise unfamiliar image. In this photo, a horizon becomes a pagoda; a tree is morphed into a building; and a leaf is identified as a bird after image processing. The objects may have similar outlines to their counterparts, but none of the objects in the "before" images are part of the software's image vocabulary, so the system improvises.
When the software recognizes an object, it modifies a photo to exaggerate the presence of that known pattern. Even when the software correctly recognizes the animals it has been trained to spot, image detection may be a little overzealous in identifying familiar shapes, particularly after the engineers send the photo back and tell the software to find more of the same, creating a feedback loop. In this photo of a knight, the software appears to recognize the horse, but also renders the faces of other animals on the knight's helmet, globe and saddle, among other places.
Taken a step further, when the output is fed back through the network over several cycles, the artificial neural network restructures an image into the shapes and patterns it has been trained to recognize. Again borrowing from an image library heavy on animals, this landscape scene is transformed into a psychedelic dream scene where clouds are apparently made of dogs.
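A rough sketch of that feedback loop, in PyTorch, is below. It is not Google's released code; it simply runs an image through a pretrained network, nudges the pixels to amplify whatever a chosen layer already responds to, and repeats. The particular network, layer choice, step size, and iteration count are all illustrative assumptions.

```python
# Illustrative feedback loop: repeatedly amplify what one layer "sees" in an image.
import torch
from torchvision import models

network = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in network.parameters():
    p.requires_grad_(False)          # only the image itself gets updated

layer_to_amplify = 20                # an arbitrary mid-level layer (assumption)
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # random noise; a photo works too

for step in range(50):
    activation = image
    for i, layer in enumerate(network):
        activation = layer(activation)
        if i == layer_to_amplify:
            break
    loss = activation.norm()         # "find more of the same": boost what the layer detects
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)           # keep pixel values in a displayable range
```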
At its most extreme, the neural network can transform an image that started as random noise into a recognizable but still somewhat abstract kaleidoscopic expression of objects with which the software is most familiar. Here, the software has detected a seemingly limitless number of arches in what was a random collection of pixels with no coherence whatsoever.
This landscape was created with a series of buildings. Google is developing this technology in order to boost its image recognition software. Future photo services might recognize an object, a location or a face in a photo. The engineers also suggest that the software could one day be a tool for artists that unlocks a new form of creative expression and may even shed light on the creative process more broadly.