Thursday, November 3, 2011

Brain Imaging


Scientists Use Brain Imaging to Reveal the Movies in Our Mind

Imagine tapping into the mind of a coma patient, or watching one's own dream on YouTube. With a cutting-edge blend of brain imaging and computer simulation, scientists at the University of California, Berkeley, are bringing these futuristic scenarios within reach.

Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people's dynamic visual experiences -- in this case, watching Hollywood movie trailers. As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

"This is a major leap toward reconstructing internal imagery," said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study to be published online Sept. 22 in the journal Current Biology. "We are opening a window into the movies in our minds."

Eventually, practical applications of the technology could include a better understanding of what goes on in the minds of people who cannot communicate verbally, such as stroke victims, coma patients and people with neurodegenerative diseases. It may also lay the groundwork for brain-machine interfaces so that people with cerebral palsy or paralysis, for example, can guide computers with their minds. However, researchers point out that the technology is decades away from allowing users to read others' thoughts and intentions, as portrayed in such sci-fi classics as "Brainstorm," in which scientists recorded a person's sensations so that others could experience them.

Previously, Gallant and fellow researchers recorded brain activity in the visual cortex while a subject viewed black-and-white photographs. They then built a computational model that enabled them to predict with overwhelming accuracy which picture the subject was looking at. In their latest experiment, researchers say they have solved a much more difficult problem by actually decoding brain signals generated by moving pictures.

"Our natural visual experience is like watching a movie," said Shinji Nishimoto, lead author of the study and a post-doctoral researcher in Gallant's lab. "In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences."

Nishimoto and two other research team members served as subjects for the experiment, because the procedure requires volunteers to remain still inside the MRI scanner for hours at a time. They watched two separate sets of Hollywood movie trailers, while fMRI was used to measure blood flow through the visual cortex, the part of the brain that processes visual information. On the computer, the brain was divided into small, three-dimensional cubes known as volumetric pixels, or "voxels."

"We built a model for each voxel that describes how shape and motion information in the movie is mapped into brain activity," Nishimoto said.

The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject.
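In code, this voxel-wise encoding step can be pictured roughly as follows. This is only a minimal illustrative sketch, not the researchers' pipeline: the array sizes, variable names, and random placeholder data stand in for the shape-and-motion features and measured fMRI responses described above, and a simple ridge regression stands in for the published model.

    # Minimal sketch (assumptions, not the authors' code): learn, for each voxel,
    # a linear mapping from per-second movie features to that voxel's measured
    # fMRI response, then use the fitted models to predict the response a new
    # clip would evoke.
    import numpy as np
    from sklearn.linear_model import Ridge

    n_seconds, n_features, n_voxels = 600, 100, 50            # toy sizes, not the real data
    movie_features = np.random.rand(n_seconds, n_features)    # stand-in for shape/motion features
    voxel_responses = np.random.rand(n_seconds, n_voxels)     # stand-in for recorded brain activity

    # Fit one regularized linear model per voxel on the first (training) set of clips.
    encoding_models = [
        Ridge(alpha=1.0).fit(movie_features, voxel_responses[:, v])
        for v in range(n_voxels)
    ]

    # For a new clip, each model predicts how strongly its voxel should respond.
    new_clip_features = np.random.rand(1, n_features)
    predicted_activity = np.array(
        [m.predict(new_clip_features)[0] for m in encoding_models]
    )

It is this prediction step that lets the program score a huge library of candidate clips, such as the 18 million seconds of YouTube video mentioned above: each clip's features are pushed through the fitted models to estimate the brain activity it would most likely evoke.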
Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie (this matching-and-averaging step is sketched in code below).

Reconstructing movies using brain scans has been challenging because the blood flow signals measured using fMRI change much more slowly than the neural signals that encode dynamic information in movies, researchers said. For this reason, most previous attempts to decode brain activity have focused on static images.

"We addressed this problem by developing a two-stage model that separately describes the underlying neural population and blood flow signals," Nishimoto said.

Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life. "We need to know how the brain works in naturalistic conditions," he said. "For that, we need to first understand how the brain works while we are watching movies."

Other coauthors of the study are Thomas Naselaris with UC Berkeley's Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
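The matching-and-averaging reconstruction step described above can be sketched in the same simplified spirit. Everything here (array shapes, the correlation-based similarity score, variable names) is an assumption made for illustration; the published method is considerably more elaborate.

    # Illustrative sketch (assumptions, not the published implementation):
    # score every clip in a large prior library by how well its *predicted*
    # brain activity matches the *observed* activity, then average the frames
    # of the 100 best-matching clips into a blurry reconstruction.
    import numpy as np

    n_library, n_voxels = 10_000, 50           # toy stand-in for the YouTube clip library
    frame_h, frame_w = 36, 64                  # toy frame size

    predicted_activity = np.random.rand(n_library, n_voxels)  # from the encoding models
    library_frames = np.random.rand(n_library, frame_h, frame_w)
    observed_activity = np.random.rand(n_voxels)               # measured while viewing the test clip

    # Similarity between the observed activity and each clip's predicted activity.
    scores = np.array(
        [np.corrcoef(observed_activity, p)[0, 1] for p in predicted_activity]
    )

    # Merge the 100 best-matching clips into a single blurry frame estimate.
    top_100 = np.argsort(scores)[-100:]
    reconstruction = library_frames[top_100].mean(axis=0)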
 

By scanning the brain with magnetic sensors

German scientists develop a device that displays dreams during sleep

A German scientific study has reported that it is possible to display and read the dreams a person sees while asleep, using a computer that records them over a short period of time.

Professor Jack Gallant said this is done by scanning the brain with magnetic sensors to record dreams: the sensors pick up the areas of the brain that give off heat and issue commands to display the images the person sees with closed eyes.

He also noted that the study will be used to develop new technology that displays moving images like those seen in dreams, and that the experiment is the first of its kind, proving that it is possible to display the images a person sees in dreams, according to a report published Monday in the newspaper "Al-Youm Al-Sabea."

Scientists from Germany's Max Planck institution, one of the world's leading scientific organizations, showed that the last images a person sees before falling asleep reappear in similar form in dreams, after studying the brain in the waking state.

 
