MinD-Vis could one day be built into virtual reality headsets, letting users navigate a metaverse with their minds.
Researchers in Singapore have developed an AI system that deciphers patterns of brain activity and generates an image of what it determines a person is looking at.
The research team collected brain scan data from about 58 participants, each of whom viewed between 1,200 and 5,000 different images of animals, food, buildings and human activities while undergoing an MRI scan.
Each image was shown for nine seconds, with a break in between.
The mind-reading AI, dubbed MinD-Vis, then matches the brain scans with the images to generate an individual AI model for each participant.
These models allow computers to “read” thoughts and re-create the visuals a person is looking at.
“It can understand your brain activities just like ChatGPT understands the natural languages of humans. And then it will translate your brain activities into a language that the Stable Diffusion [an open-source AI that generates images from text] can understand,” said Jiaxin Qing, a PhD student at The Chinese University of Hong Kong (CUHK IE) and one of the study’s lead researchers.
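In rough outline, this means training an encoder that maps a participant’s fMRI readings into the kind of conditioning embedding the image generator expects. The PyTorch sketch below illustrates only that mapping; the voxel count, layer sizes, and the FMRIEncoder class are illustrative assumptions rather than the study’s actual code, though the output shape matches the 77-token, 768-dimensional text embedding that Stable Diffusion v1 conditions on.

```python
# A minimal conceptual sketch (not the authors' actual MinD-Vis code):
# an encoder maps a participant's fMRI voxel readings to an embedding
# shaped like the text conditioning a latent diffusion model such as
# Stable Diffusion expects. Sizes below are illustrative assumptions.

import torch
import torch.nn as nn

N_VOXELS = 4500     # assumed number of fMRI voxels per scan
COND_TOKENS = 77    # Stable Diffusion v1 conditions on 77 tokens...
COND_DIM = 768      # ...each a 768-dim vector (CLIP text-embedding shape)

class FMRIEncoder(nn.Module):
    """Maps one fMRI scan to a pseudo 'prompt' embedding for a diffusion model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_VOXELS, 2048),
            nn.GELU(),
            nn.Linear(2048, COND_TOKENS * COND_DIM),
        )

    def forward(self, voxels: torch.Tensor) -> torch.Tensor:
        # voxels: (batch, N_VOXELS) -> (batch, COND_TOKENS, COND_DIM)
        return self.net(voxels).view(-1, COND_TOKENS, COND_DIM)

# Training would pair each scan with the image the participant was viewing;
# here we only show the shape of the mapping with random stand-in data.
encoder = FMRIEncoder()
fake_scan = torch.randn(1, N_VOXELS)
conditioning = encoder(fake_scan)
print(conditioning.shape)  # torch.Size([1, 77, 768])
# `conditioning` would then stand in for the text embedding that Stable
# Diffusion's denoising network normally receives via cross-attention.
```

In a full system, the encoder would be trained so that images generated from its embeddings match the pictures shown in the scanner, one model per participant, as the study describes.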
According to Qing, the decoded images were consistently similar to what was shown to participants.
Li Ruilin, one of the study’s participants, is fascinated by brain decoding.
“This brain decoding like using brain signals to generate the natural modalities is very interesting and exciting work. I’m also interested in what happened in my brain and what my brain can output and what I’m thinking,” said Ruilin.
The technology could be applied to assist people in the future, the research team says.
“Say for some patients without motor ability. Maybe we can help him to control their robots (artificial limbs)… (or) communicate with others like just using their thoughts instead of speech if that person couldn’t speak at that time,” said Chen Zijiao of the National University of Singapore’s School of Medicine.
Chen added that the technology could also be integrated into virtual reality headsets, letting users navigate a metaverse with their minds instead of physical controllers.
Future challenges
The researchers say their mind-reading AI has become possible thanks to the greater availability of MRI datasets and recent advances in the computational power needed to crunch through the data.
However, it will take many years of advances for MinD-Vis to read the public’s mind, according to the team.
“We are trying to test the possibility right now, but I will say in terms of the dataset that is available right now, the computational power we have, as well as the huge heterogeneity or inter-individual differences in our brain anatomy as well as brain function; this is going to be very, very difficult,” said Juan Helen Zhou, an associate professor at the National University of Singapore.
There is also the risk that the datasets the AI learns from could be shared without consent. The researchers also acknowledged that the relative lack of legislation around AI research could hinder progress.
“The privacy concerns are the first important thing and then people might be worried, whether the information we provided here might be accessed or shared without prior consent. So the thing to address this is we should have very strict guidelines, ethical and law in terms of how to protect the privacy,” said Zhou.