Artificial intelligence (AI) has decoded subjects' brain activity, reconstructing on screen the images they were looking at, according to a report by Al Jazeera. Using Stable Diffusion (SD), a deep learning model developed in Germany, neuroscientist Yu Takagi, an assistant professor at Osaka University, and his team analysed brain scans of subjects who were shown up to 10,000 images while inside an fMRI machine. By translating this mental activity into a form the model could read, Takagi and his collaborator Shinji Nishimoto generated images that closely resembled the originals. Notably, the AI produced these reconstructions without ever being trained on or shown the original pictures.
The research has sparked debate over the future uses and ethics of AI, with some questioning whether such technology poses a risk to privacy or even to humanity. The study has been accepted for presentation at the Conference on Computer Vision and Pattern Recognition (CVPR) in June 2023, a common route for validating significant breakthroughs in neuroscience.
Technology leaders, including Tesla and Twitter CEO Elon Musk, have called for a pause on AI development, citing “profound risks to society and humanity” in an open letter. Takagi acknowledges that there are serious concerns about the misuse of the technology, especially in terms of privacy.
“If a government or institution can read people’s minds, it’s a very sensitive issue,” he stated. “There need to be high-level discussions to make sure this can’t happen.”
Despite these concerns, Takagi believes that AI technology is not without benefits. “For us, privacy issues are the most important thing,” he added. “But there are benefits to AI technology, and we shouldn’t ignore them.”
Many researchers and scientists acknowledge the significance of the breakthrough for neuroscience and are weighing its potential applications.
However, scientists maintain that the world is still far from decoding visual experiences, and there are limitations to mind-reading AI. For example, subjects need to sit in an fMRI scanner for around 40 hours, according to the research.
In a paper published in 2021, researchers at the Korea Advanced Institute of Science and Technology found that “conventional neural interfaces lack chronic recording stability due to the soft and complex nature of neural tissue, which reacts in unusual ways when brought into contact with synthetic interfaces.”
Despite these limitations, Takagi remains optimistic about the advancements being made in AI technology. “I’m optimistic for AI, but I’m not optimistic for brain technology,” he said. “I think this is the consensus among neuroscientists.”
The framework used in this research could also be applied to other brain-scanning techniques, such as EEG, or to brain-computer implants like those being developed by Elon Musk’s Neuralink.
Takagi also noted that, apart from clinical uses, the technology could be used for entertainment purposes.
Speaking to Al Jazeera, Ricardo Silva, a professor of computational neuroscience at University College London and a research fellow at the Alan Turing Institute, said: “It’s hard to predict what a successful clinical application might be at this stage, as it is still very exploratory research.”
He also raised concerns about the potential misuse of this technology, especially in terms of data privacy. “The most pressing issue is to which extent the data collector should be forced to disclose in full detail the uses of the data collected,” he noted. “It’s one thing to sign up as a way of taking a snapshot of your younger self for, maybe, future clinical use… It’s yet another completely different thing to have it used in secondary tasks such as marketing, or worse, used in legal cases against someone’s own interests.”
Despite these concerns, Takagi and his partner have no intention of slowing down their research. They are already planning version two of their project, which will focus on improving the technology and applying it to other modalities.
“We are now developing a much better [image] reconstructing technique,” Takagi noted, “and it’s happening at a very rapid pace.”