The idea of someone using artificial intelligence to read your thoughts, arguably the only thing in human nature that is our own and inaccessible to anyone else, may make you shudder. Researchers at the University of Texas have published a study about a new system that can read thoughts and translate them into a continuous stream of text.
Also: Why open source is essential to allaying AI fears
The study explains how the authors trained a semantic decoder to interpret a subject's brain activity, captured with functional magnetic resonance imaging (fMRI) while the person listened to or silently imagined stories or watched silent videos, and to produce text that directly corresponds to what was heard, thought, or watched.
Also: Want a compassionate response from a doctor? Ask ChatGPT instead
The decoder is a non-invasive system that learns from brain activity measured with an fMRI scanner while the subject listens to hours of podcasts. By correlating the audio the subject hears with the accompanying brain scans, the system learns to decode that person's thoughts later on.
The individual then listened to a new story, imagined one, or watched four silent videos, and the decoder generated text corresponding to the person's thoughts by decoding their brain activity.
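To make the train-then-decode loop concrete, here is a toy sketch of the general idea, not the authors' actual method: the sentences, embeddings, "voxel" responses, and ridge-regression decoder below are all invented stand-ins for illustration. The real system pairs hours of podcast audio with fMRI scans; this toy pairs made-up sentence embeddings with simulated brain activity, learns the mapping, then decodes a new "scan" back to the nearest candidate sentence.

```python
# Hypothetical illustration only: a toy "semantic decoder" using simulated
# data, NOT the study's pipeline. It learns a linear map from simulated
# brain activity back to sentence embeddings, then decodes by retrieval.
import numpy as np

rng = np.random.default_rng(0)

# Invented candidate sentences and made-up 8-d "semantic embeddings".
sentences = ["the dog ran", "it started to rain", "she opened the door"]
embeddings = rng.normal(size=(3, 8))

# Simulated training phase: on each of 200 "trials" the subject hears one
# sentence, and 20 "voxels" respond as a noisy linear function of its embedding.
true_map = rng.normal(size=(8, 20))            # embedding -> voxel responses
heard = rng.integers(0, 3, size=200)           # which sentence per trial
X = embeddings[heard] @ true_map + 0.1 * rng.normal(size=(200, 20))

# "Train the decoder": ridge regression from voxel activity to embeddings.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ embeddings[heard])

def decode(brain_activity):
    """Predict an embedding from brain activity, return the nearest sentence."""
    pred = brain_activity @ W
    dists = np.linalg.norm(embeddings - pred, axis=1)
    return sentences[int(np.argmin(dists))]

# Decoding phase: a fresh "scan" of the subject hearing sentence 1.
new_scan = embeddings[1] @ true_map + 0.1 * rng.normal(size=20)
print(decode(new_scan))  # recovers "it started to rain"
```

The retrieval step is the key simplification: a toy decoder can only pick from a fixed list of candidates, whereas the system in the study generates a continuous stream of novel text.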
Also: Would you listen to AI-run radio? This station tested it out on listeners
AI reading minds isn't as nefarious as it sounds. The ability to decode a person's thoughts could help people communicate more effectively, especially those who are conscious but unable to speak, such as people with physical disabilities. This study demonstrates the viability of doing so with language brain-computer interfaces.
This study, groundbreaking as it is, is far from a finished product. Though the semantic decoder generates a continuous stream of text, it does not produce a word-for-word transcript of a person's thoughts. It can, however, decode continuous language carrying complicated ideas, rather than single words or simple phrases, for extended periods of time.
It's also not as effective as you may think. The study found the semantic decoder accurately generated text matching the meaning of the person's thoughts only about half the time.
Also: Universities that ban ChatGPT may be hurting their own admissions, according to a study
Half the time is still an exceedingly impressive result, considering this is the first successful non-invasive semantic decoder, requiring no surgical implants.
Though the semantic decoder developed by the researchers at UT Austin is capable of deciphering and reconstructing a person's thoughts to display them in text, it won't work on just anyone.
For starters, the system requires an fMRI scanner to capture an individual's brain activity, both during training and testing, though future versions could rely on other technologies, like functional near-infrared spectroscopy (fNIRS).
Also: AI bots have been acing medical school exams, but should they become your doctor?
The UT researchers also value mental privacy and want this technology used only by individuals who want to use it and can gain something from it. To diminish the potential for misuse, they showed in the study that the semantic decoder only works with willing participants who cooperate with it.
The system was unable to effectively decode thoughts from individuals it was not trained on, or who withdrew their cooperation after training.