Researchers from the University of Texas at Austin have created a mind-reading AI system that can take images of a person's brain activity and translate them into a continuous stream of text. Called a semantic decoder, the system may help people who are conscious but unable to speak, such as those who've suffered a stroke. This new brain-computer interface differs from other 'mind-reading' technology because it doesn't need to be implanted into the brain.

The researchers at UT Austin took non-invasive recordings of the brain using functional magnetic resonance imaging (fMRI) to reconstruct perceived or imagined stimuli as continuous, natural language. While fMRI produces excellent-quality images, the signal it measures, which depends on blood oxygen levels, is very slow: an impulse of neural activity causes a rise and fall in blood oxygen over about 10 seconds. Because naturally spoken English uses more than two words per second, each brain image can therefore be affected by more than 20 words.

That's where the semantic decoder comes in. It uses an encoding model, similar to those underlying OpenAI's ChatGPT and Google's Bard, that can predict how a person's brain will respond to natural language.
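The two quantitative ideas above can be sketched in a few lines of Python. This is an illustration only, not the researchers' code: the speech rate and blood-oxygen window come from the article, while the toy "brain responses" and candidate phrases are invented to show how an encoding model's predictions could be compared against an observed scan.

```python
# Sketch (not the authors' code) of two ideas from the article:
# 1) why each fMRI image mixes many words, and
# 2) how an encoding model that predicts brain responses can rank candidate text.
# All vectors and phrases below are hypothetical, chosen purely for illustration.

WORDS_PER_SECOND = 2       # article: natural English exceeds two words per second
BOLD_WINDOW_SECONDS = 10   # article: blood oxygen rises and falls over ~10 seconds

words_per_image = WORDS_PER_SECOND * BOLD_WINDOW_SECONDS
print(words_per_image)     # -> 20: each brain image reflects at least 20 words

# Toy decoding step: score each candidate phrase by how closely the encoding
# model's *predicted* brain response matches the *observed* response.
def score(predicted, observed):
    # Negative squared error: higher means a closer match.
    return -sum((p - o) ** 2 for p, o in zip(predicted, observed))

observed = [0.9, 0.1, 0.4]                  # hypothetical fMRI-derived features
candidates = {
    "the dog ran": [0.8, 0.2, 0.5],         # hypothetical model predictions
    "she sang softly": [0.1, 0.9, 0.9],
}
best = max(candidates, key=lambda c: score(candidates[c], observed))
print(best)  # -> the dog ran: its predicted response best matches the scan
```

The key design point is that the decoder never reads words directly off the scan; it generates guesses and keeps whichever one the encoding model says would have produced the most similar brain activity.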