Meta can turn your thoughts into words typed on a screen, if you don't mind lugging around a machine the size of a room
Maybe someday you won't have that pesky lag between thinking and posting online
![Meta AI Typing](https://cdn.mos.cms.futurecdn.net/F64rwBGFZttTc5z8BWHVzL-1200-80.png)
- Meta is testing a machine that decodes brain signals into words typed into a computer.
- The brain-typing system is up to 80% accurate but nowhere near practical.
- The machine weighs half a ton, costs $2 million, needs a magnetically shielded room, and even slight head movements disrupt the signal.
Meta is showing off a machine capable of turning your thoughts into words typed on a screen, but don't expect to write your Instagram captions telepathically any time soon. The device weighs about half a ton, costs $2 million, and is about as portable as a refrigerator. So, unless you were planning to lug around a lab-grade magnetoencephalography (MEG) scanner, mind texts are off the table. And that's before considering that you can't move your head even slightly while using it.
Still, what Meta has done is impressive. Its AI and neuroscience teams have trained a system that can analyze brain activity and determine which keys someone is pressing, purely based on thought. There are no implanted electrodes and no sci-fi headbands, just a deep neural network deciphering brainwaves from outside the skull. The research, detailed in a pair of newly released papers, reveals that the system is up to 80% accurate at identifying letters from brain activity, allowing it to reconstruct complete sentences from a typist's thoughts.
While typing out phrases, a volunteer sits inside a MEG scanner, which looks a bit like a giant hair dryer. The scanner picks up magnetic signals from neurons firing in the brain, and an AI model, aptly named Brain2Qwerty, gets to work learning which signals correspond to which keys. After enough training, it can predict the letters a person is typing, though not perfectly.
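The core idea, stripped of the deep-learning machinery, is a classification problem: map a short window of multi-channel sensor readings to the key that was pressed. Here is a minimal sketch of that idea using a simple linear classifier on synthetic data; the channel counts, window sizes, and data are illustrative assumptions, not details of Meta's actual Brain2Qwerty pipeline.

```python
# Toy sketch of decoding key presses from MEG-like signals.
# Everything here (shapes, noise levels, the linear model) is an
# illustrative assumption, not Meta's published architecture.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

N_CHANNELS = 32   # simulated sensors (real MEG rigs have ~300)
WINDOW = 100      # samples per key press (~100 ms at 1,000 Hz)
N_KEYS = 26       # letters a-z

# Pretend each key evokes a distinct spatio-temporal pattern.
key_patterns = rng.normal(size=(N_KEYS, N_CHANNELS * WINDOW))

def simulate_trial(key: int) -> np.ndarray:
    """One noisy 'recording' of a key press, flattened to a feature vector."""
    return key_patterns[key] + rng.normal(scale=2.0, size=N_CHANNELS * WINDOW)

# Train on labeled trials: (signal window, key pressed) pairs.
train_y = rng.integers(0, N_KEYS, size=500)
train_X = np.stack([simulate_trial(k) for k in train_y])
clf = LogisticRegression(max_iter=1000).fit(train_X, train_y)

# Evaluate on fresh trials, analogous to the ~80% letter accuracy reported.
test_y = rng.integers(0, N_KEYS, size=100)
test_X = np.stack([simulate_trial(k) for k in test_y])
accuracy = clf.score(test_X, test_y)
```

The real system replaces the linear model with a deep network and real sentences, but the training loop is the same shape: record signals while the participant types, learn the signal-to-key mapping, then decode unseen brain activity.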
Brain typing
Telepathic typing has some real limits for now. The scanner needs a specially shielded room to block out Earth's magnetic field, which is a trillion times stronger than the magnetic signals coming from your head. Plus, the slightest head tilt scrambles the signal. But this is more than just another Meta-branded product in the making. The research could really boost brain science and, eventually, medical care for brain injuries and illnesses.
"To explore how the brain transforms thoughts into intricate sequences of motor actions, we used AI to help interpret the MEG signals while participants typed sentences. By taking 1,000 snapshots of the brain every second, we can pinpoint the precise moment where thoughts are turned into words, syllables, and even individual letters," Meta explained in a blog post. "Our study shows that the brain generates a sequence of representations that start from the most abstract level of representations—the meaning of a sentence—and progressively transform them into a myriad of actions, such as the actual finger movement on the keyboard."
Despite its limitations, the non-invasive aspect of Meta's research makes for a much less scary approach than implanting a computer chip right in your brain, as companies like Neuralink are testing. Most people wouldn't sign up for elective brain surgery. Even though a product isn't the stated goal of the research, history demonstrates that giant, lab-bound machines don't have to stay that way. A tiny smartphone does what a building-size computer couldn't in the 1950s. Perhaps today's brain scanner is tomorrow's wearable.
Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.