Developed by MIT researchers, this headpiece lets you talk to computers without actually speaking
Isn’t it frustrating when you can’t focus on your work because the person next to you is constantly typing and the click-clack of their keyboard just won’t stop? If only there were a way to silently type out our thoughts. Well, a group of researchers at MIT has come up with a device they call AlterEgo, which promises to boost your productivity by letting you talk to computers without speaking and listen to them without hearing.
Chances are you won’t fall in love with the headpiece’s looks: with a white plastic curve that extends from the ear down to the chin, the AlterEgo resembles dental gear more than a computer interface. Its functionality, however, is a different story. The AlterEgo uses electrodes that scan the jaw and face for the neuromuscular signals produced when a user thinks about verbalizing words without actually speaking them aloud, a process known as subvocalization. This lets the headpiece act as a microphone for the computer without picking up any actual sound. So how do you listen to the computer without hearing it? A pair of bone-conduction headphones transmits the computer’s audio through the bones of the jaw and skull directly to the inner ear, bypassing the ear canal entirely.
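For readers curious what the first step of such a pipeline might look like in code, here is a minimal, hypothetical sketch of cleaning up multi-channel electrode signals before any word recognition happens. The channel count, sampling rate, and filter band below are illustrative assumptions, not values taken from the MIT paper.

```python
# Hypothetical sketch: band-pass filtering raw neuromuscular (EMG-like) signals
# from a few facial electrodes. All numeric parameters here are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE_HZ = 1000   # assumed sampling rate of the electrode front end
N_CHANNELS = 4          # the study reportedly settled on four electrodes

def bandpass_filter(signal: np.ndarray, low_hz: float = 20.0, high_hz: float = 450.0) -> np.ndarray:
    """Band-pass each channel to keep the typical surface-EMG frequency range."""
    nyquist = SAMPLE_RATE_HZ / 2.0
    b, a = butter(4, [low_hz / nyquist, high_hz / nyquist], btype="band")
    return filtfilt(b, a, signal, axis=-1)

# Fake one second of 4-channel data just to show the call; a real device would stream this.
raw = np.random.randn(N_CHANNELS, SAMPLE_RATE_HZ)
clean = bandpass_filter(raw)
print(clean.shape)  # (4, 1000)
```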
According to MIT, AlterEgo offers greater privacy and cuts down on the noise of getting work done. For instance, you could silently consult your computer for high-scoring words during a game of Scrabble. On a more practical note, you could use the AlterEgo on an airplane, where it is already too noisy to speak aloud. Arnav Kapur, who leads the project, explained, “The motivation for this was to build an IA device – an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” In short, users get a quieter, less distracting way to work with a computer.
The concept of subvocalization itself is not new – it has been studied for over a century – but as a computer interface it remains comparatively unexplored. To build the device, the team placed an array of 16 electrodes on different parts of volunteers’ faces to find where the most reliable neuromuscular signals appeared, and had the volunteers subvocalize the same series of words four times each while the signals were recorded. After several such experiments, the team found that only four electrodes were actually needed. So far, the system’s vocabulary is confined to about 20 words each for a handful of computer tasks, such as solving multiplication problems and a chess application that accepts moves in standard alphanumeric chess notation. Once the words are subvocalized, a neural network passes the signal data through a series of processing layers that learn the correlations between particular neuromuscular signals and particular words.

In a 90-minute usability test with 10 subjects, each of whom had first spent about 15 minutes tuning the prototype to their own neurophysiology, the headpiece transcribed subvocalized words with 92 percent accuracy. Being from MIT, the team isn’t satisfied with 92 percent: Kapur says performance should improve with a larger training data set. He added, “We’re in the middle of collecting data, and the results look nice. I think we’ll achieve full conversation someday.” The team is therefore focused on gathering more data so the system can handle more complex commands.
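To make the recognition step more concrete, here is a minimal, hypothetical sketch of a small neural network that maps a window of multi-channel electrode signals to one word from a tiny vocabulary. The architecture, window length, and stand-in word list are assumptions for illustration only; the researchers’ actual model and preprocessing may differ.

```python
# Hypothetical sketch of the recognition step: classify a fixed-length window of
# filtered electrode signals as one of a small set of subvocalized words.
import torch
import torch.nn as nn

N_CHANNELS = 4          # four electrodes, per the study
WINDOW_SAMPLES = 1000   # assumed one-second window of signal
VOCAB = ["one", "two", "three", "multiply", "undo"]  # stand-in for the ~20-word vocabulary

class SubvocalClassifier(nn.Module):
    def __init__(self, n_channels: int, n_words: int):
        super().__init__()
        # 1-D convolutions over time pick up short neuromuscular activation patterns;
        # a final linear layer scores each word in the vocabulary.
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=11, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=11, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_words)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples) -> (batch, n_words) word scores
        return self.classifier(self.features(x).squeeze(-1))

model = SubvocalClassifier(N_CHANNELS, len(VOCAB))
window = torch.randn(1, N_CHANNELS, WINDOW_SAMPLES)      # one window of filtered signal
predicted = VOCAB[model(window).argmax(dim=-1).item()]   # most likely word (model is untrained here)
print(predicted)
```

In practice a model like this would be trained on each user’s own recordings, which matches the article’s note that subjects spent time tuning the prototype to their neurophysiology and that accuracy should rise with more training data.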
The research was presented in a paper at this year’s ACM Intelligent User Interfaces (IUI) conference, and the AlterEgo is demonstrated in the video below.
References: MIT News, Inhabitat