Engineers at Texas A&M are developing a wearable motion-sensor device that they hope will one day break through the communication barrier between deaf people and the rest of the hearing world.
The device uses sensors to detect subtle muscular movements in the wrist and forearm, then transmits the data wirelessly to a program that translates the sign language into English.
So far, the researchers have completed a prototype that relies on a limited vocabulary of the most common conversational words. The system must “learn” each wearer individually, meaning that each new user must first repeat certain words and motions so that the device can translate accurately. Although the prototype translates only one word at a time, the team intends to develop more advanced software that will allow it to keep pace with real conversation.
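The article does not describe the translation algorithm itself, but the per-user calibration step followed by one-word-at-a-time recognition could, purely as an illustration, be sketched as a nearest-centroid classifier. Everything here — the class name, the feature representation, the distance measure — is a hypothetical stand-in, not the team’s actual method:

```python
import math

class SignClassifier:
    """Illustrative per-user gesture classifier: each wearer calibrates
    by repeating words; new sensor readings are matched to the nearest
    averaged template (centroid)."""

    def __init__(self):
        self.centroids = {}  # word -> averaged feature vector

    def calibrate(self, word, samples):
        # Average the wearer's repeated calibration samples into one template.
        n = len(samples)
        dims = len(samples[0])
        self.centroids[word] = [sum(s[i] for s in samples) / n for i in range(dims)]

    def translate(self, features):
        # One word at a time: return the calibrated word whose template
        # lies closest (Euclidean distance) to the incoming reading.
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(self.centroids, key=lambda w: dist(self.centroids[w], features))
```

For example, after calibrating “hello” and “thanks” with a couple of repetitions each, a fresh reading near the “hello” template would be translated as “hello.” A real system would, of course, work with raw muscle- and motion-sensor streams and a far richer model.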
“When we’re speaking, we put all the words in one sentence,” said lead researcher and associate professor of biomedical engineering Roozbeh Jafari. “The transition from one word to another word is seamless and it’s actually immediate.”
This is not the first attempt to create wearable technology that translates sign language into spoken words. Six designers from Asia University snagged a 2013 Red Dot Design Concept Award for a system of rings and bracelets that could read a signer’s hand motions, speak the words aloud, and display the text on a small screen. But the bracelets were met with some skepticism within the deaf and hard-of-hearing community because they could not read the musculature of the entire arm or the signer’s facial expressions.
Other attempts at similar technologies have relied on camera-based software, which proved incapable of detecting the nuances of intricate finger movements. By contrast, the A&M sensors excel at reading fine-grained gestures across a signer’s arms, wrists, and hands.
Jafari hopes that the software will eventually lead to more fluid communication between humans and computers by eliminating the need for keyboards on small devices like smart watches.
“We need to have a new user interface and a user interface modality that helps us to communicate with these devices,” Jafari told Live Science. “Devices like [the wearable sensor] might help us to get there.”
Feature photo courtesy of daveynin.