Songs for the Deaf: A Word With Frank Russo - Sound Week
Additional contributors: Matthew DeMello

Music history is punctuated by an undying desire to overcome the greatest impediment to its enjoyment: the damage to, or absence of, one’s hearing. The legends surrounding hearing-impaired musical mavens from Beethoven to Brian Wilson are embroidered with their struggles to create art in defiance of their inability to perceive their own work the way everyone else hears it. From the perspective of the audience, it is the altruistic essence of musical creativity that fosters the belief that nearly any barrier to experiencing the art form can be transcended.

Ludwig van Beethoven. Photo courtesy of Fraunhofer-Gesellschaft Mediendienst.

There’s a great scene in Mr. Holland’s Opus where Richard Dreyfuss’ character curates a musical performance for his deaf son. He leads his teenage ensemble in a rendition of John Lennon’s “Beautiful Boy,” accompanied by a synchronized light display and Dreyfuss’ clumsy sign language. Where sound waves are largely wasted, emotional efficiency is not, and the simple message of “I love you” is given greater weight by the touch of vanity in the experiment.

In reality, the mechanics of sensory experience make Beethoven’s 9th a work more easily translated between senses than, say, trying to convey what the ceiling of the Sistine Chapel looks like to the congenitally blind. To that end, the folks at the Centre for Learning Technologies and the Science of Music, Auditory Research and Technology (SMART) Lab at Ryerson University have explored these mechanics in an attempt to perfect a device that can effectively translate musical experiences for the hearing impaired.

The Emoti-chair, developed by the University’s Alternative Sensory Information Displays (ASID) project, is a “sensory substitution technology” taking the form of a plush camping chair embedded with voice coils that project different sound vibrations into several locations of the body. One of the co-inventors of the device, Dr. Frank Russo of Ryerson, talked to BTR on the phone about the creation of the Emoti-chair and the results of their Concert for the Deaf event that took place this past March.

Emoti-chairs in action. Photo courtesy of Ryerson University.

BreakThru Radio: Tell us more about the Emoti-chair and how it came to be.

Dr. Frank Russo: Okay so, as you say, the Emoti-chair is a sensory substitution technology, and that simply means that we’re taking information from one of your modalities, in this case audition, or sound, and we are making it available to another modality. Typically, you’re doing this because you’re interested in supporting perception in people who have some sort of damage to one of the modalities, and that’s exactly what we were trying to do. We were trying to bring sound to deaf and hard of hearing individuals, and the team involved in the development was primarily concerned with film and music from the start, and with the communication of emotion in both. It became clear to us that without substantial access to music, it would be very challenging to know whether or not you were in a scary scene in a movie, or to be as surprised by a movie; and in the case of music, if you only have residual hearing, there’s really just limited access to the nuance in texture and to the emotional tenor of a piece of music.

So, many people have tried to deal with this problem, if you like, in the past and we’re not the first, by any means, to dabble in vibration. But I can tell you a little bit about some of the decisions we made…

BTR: Certainly.

FR: One of the things we observed early on was that vibration has a tendency to mask, which is a common property across the senses. What I mean by ‘mask’ is that one vibration will cover up another vibration, so that even though you give the skin ten different sources of vibration, it will only feel one of them. That’s a problem if you’re trying to represent music, because often there’s more than one thing happening in the music. So we made a decision to create a kind of spatial display, and to do this we embedded voice coils in a chair. Initially, it was a simple camping chair that conformed to the contour of the back. The voice coils we embedded are essentially speakers, but without the cones.

BTR: Now what difference does that make when you remove the cones?

FR: When you remove the cones, you remove the majority of the sound that the speakers produce. If you’ve ever removed the cone from a speaker, you’ll see that there’s a flat surface that moves up and down at the frequency of whatever the music is playing at any given moment. So if you’re listening to some bass-y music, you’ll see that voice coil move relatively slowly compared to when you’re hearing something at the high end of the piano range. Since you can drive a voice coil with a regular audio signal, you’re really preserving a lot of the nuance in the music. And by separating the music into separate frequency channels, you can get around the problem of masking that I was mentioning.

So what we do is something called “band-pass filtering,” which anyone with an equalizer has done at home. What this means is separating out bands of frequencies and dealing with them independently. So you can imagine there’s a low-frequency band between 40 and 100 Hz, which we direct to one set of voice coils positioned in one area of the chair. Initially, we put the low frequencies down on the lower end of the body — so on the upper legs — and the higher frequencies in the upper back, and you fill in the rest in a logical kind of order.
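For the technically inclined, here is a minimal sketch in Python of the kind of band-pass filtering Russo describes: one audio signal split into frequency channels, each of which could drive a separate set of voice coils. Only the lowest band (40–100 Hz) comes from the interview; the other band edges, the filter order, and the function name are illustrative assumptions, not the Emoti-chair’s actual design.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_into_bands(signal, sample_rate, band_edges):
    """Split a mono audio signal into band-passed channels,
    one per pair of adjacent edges in band_edges (Hz)."""
    channels = []
    for low, high in zip(band_edges[:-1], band_edges[1:]):
        # 4th-order Butterworth band-pass, as second-order sections
        sos = butter(4, [low, high], btype="bandpass",
                     fs=sample_rate, output="sos")
        channels.append(sosfilt(sos, signal))
    return channels

# Example: a one-second test tone mix, split into eight bands.
# Band edges above 100 Hz are hypothetical, chosen only to illustrate.
fs = 44_100
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * 60 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)
edges = [40, 100, 200, 400, 800, 1600, 3200, 6400, 12800]
coil_signals = split_into_bands(audio, fs, edges)
# coil_signals[0] would drive the voice coils assigned to the lowest band.
```

Each channel would then be amplified and routed to the voice coils at one position on the chair, which is what lets several simultaneous parts of the music reach the skin without masking one another.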

BTR: Is there a reason, in terms of tonality, that they’re placed in different parts of the body? Like, do bass sounds create a different emotional effect if they’re kept more towards your feet?

FR: You know, that’s a great question, and we’ve been meaning to do the proper experiments to figure that out for a long time, and we’ve never done them, to be honest. What we do know is that it helps immensely to order the voice coils from low to high or high to low. If they’re jumbled, I think it basically takes longer for our brains to form a representation of what’s going on. Our initial feeling was that it should go from low to high, because there’s some evidence that, cross-culturally, people tend to have these spatial associations where higher frequencies are associated with higher positions in space. There are some exceptions to that, cross-culturally, and there’s a little bit of debate about it, but that’s what drove our initial conceptualization of “low frequency, low position on the body.” But in more recent implementations we’ve flipped the map entirely and done it the other way, and we’ve had some very positive reports from users. I think part of the logic of putting the low frequencies up at the top of the body is that you’re really dealing with the chest, which is this deep, resonant cavity. It may have to do with resonances: lower frequencies resonate [in] the chest better than they do [in] the upper legs. So you’re taking advantage of that, and it adds some dimension to the sensory experience. But on the basis of the real research we’ve done, all I can say is that organizing these voice coils from low to high or high to low is important. When they’re jumbled, people have a hard time understanding the music.
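As a toy illustration of that last point: the band-to-body mapping only needs to be monotonic, and whether it runs low-to-high or high-to-low is a design choice. The position names below are hypothetical stand-ins, not the chair’s actual layout.

```python
def map_bands_to_positions(positions, flipped=False):
    """Assign frequency bands (index 0 = lowest band) to chair positions.

    positions is listed from the bottom of the chair upward.
    flipped=False: lowest band at the lowest position (the original map).
    flipped=True: lowest band at the chest/upper back (the newer map).
    Either way the ordering stays monotonic, which is what Russo says
    matters; a jumbled assignment is what hurts comprehension.
    """
    ordered = list(reversed(positions)) if flipped else list(positions)
    return dict(enumerate(ordered))

# Hypothetical rows of voice coils, bottom to top of the chair:
rows = ["upper legs", "seat", "lower back", "mid back", "upper back"]
print(map_bands_to_positions(rows))                # bass at the upper legs
print(map_bands_to_positions(rows, flipped=True))  # bass at the upper back
```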

For more of our interview with Dr. Russo, tune in this Thursday to the latest episode of BTR’s new current events podcast, Third Eye Weekly.
