An estimated 45 million people in the world are totally blind. These people, particularly those who have been blind from birth, compensate in part for their lack of sight by developing other senses, especially hearing. One avenue of research to help blind people to ‘see’ focuses on using this finely tuned sense of hearing to help them recognise shapes, images and positioning in a new and exciting way.
This week neuroscientists published the results of research carried out at the Hebrew University of Jerusalem in which they taught blind people to ‘hear’ pictures. They also confirmed that during this process the same part of the visual cortex in the brain was activated as that used by ‘seeing’ people for image recognition – despite the participants, who were congenitally blind, never having seen in their lives.
This work was published by Ella Striem-Amit and Amir Amedi and built upon an algorithm called vOICe developed in 1992 by the Dutch engineer Peter Meijer. This algorithm converts simple greyscale images into a soundscape as the images are scanned from left to right. A simple example of this process is the soundscape produced from a series of square blocks arranged like steps rising diagonally across a page. Each block receives a tone of higher frequency than the one preceding it. As vOICe digitally scans the image it converts it electronically into a series of rising tones, which are then played back to the blind person. If there are seven blocks, this would sound something like a seven-note scale being played on a piano keyboard.
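For readers who enjoy a little code, here is a minimal Python sketch of the kind of left-to-right scan described above. It is only a toy illustration of the general mapping (row position to pitch, brightness to loudness); the frequency range, scan speed and greyscale handling are my own arbitrary choices, not Peter Meijer’s actual vOICe implementation.

import numpy as np

SAMPLE_RATE = 22050           # audio samples per second
COLUMN_DURATION = 0.1         # seconds of sound per image column
F_MIN, F_MAX = 200.0, 3200.0  # pitch range for the bottom/top image rows

def image_to_soundscape(image):
    """Scan a 2-D greyscale image (rows x cols, brightness 0..1) from
    left to right, mapping row position to pitch and brightness to
    loudness, and return the result as a single audio signal."""
    n_rows, n_cols = image.shape
    freqs = np.geomspace(F_MAX, F_MIN, n_rows)      # top rows sound higher
    t = np.arange(int(SAMPLE_RATE * COLUMN_DURATION)) / SAMPLE_RATE
    tones = np.sin(2 * np.pi * np.outer(freqs, t))  # one sine wave per row
    # Each column becomes a brightness-weighted mix of the row tones.
    columns = [image[:, col] @ tones for col in range(n_cols)]
    sound = np.concatenate(columns)
    return sound / (np.abs(sound).max() + 1e-9)     # normalise to -1..1

# Seven blocks rising diagonally like steps: each column lights up one
# row higher than the last, so the soundscape is a rising seven-note scale.
stairs = np.eye(7)[::-1]
audio = image_to_soundscape(stairs)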
As the image becomes more complex the vOICe representation of it also becomes more complex, requiring many hours of training to ‘understand’ the images represented by this augmented reality. To the untrained ear these soundscapes sound like the garbled tones you might pick up playing with an old radio receiver at night. If you are interested to hear what these ‘images’ sound like, click on the video below (but be warned – your sanity could be at risk if you force yourself to endure more than the first minute!).
The neuroscientists at the Hebrew University of Jerusalem carried out research with the help of congenitally blind people who had learned how to use the vOICe technology to hear images. After about 70 hours of training these people were able to identify the shape and position of objects from the soundscapes they were presented with. In effect they used their highly developed sense of hearing to recognise the pictures.
The following two-minute video was prepared by the team working with Amir Amedi and shows their blind participants listening to the soundscapes and identifying what they represent in a simple test environment. They not only recognised the images but were also able to write down what they ‘saw’.
Some participants were fitted with a head-mounted camera linked to a computer and headphones and asked to navigate their way around a room using only the sound cues provided via the headset. As they slowly moved around, the camera took digital snapshots which were then converted into soundscapes. These provided the users with enough information to navigate around the tables and waste bins which had been placed in the room as potential obstacles.
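To make that pipeline concrete, here is a rough sketch of what such a camera-to-soundscape loop might look like, reusing the image_to_soundscape() function from the sketch above. The OpenCV camera capture, the coarse 32×32 resolution and the playback library are my own assumptions for illustration; the researchers’ actual hardware and software are not described in the article.

import cv2                  # pip install opencv-python
import sounddevice as sd    # pip install sounddevice

cap = cv2.VideoCapture(0)   # stand-in for the head-mounted camera
while True:
    ok, frame = cap.read()  # take a digital snapshot
    if not ok:
        break
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(grey, (32, 32)) / 255.0  # coarse greyscale image
    audio = image_to_soundscape(small)          # from the sketch above
    sd.play(audio, SAMPLE_RATE)
    sd.wait()  # let the user hear the scene before the next snapshot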
Following training, participants achieved 78 percent accuracy when classifying three different categories: people, everyday objects (like mobile phones), and textured patterns. Some were able to report the body posture of the people in the images they ‘heard’, describing these postures verbally and even mimicking them (see the video at the end of this article).
Nature has already led the way in showing us how other senses can be used to facilitate navigation when vision is insufficient. Consider, for example, the ability of dolphins and bats to use echolocation to traverse oceans and caves where a lack of light limits the usefulness of normal vision.
In addition to helping their blind participants to identify images, the neuroscientists also made a discovery which has surprised many of their research colleagues. Using functional magnetic resonance imaging (fMRI) they identified which parts of the brain were activated when their participants used soundscapes to recognise images (such as body shapes). Rather than these images activating the parts of the brain normally associated with hearing, the area robustly stimulated by the soundscapes was the Extrastriate Body Area (EBA) within the visual cortex.
This EBA is exactly the area that would be stimulated when a normal ‘seeing’ person processes visual images. However, the seven participants in this part of the research were all congenitally blind. Having never been able to see, they had had no opportunity since birth to develop this part of the visual cortex through visual input.
Until now the organisation of these neurological areas was considered to be predominantly sensory in nature, with areas specifically dealing with hearing, smelling, seeing and so on. This research appears to indicate that the organisation of the brain, at least as far as image recognition is concerned, is much more functional in nature: we analyse and process information about images and positioning in a specific ‘functional’ region, independent of which senses were used to gather the inputs from the outside world.
To quote Ella Striem-Amit, co-author of the research paper, from Harvard University: “This may be time to think about a new model. The brain, it turns out, is a task machine, not a sensory machine. You get areas that process body shapes with whatever input you give them—the visual cortex does not just process visual information.” (Quotation with kind acknowledgements to sciencemag.org, March 2014.)
For those readers who found the video clip of vOICe (above) somewhat harsh on their ears, there is a new version which is more pleasing to listen to and easily accessible to all. This is called EyeMusic and is available as a free iPhone app. The new algorithm uses music to create more pleasant tones and can now provide information about colours. To see a man who has been blind since birth using EyeMusic to “see” drawn faces, please click on the video link here.
For any readers interested in finding out more about EyeMusic, please visit their website and try out the demonstration which is here. The site may take a little time to load, but if you would like to listen to the musical sounds created by familiar shapes it is worth the wait. Just click on the shapes to narrow down your selection. Try it with the stairs image (lesson 1).
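To give a rough idea of why a musical mapping is easier on the ear, the toy sketch below (reusing numpy and SAMPLE_RATE from the first sketch) renders the same seven-step staircase as notes of a pentatonic scale instead of a raw frequency sweep. The scale choice, note length and envelope are my own simplifications; EyeMusic’s actual note choices, timbres and colour-to-instrument assignments differ.

def pentatonic_frequency(step):
    """Frequency of the nth note of a C-major pentatonic scale,
    starting from middle C."""
    semitones = [0, 2, 4, 7, 9]  # C D E G A
    octave, degree = divmod(step, len(semitones))
    return 261.63 * 2 ** (octave + semitones[degree] / 12)

def stairs_melody(n_steps=7):
    """Render the rising 'staircase' image as a short pentatonic melody."""
    t = np.arange(int(SAMPLE_RATE * 0.25)) / SAMPLE_RATE  # 0.25 s per note
    envelope = np.hanning(len(t))  # fade each note in/out to avoid clicks
    notes = [np.sin(2 * np.pi * pentatonic_frequency(s) * t) * envelope
             for s in range(n_steps)]
    return np.concatenate(notes)

audio = stairs_melody()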
To round off this article let me leave you with a short clip from Dr Amir Amedi as he explains the work he is doing in Israel and demonstrates how some of these tests are being performed by his blind participants.
If you have enjoyed reading about this latest research to help the blind ‘hear’ images, and about the related developments in neuroscience research on the visual cortex, please visit the Alpine Press section of www.ChrisDuggleby.com again.
Chris Duggleby
The original research presented here was published in Current Biology, 17 March 2014, Vol. 24, Issue 6, pp. 687–692 (“Visual Cortex Extrastriate Body-Selective Area Activation in Congenitally Blind People ‘Seeing’ by Using Sounds”).
You may also find some of my other recent articles of interest. Please click on the following links and do not hesitate to let me know your views using the comments boxes on each page:
7th July 2012: Sexual Equality in the Black Forest town of Triberg: Men only parking spaces.