What it does
According to the World Health Organization, 5% of the world's population is affected by deafness, a hidden disability. We designed a low-cost spectacle add-on that allows the deaf and hard of hearing to visualize sounds through light cues.
We rely almost exclusively on our sense of hearing to keep track of things beyond our field of vision, so a deaf or hard of hearing individual may face delays or difficulty in responding to dangers and opportunities around them. Moreover, just as the silence of electric cars poses a risk to hearing pedestrians, a car honking at a deaf person from beyond their field of vision poses a risk too. We wanted to design a product that improves situational awareness for the deaf and hard of hearing, especially in unfamiliar and dynamic environments. Our inspirations included the Heads-Up Display in first-person shooter games and Google Glass.
How it works
The hardware circuit consists of an Arduino microcontroller, a microphone, an RGB LED and a Li-Po battery. The circuit is programmed to display specific light patterns according to the sound detected. Sound recognition is performed using a Fast Fourier Transform, whose output is matched against predefined tones stored in the microcontroller. The RGB LED is guided by a strip of acrylic, creating an edge-lit effect that illuminates in front of the user's eyes. The device is an add-on that can be attached to a pair of spectacles.
We scoped a problem related to the theme 'A Better World': improving situational awareness for the deaf and hard of hearing. Thereafter, we performed precedent analysis and a literature review to survey current research and products. We also sought the assistance of researchers in our university, who advised us on various forms of sensory substitution such as vibrations, low-level electric shocks and flashing lights. We held several brainstorming and C-sketching sessions to generate ideas, and finally converged on a first prototype: a simple circuit with LEDs mounted on metal wire and buttons to trigger the LEDs manually. During testing, users listened to music while walking outdoors, and another person following behind would activate the LEDs to alert them. We used the feedback to 3D print a spectacle attachment with an acrylic diffuser. A second round of user testing was conducted, along with an informal group interview with 5 deaf individuals, aided by an interpreter. The findings were then used to further refine the diffuser. To avoid designer bias, we stayed in constant contact with 2 deaf individuals so as to involve them at every stage of our design.
How it is different
The ability to assemble the entire system from easily available electronic and 3D-printable parts makes it a cheap and accessible alternative to conventional hearing aids, which are relatively expensive and costly to maintain. Moreover, by using a single pixel to convey information about sound, we reduce the risk of information overload during emergencies.
We are planning to work closely with researchers at the Augmented Human Lab, Singapore University of Technology and Design (well known for their FingerReader and EyeRing) to explore other materials that allow the light's position to be customized while keeping it inconspicuous, improve the ergonomics of the enclosure, develop better sound-recognition algorithms, and integrate the functionality into a dedicated printed circuit board. Ultimately, we would like to open-source the entire design, code, 3D models and instructions so that anyone can fabricate the device themselves.