National Runner Up

VisionCap: Virtual Eye & Assistant for Blind

VisionCap is a smart, AI-based virtual eye and assistant for the visually impaired, with aural feedback. It enables visually impaired users to 'see' through sound and to use basic smartphone features.

  • VisionCap enables visually impaired people to see the world by hearing it.

  • VisionCap demonstration of features.


  • It is a smart cap with a single-board computer and camera.

  • Image analysis results: these shots were taken by VisionCap, and the following were the outputs.

  • It also enables visually impaired people to use basic smartphone features via the cap itself.

  • Algorithm and data flow.

What it does

VisionCap is a smart, AI-based virtual eye and assistant for the visually impaired: it enables visually impaired people to see the world by hearing it. It describes the scene in front of the user and also assists with basic smartphone features.


Your inspiration

Globally, there are an estimated 314 million visually impaired people. The idea came to me in July 2018 after visiting a government hospital in my home country, India, where I saw a patient who was blind and had a fracture in his lower limb. I was surprised that even in the 21st century, with technologies like facial recognition and gesture recognition, the patient had not seen the world for nearly four decades of his life. I was deeply saddened after searching for the number of visually impaired people in the world, and thought: why not build a product that can give a virtual eye to someone in need?


How it works

VisionCap is based on Microsoft's Computer Vision API. I trained the API with various custom images of entrances and exits (doors), walkways (yellow dotted strips that help blind people navigate public spaces, mostly found in blind schools and government buildings), and basic day-to-day objects, some of which the API already recognized. VisionCap runs on a Raspberry Pi single-board computer (SBC) with an 8 MP camera that captures images in real time; the device then narrates and describes the scene to the user. A programmable button on top can be used to capture images and to access basic smartphone features such as making phone calls, narrating news headlines, reading SMS and emails, and a speech calculator. The device has a Bluetooth 4.0 module that connects it to the smartphone, providing a data connection for transmitting images to the API as well as relaying smartphone features and commands.
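The capture-and-describe loop above can be sketched in a few lines of Python. This is my reconstruction, not the project's actual source: it assumes the Azure Computer Vision "analyze" REST endpoint with the `Description` feature, and the `SUBSCRIPTION_KEY` and `ENDPOINT` values are placeholders you would supply from your own Azure resource.

```python
# Sketch of VisionCap's scene-description step (a reconstruction, not the
# original code). Assumes the Azure Computer Vision v3.2 "analyze" endpoint.
import requests

SUBSCRIPTION_KEY = "<your-key>"                              # placeholder
ENDPOINT = "https://<region>.api.cognitive.microsoft.com"    # placeholder

def describe_image(image_bytes):
    """Send a captured camera frame to the Computer Vision API, return JSON."""
    resp = requests.post(
        ENDPOINT + "/vision/v3.2/analyze",
        params={"visualFeatures": "Description"},
        headers={
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Content-Type": "application/octet-stream",
        },
        data=image_bytes,
    )
    resp.raise_for_status()
    return resp.json()

def pick_caption(analysis, min_confidence=0.3):
    """Pick the highest-confidence caption to narrate, or a fallback line."""
    captions = analysis.get("description", {}).get("captions", [])
    best = max(captions, key=lambda c: c["confidence"], default=None)
    if best and best["confidence"] >= min_confidence:
        return best["text"]
    return "I could not recognize the scene."

# Example response in the API's documented shape (abridged, made-up values):
sample = {"description": {"captions": [
    {"text": "a person walking towards a door", "confidence": 0.82},
]}}
print(pick_caption(sample))  # -> a person walking towards a door
```

On the device, the chosen caption string would then be spoken aloud through a text-to-speech engine to give the aural feedback described above.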


Design process

The first step of the design process was to run a survey and find the root of the problem. There are many solutions on the market today that make the lives of visually challenged people easier by warning about nearby hazards, but unfortunately none of them give the visually challenged person a virtual eye with which to see the world. The next step was to think about a design that would be appealing to wear while having enough space for all the electronic parts: a single-board computer (SBC), an HD camera, a button, and a lithium-ion battery. While visiting a hardware store, I found a construction helmet with an LED flashlight on it, and thought: if a flashlight can be fitted on a helmet, why can't a full blind-assist system be fitted on top of a regular cap? I went to many shops searching for a cap with a hard, solid base, and finally found one suitable for the prototype. With the cap in place, I started the electronic prototyping process: finding an SBC and attaching a camera and a button so that the device could easily fit onto the cap. Since I had worked with the Raspberry Pi before, I used it in the prototype.


How it is different

With VisionCap, a visually impaired person can actually hear what is in front of them, instead of just the distance reported by the dozens of smart canes on the market. Many products exist to help blind people, but most only give the user the distance to a hazardous object (e.g. hot objects) or identify blind-friendly walkways (yellow dotted strips that help blind people navigate public spaces, mostly found in schools and government buildings). VisionCap describes the scene in front of the visually impaired person, which sets it apart from every product available today. Hardware devices like Google Home and Amazon Alexa, and smartphone assistants like Apple's Siri and Google Assistant, support voice commands, but none of them offers the accessibility and portability of VisionCap, which can simply be worn by the user.


Future plans

The next target for the project is to rework the design of the cap and turn it into wearable goggles, then build an SDK and API for the VisionCap hardware platform through which other developers can build tailored apps for visually impaired users. In addition, I plan to optimize power consumption for maximum battery life. Another feature I plan to add is detection of zebra crossings, to help visually challenged people cross roads: with it, VisionCap will be able to tell the visually impaired whether the crossing signal is red or green, and when they can safely cross the road.


Awards

Finalist at Paper Presentation Competition, ASET Conference HCT Dubai, UAE. Presented the project at MakerFaire Dubai 2019.

