International Top 20

AISIG

A device that enables visually impaired people to cross pedestrian crossings safely by using AI-based image recognition to determine the color of the signal and communicate it to the user.

  • It is attached to the grip of a white cane as shown in the figure.

What it does

AI-based image recognition solves the problem that visually impaired people cannot cross pedestrian crossings that lack acoustic signals. The image recognition determines the color of the signal and conveys it to the user through vibration.


Your inspiration

It all started when a visually impaired friend of mine was having a hard time crossing the street. Acoustic signals are not widely used because they are difficult to install. So I thought that if individuals could read the traffic signal with a device they carry themselves, they would be able to cross at any pedestrian crossing.


How it works

The camera on the AISIG captures the scene in front of the device. An AI model trained on 300 traffic light images then determines whether the signal is red or green. Accuracy is further improved by overlaying additional image processing, such as analyzing color components with OpenAI, on top of the model's judgment. The result is conveyed to the user by vibration motors: if the signal is red, the vibration is long; if it is green, the vibration is short and repeated. This difference in vibration pattern lets even a blind person understand the signal status.
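The project's firmware is not published, but the red/green decision and the two vibration patterns can be illustrated with a short sketch. The Python below assumes a Raspberry Pi-style setup with OpenCV and a vibration motor on a GPIO pin; the camera index, pin number, and HSV thresholds are placeholders, and the simple color-component check stands in for the trained classifier and the extra image processing described above.

```python
# Minimal sketch of the AISIG loop: classify the signal color in a camera
# frame and drive a vibration motor with the two patterns described above
# (one long pulse for red, short repeated pulses for green).
# GPIO pin, camera index, and HSV thresholds are illustrative assumptions.
import time
import cv2
import RPi.GPIO as GPIO

MOTOR_PIN = 18                         # assumed GPIO pin driving the motor
GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

def classify_signal(frame):
    """Return 'red', 'green', or None based on the dominant signal color."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    red = cv2.countNonZero(cv2.inRange(hsv, (0, 120, 120), (10, 255, 255)))
    green = cv2.countNonZero(cv2.inRange(hsv, (45, 120, 120), (90, 255, 255)))
    if max(red, green) < 500:          # too few colored pixels: no decision
        return None
    return "red" if red > green else "green"

def vibrate(seconds):
    GPIO.output(MOTOR_PIN, GPIO.HIGH)
    time.sleep(seconds)
    GPIO.output(MOTOR_PIN, GPIO.LOW)

cap = cv2.VideoCapture(0)              # camera facing the traffic light
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        state = classify_signal(frame)
        if state == "red":
            vibrate(1.0)               # one long vibration
            time.sleep(0.5)
        elif state == "green":
            for _ in range(3):         # short, repeated vibrations
                vibrate(0.15)
                time.sleep(0.15)
finally:
    cap.release()
    GPIO.cleanup()
```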


Design process

Based on the concept of "using a device carried by an individual to determine the status of a traffic light," we began with machine learning for the AI. We went to actual crossings, took pictures of the traffic lights, and updated the dataset with those images. Once a certain level of accuracy was achieved, we began to consider the shape of the device and how it would be used. Initially we were developing a ring-shaped device, but because it was difficult to point the camera toward the traffic light and it got in the way when not in use, we changed it to a device worn on a white cane. In addition, to avoid requiring any special movements during use, we added an acceleration sensor that determines the device's orientation. This allows image recognition to start automatically when the white cane is held perpendicular to the ground at a pedestrian crossing and to stop when the user starts walking.
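As a rough illustration of that start/stop behavior, the sketch below watches the gravity vector to decide when the cane is upright and watches short-term fluctuations in acceleration to detect walking. It assumes an MPU6050 accelerometer read through the `mpu6050` Python package; the axis mapping, thresholds, and the start/stop hooks are assumptions rather than the team's actual implementation.

```python
# Sketch of the orientation trigger: start recognition when the cane is held
# roughly vertical, stop once walking motion is detected.
# Sensor choice, axis mapping, and thresholds are illustrative assumptions.
import math
import time
from mpu6050 import mpu6050

GRAVITY = 9.81            # m/s^2
VERTICAL_FRACTION = 0.9   # share of gravity that must lie along the cane axis
MOTION_THRESHOLD = 2.0    # swing in magnitude (m/s^2) that counts as walking

sensor = mpu6050(0x68)    # default I2C address

def cane_is_vertical(a):
    # Assume the sensor's z axis runs along the cane shaft: when the cane is
    # perpendicular to the ground, gravity falls almost entirely on z.
    return abs(a["z"]) > VERTICAL_FRACTION * GRAVITY

def is_walking(samples):
    # Treat large swings in total acceleration magnitude as walking motion.
    mags = [math.sqrt(a["x"] ** 2 + a["y"] ** 2 + a["z"] ** 2) for a in samples]
    return max(mags) - min(mags) > MOTION_THRESHOLD

recognising = False
window = []

while True:
    accel = sensor.get_accel_data()
    window = (window + [accel])[-20:]     # keep roughly the last second

    if not recognising and cane_is_vertical(accel):
        recognising = True                # cane held upright at the crossing
        # start_image_recognition()       # e.g. the camera loop sketched above
    elif recognising and is_walking(window):
        recognising = False               # user has started walking
        # stop_image_recognition()
    time.sleep(0.05)
```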


How it is different

A competing product is a smartphone app that recognizes traffic signals in images, but it requires the user to hold the smartphone in one hand, which is dangerous because that hand is not free to break a fall. AISIG is attached to a white cane, so it can be used with the same movements as usual.


Future plans

In future development, the most important task is to improve recognition accuracy. At present the recognition accuracy is 80%, and we are aiming for 99% or more. We also believe we can support safer walking if the AI can recognize not only traffic signals but also cars and obstacles.


Awards

GUGEN 2022 Grand Prize

