
The Haptic Eye

This smart device empowers blind students by instantly converting handwritten whiteboard text into tactile Braille—enabling real-time, independent learning without human assistance.


What it does

This device uses a camera and AI to recognize distant writing in real time and convert it into tactile Braille. It solves the information-access problem visually impaired users face in classrooms and workplaces, enabling independent participation.


Your inspiration

Inspired by a visually impaired student who could not follow whiteboard lectures, this project addresses a critical accessibility gap: current assistive technologies lack a portable, real-time solution for distant visual content, limiting user independence. Our system uses a camera and a CNN to recognize text and diagrams, instantly converting them into tactile Braille. This empowers users with autonomous access to information, fostering equal participation in classrooms and workplaces by bridging the gap between analog visual content and a tactile interface.


How it works

Our system is a real-time visual-to-tactile device with two main units. The Vision Unit (Raspberry Pi 3B+ and camera) captures video frames and uses OpenCV for key preprocessing steps such as adaptive thresholding and noise removal. For character recognition, it takes a dual path: Tesseract OCR for clear printed text and a custom CNN for handwriting. This Keras-built CNN, trained on the EMNIST dataset, processes individually segmented characters to improve accuracy across writing styles. Recognized text is converted into a 60-bit Grade 1 Braille string and transmitted via Bluetooth SPP to the Display Unit.

The ESP32-based Display Unit parses the string and uses I2C to command four PCA9685 PWM drivers. These drivers control 60 micro servos, each actuating a custom 3D-printed cam that raises or lowers an individual pin, forming tactile characters on a 10-cell display for the user to read.
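To make the preprocessing step concrete, here is a minimal OpenCV sketch of the kind of pipeline described, assuming a BGR camera frame; the block size and noise-kernel values are illustrative placeholders, not our tuned settings.

```python
import cv2

def preprocess(frame):
    """Grayscale, adaptive-threshold, and denoise a whiteboard frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Adaptive thresholding copes with the uneven lighting of a real
    # classroom better than a single global threshold would.
    binary = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
        cv2.THRESH_BINARY_INV,  # dark ink becomes white foreground
        31, 10)                 # block size and constant: placeholders
    # Morphological opening removes small specks of sensor noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (2, 2))
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
```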
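The handwriting path could be served by a small Keras CNN along these lines; this is a sketch assuming 28x28 EMNIST "byclass" character crops (62 classes), not our exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_char_cnn(num_classes=62):
    """Small CNN classifying single segmented characters (28x28 grayscale)."""
    return keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

Each segmented character from the preprocessed frame is resized and classified independently, which is what lets the recognizer cope with varied writing styles.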
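Encoding recognized text as a 60-bit string is straightforward: 10 cells times 6 dots, one bit per dot. The sketch below shows the idea with the a-j dot patterns only (real Grade 1 Braille covers the full alphabet, numbers, and punctuation) and sends the result over the SPP link; the serial port name assumes a Pi with the Bluetooth link bound via rfcomm.

```python
import serial  # pyserial

# Raised-dot numbers (1-6) for letters a-j; a subset for illustration.
BRAILLE_DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
    "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5),
    "i": (2, 4), "j": (2, 4, 5), " ": (),
}

def text_to_braille_bits(text, cells=10):
    """Encode up to `cells` characters as a 60-bit string, 6 bits per
    cell; bit i is '1' when dot i+1 is raised."""
    bits = []
    for ch in text.lower()[:cells].ljust(cells):
        dots = BRAILLE_DOTS.get(ch, ())
        bits.append("".join("1" if d in dots else "0" for d in range(1, 7)))
    return "".join(bits)

def send_braille(bits, port="/dev/rfcomm0", baud=115200):
    """Push one 60-bit frame to the Display Unit over Bluetooth SPP."""
    with serial.Serial(port, baud, timeout=1) as link:
        link.write(bits.encode("ascii") + b"\n")  # newline-delimited frames
```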
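On the receiving side the firmware runs on the ESP32; to keep all examples in one language, the following Python sketch mirrors the parsing logic it implements. Each bit becomes a target angle for one of 60 servos, addressed as (driver, channel) across the four 16-channel PCA9685 boards; the angle values are placeholders for the real cam geometry.

```python
def bits_to_servo_targets(bits, up_angle=90, down_angle=0):
    """Map a 60-bit Braille frame to 60 servo target angles; servo i
    turns a cam that raises pin i at up_angle and lowers it at down_angle."""
    assert len(bits) == 60
    return [up_angle if b == "1" else down_angle for b in bits]

def servo_address(index, channels_per_driver=16):
    """Return (driver, channel) for servo `index`: 60 servos span
    four PCA9685 drivers of 16 channels each."""
    return divmod(index, channels_per_driver)
```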


Design process

Our project features two core modular units: a stationary Vision Unit (Raspberry Pi, camera) and a portable Display Unit (ESP32, Braille cells). We chose a wireless Bluetooth connection early in the design phase to prioritize user mobility and a clean user experience. During system design, we selected four PCA9685 PWM drivers to reliably control 60 micro servos, and mechanically we devised a simple, effective cam-based mechanism to translate each servo's rotation into the linear motion of a Braille pin (a sketch of the servo-timing math follows below). On the software side, we anticipated varied real-world lighting and built an OpenCV preprocessing pipeline with adaptive thresholding and noise removal into the architecture to keep OCR accuracy reliable. The final blueprint reflects these choices: a fully wireless system with modular software.
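As an aside on the servo timing, the PCA9685 generates PWM with a 12-bit (4096-tick) counter, so converting a target angle to a tick count is simple arithmetic. The pulse-width limits below are typical micro-servo values (roughly 500-2500 microseconds at 50 Hz), assumed rather than measured from our hardware.

```python
def angle_to_ticks(angle, freq_hz=50, min_us=500, max_us=2500):
    """Convert a servo angle (0-180 degrees) to a PCA9685 tick count."""
    pulse_us = min_us + (max_us - min_us) * angle / 180.0
    period_us = 1_000_000 / freq_hz        # 20,000 us per cycle at 50 Hz
    return round(pulse_us / period_us * 4096)  # 4096 ticks per cycle
```

At 50 Hz, a 1500 us center pulse maps to about 307 ticks.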


How it is different

Our device's originality lies in its real-time autonomy, handwriting recognition, and affordability. Commercial Braille displays are output peripherals that depend on a host device; our system is a complete perception-to-output solution that operates without human mediation. A key innovation is our custom CNN, trained on EMNIST, which enables robust recognition of handwritten whiteboard text where standard OCR fails. We also achieve radical affordability by using low-cost servos and 3D-printed parts instead of the expensive piezoelectric actuators found in commercial units costing thousands of dollars. Finally, our open-source philosophy and singular focus on one problem, accessing distant text, make the device a reliable, community-driven tool and set it apart from overly complex, do-everything alternatives.


Future plans

Future plans will focus on three key areas. First, we will miniaturize the device with a custom PCB and a new housing for enhanced portability. Second, we will implement advanced AI by upgrading the processor to enable superior OCR and basic diagram recognition. Finally, the most critical phase will be to conduct extensive user trials with partner schools for the visually impaired, allowing us to gather direct feedback and refine the design to ensure it becomes a truly empowering tool for its users.

