What it does
KALAM is an EEG-based thought‑to‑speech system that helps non‑verbal individuals communicate by translating imagined Arabic words into text or audio, allowing for faster and more inclusive interaction.
Your inspiration
The inspiration for KALAM came from seeing how challenging communication can be for non‑verbal individuals. Many existing assistive tools are slow or limited, often relying on eye movement or physical input. We wanted to create a more natural, direct way for them to express themselves. The idea came from research in brain‑computer interfaces and the desire to apply AI to make imagined speech usable, especially in Arabic, to give people a voice that truly reflects their thoughts.
How it works
KALAM works by capturing brain signals through an EEG headset while a person imagines specific Arabic words. The raw signals are first cleaned to remove noise, then processed to extract the key patterns in the brain's response. An SVM classifier recognizes these patterns and matches them against a library of 30 trained words. Once a match is found, the word is displayed as text or spoken aloud through the app. All processing happens in real time, enabling quick communication with no physical movement, just by thinking of the word.
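The description above does not specify the exact preprocessing or features, so the following is a minimal Python sketch of one plausible version of this pipeline: band-pass filtering to clean the signal, band-power features, and an SVM prediction mapped to a word list. The sampling rate, frequency bands, and helper names here are assumptions for illustration, not KALAM's actual implementation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

FS = 128  # assumed EEG sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # assumed bands

def preprocess(raw, fs=FS, lo=1.0, hi=40.0):
    """Band-pass filter each channel to remove drift and high-frequency noise."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, raw, axis=-1)

def band_power_features(epoch, fs=FS):
    """Average spectral power per channel in each band (one common feature choice)."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)  # shape: (n_channels * n_bands,)

def classify_epoch(clf: SVC, epoch: np.ndarray, words: list) -> str:
    """Map one imagined-speech epoch (channels x samples) to a vocabulary word.

    clf is a hypothetical SVM already trained on the 30-word dataset;
    words maps class indices to the Arabic vocabulary.
    """
    x = band_power_features(preprocess(epoch))
    return words[int(clf.predict(x.reshape(1, -1))[0])]
```

In a real-time loop, a sketch like this would run on each fresh window of EEG samples, with the predicted word handed to the app's text display or text-to-speech output.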
Design process
We started by researching brain‑computer interfaces and selected the Emotiv EPOC X EEG headset as our input method because it is non‑invasive and affordable. Our first prototype was a basic setup with the headset and raw signal recording: it could capture brainwaves but had no way to interpret them. To build a dataset, we designed controlled sessions with 30 volunteer participants. Each wore the EEG headset while seated in a quiet room to minimize interference. In each trial, the participant first looked at a blank screen for 5 seconds to relax and establish a baseline. An Arabic word then appeared on the screen for 5 seconds so they could visualize its shape. Finally, they closed their eyes and imagined the word for 10 seconds while we recorded their EEG signals. We repeated this for 30 words, with 10 trials per word for every participant. With this dataset, we developed feature extraction methods to identify patterns in the signals. Early models struggled with accuracy, so we tested different machine learning approaches and eventually settled on an SVM classifier. With each iteration we improved the noise-removal preprocessing and tuned the classifier for higher accuracy. Finally, we integrated everything into a user‑friendly app with real‑time text and audio output.
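To make the trial structure and tuning step concrete, here is a short sketch of how the recorded trials could be segmented and how an SVM might be tuned on the resulting dataset. The sampling rate, RBF kernel, and hyperparameter grid are assumptions; only the 5 s / 5 s / 10 s protocol and the 30-word, 10-trial structure come from the sessions described above.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 128  # assumed sampling rate in Hz
BASELINE_S, DISPLAY_S, IMAGERY_S = 5, 5, 10  # protocol from the sessions above

def imagery_epoch(trial: np.ndarray, fs: int = FS) -> np.ndarray:
    """Keep only the 10-second imagined-speech window from one recorded trial.

    trial: (n_channels, n_samples) array covering baseline + display + imagery.
    """
    start = (BASELINE_S + DISPLAY_S) * fs
    return trial[:, start : start + IMAGERY_S * fs]

def train_word_classifier(X: np.ndarray, y: np.ndarray) -> GridSearchCV:
    """Tune an RBF-kernel SVM over C and gamma with cross-validation.

    X: (n_trials, n_features) feature matrix, e.g. 30 words x 10 trials per subject
    y: (n_trials,) word labels in 0..29
    """
    pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = {"svc__C": [0.1, 1, 10, 100], "svc__gamma": ["scale", 0.01, 0.001]}
    search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy")
    search.fit(X, y)
    return search
```

Cross-validated grid search is one standard way to carry out the kind of classifier tuning mentioned above; the actual hyperparameters and kernel KALAM converged on may differ.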
How it is different
KALAM is unique because it focuses on imagined speech in Arabic, a language often overlooked in brain‑computer interface research, while most existing systems target English or other widely used languages. Unlike many assistive tools that rely on eye tracking, physical switches, or limited symbol boards, KALAM allows users to communicate purely through thought, without any movement.
Future plans
Our next steps for KALAM focus on making it more practical and scalable. Right now, the system works best in a controlled environment with minimal noise and distractions. We plan to improve signal processing so users can communicate effectively in everyday settings without strict conditions. We also aim to expand beyond the current fixed library of 30 words by developing models that can handle an open vocabulary of words and phrases, enabling more natural conversations. Long term, we hope to refine the app, explore partnerships, and bring KALAM to market as an affordable, inclusive communication tool.
Awards
We were recently awarded first place in our university's 2025 CEN Senior Design Projects Competition for the Computer Science Department.