What it does
MuseBand is a wearable AI interface that reduces cognitive fatigue in museums and public spaces. It enables quiet, screen-free interaction through throat mic input, emotion sensing, and haptic feedback—helping users stay engaged without speaking aloud.
Your inspiration
I believe personal AI agents will play a vital role in the future, but current hardware—like smartphones or earbuds—often makes interacting with AI awkward, especially in public spaces. While researching museum fatigue, I noticed how audio guides and voice assistants require users to speak aloud, which feels disruptive and unnatural in quiet environments. This insight led me to explore how we might design more intuitive, human-centered hardware. MuseBand was born from this vision: a wearable that enables private, seamless interaction with AI while responding empathetically to the user’s emotional and cognitive state.
How it works
MuseBand is worn around the neck and uses a throat microphone to capture voice input without requiring the user to speak aloud. This allows for discreet interaction with AI in quiet public spaces. The device features vibration motors that provide haptic feedback—guiding the user or responding to emotional states with subtle pulses. Embedded sensors detect fatigue, stress, and cognitive load, enabling the system to adapt its responses in real time. A small front-facing display shows simple facial animations, allowing people nearby to read the user’s status through nonverbal cues. MuseBand connects to a smartphone via Bluetooth, where an AI assistant processes input and delivers personalized guidance. Its modular design—with detachable battery-display pods and flexible connectors—keeps it lightweight and easy to wear for extended periods.
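As a concrete illustration of the adaptive haptic loop described above, here is a minimal Arduino-style sketch in the spirit of the early functional prototypes. The pin assignments, the single analog stress reading, and the thresholds are placeholder assumptions for this sketch, not MuseBand's actual firmware.

```cpp
// Minimal sketch: soften and slow the haptic pulses when the wearer reads as stressed.
// STRESS_SENSOR_PIN, MOTOR_PIN, and all thresholds are placeholder values.

const int STRESS_SENSOR_PIN = A0;  // hypothetical analog stress/fatigue sensor
const int MOTOR_PIN = 9;           // vibration motor on a PWM-capable pin

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
  Serial.begin(9600);              // log readings while prototyping
}

void loop() {
  int raw = analogRead(STRESS_SENSOR_PIN);   // 0-1023 reading from the sensor
  Serial.println(raw);

  bool stressed = raw > 700;                 // crude placeholder threshold
  int strength  = stressed ? 80 : 160;       // gentler pulses when stressed
  int gap       = stressed ? 1500 : 700;     // and more space between them

  analogWrite(MOTOR_PIN, strength);          // pulse on
  delay(200);
  analogWrite(MOTOR_PIN, 0);                 // pulse off
  delay(gap);
}
```

In the actual device, this decision would draw on the fused sensor data and the phone-side AI assistant rather than a single raw threshold.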
Design process
The design process began with ethnographic research in New York museums, where I observed visitor behaviors and conducted interviews to understand cognitive fatigue and emotional disengagement. I discovered that audio guides and voice assistants often required users to speak out loud or wear bulky headphones, adding to discomfort in already overwhelming environments. I started sketching ideas for a discreet, screen-free wearable that could enable private interaction. Initial foam and 3D-printed prototypes helped me test form factors around the neck. I then built early functional models using Arduino, vibration motors, and throat microphones to simulate low-profile voice capture and haptic feedback. User testing revealed key challenges: discomfort from rigid shapes, confusing vibrations, and limited battery life. I responded with improved curvature, softer materials, and a refined vibration language that testers found more intuitive. I explored modularity with magnetic pogo pins, embedded a small OLED for subtle expression display, and optimized sensor placement for emotion sensing. The final prototype balances emotional intelligence, comfort, and discreet AI interaction, allowing users to engage with information in public spaces without needing to speak aloud or rely on screens.
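To illustrate what a "vibration language" can look like on an Arduino prototype, the sketch below encodes a few distinct pulse patterns as alternating on/off durations. The cue names, timings, and pin number are hypothetical examples, not the vocabulary that was actually tested.

```cpp
// Illustrative haptic vocabulary: each cue is a list of alternating on/off
// durations in milliseconds. Cue names and timings are made-up examples.

const int MOTOR_PIN = 9;  // vibration motor on a PWM-capable pin

// Play one cue: pattern holds alternating on/off durations; steps is its length.
void playPattern(const unsigned int *pattern, int steps) {
  for (int i = 0; i < steps; i += 2) {
    analogWrite(MOTOR_PIN, 150);                  // gentle, non-startling intensity
    delay(pattern[i]);                            // "on" duration
    analogWrite(MOTOR_PIN, 0);
    delay((i + 1 < steps) ? pattern[i + 1] : 0);  // "off" duration, if any
  }
}

const unsigned int CUE_NEXT_EXHIBIT[] = {120, 80, 120};          // two short taps
const unsigned int CUE_ATTENTION[]    = {120, 80, 120, 80, 120}; // three short taps
const unsigned int CUE_TAKE_A_BREAK[] = {600};                   // one long, soft pulse

void setup() {
  pinMode(MOTOR_PIN, OUTPUT);
}

void loop() {
  playPattern(CUE_TAKE_A_BREAK, 1);  // demo loop: replay one cue
  delay(4000);
}
```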
How it is different
While throat microphones have existed for decades, they were not designed with AI in mind. MuseBand pairs a modern throat mic with machine learning–based filtering, allowing accurate capture of quiet, low-volume speech even in public settings. What truly sets it apart is its multimodal sensing system: vibration feedback, emotion recognition, and fatigue detection allow the device to respond intuitively to the user’s cognitive and emotional state. Unlike most wearables that rely on screens or apps, MuseBand offers a screen-free, ambient experience tailored to quiet environments like museums. Its modular design, with detachable battery-display pods and flexible connectors, keeps it lightweight yet powerful. A central front-facing display communicates the user’s state through animated facial cues, making internal emotions socially legible and enabling nonverbal empathy from those nearby.
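The phone-side, machine learning–based filtering itself is not reproduced here. As a simplified stand-in, the sketch below gates the throat-mic signal on its RMS level so that only windows with detected vocal activity are forwarded; the pin, window size, threshold, and serial output are assumptions for illustration only.

```cpp
// Simplified stand-in for the voice-capture front end: an RMS gate that only
// forwards throat-mic activity above a quiet-speech threshold. In the device
// itself, filtering happens on the paired phone with an ML model.

const int MIC_PIN = A1;             // amplified throat microphone on an analog input
const int WINDOW = 64;              // samples per RMS window
const float GATE_THRESHOLD = 12.0;  // placeholder RMS level that counts as speech

void setup() {
  Serial.begin(115200);             // stands in for the Bluetooth link to the phone
}

void loop() {
  long sumSquares = 0;
  for (int i = 0; i < WINDOW; i++) {
    int centered = analogRead(MIC_PIN) - 512;  // remove the mid-rail DC offset
    sumSquares += (long)centered * centered;
  }
  float rms = sqrt((float)sumSquares / WINDOW);

  if (rms > GATE_THRESHOLD) {
    Serial.println(rms);            // real firmware would stream the audio frame
  }
  delay(10);
}
```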
Future plans
Next, I plan to develop a streamlined version of MuseBand focused solely on voice input, transforming it from a museum-specific guide into a product for everyday use. It will retain the discreet, screen-free interaction and ergonomic design of the current prototype while being further optimized for comfort, wearability, and real-world AI communication. I also aim to simplify the hardware for easier prototyping and small-scale manufacturing. The goal is to extend its use beyond cultural spaces, supporting private, intuitive interaction with AI in daily life, from commuting to working in shared environments.
Awards
MuseBand is a newly completed project that has not yet received formal recognition; it is now entering its first round of award submissions.