What it does
Dream Match lets children spark ideas with a match-like gesture and a few words. The system instantly turns their speech and emotion into living scenes, showing that every whisper can become a story—building lasting confidence to speak, imagine and create.
Your inspiration
Dream Match is inspired by the moment when children hesitate—full of ideas yet unsure how to share them. A symbolic “strike of a match” opens a direct path from reality to imagination: one small gesture and a single phrase instantly blossom into vivid projected scenes that listen and respond. Every thought is caught, visualized, and gently expanded, turning uncertainty into creative momentum. In this way, children grow a lasting confidence: “I dare to speak, and I can create.”
How it works
A match-strike gesture, detected by the top ToF sensor, wakes the projector or puts it to sleep. Six microphones then capture speech; a lightweight on-device model extracts key words and a mood profile (rate, pitch, energy). A request such as “a flying beetle castle” is parsed into beetle | castle | flight | sky, tagged “excited,” and sent, stripped of personal data, to a cloud image engine, which returns a 1024 × 1024 scene and a spoken prompt (“Should the castle use wings or a balloon?”) in about 3 s. Children can iterate, or say “make it move” to trigger a 4–6 s cloud-rendered animation streamed frame by frame. A micro DLP unit projects the result in under 1 s while local TTS delivers the prompt. Only anonymized tokens leave the device; in offline mode a smaller on-device generator keeps core functions running and nothing is uploaded.
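As a rough illustration of this pipeline, the sketch below combines the strike detection, keyword extraction, mood tagging and payload assembly described above. The function names, thresholds and payload fields are assumptions for illustration, not the actual Dream Match firmware.

```python
# Illustrative sketch of the on-device request pipeline (assumed names and thresholds).
import re
import time
import uuid

STRIKE_DISTANCE_MM = 80           # a ToF reading closer than this counts as a "strike"
STOPWORDS = {"a", "an", "the", "of", "and", "make", "it"}

def detect_strike(read_tof_mm, window_s: float = 0.4) -> bool:
    """Return True if the ToF sensor sees a close, fast pass within the time window."""
    start = time.time()
    while time.time() - start < window_s:
        if read_tof_mm() < STRIKE_DISTANCE_MM:
            return True
        time.sleep(0.01)
    return False

def extract_tokens(utterance: str) -> list:
    """Rough stand-in for the lightweight keyword model:
    'a flying beetle castle' -> ['flying', 'beetle', 'castle']."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return [w for w in words if w not in STOPWORDS]

def mood_from_prosody(rate_wps: float, pitch_hz: float, energy: float) -> str:
    """Map the rate/pitch/energy profile to a coarse mood tag."""
    if rate_wps > 3.0 and pitch_hz > 250 and energy > 0.6:
        return "excited"
    if energy < 0.3:
        return "quiet"
    return "neutral"

def build_request(utterance: str, rate_wps: float, pitch_hz: float, energy: float) -> dict:
    """Assemble the anonymized payload: tokens and a mood tag only, no audio, no identity."""
    return {
        "session": uuid.uuid4().hex,           # random per-session token, not tied to the child
        "tokens": extract_tokens(utterance),   # e.g. ['flying', 'beetle', 'castle']
        "mood": mood_from_prosody(rate_wps, pitch_hz, energy),
        "size": 1024,                          # requested scene resolution
    }

if __name__ == "__main__":
    fake_tof = lambda: 60  # pretend a hand sweeps past at 60 mm
    if detect_strike(fake_tof):
        print(build_request("a flying beetle castle", rate_wps=3.4, pitch_hz=290.0, energy=0.8))
```

Keeping only tokens and a coarse mood tag in the payload is what lets the raw audio stay on the device.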
Design process
1. Insight Phase – 18 home interviews and kindergarten observations revealed children’s hesitation to share ideas, shaping a framework of ritual trigger → instant feedback → emotional companionship.
2. Concept Phase – Full-scale cardboard and 3D-printed mockups plus Figma paper prototypes tested the hand-gesture start-up; a Wizard-of-Oz laptop generated images to validate story flow.
3. Functional Prototype – An RP2040 MCU, ToF sensor and micro-DLP module, linked via a Raspberry Pi Zero 2 W to a cloud model, formed the first self-contained unit (one possible serial link between the boards is sketched below). Six children (ages 6–8) used it for 30 minutes, generating 132 voice commands and 47 gestures; findings highlighted projection brightness and response time as the keys to immersion.
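One plausible way the Raspberry Pi Zero 2 W could receive strike events from the RP2040 in such a prototype is a simple serial message stream; the port name and the “STRIKE” message below are assumed for illustration, not the team’s actual firmware protocol.

```python
# Hypothetical Pi-side listener for strike events sent by the RP2040 over USB serial.
import serial  # pyserial

def wait_for_strike(port: str = "/dev/ttyACM0", baud: int = 115200) -> None:
    """Block until the microcontroller reports a match-strike gesture."""
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode("utf-8", errors="ignore").strip()
            if line == "STRIKE":
                return  # hand over to the speech-capture stage

if __name__ == "__main__":
    wait_for_strike()
    print("Gesture detected, start listening")
```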
How it is different
1. Emotional Ritual Trigger – A symbolic “match-strike” gesture replaces buttons or touch screens, turning the very act of starting the device into a story-like ceremony that instantly gives children a sense of safety and immersion.
2. Multimodal Empathic AI – Beyond simple keyword spotting, the system reads speech content, tone and gesture strength to offer real-time, emotion-aware encouragement, pairing what the child says with how they feel.
3. Progressive Creativity Scaffold – Instead of one-shot image output, the projector asks, the child answers, and the scene expands (a minimal sketch of this loop follows the list), guiding ideas from fragments to full stories while building narrative logic and self-efficacy.
4. Portable Growth Archive – Future versions will store an encrypted “creativity timeline,” allowing parents to generate monthly reports on evolving interests and expression skills, turning play sessions into long-term developmental insight.
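To make the ask, answer, expand loop concrete, here is a minimal sketch of how a scaffold could grow a child’s fragments into a fuller scene description turn by turn; the follow-up prompts and token handling are illustrative assumptions rather than the shipped dialogue logic.

```python
# Minimal sketch of a progressive creativity scaffold (illustrative, not production logic).
FOLLOW_UPS = [
    "Who lives there?",
    "What happens when the sun goes down?",
    "Should it make a sound when it moves?",
]

def scaffold_session(first_idea: str, listen, show, max_turns: int = 3) -> list:
    """Grow a scene token list turn by turn: ask, listen, expand, re-render."""
    scene = first_idea.split()          # e.g. ['flying', 'beetle', 'castle']
    for turn in range(max_turns):
        show(scene)                     # re-render the scene with everything said so far
        answer = listen(FOLLOW_UPS[turn % len(FOLLOW_UPS)])
        if not answer:                  # silence: stop gently instead of pressing on
            break
        scene += answer.split()         # each answer adds new fragments to the story
    return scene

if __name__ == "__main__":
    # Stand-ins for the microphone and projector, just to exercise the loop.
    replies = iter(["a tiny dragon librarian", "the walls glow like fireflies", ""])
    final = scaffold_session(
        "flying beetle castle",
        listen=lambda prompt: next(replies),
        show=lambda tokens: print("scene:", " ".join(tokens)),
    )
    print("final scene:", " ".join(final))
```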
Future plans
Content Ecosystem Expansion – Partner with authors and cultural institutions to build a multilingual library of stories and art styles. An open Creator API will let teachers and designers upload original assets, enabling cross-disciplinary experiences, from science-experiment plots to heritage shadow-puppet scenes.
Remote-Area Outreach – After mass production, work with charities to place simplified units in rural schools, giving more children access to expressive AI companionship and advancing educational equity in the AI era.
Awards
none