What it does
Sonica lets users create synth and percussion prompts through gestures, with an AI-generated counterpoint completing the track. It's intuitive, accessible, and does not require technical skills, allowing users to rely on their intuition and personal taste.
Inspiration
While researching the use of artificial intelligence in music generation, we noticed a lack of accessible tools that let users create music intuitively. This sparked the idea of designing a system that could engage a broader audience, regardless of their musical or technical expertise. Our intention was to create an experience that could support users throughout their creative journey, assisting them or providing inspiration. From the beginning, our goal was to develop a tool that would let users explore musical creation through inputs closer to how ideas naturally take shape in the human mind.
How it works
Using two separate devices, users generate sounds by moving their hands over ultrasonic sensors. The beat module uses an air-drumming gesture, while the synth module responds to swinging motions, like conducting an orchestra or dancing. We focused on making the interaction intuitive, so users can understand the system through use, without needing to know the code. Each module runs on a dedicated Arduino connected to Cycling '74's Max 8. The Arduinos handle sensor configuration and detect hand presence within 50 cm. In the synth module, distance data is mapped to musical chords (C, G, D, A, E, B); in the beat module, hand presence is read as a binary input, with "1" triggering a custom beat sound. By opening the Arduino serial port, we enabled communication with Max 8, providing real-time audio feedback. Chords in the synth are arranged spatially: higher pitches correspond to movements up and to the right, while the beat module responds directly to drumming gestures.
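The two sensor mappings described above can be sketched in a few lines. This is a simplified host-side simulation, not the actual Arduino firmware or Max patch: the 50 cm range and the chord set (C, G, D, A, E, B) come from the description, while the equal-width distance buckets, the chord ordering, and the function names `distance_to_chord` and `beat_trigger` are our assumptions for illustration.

```python
# Simulation of the two modules' sensor logic (hypothetical names; the real
# logic runs on the Arduinos and reaches Max 8 over the serial port).

MAX_RANGE_CM = 50                        # hand is detected only within 50 cm
CHORDS = ["C", "G", "D", "A", "E", "B"]  # chords used by the synth module

def distance_to_chord(distance_cm):
    """Synth module: map a hand distance to one of six chords.

    The 50 cm range is split into six equal buckets; the exact boundaries
    are an assumption, only the chord set comes from the project text.
    """
    if distance_cm < 0 or distance_cm >= MAX_RANGE_CM:
        return None                      # no hand in range: play nothing
    bucket = int(distance_cm / (MAX_RANGE_CM / len(CHORDS)))
    return CHORDS[bucket]

def beat_trigger(distance_cm):
    """Beat module: hand presence is read as binary ("1" triggers a hit)."""
    return 1 if 0 <= distance_cm < MAX_RANGE_CM else 0

if __name__ == "__main__":
    print(distance_to_chord(5))    # hand close to the sensor -> "C"
    print(distance_to_chord(45))   # hand near the edge of range -> "B"
    print(distance_to_chord(80))   # out of range -> None
    print(beat_trigger(20), beat_trigger(80))  # present / absent -> 1 0
```

On the actual hardware, the same decision would be made per sensor reading and the result written to the serial port, where Max 8 reads it and plays the corresponding chord or beat sample.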
Design process
The design process evolved through many stages, with a strong focus on user-centered testing. Over time we tested with more than 100 people (ages 20 to 70+) from diverse backgrounds to ensure that both the interaction and the outputs were intuitive and well-received. Using mainly low-fidelity prototypes, we tested both interaction styles and device affordances:

(1) In the early phases, we made two cardboard boxes with printed affordances: the sensor area and a pairing symbol on the side. While users interacted with them, we simulated the outputs to assess whether they understood the interaction.

(2) Later, we focused on testing pairing and sound selection. We printed the affordances on paper and rearranged them as users interacted, simulating feedback. This led us to identify the most intuitive options.

(3) We then tested the actual coded interaction. We placed the sensors inside boxes marked with the previously selected affordances and hid the Arduinos inside. Without giving instructions, we observed how users naturally engaged with the system after a brief explanation of the concept.

Throughout the process, user testing was the most impactful element. It allowed us to refine both the interaction and the system's output, ensuring the final devices were truly designed with users in mind.
What makes it different
Sonica stands out from other music generation projects because it turns the body into a musical instrument, offering an experience that is playful yet expressive and makes music creation more inclusive, emotional, and immediate. Unlike many platforms that require complex interfaces or specific skills, Sonica is based on natural gesture-based interactions, allowing users to generate sound patterns (synth and percussion) just by moving their hands. What truly sets Sonica apart is that the AI doesn't take control or replace the user's creative role; instead, it acts as a partner. It complements the user's input by generating a counterpoint melody that enhances the final output while remaining faithful to the user's original expressive intent. Another distinguishing feature is its strong user-centered design approach: every stage of the project was guided by testing with real users of different ages and backgrounds to ensure the interaction was intuitive and engaging.
Future plans
We have big ambitions for the future of Sonica. Our next steps include designing additional modules connected to other layers of music production, like basslines, harmonies, and guitar, each paired with new, intuitive gestures. This will allow users to create more complex and dynamic musical prompts through movement. We also aim to expand Sonica beyond real-time interaction by developing a companion app for desktop and mobile. Through this platform, users will be able to revisit, transform, and assemble the prompts they've generated, as well as add vocals, samples, and recordings, turning their initial gestures into fully developed tracks.