
Pocket Cat

Pocket Cat is a palm-sized AI-driven robotic cat delivering engaging companionship through playful animations, ChatGPT-powered conversation, and responsive haptic and sound feedback.

What it does

Pocket Cat delivers emotional companionship in pet-restricted or compact living spaces by combining expressive animations, ChatGPT-powered conversation, tactile haptic feedback, Internet-enabled remote control, multimedia streaming, and local NFC pairing.


Your inspiration

Inspired by losing my beloved cat of 12 years, I sought to preserve that emotional bond through technology and share it with anyone facing pet-ownership barriers—limited space, financial constraints, allergies, or fear of loss. Research shows that over 70% of renters struggle to find affordable pet-inclusive housing, making real pet ownership inaccessible for many urban dwellers. Our team combined this personal motivation with market insights to create Pocket Cat, so that those in compact living spaces or with pet restrictions can still experience the well-being and social connection fostered by animal companionship.


How it works

Pocket Cat is a palm-sized AI-powered robotic cat combining expressive visuals, sound, haptics, and cloud connectivity. It's built around two microcontrollers: the SAMW25 handles sound, vibration, buttons, and Wi-Fi updates; the Xiao ESP32 S3 Sense runs the display, mic, camera, and ChatGPT-based AI. Movement is detected using an accelerometer; responses include vibrations, sound effects, and screen animations. Users interact via voice, buttons, NFC, or remotely through a Node-RED web dashboard. It supports real-time photo capture, video streaming, and OTA firmware updates. The final enclosure is made from soft, flexible resin, which is comfortable to hold, durable against drops, and gives Pocket Cat a playful, tactile feel.
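The movement-detection step described above can be sketched in a few lines: an accelerometer reading whose magnitude deviates noticeably from gravity is treated as a shake, which then triggers a vibration, sound, or animation. This is a minimal illustration only; the threshold value and function name are assumptions, not taken from the actual firmware.

```python
import math

GRAVITY = 9.81          # m/s^2, approximate magnitude when the device is at rest
SHAKE_THRESHOLD = 4.0   # assumed tuning value for illustration, not from the firmware

def is_shaken(ax: float, ay: float, az: float) -> bool:
    """Return True when acceleration deviates enough from gravity to count as movement."""
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    return abs(magnitude - GRAVITY) > SHAKE_THRESHOLD

# A reading close to rest is ignored; a sharp jolt crosses the threshold.
print(is_shaken(0.0, 0.0, 9.8))    # False
print(is_shaken(6.0, 2.0, 14.0))   # True
```

In a real build, the same comparison would run in the sensor-polling loop on the SAMW25, with the threshold tuned against recorded handling data.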


Design process

In the early conceptual phase we defined Pocket Cat’s core features—expressive animations, ChatGPT-driven voice interaction, sound and vibration feedback, movement detection, and remote control—and chose the Microchip SAMW25 and Seeed Studio XIAO ESP32 S3 Sense microcontrollers to distribute processing tasks effectively. During hardware prototyping we breadboarded the accelerometer, microphone, camera, speaker, vibration motor, and OLED display while overcoming challenges such as the Adafruit FX sound board’s solder sensitivity and the ESP32’s limited memory for AI workloads. In the PCB and firmware development phase we designed a circular PCB with optimized component placement and power routing, then resolved memory crashes by loading lightweight tasks before heavy AI routines. For the enclosure we printed six iterations in PLA, clear resin, and soft flexible A80 resin—chosen for comfort, grip, and drop resistance—and added a magnetic ring for seamless mode switching. Finally, we integrated and tested each module, tuning sensor responses, refining the Node-RED control dashboard, and validating reliable over-the-air updates. Future enhancements will include gesture recognition and smarter power management.
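The memory-crash fix mentioned above—starting lightweight tasks before the heavy AI routine—can be illustrated with a toy RAM-budget model: small peripheral tasks reserve their memory first, so they are guaranteed to come up before the large AI allocation claims most of what remains. All names and sizes here are illustrative assumptions, not the firmware’s actual figures.

```python
boot_log = []

def init_task(name: str, cost_kb: int, budget: dict) -> bool:
    """Reserve cost_kb from the remaining RAM budget; record the start-up order."""
    if budget["free_kb"] < cost_kb:
        return False  # allocation would fail, task is skipped
    budget["free_kb"] -= cost_kb
    boot_log.append(name)
    return True

budget = {"free_kb": 512}  # assumed RAM budget, for illustration only

# Lightweight peripheral tasks first...
for name, cost in [("display", 32), ("buttons", 4), ("haptics", 8)]:
    init_task(name, cost, budget)

# ...then the memory-heavy AI/camera pipeline takes the large remaining block.
init_task("ai_pipeline", 400, budget)

print(boot_log)   # ['display', 'buttons', 'haptics', 'ai_pipeline']
```

Reversing the order in this model would let the AI pipeline fragment or exhaust the budget before the peripherals start—mirroring the crash pattern the team reported working around.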


How it is different

Pocket Cat is more than a digital toy: it’s an intelligent, connected companion. Unlike typical robotic pets, it combines real-time ChatGPT-powered conversation, cloud AI, live video streaming, and OTA firmware updates in one ultra-compact form. Most pet gadgets offer preset responses or offline features; Pocket Cat uses Wi-Fi to deliver dynamic, evolving interactions. Its circular PCB and soft flexible resin shell make it uniquely portable and durable. The dual-microcontroller design splits tasks for better performance—one MCU handles sensory input and output, while the other powers AI, visuals, and the camera. Users can control it remotely via a web dashboard, interact via NFC, or talk to it directly. No other pocket-sized pet blends AI, tactile feedback, and multimedia like this.


Future plans

We plan to simplify the architecture by transitioning to a single ESP32 S3 board for all functions, including sound, display, AI, and control, which will reduce cost and improve stability. We will enhance speaker performance, add smart sleep modes, and implement gesture and face recognition. We also plan to introduce custom behavior profiles such as “Silent Mode” and “Study Buddy.” In the long term, we aim to produce small batches, gather user feedback, and build an engaged community. Our vision is to make Pocket Cat a lovable, AI-powered companion that is compact, expressive, and truly interactive.


Awards

Pocket Cat is a brand-new product completed in May 2025. Therefore, we haven’t won or been nominated for any awards yet, but we hope the James Dyson Award judges will love our product.

