National Runner Up


A device that detects facial gestures from signals captured at two points on the head, behind the ears.

  • Me with the 3D-printed plastic model of the device

  • Demonstration of the device on Russian TV

  • Figma layouts for the new version of the Reface application design

  • 3D render of the device

  • 3D model of the ear clips, ready for printing

  • Screenshot of the Reface application

What it does

1. Controlling phones, smart homes, and other devices with facial gestures (blink three times and your music stops)
2. Monitoring eye fatigue
3. Monitoring the tiredness and productivity of workers and students during the day
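The first use case binds combinations of gestures to device actions. A minimal sketch of such a binding table is below; the gesture names, the `bindings` dictionary, and the action strings are all hypothetical illustrations, not the actual Reface API.

```python
# Hypothetical mapping from gesture combinations to actions,
# e.g. "blink 3 times -> pause the music".
bindings = {
    ("blink", "blink", "blink"): "pause_music",
    ("jaw_clench",): "toggle_lights",
}

def match(recent_gestures):
    """Return the bound action whose combination matches the end of
    the recent gesture history, or None if nothing matches."""
    for combo, action in bindings.items():
        if tuple(recent_gestures[-len(combo):]) == combo:
            return action
    return None
```

A real implementation would also need debouncing and a time window, so that three blinks spread over a minute are not treated as one command.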

Your inspiration

At the age of 15 or 16, I attended many lectures on neuroscience topics, including brain-computer interfaces. At 17, I saw an Instagram advertisement for the Kickstarter campaign of Muse, a neural-sensing headband for meditation. I realized I could get data about my brain's behavior from it, since a public SDK was available, so I bought one. I then developed an application that showed the real-time signals from the electrodes on my head, and I noticed that the signal changed when I changed my facial expression. That is how this project started.

How it works

An EEG (electroencephalography, a way to monitor brain activity) device is placed on the head. The data it captures behind the ears is sent to a smartphone via BLE (Bluetooth Low Energy). A dedicated application on the smartphone receives the data, applies several filters to clear out the noise, and passes the result to a neural network, which classifies the signal. Once the classification result is obtained, the application acts on it:
1. It sends the data to other applications via a special API
2. It executes an action (such as "stop music") if the user has bound a combination of gestures to it
Currently 8 gestures can be classified, and the list will grow:
1. Blinks
2. Strong blinks
3. Moving the eyes left
4. Moving the eyes right
5. Moving the eyes up
6. Moving the eyes down
7. Jaw clench
8. Shaking the head left-right
The accuracy is 98% and the precision is 95%.
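The filter-then-classify pipeline above can be sketched as follows. This is a toy stand-in, not the real implementation: a moving average stands in for the actual noise filters, and a threshold rule stands in for the neural network; the thresholds, function names, and gesture labels are assumptions for illustration.

```python
from collections import deque

def moving_average(samples, window=5):
    """Simple smoothing filter standing in for the real noise filters."""
    out, buf = [], deque(maxlen=window)
    for s in samples:
        buf.append(s)
        out.append(sum(buf) / len(buf))
    return out

def classify(filtered):
    """Placeholder for the neural network: label the window by its
    peak amplitude (thresholds are invented for the sketch)."""
    peak = max(abs(x) for x in filtered)
    if peak > 100:
        return "jaw_clench"
    if peak > 50:
        return "strong_blink"
    if peak > 20:
        return "blink"
    return None  # no gesture detected

def process(raw_window):
    """Full pipeline for one window of EEG samples: filter, then classify."""
    return classify(moving_average(raw_window))
```

In the real system the classifier output would then be dispatched to the API or to the user's gesture bindings.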

Design process

In October 2018, I started building a prototype application to cut datasets for the neural network and to check whether such a system was feasible. The neural network was trained and the test application finished in April 2019. I then posted 4 videos on YouTube demonstrating the system and wrote an article for one of the Russian media outlets. After the article was published, I received positive feedback and decided to continue. I developed a website and additional software, including an API server, an application for controlling a computer, and a new version of the mobile application. I then started developing additional functions, such as eye-fatigue monitoring, and split the project into two parts:
1. Reface, for controlling devices
2. Rehealth, an application for monitoring fatigue and productivity
After all of this, I am here, developing my own hardware, as another device is currently used to make things work.

How it is different

There are 3 main applications of my project:
1. Controlling technical devices with facial gestures. There are smart speakers and similar products, but no devices for controlling smartphones, computers, and IoT with facial gestures.
2. Monitoring eye fatigue. No such devices exist; the market for monitoring and preventing eye fatigue is empty.
3. Monitoring employee fatigue. Some companies try to estimate the quality of work with time tracking (for example, tracking time spent on social networks). Others try to estimate tiredness from data about alpha and beta waves. My solution is better because it uses more data (not only alpha and beta waves, but also facial gestures), so it is more precise.

Future plans

Get investment and hire a team. Currently, my girlfriend and several friends help me with the project, but we work on it in our spare time, so it cannot be considered a full-time job. Development plans:
1. A neural network that detects gestures from camera data, since I currently have to retrain the neural network for every user for their signals to be detected correctly
2. A Django server for retraining the neural networks
3. A redesign of the application for monitoring tiredness/effectiveness
4. A worldwide PR campaign; currently there are many articles in Russian, but few in English
5. Data augmentation and data markup software


1. Reface is participating in the Russian final of Alibaba's worldwide startup contest. 2. I was awarded a prize by my university, the Higher School of Economics, and by Yandex.

