Finn Kuusisto

What can you even do with EMG?

As we move our hands, small but detectable changes in electrical activity occur in the muscles of our forearms and wrists. Electromyography (EMG) is the measurement of that electrical activity, and that's exactly what our DEVLPR sensor boards do. It doesn't take a great stretch of the imagination, then, to think of what one could do with those biopotential signals. While EMG has historically been used for medical diagnosis, the last 5-10 years have seen a substantial increase in interest in EMG for human-computer interface (HCI) technologies [1], particularly in gesture recognition.

HCI applications of EMG are still in their early stages, but the potential is obvious. With enough sensors and good machine learning models, you could accurately detect continuous hand pose in real time. An obvious use case would be to allow users to interact with extended reality (XR) experiences, which include virtual and augmented reality, using their own hands. Of course, camera-based and glove-based solutions for direct hand interaction already exist, but both suffer from several issues. Current glove-based interfaces are bulky, prohibitively expensive, and interfere with other normal uses of the hands, which ultimately breaks immersion. Camera-based interfaces fail under poor lighting conditions or when the hands are out of frame or occluded, and they can present privacy issues as well. Further, neither gloves nor cameras handle missing fingers or disability well at all. In contrast, a good EMG solution would suffer from none of these issues, but there's really nothing like that on the market right now.

It doesn't stop at XR though, because the same technology platform could potentially be used for numerous other applications, from healthcare to national defense. For example, the same models that detect hand pose from EMG at the wrist or forearm could be adapted to control advanced robotic prostheses, or to improve fine motor control of surgical robots. They could even be used for robotic teleoperation, allowing fine motor control of graspers on, say, bomb disposal robots or satellite-repair robots in orbit. The potential is quite broad.

The Demo

One step at a time though, right? For now, we'll show you how to accomplish basic EMG gesture recognition using just two FANTM DEVLPR boards.

Placement of sensor electrodes for our basic gesture recognition demo.

Modeling the complex biomechanics of the human hand based solely on inputs from the forearm and wrist is non-trivial. For this simple demo of discrete gesture recognition from EMG using machine learning, we followed a similar approach to a relatively recent publication on the subject [2]. As shown above, we placed electrodes for the two DEVLPR sensors roughly over the flexor digitorum profundus and extensor digitorum muscles. We then wrote software to collect the two simultaneous sensor streams with labels for five different discrete hand poses and built a machine learning model to accurately differentiate between those five poses: fist, okay, peace, thumbs up, and none.
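To give a feel for that labeling step, here is a minimal sketch (hypothetical illustration, not our actual collection software) of how a per-sample-labeled recording could be grouped into contiguous pose examples. The `segment_by_label` function and its argument layout are our own invention for this post, not part of the DEVLPR toolchain.

```python
import numpy as np

def segment_by_label(samples, labels):
    """Split a per-sample-labeled recording into contiguous pose examples.

    samples: (n, 2) array of the two sensor channels
    labels:  length-n sequence of pose labels, one per sample
    Returns a list of (label, (m, 2) array) pairs, one per contiguous run.
    """
    samples = np.asarray(samples)
    labels = np.asarray(labels)
    # Indices where the label changes mark example boundaries
    boundaries = np.flatnonzero(labels[1:] != labels[:-1]) + 1
    segments = []
    for chunk in np.split(np.arange(len(labels)), boundaries):
        segments.append((labels[chunk[0]], samples[chunk]))
    return segments
```

Each returned segment then becomes one example for feature extraction, with runs labeled "none" covering the stretches before and after each active pose.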

To measure model accuracy, we collected 30 examples each of the four active poses, and 121 examples of the none pose (the state before and after an active pose). For each pose example, we extracted four time-domain features from each of the two sensor waveforms (one from each sensor location): mean absolute amplitude, mean waveform length, mean slope changes, and mean zero crossings. Each example was thus turned into eight waveform features and a single pose label. Using five-fold stratified cross-validation, we trained and tested simple linear support vector machine (SVM) models using this dataset. Confusion matrix and accuracy results are shown below.
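The four time-domain features are straightforward to compute. Below is a minimal NumPy sketch (function names are ours for illustration, not from any released code); note that it takes the mean of the slope-change and zero-crossing indicators rather than raw counts, matching the "mean" features described above.

```python
import numpy as np

def emg_features(x):
    """Four common time-domain EMG features for one windowed waveform."""
    x = np.asarray(x, dtype=float)
    mav = np.mean(np.abs(x))          # mean absolute amplitude
    wl = np.mean(np.abs(np.diff(x)))  # mean waveform length
    # Slope changes: consecutive first differences with opposite signs
    d = np.diff(x)
    ssc = np.mean(d[:-1] * d[1:] < 0)
    # Zero crossings: consecutive samples with opposite signs
    zc = np.mean(x[:-1] * x[1:] < 0)
    return np.array([mav, wl, ssc, zc])

def example_to_features(flexor, extensor):
    """Concatenate the four features from each channel into one 8-dim vector."""
    return np.concatenate([emg_features(flexor), emg_features(extensor)])
```

Running `example_to_features` on the two channel waveforms of each example yields the eight-feature vector that, together with its pose label, makes up one row of the training dataset.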

Confusion matrix predicting each of the five poses using 5-fold cross-validation.

Accuracy results predicting the five poses using 5-fold cross-validation.

With a limited number of examples, a minimal feature set, and a simple off-the-shelf machine learning model, we were able to achieve 96% classification accuracy across the five discrete poses. We were surprised at the accuracy given how simple the approach was. Pretty exciting stuff! You can check out a live demonstration of the process in the video below.
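For reference, training and evaluating such a model takes only a few lines with scikit-learn. The sketch below runs a linear SVM under five-fold stratified cross-validation on a synthetic stand-in dataset (150 random 8-feature examples, not our recorded EMG data), so the accuracy it reports is illustrative only.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
poses = ["fist", "okay", "peace", "thumbs_up", "none"]
# Stand-in dataset: 30 examples per pose, 8 features each,
# clustered by pose so the classes are separable
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(30, 8)) for i in range(5)])
y = np.repeat(poses, 30)

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(round(scores.mean(), 3))
```

Swapping in real feature vectors and labels is all it takes to reproduce this kind of evaluation on your own recordings.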

Of course, there's a fairly substantial technical gap that lies between simple discrete gesture recognition like this and full continuous hand pose estimation. The future isn't quite here yet, but you can see a glimmer of it on the horizon. That's why we do what we do. FANTM is dedicated to researching and developing these technologies, and to making them a bit more accessible along the way. We hope you'll join us too.


[1] Jaramillo-Yánez, A., Benalcázar, M. E., & Mena-Maldonado, E. (2020). Real-time hand gesture recognition using surface electromyography and machine learning: A systematic literature review. Sensors, 20(9), 2467.

[2] Shi, W.-T., Lyu, Z.-J., Tang, S.-T., Chia, T.-L., & Yang, C.-Y. (2018). A bionic hand controlled by hand gesture recognition based on surface EMG signals: A preliminary study. Biocybernetics and Biomedical Engineering, 38(1), 126-135.
