This paper introduces a TinyML pipeline for IMU-based air handwriting recognition of English alphabets, converting raw IMU data into 2-D rasterized images for training lightweight deep learning models. The study compares SqueezeNet, EfficientNet-Lite0, ShuffleNetV2, and FastKAN, evaluating them under a unified training configuration and applying quantization, pruning, and knowledge distillation for compression. After compression, FastKAN achieves 97.4% test accuracy with a 120 kB model size and 0.0011 J energy consumption per inference, demonstrating its suitability for resource-constrained edge devices.
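The summary describes converting raw IMU data into 2-D rasterized gesture images before training. The paper's exact preprocessing is not reproduced here; the following is a minimal sketch, assuming the 2-D pen trajectory has already been reconstructed from the IMU signals (e.g. from orientation or integrated acceleration). The function name and the 28×28 output size are illustrative, not the paper's.

```python
import numpy as np

def rasterize_trajectory(points, size=28):
    """Rasterize an (N, 2) gesture trajectory into a size x size binary image.

    Assumes a 2-D path is already available; reconstructing that path
    from raw accelerometer/gyroscope samples is a separate, paper-specific step.
    """
    pts = np.asarray(points, dtype=np.float64)
    # Normalize the path into the unit square, guarding against
    # degenerate (zero-extent) gestures.
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins > 0, maxs - mins, 1.0)
    norm = (pts - mins) / span
    # Map normalized coordinates to pixel indices; flip y so that
    # "up" in gesture space is "up" in the image.
    img = np.zeros((size, size), dtype=np.uint8)
    cols = np.clip((norm[:, 0] * (size - 1)).round().astype(int), 0, size - 1)
    rows = np.clip(((1 - norm[:, 1]) * (size - 1)).round().astype(int), 0, size - 1)
    img[rows, cols] = 1  # mark every cell the trajectory passes through
    return img

# Example: a straight diagonal stroke from bottom-left to top-right.
t = np.linspace(0, 1, 100)
image = rasterize_trajectory(np.stack([t, t], axis=1))
```

In a real pipeline, the binary image would typically be smoothed or stroke-thickened before being fed to the CNN, so that small IMU noise does not leave isolated pixels.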
Achieves 97.4% accuracy in air handwriting recognition on a microcontroller with a 120 kB model, showing that high performance is possible even on the tiniest devices.
As touchless interaction becomes increasingly important in wearable and ambient computing, gesture-based air handwriting offers a promising input modality, particularly for low-power embedded devices. While vision-based and radar-based systems have achieved high accuracy in gesture recognition, they are often unsuitable for deployment on microcontrollers due to their computational and energy demands. In contrast, inertial measurement unit (IMU)-based systems provide a lightweight and privacy-preserving alternative, yet existing research rarely addresses full alphabet recognition or deployment-ready pipelines for resource-constrained environments. This article proposes a complete tiny machine learning (TinyML) pipeline for inertial-based air handwriting recognition of English alphabets, integrating structured preprocessing of raw IMU data into 2-D rasterized gesture images, followed by training and deployment of four lightweight deep learning models: SqueezeNet, EfficientNet-Lite0, ShuffleNetV2, and FastKAN. The models are evaluated under a unified training configuration and subjected to compression techniques including quantization, pruning, and knowledge distillation (KD). Among them, FastKAN clearly outperforms the others, achieving a test accuracy of 97.4% with a minimal model size of 120 kB and energy consumption as low as 0.0011 J per inference after hybrid compression. This work explicitly targets isolated characters (A–Z, a–z); continuous handwriting and word-level recognition are out of scope and left for future work. Extensive evaluations, including confusion matrix analysis, compression benchmarking, and successful deployment on an Arduino Nano 33 BLE Sense, demonstrate the practicality, efficiency, and robustness of the proposed system for real-time TinyML-based handwriting recognition applications.
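The abstract lists quantization among the compression techniques that shrink the models to the reported 120 kB. The paper's deployment pipeline most likely uses a full-integer toolchain (e.g. TFLite Micro on the Arduino Nano 33 BLE Sense), which also quantizes activations with calibration data; the sketch below shows only the core idea of symmetric post-training int8 weight quantization, with illustrative names and no claim to match the paper's implementation.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training int8 quantization of one weight tensor.

    Returns the int8 weights and the per-tensor scale needed to
    dequantize (w ~ q * scale). A 4x size reduction versus float32.
    """
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

# Quantize a dummy float32 weight matrix and measure the error.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = q.astype(np.float32) * scale  # dequantized approximation
err = float(np.abs(w - w_hat).max())  # rounding error, bounded by ~scale/2
```

Per-channel scales (one scale per output filter) usually recover more accuracy than the per-tensor scale shown here, at a negligible storage cost.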