Constrained signal processing using deep neural networks for MEMS sensors-based applications


Speciality: Mathematics and Computer Science

6 July 2023, 14:30
Minh Tri Lê (Université Grenoble Alpes)
Grand Amphi, Centre Inria de l'Université Grenoble Alpes, 655 Av. de l'Europe, 38330 Montbonnot-Saint-Martin

Keywords:
  • sensors
  • microcontrollers
  • tinyML
  • model compression
  • quantization
Deep learning has become a powerful technique for solving complex problems across numerous domains, owing to its ability to learn and model intricate non-linear relationships from data. However, the substantial computational, memory, and energy requirements of deep learning models make them unsuitable for deployment on devices with limited resources. Simultaneously, the emergence of microelectromechanical sensors (MEMS), microcontroller units (MCUs), and the Internet of Things (IoT) has produced a growing number of applications that depend on embedded systems for local data processing and environmental interaction. MEMS provide a continuous sensing interface between the real world and the digital world. MCUs are low-cost devices with high consumer market volumes, where targeting the lowest-power hardware can yield billions in savings per year. Delivering always-on, real-time sensing therefore imposes ultra-low-power constraints, with direct and significant technical and economic implications.
The intersection of deep learning and embedded systems has given rise to the field of tinyML, which offers significant opportunities and challenges. Deep learning promises to automate the algorithm design process, enabling practitioners to customize a product's behavior at will. The primary challenge lies in adapting deep learning techniques to operate efficiently on ultra-low-power MEMS-based devices with constrained resources and operations, while maintaining acceptable performance levels.
This thesis provides strategies for optimizing the power footprint of deep learning models and deploying them in ultra-low-power settings for MEMS-based applications. We first investigate model compression methods, such as pruning, knowledge distillation, and quantization. We then enable end-to-end deployment of deep learning models for efficient inference on the most resource-constrained MCUs in industry and in the state of the art, effectively redefining the concept of ultra-low power as extreme-low power. Finally, we present a novel approach that generalizes the quantization process, accommodating any number of bits and extending to extreme quantization levels such as 1-bit.
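To give a flavor of the kind of quantization discussed above (this is a generic illustrative sketch, not the specific method developed in the thesis), symmetric per-tensor uniform quantization can be written for an arbitrary bit width, with 1-bit degenerating into scaled binarization via the sign function:

```python
import numpy as np

def uniform_quantize(x, num_bits):
    """Quantize then dequantize x with a symmetric per-tensor scheme.

    For num_bits >= 2, values are mapped to the integer grid
    [-(2^(b-1) - 1), 2^(b-1) - 1] and rescaled back.
    For num_bits == 1, this degenerates to binarization: sign(x)
    scaled by the mean absolute value (a common choice in binary nets).
    """
    if num_bits == 1:
        scale = np.mean(np.abs(x))
        return scale * np.sign(x)
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax         # symmetric dynamic range
    q = np.clip(np.round(x / scale), -qmax, qmax)  # integer codes
    return q * scale                         # dequantized values

x = np.array([-1.2, -0.3, 0.0, 0.7, 1.5])
print(uniform_quantize(x, 8))  # close to x (8-bit error is small)
print(uniform_quantize(x, 1))  # values in {-0.74, 0.0, +0.74}
```

At 8 bits the round-trip error is bounded by half the quantization step, while at 1 bit only the sign and a single scale survive, which is exactly what makes extreme quantization levels so challenging for accuracy.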
The outcomes of this research contribute to the advancement of tinyML and enable the broader adoption of intelligent sensing devices across various real-world applications.


Valérie Perrier (Université Grenoble Alpes)


  • Julyan Arbel (Université Grenoble Alpes)
  • Etienne de Foras (TDK InvenSense)


  • Eiman Kanjo (Nottingham Trent University)
  • Mathieu Salzmann (École Polytechnique Fédérale de Lausanne)


  • Valérie Perrier (Université Grenoble Alpes)
  • Inna Kucher (Centre CEA Paris-Saclay)