|Project|Focus|
|---|---|
|MCUNet|Memory-efficient inference, system-algorithm co-design|
|TinyTL|On-device learning, memory-efficient transfer learning|
Intelligent edge devices with rich sensors (e.g., billions of mobile phones and IoT devices) are ubiquitous in our daily lives. Combining artificial intelligence (AI) with these edge devices enables a wide range of real-world applications, such as smart home, smart retail, and autonomous driving. However, state-of-the-art deep learning systems typically demand tremendous resources for both training and inference (e.g., large labeled datasets, heavy computation, and many AI experts), which hinders their deployment on edge devices. The TinyML project aims to improve the efficiency of deep learning systems by requiring less computation, fewer engineers, and less data, to unlock the giant market of edge AI and AIoT.
MCUNet: Tiny Deep Learning on IoT Devices (NeurIPS'20, spotlight)
HAQ: Hardware-Aware Automated Quantization (CVPR'19, oral)