SparseLearning - Run network inference with sparse inference on end-devices

Project: Research project

Project Details

Description

Leveraging sparsity in neural networks enables efficient on-device inference, for example on stand-alone AR/VR devices, and allows training larger networks with less demanding resource requirements. For inference, a network's sparsity naturally leads to faster execution, lower energy consumption, and a smaller deployment size. For training, since sparse networks require significantly less memory, larger and deeper networks with the same or fewer parameters can approximate more complex functions and need less training data.
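The efficiency argument above can be illustrated with a minimal, self-contained sketch (not the project's actual implementation): a pruned layer stored as a list of non-zero weights needs memory only for those entries, and a matrix-vector product over that list skips the zeros entirely. All names here (`dense_matvec`, `sparse_matvec`) are hypothetical.

```python
def dense_matvec(w, x):
    """Dense layer: touches every weight, zero or not."""
    return [sum(w[i][j] * x[j] for j in range(len(x))) for i in range(len(w))]

def sparse_matvec(nonzeros, n_rows, x):
    """Sparse layer: iterates only over the stored non-zero weights."""
    y = [0.0] * n_rows
    for i, j, v in nonzeros:
        y[i] += v * x[j]
    return y

# Toy 3x4 weight matrix with 75% of its entries pruned to zero.
w = [
    [0.0, 2.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, -1.5],
    [3.0, 0.0, 0.0, 0.0],
]
# Coordinate-list storage: only 3 weights instead of 12.
nonzeros = [(i, j, v) for i, row in enumerate(w) for j, v in enumerate(row) if v != 0.0]
x = [1.0, 2.0, 3.0, 4.0]

print(dense_matvec(w, x))             # [4.0, -6.0, 3.0]
print(sparse_matvec(nonzeros, 3, x))  # same result from the sparse representation
```

In practice, specialized sparse kernels and formats (e.g. CSR) realize this saving at scale, which is the source of the smaller deployment size and lower energy cost mentioned above.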
Status: Finished
Effective start/end date: 1/11/20 to 31/08/22
