SparseLearning - Run network inference with sparse inference on end-devices

Project: Research project

Project Details

Description

Leveraging sparsity in neural networks enables efficient inference on end-devices, such as stand-alone AR/VR headsets, and allows training larger networks with less demanding resource requirements. For inference, a network's sparsity naturally leads to faster execution, lower energy consumption, and a smaller deployment size. For training, since sparse networks require significantly less memory, larger and deeper networks with the same or fewer parameters can approximate more complex functions and need less training data.
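As an illustration of the inference-side benefits described above, the sketch below contrasts a dense weight matrix with its sparse (CSR) representation: storing only the nonzero weights shrinks the memory footprint and deployment size, while the matrix-vector product still produces the same output. This is a minimal, hypothetical example using NumPy and SciPy, not the project's actual implementation; the 90% pruning ratio is an assumed figure for demonstration.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)

# Dense weight matrix with roughly 90% of entries pruned to zero
# (the pruning ratio here is an illustrative assumption)
W = rng.standard_normal((256, 256))
W[rng.random(W.shape) < 0.9] = 0.0

# CSR format stores only the nonzero weights, reducing memory
# and on-device deployment size
W_csr = sparse.csr_matrix(W)
x = rng.standard_normal(256)

# Sparse and dense inference agree; the sparse product only
# touches the surviving ~10% of the weights
y_sparse = W_csr @ x
y_dense = W @ x
assert np.allclose(y_sparse, y_dense)

density = W_csr.nnz / W.size
print(f"fraction of weights stored: {density:.2f}")
```

In practice, realizing wall-clock speedups on end-devices also depends on hardware and kernel support for sparse formats, but the storage saving shown here is format-level and immediate.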
Status: Active
Effective start/end date: 1/11/20 - 30/10/21