Project Details
Description
Leveraging sparsity in neural networks enables efficient on-device inference, e.g. on standalone AR/VR devices, and makes it possible to train larger networks with less demanding resource requirements. For inference, a network's sparsity naturally leads to faster execution, lower energy consumption, and a smaller deployment size. For training, since sparse networks require significantly less memory, larger and deeper networks with the same number of parameters or fewer can approximate more complex functions and need less training data.
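As a minimal sketch of the inference benefit described above (names and sizes are illustrative, not from the project): a pruned weight matrix can be stored as coordinate (index, value) pairs, cutting memory roughly in proportion to its sparsity while producing the same layer output.

```python
import numpy as np

# Hypothetical example: prune ~90% of a layer's weights, then run the
# matrix-vector product using only the stored non-zero entries.
rng = np.random.default_rng(0)
W = rng.standard_normal((128, 128))
W[rng.random(W.shape) < 0.9] = 0.0       # prune ~90% of the weights

rows, cols = np.nonzero(W)               # coordinate (COO) storage
vals = W[rows, cols]
x = rng.standard_normal(128)

# Sparse matvec: accumulate only the non-zero contributions.
y = np.zeros(128)
np.add.at(y, rows, vals * x[cols])

assert np.allclose(y, W @ x)             # same output as the dense layer
sparsity = 1 - vals.size / W.size
print(f"stored entries: {vals.size}/{W.size} ({sparsity:.0%} sparse)")
```

Real systems use formats such as CSR and hardware-aware kernels, but the memory saving follows the same principle: only non-zero weights are stored and computed.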
| | |
| --- | --- |
| Status | Finished |
| Effective start/end date | 1/11/20 → 31/05/23 |