Monday, October 14, 2019 • 4:00pm - 4:15pm • BRC 103
E²-Train: Energy-Efficient Deep Network Training with Data-, Model-, and Algorithm-Level Savings

Convolutional neural networks (CNNs) are increasingly deployed on edge devices, and many efforts have accordingly targeted efficient CNN inference on resource-constrained platforms. This paper explores an orthogonal direction: how to make CNN training more energy-efficient. We reduce training energy at three complementary levels: stochastic mini-batch dropping at the data level, selective layer update at the model level, and sign prediction for low-cost, low-precision back-propagation at the algorithm level. Extensive simulations and ablation studies, backed by real energy measurements on an FPGA board, confirm the effectiveness of the proposed strategies and demonstrate substantial training energy savings. For example, when training ResNet-110 on CIFAR-100, over 84% of training energy is saved at small accuracy costs of 2% (top-1) and 0.1% (top-5).
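
Of the three savings, stochastic mini-batch dropping is the easiest to picture: during training, each mini-batch is skipped outright with some probability, so its entire forward and backward pass is never paid for. Below is a minimal PyTorch sketch of that idea only; the skip probability, toy model, and synthetic data are illustrative assumptions, not the paper's actual configuration, and the model- and algorithm-level techniques (selective layer update, sign-predicted back-propagation) are not shown.

```python
import random

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Illustrative value only; the paper's dropping schedule may differ.
SKIP_PROB = 0.5  # probability of dropping a mini-batch

# Tiny stand-in model and synthetic CIFAR-shaped data so the sketch runs.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
dataset = TensorDataset(torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,)))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for epoch in range(2):
    for inputs, targets in loader:
        # Stochastic mini-batch dropping: with probability SKIP_PROB this
        # batch is skipped entirely, saving its forward and backward energy.
        if random.random() < SKIP_PROB:
            continue
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
```

In expectation this halves the per-epoch compute while every example still has an equal chance of being seen, which is one intuition for why the accuracy cost of such data-level skipping can stay small.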

Speakers

Yue Wang (Presenter), Rice University
Ziyu Jiang, Texas A&M University
Xiaohan Chen, Texas A&M University
Pengfei Xu, Rice University
Yang Zhao, Rice University
Zhangyang Wang, Texas A&M University
Yingyan Lin, Rice University


