Learning-AI

FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search

August 2021

tl;dr: Gradient-based optimization (differentiable NAS) with a hardware-aware loss function.

Overall impression

FBNet (Facebook-Berkeley Net) is one of the representative papers on efficient, hardware-aware NAS.

The paper is similar in idea to, but easier to understand than, DARTS. The main idea is to train a stochastic supernet whose layers are probabilistic combinations of all candidate building blocks, then sample the final architecture from the learned distribution after search.
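A minimal NumPy sketch of this idea, assuming a single supernet layer: each candidate block's output is weighted by a Gumbel-softmax sample over learned architecture logits, which keeps the block choice differentiable. The function names and toy blocks are my own illustration, not the paper's code.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Draw a differentiable, near-one-hot sample from a categorical
    distribution over candidate blocks (the Gumbel-softmax trick)."""
    rng = rng or np.random.default_rng(0)
    # Standard Gumbel noise: -log(-log(U)), U ~ Uniform(0, 1)
    gumbel = -np.log(-np.log(rng.uniform(1e-9, 1.0, size=logits.shape)))
    y = (logits + gumbel) / tau
    y = y - y.max()                 # numerical stability
    e = np.exp(y)
    return e / e.sum()

def supernet_layer(x, blocks, logits, tau=1.0, rng=None):
    """One stochastic supernet layer: the weighted sum of every
    candidate block's output, weights given by the Gumbel-softmax."""
    weights = gumbel_softmax(logits, tau, rng)
    return sum(w * b(x) for w, b in zip(weights, blocks))
```

Lowering the temperature `tau` during training makes the weights increasingly one-hot, so the supernet gradually commits to a single block per layer.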

The hardware-aware optimization via a latency lookup table (LUT) is inspired by MnasNet, which instead uses reinforcement learning as its controller and measures latency on real devices.
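The LUT idea can be sketched as follows: the expected latency of each layer is the probability-weighted sum of the measured latencies of its candidate blocks, and the total expected latency enters the loss multiplicatively, as in FBNet's objective CE(a, w) · α · log(LAT(a))^β. The function names and the α, β values below are illustrative, not taken from the paper's code.

```python
import math

def expected_latency(probs_per_layer, lut):
    """Expected latency of a sampled architecture: for each layer, sum
    each candidate block's sampling probability times its pre-measured
    latency from the lookup table, then sum over layers."""
    return sum(
        sum(p * lut[layer][blk] for blk, p in enumerate(probs))
        for layer, probs in enumerate(probs_per_layer)
    )

def fbnet_loss(ce_loss, latency, alpha=0.2, beta=0.6):
    """Latency-aware objective of the form CE * alpha * log(LAT)^beta.
    alpha/beta trade off accuracy against target-device speed."""
    return ce_loss * alpha * (math.log(latency) ** beta)
```

Because the LUT is built once per target device, the search never needs on-device measurement in the loop, which is what makes the whole objective differentiable and cheap to evaluate.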

The differentiable NAS method closely resembles pruning: the supernet contains all candidate blocks, and the search effectively prunes away the low-probability ones.

Key ideas

Technical details

Notes