DARTS: Differentiable Architecture Search

August 2021

tl;dr: Differentiable optimization of neural network architectures.

Overall impression

Previous methods use evolution or reinforcement learning over a discrete and non-differentiable search space, where a large number of architecture evaluations is required. DARTS is based on a continuous relaxation of the architecture representation.
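
Below is a minimal sketch of what the continuous relaxation looks like, assuming PyTorch; the `MixedOp` name, the candidate operation list, and the channel handling are illustrative assumptions, not the paper's exact search space.

```python
# Minimal sketch of a DARTS-style continuous relaxation (assumes PyTorch).
# The candidate ops below are illustrative, not the paper's exact op set.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Relaxes a discrete choice among candidate ops into a softmax-weighted sum."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Identity(),  # skip connection
        ])
        # Architecture parameters (alpha): one logit per candidate op.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        # Softmax over alpha makes the op choice continuous and differentiable.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After search converges, a discrete architecture is recovered by keeping the candidate with the largest softmax weight (argmax over alpha).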

DARTS does not train a controller with RL or evolution based on a sparse validation score (treated as the reward or fitness). Instead, it directly optimizes the parameters controlling the architecture via gradient descent on the validation set, alternating with weight updates on the training set.
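
A minimal sketch of this alternating bilevel optimization, assuming PyTorch and the first-order approximation of DARTS (the paper also derives a second-order update that differentiates through one weight-update step); `model`, its parameter-group helpers, and the two loaders are hypothetical stand-ins, and the hyperparameters are illustrative.

```python
# Minimal sketch of first-order DARTS optimization (assumes PyTorch).
# `model.weight_parameters()`, `model.arch_parameters()`, `train_loader`,
# and `val_loader` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

w_optimizer = torch.optim.SGD(model.weight_parameters(), lr=0.025, momentum=0.9)
alpha_optimizer = torch.optim.Adam(model.arch_parameters(), lr=3e-4)

for (x_train, y_train), (x_val, y_val) in zip(train_loader, val_loader):
    # Update architecture parameters alpha on the *validation* loss.
    alpha_optimizer.zero_grad()
    F.cross_entropy(model(x_val), y_val).backward()
    alpha_optimizer.step()

    # Update network weights w on the *training* loss. zero_grad() here also
    # discards the stale w-gradients left over from the validation backward.
    w_optimizer.zero_grad()
    F.cross_entropy(model(x_train), y_train).backward()
    w_optimizer.step()
```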

The paper is very similar in idea to FBNet.

Key ideas

Technical details

Notes