Deep Learning with Pytorch in a Nutshell

Neural architecture search


Collections

Neural Architecture Search: A Survey

  1. Search space

    • A small search space reduces search time, but it introduces human bias and can prevent the discovery of novel architectures

  2. Search strategy

    • How to explore the search space

  3. Performance estimation strategy

    • The objective of NAS is typically to find architectures that achieve high predictive performance on unseen data.

Search space

The input of layer $i$ can be formally described as a function of the outputs of earlier layers, $g_i(L_{i-1}^{out}, \dots, L_0^{out})$; the choice of $g_i$ determines the connectivity pattern (see the sketch after this list):

  • Chain-structured network: $g_i(L_{i-1}^{out}, \dots, L_0^{out}) = L_{i-1}^{out}$

  • Residual-type network: $g_i(L_{i-1}^{out}, \dots, L_0^{out}) = L_{i-1}^{out} + L_j^{out}$ for some earlier layer $j < i-1$

  • Densely connected network: $g_i(L_{i-1}^{out}, \dots, L_0^{out}) = \text{concat}(L_{i-1}^{out}, \dots, L_0^{out})$
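
A minimal PyTorch sketch of the three connectivity patterns (layer count, channel sizes, and the `ToyBlock` name are illustrative, not from the survey):

```python
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """Toy stack of three 3x3 convolutions wired in one of the patterns above."""

    def __init__(self, channels=16, mode="chain"):
        super().__init__()
        self.mode = mode
        self.layers = nn.ModuleList([
            nn.Conv2d(channels * (i + 1) if mode == "dense" else channels,
                      channels, kernel_size=3, padding=1)
            for i in range(3)
        ])

    def forward(self, x):
        outputs = [x]
        for layer in self.layers:
            if self.mode == "chain":        # g_i(...) = L_{i-1}^out
                inp = outputs[-1]
            elif self.mode == "residual":   # g_i(...) = L_{i-1}^out + L_j^out (here j = 0)
                inp = outputs[-1] + outputs[0]
            else:                           # dense: g_i(...) = concat(L_{i-1}^out, ..., L_0^out)
                inp = torch.cat(outputs, dim=1)
            outputs.append(layer(inp))
        return outputs[-1]

x = torch.randn(1, 16, 32, 32)
for mode in ("chain", "residual", "dense"):
    print(mode, ToyBlock(mode=mode)(x).shape)
```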

Search strategy

  • Random search (a minimal sketch follows this list)

  • Bayesian optimization

  • Evolutionary methods

  • Reinforcement learning

  • Gradient-based methods
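
As a baseline, random search simply samples configurations from the search space and keeps the best one. A minimal sketch, assuming a hypothetical `evaluate` function that trains the sampled child model and returns its validation accuracy (the search-space entries below are made up for illustration):

```python
import random

SEARCH_SPACE = {
    "num_layers":  [2, 4, 8],
    "num_filters": [16, 32, 64],
    "kernel_size": [3, 5],
}

def sample_architecture():
    """Draw one configuration uniformly at random from the search space."""
    return {name: random.choice(options) for name, options in SEARCH_SPACE.items()}

def random_search(evaluate, num_trials=20):
    """Keep the best of `num_trials` randomly sampled configurations."""
    best_arch, best_score = None, float("-inf")
    for _ in range(num_trials):
        arch = sample_architecture()
        score = evaluate(arch)   # hypothetical: trains the child model, returns validation accuracy
        if score > best_score:
            best_arch, best_score = arch, score
    return best_arch, best_score
```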

Performance estimation strategy

  • Regular training

  • Lower-resolution images

  • Fewer filters per layer (this and the previous bullet are illustrated in the sketch after this list)

  • Network morphism

  • One-shot architecture search
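
A sketch of a lower-fidelity proxy in the spirit of the second and third bullets: the candidate is built with fewer filters and scored on downsampled images, so many more candidates fit into the same compute budget. `build_model` (taking a `width_mult` argument) and `val_loader` are assumed names, not part of any specific NAS library; in practice the candidate would also be trained at this reduced fidelity, only the evaluation half is shown here:

```python
import torch
import torch.nn.functional as F

def proxy_evaluate(build_model, val_loader, width_mult=0.25, image_size=16, device="cpu"):
    # Hypothetical builder: returns the candidate network with fewer filters per layer.
    model = build_model(width_mult=width_mult).to(device)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            # Lower-resolution proxy input.
            images = F.interpolate(images, size=(image_size, image_size))
            logits = model(images.to(device))
            correct += (logits.argmax(dim=1) == labels.to(device)).sum().item()
            total += labels.numel()
    return correct / total  # cheap, noisy estimate used only to rank candidates
```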

Neural Architecture Search with Reinforcement Learning
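
Zoph and Le train a recurrent controller that emits an architecture description token by token; each sampled architecture (the "child network") is trained, and its validation accuracy is fed back as the reward in a policy-gradient (REINFORCE) update of the controller. A heavily simplified sketch, assuming a hypothetical `train_and_evaluate(arch)` that trains the child network and returns its validation accuracy:

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    """Tiny LSTM controller that emits a sequence of discrete architecture choices."""

    def __init__(self, num_choices=4, seq_len=6, hidden=64):
        super().__init__()
        self.seq_len = seq_len
        self.embed = nn.Embedding(num_choices, hidden)
        self.cell = nn.LSTMCell(hidden, hidden)
        self.head = nn.Linear(hidden, num_choices)

    def sample(self):
        h = torch.zeros(1, self.cell.hidden_size)
        c = torch.zeros(1, self.cell.hidden_size)
        token = torch.zeros(1, dtype=torch.long)   # start token
        actions, log_probs = [], []
        for _ in range(self.seq_len):
            h, c = self.cell(self.embed(token), (h, c))
            dist = torch.distributions.Categorical(logits=self.head(h))
            token = dist.sample()
            actions.append(token.item())
            log_probs.append(dist.log_prob(token))
        return actions, torch.stack(log_probs).sum()

controller = Controller()
optimizer = torch.optim.Adam(controller.parameters(), lr=3e-4)
baseline = 0.0

for step in range(100):
    arch, log_prob = controller.sample()
    reward = train_and_evaluate(arch)            # hypothetical: train the child net, return val accuracy
    baseline = 0.95 * baseline + 0.05 * reward   # moving-average baseline to reduce variance
    loss = -(reward - baseline) * log_prob       # REINFORCE update for the controller
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```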

AutoAugment: Learning Augmentation Policies from Data
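
AutoAugment applies the same search idea to data augmentation: a controller samples augmentation policies made of sub-policies, each consisting of two image operations with an application probability and a magnitude, and the validation accuracy of a child model trained with that policy serves as the reward. A purely illustrative sub-policy (operation choices, probabilities, and magnitudes below are made up, not taken from the paper):

```python
import random
import torchvision.transforms.functional as TF

# One sub-policy = two (operation, probability, magnitude) triples.
SUB_POLICY = [
    (lambda img, m: TF.rotate(img, angle=m), 0.7, 30.0),
    (lambda img, m: TF.adjust_contrast(img, contrast_factor=m), 0.9, 1.5),
]

def apply_sub_policy(img, sub_policy=SUB_POLICY):
    for op, prob, magnitude in sub_policy:
        if random.random() < prob:   # each operation fires with its own probability
            img = op(img, magnitude)
    return img
```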

  • NAS literature list: https://www.ml4aad.org/automl/literature-on-neural-architecture-search
  • Neural Architecture Search: A Survey: https://arxiv.org/pdf/1808.05377.pdf
  • Neural Architecture Search with Reinforcement Learning: https://arxiv.org/pdf/1611.01578.pdf
  • AutoAugment: Learning Augmentation Policies from Data: https://arxiv.org/pdf/1805.09501.pdf