Why Use Mixed Precision Deep Learning?
As deep learning has advanced, discussion of mixed-precision training has grown alongside it. As the scope and capability of neural networks have expanded, model sizes have had to grow to match, and larger, more complex models demand advances in both hardware and training technique.
This growth has pushed practitioners toward multi-GPU distributed training setups, which can become costly and complex as more GPUs are added. Returning to fundamental training techniques can ease the training phase of a neural network and make better use of each GPU. Mixed precision training, or automatic mixed precision (AMP) training, is a simple way to do exactly that.
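The core numeric issue behind mixed precision training can be shown without any deep learning framework: half precision (fp16) cannot represent very small values, which is why AMP implementations pair fp16 math with loss scaling. Below is a minimal sketch using only Python's standard library; the gradient value and scale factor are illustrative, not taken from any particular framework.

```python
import struct

def to_fp16(x: float) -> float:
    """Round-trip a float through IEEE 754 half precision (fp16).

    struct's 'e' format packs a value as a 16-bit float, so
    unpacking it back shows what fp16 actually stored.
    """
    return struct.unpack('e', struct.pack('e', x))[0]

# A small gradient that fp32 handles fine underflows to zero in fp16,
# silently erasing the update for that weight.
grad = 1e-8            # illustrative gradient magnitude
print(to_fp16(grad))   # 0.0 -- lost in half precision

# Loss scaling: multiply before the fp16 cast, divide after in
# higher precision, so small gradients survive the narrow format.
scale = 1024.0         # illustrative scale factor
scaled = to_fp16(grad * scale)
print(scaled > 0.0)    # True -- the scaled value is representable
recovered = scaled / scale  # unscale in fp32/fp64 before the update
```

This is the essence of what AMP frameworks automate: keeping fast, memory-light fp16 arithmetic where it is safe, while scaling and master-weight bookkeeping protect the small values that fp16 would otherwise flush to zero.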