![François Chollet on Twitter: "Tweetorial: high-performance multi-GPU training with Keras. The only thing you need to do to turn single-device code into multi-device code is to place your model construction function under](https://pbs.twimg.com/media/EwjqBGZUYAQWUD-.jpg:large)
François Chollet on Twitter: "Tweetorial: high-performance multi-GPU training with Keras. The only thing you need to do to turn single-device code into multi-device code is to place your model construction function under
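
A minimal sketch of the pattern the tweet describes (not the thread's own code): model construction and `compile()` move under a `tf.distribute.MirroredStrategy` scope, and the rest of the single-device script stays unchanged. The layer sizes are placeholders.

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()  # uses all visible GPUs by default
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Everything that creates variables (layers, optimizer, metrics) goes here.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# fit()/evaluate()/predict() are then called exactly as in the single-GPU case.
```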
![A quick guide to distributed training with TensorFlow and Horovod on Amazon SageMaker | by Shashank Prasanna | Towards Data Science](https://miro.medium.com/max/1400/1*8ZQEO4BfflwU6IzYJgfZIQ.png)
A quick guide to distributed training with TensorFlow and Horovod on Amazon SageMaker | by Shashank Prasanna | Towards Data Science
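
For context, this is a generic Horovod + tf.keras training-script sketch (assumed model and hyperparameters, not the article's exact code). A script like this is what SageMaker launches on every GPU via MPI: each process pins one GPU, scales the learning rate by the world size, and wraps the optimizer so gradients are averaged with allreduce.

```python
import horovod.tensorflow.keras as hvd
import tensorflow as tf

hvd.init()

# Pin each local process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Scale the learning rate by the number of workers, then wrap the optimizer
# so gradients are averaged across workers with allreduce.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(
    optimizer=opt,
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

callbacks = [
    # Broadcast initial variables from rank 0 so all workers start identically.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]
# model.fit(dataset, callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)
```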
GitHub - sayakpaul/tf.keras-Distributed-Training: Shows how to use MirroredStrategy to distribute training workloads when using the regular fit and compile paradigm in tf.keras.
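
A hedged sketch of the fit/compile flow the repo describes (placeholder data standing in for the repo's dataset): scale the global batch size by the replica count and let Keras split the `tf.data` pipeline across replicas.

```python
import numpy as np
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
per_replica_batch = 64
global_batch = per_replica_batch * strategy.num_replicas_in_sync

# Placeholder data standing in for the repo's dataset.
x = np.random.rand(1024, 784).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(1024).batch(global_batch)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

model.fit(dataset, epochs=2)  # same call as single-device training
```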
![Multi-GPU and distributed training using Horovod in Amazon SageMaker Pipe mode | AWS Machine Learning Blog](https://d2908q01vomqb2.cloudfront.net/f1f836cb4ea6efb2a0b1b99f41ad8b103eff4b59/2020/07/28/multi-gpu-distributed-training-1-1.jpg)
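
A hedged sketch of launching a Horovod script with the SageMaker Python SDK (v2-style argument names; the entry point, IAM role, instance types, and S3 paths below are placeholders, not values from the AWS post). `input_mode="Pipe"` requests the Pipe-mode data streaming the post discusses, and the `mpi` distribution config runs one process per GPU on each instance.

```python
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_horovod.py",       # the Horovod training script (placeholder name)
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    instance_count=2,                      # number of training instances
    instance_type="ml.p3.8xlarge",         # 4 GPUs per instance
    framework_version="2.4.1",
    py_version="py37",
    input_mode="Pipe",                     # stream data from S3 instead of downloading it
    distribution={"mpi": {"enabled": True, "processes_per_host": 4}},
)

# estimator.fit({"train": "s3://my-bucket/train"})  # placeholder S3 channel
```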