TensorFlow - Scaling performance across multiple GPUs


I've been running the CIFAR-10 model from the TensorFlow tutorials, training across multiple GPUs.

source: https://github.com/tensorflow/tensorflow/blob/r0.8/tensorflow/models/image/cifar10/cifar10_multi_gpu_train.py

GPUs: 8 × NVIDIA M40

Configuration: TensorFlow 0.8.0, CUDA 7.5, cuDNN 4

The training performance does not scale as well as I expected: the shape of the speedup graph looks like Amdahl's law.
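To illustrate what an Amdahl's-law-shaped curve implies: if some fixed fraction of each training step is serial (for example, averaging the tower gradients on a single parameter device), speedup flattens out no matter how many GPUs are added. A minimal sketch, with an assumed 10% serial fraction (my number, purely illustrative):

```python
# Hypothetical illustration: Amdahl's-law speedup for n GPUs, assuming a
# serial fraction s of each training step (e.g. gradient averaging on the
# CPU/parameter device) that does not parallelize.
def amdahl_speedup(n_gpus, serial_fraction):
    """Ideal speedup is n_gpus; Amdahl's law caps it at 1/serial_fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

# With even 10% serial work, 8 GPUs give only ~4.7x, not 8x:
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(n, 0.10), 2))
```

If measured speedups fit such a curve well, that points to a fixed serial cost per step rather than (or in addition to) an I/O problem.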

[Chart: scaling performance across multiple GPUs]

Is this normal? If so, what is the main cause?

I've tried to keep GPU utilization as high as possible (>80%), but if data isn't delivered from disk fast enough, the GPUs sit idle part of the time.
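The usual fix for disk-bound GPUs is to overlap input with compute, which is what the tutorial's queue runners (`tf.train.shuffle_batch` plus `tf.train.start_queue_runners` in TF 0.8) are for: a background thread fills a bounded queue while the training step consumes from it. A language-agnostic sketch of that pattern in pure Python (names and batch contents are mine, not from the tutorial):

```python
# Sketch of the prefetching idea behind TF's queue runners: a producer
# thread reads/decodes data into a bounded queue so the consumer (the
# training step) rarely waits on disk.
import queue
import threading

def producer(q, n_batches):
    for i in range(n_batches):
        # stand-in for a slow disk read + decode
        q.put(("batch", i))
    q.put(None)  # sentinel: no more data

q = queue.Queue(maxsize=4)  # bounded prefetch buffer
t = threading.Thread(target=producer, args=(q, 3))
t.start()

consumed = []
while True:
    item = q.get()
    if item is None:
        break
    consumed.append(item)  # stand-in for running a training step
t.join()
print(consumed)
```

If the queue is chronically empty, the input pipeline (disk read, decode, augmentation) is the bottleneck; if it is chronically full, the serial per-step work is.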

