tensorflow - Scaling performance across multiple GPUs -
I've been training the CIFAR-10 model from the TensorFlow tutorials across multiple GPUs.
GPUs: 8 × NVIDIA M40
Configuration: TensorFlow 0.8.0, CUDA 7.5, cuDNN 4
Training performance does not scale as expected as I add GPUs; the shape of the scaling curve looks like Amdahl's law.
[Chart: scaling performance across multiple GPUs]
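For reference, this is the kind of curve Amdahl's law predicts: if some fraction of each training step is serial (e.g. gradient averaging or input I/O), speedup flattens well below linear. A minimal sketch, where the 15% serial fraction is an assumed figure for illustration, not measured from this setup:

```python
# Amdahl's law: ideal speedup on n devices when serial_fraction of the
# work per step cannot be parallelized across GPUs.
def amdahl_speedup(n_gpus, serial_fraction):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_gpus)

# With an assumed 15% serial work, 8 GPUs give well under 8x:
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(n, 0.15), 2))
# 1 -> 1.0, 2 -> 1.74, 4 -> 2.76, 8 -> 3.9
```

If your measured 8-GPU speedup fits a curve like this, the flat part tells you roughly how large the serial portion of each step is.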
Is this normal? If so, what is the main cause?
I've tried to keep GPU utilization high (>80%), but data is not delivered from disk fast enough, so the GPUs sit idle part of the time.