Ako: Decentralised Deep Learning with Partial Gradient Exchange
Pijika Watcharapichat, Imperial College London

Distributed systems for training deep neural networks (DNNs) with large amounts of data have vastly improved the accuracy of machine learning models for image and speech recognition. DNN systems scale to large cluster deployments by having worker nodes train many model replicas in parallel; to ensure model convergence, parameter servers periodically synchronise the replicas. This raises the challenge of how to split resources between workers and parameter servers so that the cluster CPU and network resources are fully utilised without introducing bottlenecks. In practice, this requires manual tuning for each model configuration or hardware type.

We describe Ako, a decentralised dataflow-based DNN system without parameter servers that is designed to saturate cluster resources. All nodes execute workers that fully use the CPU resources to update model replicas. To synchronise replicas as often as possible subject to the available network bandwidth, workers exchange partitioned gradient updates directly with each other. The number of partitions is chosen so that the network bandwidth used remains constant, independently of cluster size. Since workers eventually receive all gradient partitions after several rounds, convergence is unaffected. For the ImageNet benchmark on a 64-node cluster, Ako does not require any resource allocation decisions, yet converges faster than deployments with parameter servers.
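The partial gradient exchange described above can be illustrated with a minimal sketch. This is not the paper's implementation; the function name `partial_gradient_exchange` and the round-robin partition schedule are illustrative assumptions, showing only how each worker can send a single gradient partition per peer per round so that every peer accumulates the full gradient after `n_partitions` rounds:

```python
import numpy as np

def partial_gradient_exchange(gradient, n_workers, round_idx, n_partitions):
    """Illustrative sketch (not Ako's actual code): split a flat gradient
    into n_partitions pieces and choose, for each peer, which piece to
    send this round. Rotating the choice each round guarantees every
    peer has seen all partitions after n_partitions rounds."""
    partitions = np.array_split(gradient, n_partitions)
    sends = {}
    for peer in range(n_workers):
        # Round-robin schedule: a different partition per peer, shifted
        # every round, so each round sends only 1/n_partitions of the
        # gradient to each peer.
        part_idx = (round_idx + peer) % n_partitions
        sends[peer] = (part_idx, partitions[part_idx])
    return sends
```

Because each worker sends only one of `n_partitions` pieces to each peer per round, scaling `n_partitions` with the number of workers keeps the per-worker network traffic roughly constant as the cluster grows, at the cost of gradient updates arriving over several rounds.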

Ako has been accepted at ACM SoCC'16. This is a practice talk.

About the speaker
Pijika Watcharapichat is a third-year PhD student in the Large-Scale Distributed Systems (LSDS) group. Her research interests are in distributed computing for machine learning and scalable systems for deep neural networks.
Date & Time
Thursday, September 8, 2016 - 14:00
Huxley Building, 218