IBM speeds deep learning by using multiple servers
For everyone frustrated by how long it takes to train deep learning models, IBM has some good news: It has unveiled a way to automatically split deep-learning training jobs across multiple physical servers -- not just individual GPUs, but whole systems with their own separate sets of GPUs.
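To make the idea concrete, here is a minimal sketch of multi-node, multi-GPU data-parallel training using PyTorch's DistributedDataParallel. This is not IBM's software; it only illustrates what splitting a single training job across several servers, each with its own set of GPUs, looks like in code. The model, data, and launch parameters below are placeholders.

```python
# Minimal multi-node data-parallel sketch (illustrative only, not IBM's library).
# Launch one copy per server with torchrun, e.g. on each of two 4-GPU machines:
#   torchrun --nnodes=2 --nproc_per_node=4 \
#            --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One worker process per GPU; torchrun supplies LOCAL_RANK, RANK, WORLD_SIZE.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; in practice this would be the network being trained.
    model = torch.nn.Linear(1024, 10).to(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        # Each process trains on its own shard of the data (random stand-in here).
        inputs = torch.randn(32, 1024, device=local_rank)
        labels = torch.randint(0, 10, (32,), device=local_rank)
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()  # gradients are all-reduced across every GPU on every server
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

With this pattern, adding another server simply adds its GPUs to the same gradient all-reduce, which is the effect the IBM announcement is aiming for.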
Sunday, August 13, 2017