Distributed Machine Learning and Gradient Optimization

Springer
SKU: 9789811634222 | ISBN-13: 9789811634222
$180.44
Condition: New
Usually Ships in 24hrs
This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges to existing machine learning systems, so implementing machine learning algorithms in a distributed environment has become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic improvements and careful system implementations, the book introduces three essential techniques for designing gradient optimization algorithms that train distributed machine learning models: parallel strategies, data compression, and synchronization protocols. Written in a tutorial style, it covers a range of topics, from fundamentals to carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
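
To give a flavor of how the three techniques named above fit together, the sketch below (not code from the book) simulates synchronous data-parallel SGD with top-k gradient sparsification on a toy linear-regression problem. The worker count, loss function, and compression scheme are illustrative assumptions, and the "workers" are simulated in a single process with NumPy.

```python
# Minimal illustrative sketch: data-parallel SGD with top-k gradient
# compression and a bulk-synchronous aggregation step. All choices here
# (4 workers, squared loss, top-k sparsification) are assumptions for
# illustration only, not the book's algorithms.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear-regression data, split evenly across simulated workers.
n_workers, n_samples, n_features = 4, 1024, 20
w_true = rng.normal(size=n_features)
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + 0.01 * rng.normal(size=n_samples)
shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))

def local_gradient(w, X_i, y_i):
    """Gradient of the mean squared error on one worker's data shard."""
    return 2.0 * X_i.T @ (X_i @ w - y_i) / len(y_i)

def top_k(g, k):
    """Keep only the k largest-magnitude entries (a simple compression scheme)."""
    sparse = np.zeros_like(g)
    idx = np.argpartition(np.abs(g), -k)[-k:]
    sparse[idx] = g[idx]
    return sparse

w = np.zeros(n_features)
lr, k = 0.05, 5
for step in range(200):
    # Each worker computes and compresses its local gradient (parallel strategy
    # plus data compression, simulated sequentially here).
    compressed = [top_k(local_gradient(w, X_i, y_i), k) for X_i, y_i in shards]
    # Bulk-synchronous protocol: wait for all workers, average, then update.
    w -= lr * np.mean(compressed, axis=0)

print("parameter error:", np.linalg.norm(w - w_true))
```

In a real deployment the averaging step would be an all-reduce or a parameter-server exchange, and the synchronization protocol (synchronous, asynchronous, or stale-synchronous) determines how long workers wait for each other.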


  • Author: Jiawei Jiang, Bin Cui, Ce Zhang
  • Publisher: Springer
  • Publication Date: Feb 25, 2023
  • Number of Pages: NA
  • Language: English
  • Binding: Paperback
  • ISBN-10: 981163422X
  • ISBN-13: 9789811634222