Title: Achieving Acceleration in Distributed Gradient Methods
Speaker: Guannan Qu
Time: January 4, 2018, 10:00 AM
Venue: Room C402, East Wing, Advanced Manufacturing Building
Abstract:
We consider the distributed optimization problem over a network, where the objective is to minimize a global function formed as a sum of local functions, using only local computation and communication. We develop an Accelerated Distributed Nesterov Gradient Descent (Acc-DNGD) method. When the objective function is convex and $L$-smooth, we show that it achieves an $O(\frac{1}{t^{1.4-\epsilon}})$ convergence rate for all $\epsilon\in(0,1.4)$. We also show that the convergence rate can be improved to $O(\frac{1}{t^2})$ if the objective function is a composition of a linear map with a strongly convex and smooth function. When the objective function is $\mu$-strongly convex and $L$-smooth, we show that it achieves a linear convergence rate of $O([1 - C(\frac{\mu}{L})^{5/7}]^t)$, where $\frac{L}{\mu}$ is the condition number of the objective and $C>0$ is a constant independent of $\frac{L}{\mu}$.
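To make the setting concrete, below is a minimal Python/NumPy sketch of a distributed Nesterov-type gradient method with gradient tracking, in the spirit of Acc-DNGD. The ring network, lazy-Metropolis mixing weights, scalar quadratic local objectives, and the step-size and momentum values are all illustrative assumptions made for this sketch, not the tuned parameters from the talk.

```python
import numpy as np

# Problem: minimize f(x) = sum_i f_i(x) over a network, where each agent i
# holds a local quadratic f_i(x) = 0.5 * (a_i * x - b_i)^2 (x is scalar here
# for simplicity; the data a, b are hypothetical).

np.random.seed(0)
n = 5                                  # number of agents (assumed)
a = np.random.rand(n) + 0.5            # local data (hypothetical)
b = np.random.rand(n)

def grad_i(i, x):
    """Gradient of the local objective f_i(x) = 0.5*(a_i*x - b_i)^2."""
    return a[i] * (a[i] * x - b[i])

# Doubly stochastic mixing matrix for a ring graph (lazy Metropolis weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

mu = np.min(a**2)                      # strong-convexity modulus of the f_i
L = np.max(a**2)                       # smoothness constant of the f_i
eta = 0.5 / L                          # step size (illustrative choice)
alpha = np.sqrt(mu * eta)              # Nesterov-style momentum parameter

# Per-agent states: x (iterate), v (auxiliary), y (extrapolated point),
# s (gradient tracker, initialized to the local gradients at y = 0).
x = np.zeros(n)
v = np.zeros(n)
y = np.zeros(n)
s = np.array([grad_i(i, y[i]) for i in range(n)])

for t in range(200):
    g_old = np.array([grad_i(i, y[i]) for i in range(n)])
    x_new = W @ y - eta * s                                   # gradient step after mixing
    v_new = (1 - alpha) * (W @ v) + alpha * (W @ y) - (eta / alpha) * s
    y = (x_new + alpha * v_new) / (1 + alpha)                 # extrapolation
    g_new = np.array([grad_i(i, y[i]) for i in range(n)])
    s = W @ s + g_new - g_old                                 # gradient-tracking update
    x, v = x_new, v_new

x_star = np.sum(a * b) / np.sum(a**2)  # closed-form global minimizer
print("agents' iterates:", x)
print("global minimizer:", x_star)
```

The tracker s_i maintains a running estimate of the average gradient across agents; this is the ingredient that lets a Nesterov-style update work with only local communication, since each agent never sees the global gradient directly.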
Speaker Bio:
Guannan Qu received his B.S. degree in Electrical Engineering from Tsinghua University, Beijing, China, in 2014. Since 2014, he has been a graduate student in the School of Engineering and Applied Sciences at Harvard University. His research interests lie in network control and optimization.