
A Decentralized Multi-objective Optimization Algorithm

Published in: Journal of Optimization Theory and Applications

Abstract

Over the past few decades, multi-agent optimization problems have drawn increased attention from the research community. When multiple objective functions are present among agents, many works optimize the sum of these objective functions. However, this formulation implies a decision about the relative importance of each objective: optimizing the sum is the special case of a multi-objective problem in which all objectives are prioritized equally. To enable more general prioritizations, we present a distributed optimization algorithm that explores Pareto optimal solutions for non-homogeneously weighted sums of objective functions. This exploration is performed through a new rule, based on agents’ priorities, that generates the edge weights in the agents’ communication graph. These weights determine how agents update their decision variables with information received from other agents in the network. Agents initially disagree on the priorities of the objective functions, but they are driven to agree upon them as they optimize, and as a result they still reach a common solution. The network-level weight matrix is (non-doubly) stochastic, in contrast with many works on the subject in which it is doubly stochastic. New theoretical analyses are therefore developed to ensure convergence of the proposed algorithm. This paper provides a gradient-based optimization algorithm, a proof of convergence to solutions, and convergence rates. It is shown that agents’ initial priorities influence both the convergence rate and the long-run behavior of the algorithm. Numerical experiments with different numbers of agents illustrate the performance and effectiveness of the proposed algorithm.
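As a rough illustration of the setting described above, the following sketch implements a generic consensus-plus-projected-gradient update with a row-stochastic (not doubly-stochastic) weight matrix. All names, the box constraint, and the weight construction are our own illustrative assumptions, not the authors' algorithm or code.

```python
import numpy as np

# Illustrative sketch (variable names and structure assumed, not the paper's
# code): each agent i first averages the network's iterates with
# row-stochastic weights, then takes a projected gradient step on its own
# objective f_i.

def make_row_stochastic(M):
    """Normalize a nonnegative matrix so every row sums to 1."""
    return M / M.sum(axis=1, keepdims=True)

def distributed_step(X, A, grads, alpha, box=(-10.0, 10.0)):
    """v^i = sum_j a_i^j x^j, then x^i = P_X[v^i - alpha * grad f_i(v^i)]."""
    V = A @ X                                   # consensus (averaging) step
    G = np.array([g(v) for g, v in zip(grads, V)])
    return np.clip(V - alpha * G, *box)         # projection onto the box X

# Example: m = 3 agents with scalar decisions and f_i(x) = (x - c_i)^2.
rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 6.0])
grads = [lambda v, ci=ci: 2.0 * (v - ci) for ci in c]
A = make_row_stochastic(rng.random((3, 3)) + 0.1)  # strictly positive rows
X = rng.random(3) * 5.0
for k in range(1, 2001):
    X = distributed_step(X, A, grads, alpha=1.0 / (k + 10))
# with diminishing steps, the agents' iterates cluster around a common value
```

Because the rows of `A` are stochastic but its columns need not be, the common limit is a non-uniformly weighted compromise among the agents' objectives, which is the kind of prioritized Pareto point the paper studies.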


Data Availability Statement

Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

References

  1. Agarwal, A., Duchi, J.C.: Distributed delayed stochastic optimization. In: Advances in Neural Information Processing Systems, pp. 873–881 (2011)

  2. Bianchi, P., Fort, G., Hachem, W., Jakubowicz, J.: Performance analysis of a distributed Robbins-Monro algorithm for sensor networks. In: European Signal Processing Conference, pp. 1030–1034 (2011)

  3. Blondin, M.J., Hale, M.: An algorithm for multi-objective multi-agent optimization. In: American Control Conference (ACC), pp. 1489–1494. Denver, CO (2020). https://doi.org/10.23919/ACC45564.2020.9148017

  4. Blondel, V.D., Hendrickx, J.M., Olshevsky, A., Tsitsiklis, J.N.: Convergence in multiagent coordination, consensus, and flocking. In: Proceedings of the 44th IEEE Conference on Decision and Control, pp. 2996–3000 (2005)

  5. Byungchul, K., Lavrova, O.: Optimal power flow and energy-sharing among multi-agent smart buildings in the smart grid. In: IEEE Energytech, pp. 1–5 (2013)

  6. Collette, Y., Siarry, P.: Multiobjective Optimization: Principles and Case Studies. Springer, Berlin (2004)


  7. Duchi, J.C., Agarwal, A., Wainwright, M.J.: Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans. Autom. Control 57(3), 592–606 (2011)


  8. Filotheou, A., Nikou, A., Dimarogonas, D.V.: Decentralized control of uncertain multi-agent systems with connectivity maintenance and collision avoidance. In: European Control Conference, pp. 8–13 (2018)

  9. Khim, S.: The Frobenius–Perron theorem. PhD thesis, The University of Chicago (2007)

  10. Liu, Q., Wang, J.: A second-order multi-agent network for bound-constrained distributed optimization. IEEE Trans. Autom. Control 60(12), 3310–3325 (2015)


  11. Lobel, I., Ozdaglar, A., Feijer, D.: Distributed multi-agent optimization with state-dependent communication. Math. Program. 129(2), 255–284 (2011)


  12. Miettinen, K.M.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, New York (1999)

  13. Nedic, A., Bertsekas, D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12(1), 109–138 (2001)


  14. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)


  15. Nedić, A., Ozdaglar, A.: Cooperative distributed multi-agent optimization. In: Palomar, D., Eldar, Y. (eds.) Convex Optimization in Signal Processing and Communications, pp. 340–386. Cambridge University Press, Cambridge (2010)

  16. Nedić, A., Ozdaglar, A., Parrilo, P.: Constrained consensus. arXiv preprint arXiv:0802.3922 (2008)

  17. Nedić, A., Ozdaglar, A., Parrilo, P.: Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 55(4), 922–938 (2010)


  18. Oh, K.K., Park, M.C., Ahn, H.S.: A survey of multi-agent formation control. Automatica 53, 424–440 (2015)

  19. Olfati-Saber, R., Fax, J.A., Murray, R.M.: Consensus and cooperation in networked multi-agent systems. Proc. IEEE. 95(1), 215–233 (2007)


  20. Olshevsky, A., Tsitsiklis, J.N.: Convergence speed in distributed consensus and averaging. SIAM Rev. 53(4), 747–772 (2011)


  21. Qin, J., Ma, Q., Shi, Y., Wang, L.: Recent advances in consensus of multi-agent systems: a brief survey. IEEE Trans. Ind. Electron. 64(6), 4972–4983 (2016)


  22. Ram, S.S., Nedić, A., Veeravalli, V.V.: Distributed stochastic subgradient projection algorithms for convex optimization. J. Optim. Theory Appl. 147(3), 516–545 (2010)


  23. Touri, B., Nedić, A.: On backward product of stochastic matrices. Automatica 48(8), 1477–1488 (2012)

  24. Tsianos, K.I., Lawlor, S., Rabbat, M.G.: Consensus-based distributed optimization: practical issues and applications in large-scale machine learning. In: Annual Allerton IEEE Conference on Communication, Control, and Computing, pp. 1543–1550 (2012)

  25. Wang, J., Elia, N.: Control approach to distributed optimization. In: Annual Allerton Conference on Communication, Control, and Computing, pp. 557–561 (2010)

  26. Wang, X., Su, H., Wang, X., Chen, G.: An overview of coordinated control for multi-agent systems subject to input saturation. Perspect. Sci. 7, 133–139 (2016)

  27. Xiao, L., Boyd, S., Kim, S.J.: Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 67(1), 33–46 (2007)


  28. Zhang, Y., Lou, Y., Hong, Y.: An approximate gradient algorithm for constrained distributed convex optimization. IEEE/CAA J. Autom. Sinica 1, 61–67 (2014)


Download references

Acknowledgements

Maude J. Blondin gratefully acknowledges the support of a Fonds de recherche Nature et technologies postdoctoral fellowship. Matthew Hale was supported in part by AFOSR Grant No. FA9550-19-1-0169.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Maude J. Blondin.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Additional information

Communicated by Xiaoqi Yang.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

This appendix contains proofs for lemmas presented in the paper.

Proof of Lemma 3.1

Define \( \mu (k) := \mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(k)\). Then, \(W(k+1) = PW(k)\) can be expressed as

$$\begin{aligned} \begin{aligned}&\begin{bmatrix} p_{1}^1 &\ldots &p_{1}^m \\ \vdots & \vdots & \vdots \\ p_{i}^1 &\ldots &p_{i}^m \\ \vdots & \vdots & \vdots \\ p_{m}^1 &\ldots &p_{m}^m \end{bmatrix} \quad \begin{bmatrix} \mu (k) + \delta _{1}^1(k) &\ldots &\mu (k) + \delta _{1}^m(k) \\ \vdots & \vdots & \vdots \\ \mu (k) + \delta _{i}^1(k) &\ldots & \mu (k) + \delta _{i}^m(k) \\ \vdots & \vdots & \vdots \\ \mu (k) + \delta _{m}^1(k) &\ldots & \mu (k) + \delta _{m}^m(k) \end{bmatrix}\\&\quad = \begin{bmatrix} w_{1}^1(k+1) &\ldots &w_{1}^m(k+1) \\ \vdots & \vdots & \vdots \\ w_{i}^1(k+1) &\ldots &w_{i}^m(k+1) \\ \vdots & \vdots & \vdots \\ w_{m}^1(k+1) &\ldots &w_{m}^m(k+1) \end{bmatrix}, \end{aligned} \end{aligned}$$
(45)

where \(\delta _{i}^j(k) = w_{i}^j(k)-\mu (k) \ge 0\) for \(i,j \in \{1, \ldots , m\} \). Then, we have

$$\begin{aligned} \begin{aligned} w_{i}^j(k+1)&= \sum _{n=1}^m p_{i}^n [\mu (k) + \delta _{n}^j(k)] = \sum _{n=1}^m p_{i}^n \mu (k) + \sum _{n=1}^m p_{i}^n\delta _{n}^j(k) \\&= \mu (k) \sum _{n=1}^m p_{i}^n + \sum _{n=1}^m p_{i}^n\delta _{n}^j(k). \end{aligned} \end{aligned}$$
(46)

By definition, we know that \(\sum _{n=1}^m p_{i}^n =1\). Therefore, we get

$$\begin{aligned} w_{i}^j(k+1) = \mu (k) + \sum _{n=1}^m p_{i}^n\delta _{n}^j(k). \end{aligned}$$
(47)

Since \(\delta _{n}^j(k) \ge 0 \) and \(p_i^n \ge 0\) for all \(i,j,n \in \{1, \ldots ,m\}\), we have \(w_{i}^j(k+1) \ge \mu (k) =\mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(k)\) for all \(i,j \in \{1, \ldots , m \}\) and all k. This establishes that the minimum entry of W(k) is non-decreasing: no entry can fall below the previous minimum at the next time step.

Therefore, since (10) defines A(k), the smallest non-zero element of A(k), denoted \(\mathop {\min ^+ \,}\nolimits _{i \in [m]}\mathop {\min ^+}\nolimits _{j \in [m]} [A(k)]^j_i\), is at least \(\mathop {\min \,}\nolimits _{i \in [m]} \mathop {\min }\nolimits _{j \in [m]} w_i^j(k)\). This directly implies that the lower bound can be set as \(\mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(0)\). \(\square \)
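The key fact in this proof can be checked numerically. The following illustrative sketch (not from the paper) verifies that multiplying by a row-stochastic matrix P cannot decrease the minimum entry of W(k), so the minimum is non-decreasing along the iteration \(W(k+1) = PW(k)\).

```python
import numpy as np

# Numerical check of the Lemma 3.1 mechanism: if P is row-stochastic with
# nonnegative entries, every entry of W(k+1) = P W(k) is a convex combination
# of entries of W(k), hence at least min W(k); so min W(k) is non-decreasing.

rng = np.random.default_rng(1)
m = 5
P = rng.random((m, m))
P /= P.sum(axis=1, keepdims=True)   # each row of P sums to 1

W = rng.random((m, m))
mins = [W.min()]
for _ in range(50):
    W = P @ W
    mins.append(W.min())
# the sequence of minima never decreases
```

This matches the conclusion that the lower bound for the smallest entry can be taken at \(k=0\).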

Proof of Lemma 4.6

From Lemma 4.5, following [16], and since \(x^i(k+1)=P_{X}[v^i(k)-\alpha _k d^i(k)]\), we have

$$\begin{aligned}&\Vert x^i(k+1)-z\Vert ^2 \le \Vert v^i(k)-\alpha _{k}d^i(k)-z\Vert ^2 \nonumber \\&\quad -\Vert P_{X}[v^i(k)-\alpha _{k}d^i(k)]-(v^i(k)-\alpha _{k}d^i(k))\Vert ^2. \end{aligned}$$
(48)

From the definition of \(\phi ^i(k)\) in (14), the previous relation becomes,

$$\begin{aligned} ||x^i(k+1)-z||^2 \le ||v^i(k)-\alpha _kd^i(k)-z||^2-||\phi ^i(k)||^2. \end{aligned}$$

By expanding \(||v^i(k)-\alpha _k d^i(k)-z||^2\), we have

$$\begin{aligned} \begin{aligned}&||v^i(k)-\alpha _k d^i(k)-z||^2 = ||v^i(k)-z||^2+\alpha _k^2||d^i(k)||^2\\&\quad -2\alpha _k d^i(k)'(v^i(k)-z). \end{aligned} \end{aligned}$$
(49)

Because \(d^i(k)\) is the gradient of \(f_i(x)\) at \(x=v^i(k)\), we obtain from convexity that

$$\begin{aligned} d^i(k)'(v^i(k)-z) \ge f_i(v^i(k))-f_i(z). \end{aligned}$$
(50)
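The first-order convexity inequality in (50) can be spot-checked numerically. The sketch below (illustrative, with a quadratic chosen by us) confirms \(d'(v-z) \ge f(v)-f(z)\) when d is the gradient of a convex f at v.

```python
import numpy as np

# Spot-check of the first-order convexity inequality used in (50):
# for convex f with gradient d = grad f(v), we have d'(v - z) >= f(v) - f(z).
rng = np.random.default_rng(2)
c = rng.random(4)
f = lambda x: float(np.sum((x - c) ** 2))      # a convex test function
grad = lambda x: 2.0 * (x - c)

violations = 0
for _ in range(100):
    v, z = rng.standard_normal(4), rng.standard_normal(4)
    if grad(v) @ (v - z) < f(v) - f(z) - 1e-9:
        violations += 1
# for a convex f, the inequality holds at every sampled pair (v, z)
```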

By bringing together (49) and (50), we get

$$\begin{aligned} \begin{aligned}&||x^i(k+1)-z||^2 \le ||v^i(k)-z||^2 + \alpha _k^2||d^i(k)||^2 \\&\quad - 2\alpha _k[f_i(v^i(k))-f_i(z)] - ||\phi ^i(k)||^2. \end{aligned} \end{aligned}$$
(51)

Given the definition of \(v^i(k)\), using the convexity of the norm squared function and the stochasticity of the \(a^i(w^i(k))\), we find that

$$\begin{aligned} ||v^i(k)-z||^2 \le \sum _{j=1}^m a_i^j(w^i(k))||x^j(k)-z||^2. \end{aligned}$$
(52)

It then follows from (51) and (52) that

$$\begin{aligned} \begin{aligned}&||x^i(k+1)-z||^2 \le \sum _{j=1}^m a_i^j(w^i(k))||x^j(k)-z||^2+\alpha _k^2||d^i(k)||^2 \\&\quad -2\alpha _k [f_i(v^i(k))-f_i(z)] - ||\phi ^i(k)||^2. \end{aligned} \end{aligned}$$
(53)

By summing (53) over \(i=1, \ldots , m\), we obtain the following relation:

$$\begin{aligned} \begin{aligned}&\sum _{i=1}^m||x^i(k+1)-z||^2 \le \sum _{i=1}^m\sum _{j=1}^m a_i^j(w^i(k))||x^j(k)-z||^2 \\&\quad +\alpha _k^2\sum _{i=1}^m||d^i(k)||^2 -2\alpha _k\sum _{i=1}^m[f_i(v^i(k))-f_i(z)] - \sum _{i=1}^m||\phi ^i(k)||^2. \end{aligned} \end{aligned}$$

\(\square \)
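The projection inequality that starts this proof (the bound in (48)) can also be verified numerically. The following sketch (our illustration, using a box as the closed convex set X) checks \(\Vert P_X[y]-z\Vert ^2 \le \Vert y-z\Vert ^2 - \Vert P_X[y]-y\Vert ^2\) for \(z \in X\).

```python
import numpy as np

# Numerical check of the projection inequality behind (48): for a closed
# convex set X (here a box), any point y, and any z in X,
#   ||P_X[y] - z||^2 <= ||y - z||^2 - ||P_X[y] - y||^2.
lo, hi = -1.0, 1.0
proj = lambda y: np.clip(y, lo, hi)   # projection onto the box [lo, hi]^n

rng = np.random.default_rng(3)
violations = 0
for _ in range(200):
    y = rng.standard_normal(4) * 3.0
    z = rng.uniform(lo, hi, 4)        # z lies in X
    p = proj(y)
    lhs = np.sum((p - z) ** 2)
    rhs = np.sum((y - z) ** 2) - np.sum((p - y) ** 2)
    if lhs > rhs + 1e-9:
        violations += 1
# the inequality holds at every sampled pair (y, z)
```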

Proof of Lemma 4.7

(a) From (15) and based on [16], we have

$$\begin{aligned} \begin{aligned} x^i(k)&=\sum _{j=1}^m [\varPhi (k-1,s)]_i^jx^j(s)-\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\alpha _rd^j(r) \\&\quad -\alpha _{k-1}d^i(k-1) + \sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\phi ^j(r)+\phi ^i(k-1). \end{aligned} \end{aligned}$$
(54)

Using the transition matrix \(\varPhi (k,s) = A(W(k))A(W(k-1)) \cdots A(W(s))\) and following the same logic used to obtain (15), (19) can be re-written for all k and s with \(k > s\) as

$$\begin{aligned} \begin{aligned} y(k) =&\dfrac{1}{m}\sum _{i=1}^m\sum _{j=1}^m[\varPhi (k-1,s)]_i^jx^j(s)- \dfrac{1}{m}\sum _{i=1}^m\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\alpha _rd^j(r) \\&+ \dfrac{1}{m}\sum _{i=1}^m\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\phi ^j(r)\\&- \dfrac{\alpha _{k-1}}{m}\sum _{i=1}^md^i(k-1)+\dfrac{1}{m}\sum _{i=1}^m\phi ^i(k-1). \end{aligned} \end{aligned}$$
(55)

By subtracting (55) from (54), we obtain

$$\begin{aligned} \begin{aligned}&x^i(k)-y(k) = \sum _{j=1}^m[\varPhi (k-1,s)]^j_ix^j(s)-\dfrac{1}{m}\sum _{i=1}^m \sum _{j=1}^m[\varPhi (k-1,s)]_i^jx^j(s) \\&\quad -\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\alpha _rd^j(r)+ \dfrac{1}{m}\sum _{i=1}^m\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\alpha _rd^j(r) \\&\quad - \alpha _{k-1}d^i(k-1)+\dfrac{\alpha _{k-1}}{m}\sum _{i=1}^md^i(k-1) + \sum _{r=s}^{k-2}\sum _{j=1}^{m}[\varPhi (k-1,r+1)]_i^j\phi ^j(r) \\&\quad - \dfrac{1}{m}\sum _{i=1}^m\sum _{r=s}^{k-2}\sum _{j=1}^m[\varPhi (k-1,r+1)]_i^j\phi ^j(r) +\phi ^i(k-1)-\dfrac{1}{m}\sum _{i=1}^m\phi ^i(k-1). \end{aligned} \nonumber \\ \end{aligned}$$
(56)

Taking norms in (56) and applying the triangle inequality, we get

$$\begin{aligned}&||x^i(k)-y(k)||\le \sum _{j=1}^m \left| [\varPhi (k-1,s)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,s)]_i^j \right| ||x^j(s)|| \nonumber \\&\quad + \sum _{r=s}^{k-2}\sum _{j=1}^m\left[ \left| [\varPhi (k-1,r+1)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| \alpha _r||d^j(r)|| \right] \nonumber \\&\quad + \alpha _{k-1}||d^i(k-1)||+\dfrac{\alpha _{k-1}}{m}\sum _{i=1}^m||d^i(k-1)|| \nonumber \\&\quad + \sum _{r=s}^{k-2}\sum _{j=1}^{m}\left[ \left| [\varPhi (k-1,r+1)]_i^j-\dfrac{1}{m} \sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| ||\phi ^j(r)|| \right] \nonumber \\&\quad +||\phi ^i(k-1)||+\dfrac{1}{m}\sum _{i=1}^m||\phi ^i(k-1)||. \end{aligned}$$
(57)

Using Lemma 4.3 with \(s=0\) and letting \(k \rightarrow \infty \), the first term on the right-hand side of (57) can be bounded, which gives

$$\begin{aligned}&||x^i(k)-y(k)|| \nonumber \\&\quad \le \sum _{j=1}^m \left[ \left| [\varPhi (k-1,0)]_i^j -\gamma _j(0) \right| + \left| \dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,0)]_i^j - \gamma _j(0) \right| \right] ||x^j(0)|| \nonumber \\&\qquad + \sum _{r=0}^{k-2}\sum _{j=1}^m\left[ \left| [\varPhi (k-1,r+1)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| \alpha _r||d^j(r)|| \right] \nonumber \\&\qquad + \alpha _{k-1}||d^i(k-1)||+\dfrac{\alpha _{k-1}}{m}\sum _{i=1}^m||d^i(k-1)|| \nonumber \\&\qquad + \sum _{r=0}^{k-2}\sum _{j=1}^{m}\left[ \left| [\varPhi (k-1,r+1)]_i^j-\dfrac{1}{m} \sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| ||\phi ^j(r)|| \right] \nonumber \\&\qquad +||\phi ^i(k-1)||+\dfrac{1}{m}\sum _{i=1}^m||\phi ^i(k-1)||, \end{aligned}$$
(58)

which can be simplified as

$$\begin{aligned}&||x^i(k)-y(k)||\le 2mC\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| \nonumber \\&\quad + \sum _{r=0}^{k-2}\sum _{j=1}^m\left[ \left| [\varPhi (k-1,r+1)]_i^j-\dfrac{1}{m} \sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| \alpha _r||d^j(r)|| \right] \nonumber \\&\quad + \alpha _{k-1}||d^i(k-1)||+\dfrac{\alpha _{k-1}}{m}\sum _{i=1}^m||d^i(k-1)|| \nonumber \\&\quad + \sum _{r=0}^{k-2}\sum _{j=1}^{m}\left[ \left| [\varPhi (k-1,r+1)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| ||\phi ^j(r)|| \right] \nonumber \\&\quad +||\phi ^i(k-1)||+\dfrac{1}{m}\sum _{i=1}^m||\phi ^i(k-1)||. \end{aligned}$$
(59)

Similarly, using Lemma 4.3, the second term on the right-hand side can be bounded, yielding

$$\begin{aligned} \begin{aligned}&||x^i(k)-y(k)||\le 2mC\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| + 2mCL \sum _{r=0}^{k-2}\beta ^{k-r} \alpha _r \\&\quad + \alpha _{k-1}||d^i(k-1)||+\dfrac{\alpha _{k-1}}{m}\sum _{i=1}^m||d^i(k-1)|| \\&\quad + \sum _{r=0}^{k-2}\sum _{j=1}^{m}\left[ \left| [\varPhi (k-1,r+1)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| ||\phi ^j(r)|| \right] \\&\quad +||\phi ^i(k-1)||+\dfrac{1}{m}\sum _{i=1}^m||\phi ^i(k-1)||. \end{aligned} \end{aligned}$$
(60)

Using Lemma 4.2 and the gradient bound, the third right-hand term satisfies

$$\begin{aligned} \begin{aligned}&||x^i(k)-y(k)||\le 2mC\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| + 2mCL \sum _{r=0}^{k-2}\beta ^{k-r} \alpha _r + 2\alpha _{k-1}L \\&\quad + \sum _{r=0}^{k-2}\sum _{j=1}^{m}\left[ \left| [\varPhi (k-1,r+1)]_i^j -\dfrac{1}{m}\sum _{i=1}^m[\varPhi (k-1,r+1)]_i^j \right| ||\phi ^j(r)|| \right] \\&\quad +||\phi ^i(k-1)||+\dfrac{1}{m}\sum _{i=1}^m||\phi ^i(k-1)||. \end{aligned} \end{aligned}$$
(61)

Using Lemmas 4.2 and 4.3 again, we obtain for the last two terms

$$\begin{aligned} \begin{aligned}&||x^i(k)-y(k)||\le 2mC\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| + 2mCL \sum _{r=0}^{k-2}\beta ^{k-r} \alpha _r + 2\alpha _{k-1}L \\&\quad + 2mCL \sum _{r=0}^{k-2} \beta ^{k-r}\alpha _r +2\alpha _{k-1}L. \end{aligned} \end{aligned}$$
(62)

We therefore obtain

$$\begin{aligned} \begin{aligned} ||x^i(k)-y(k)|| \le 2mC\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| + 4mCL \sum _{r=0}^{k-2}\beta ^{k-r} \alpha _r + 4\alpha _{k-1}L. \end{aligned} \end{aligned}$$
(63)

Since \(0< \beta < 1\), \(\beta ^k \rightarrow 0\) as \(k\rightarrow \infty \). Assuming that \(\alpha _k \rightarrow 0\) and taking the limit superior, we have for all i,

$$\begin{aligned} \limsup _{k\rightarrow \infty }||x^i(k)-y(k)|| \le 4mCL \limsup _{k\rightarrow \infty } \sum _{r=0}^{k-2}\beta ^{k-r}\alpha _r. \end{aligned}$$
(64)

By Lemma 4.4, we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \sum _{r=0}^{k-2}\beta ^{k-r}\alpha _r = 0. \end{aligned}$$

Therefore, \(\lim _{k\rightarrow \infty }||x^i(k)-y(k)||=0\) for all i.
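The vanishing convolution sum invoked here can be illustrated numerically. The sketch below (our illustration, with the step size \(\alpha_r = 1/(r+1)\) chosen by us) shows \(\sum_{r=0}^{k-2}\beta^{k-r}\alpha_r \rightarrow 0\) for \(0< \beta < 1\) and \(\alpha_k \rightarrow 0\).

```python
# Numerical illustration of the limit used above (Lemma 4.4 in the paper):
# sum_{r=0}^{k-2} beta^(k-r) alpha_r -> 0 when 0 < beta < 1 and alpha_k -> 0.
# The step size alpha_r = 1/(r+1) is chosen purely for illustration.
beta = 0.8
alpha = lambda r: 1.0 / (r + 1)

def conv(k):
    """Compute sum_{r=0}^{k-2} beta^(k-r) * alpha_r."""
    return sum(beta ** (k - r) * alpha(r) for r in range(k - 1))

vals = [conv(k) for k in (10, 100, 1000)]
# the convolution sum decays toward zero as k grows
```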

(b) Multiplying (63) by \(\alpha _k\), we get

$$\begin{aligned}&\alpha _k||x^i(k)-y(k)||\le 2mC \alpha _k\beta ^{k-1}\sum _{j=1}^m ||x^j(0)|| \\&\quad + 4mCL \sum _{r=0}^{k-2}\beta ^{k-r} \alpha _k \alpha _r + 4 \alpha _k\alpha _{k-1}L. \end{aligned}$$

Using \(2\alpha _k\alpha _r \le \alpha _k^2+\alpha _r^2\) and \(\alpha _k\beta ^{k-1} \le \alpha _k^2+\beta ^{2(k-1)}\) for any k and r, we obtain

$$\begin{aligned}&\alpha _k||x^i(k)-y(k)|| \le 2mC\beta ^{2(k-1)}\sum _{j=1}^m||x^j(0)||+ 2mC\alpha _k^2\sum _{j=1}^m||x^j(0)|| \\&\quad + 2mCL\alpha _k^2\sum _{r=0}^{k-2}\beta ^{k-r} + 2mCL\sum _{r=0}^{k-2}\beta ^{k-r}\alpha _r^2 + 2L(\alpha _k^2+\alpha _{k-1}^2). \end{aligned}$$

Since \(\sum _{r=0}^{k-2}\beta ^{k-r} \le \dfrac{1}{1-\beta }\), we have

$$\begin{aligned}&\alpha _k||x^i(k)-y(k)|| \le 2mC\beta ^{2(k-1)}\sum _{j=1}^m||x^j(0)||+ 2mC\alpha _k^2\sum _{j=1}^m||x^j(0)|| \\&\quad + 2mCL\alpha _k^2\dfrac{1}{1-\beta } + 2mCL\sum _{r=0}^{k-2}\beta ^{k-r}\alpha _r^2 + 2L(\alpha _k^2+\alpha _{k-1}^2). \end{aligned}$$

By summing from \(k=1\) to \(k =\infty \), we obtain

$$\begin{aligned} \begin{aligned}&\sum _{k=1}^{\infty } \alpha _k||x^i(k)-y(k)|| \le 2mC\sum _{k=1}^{\infty }\beta ^{2(k-1)}\sum _{j=1}^m||x^j(0)|| \\&\quad +2mC\sum _{k=1}^{\infty }\alpha _k^2\sum _{j=1}^m||x^j(0)|| + 2mCL\dfrac{1}{1-\beta }\sum _{k=1}^{\infty }\alpha _k^2 \\&\quad +2mCL\sum _{k=1}^{\infty }\sum _{r=0}^{k-2}\beta ^{k-r}\alpha _r^2 + 2L\sum _{k=1}^{\infty }(\alpha _k^2+\alpha _{k-1}^2). \end{aligned} \end{aligned}$$
(65)

In (65), the first term is summable since \(0< \beta < 1\). The second, third, and fifth terms are also summable since \(\sum _{k=1}^{\infty } \alpha _{k}^2 < \infty \). By Lemma 4.4, the fourth term is summable. Thus, \(\sum _{k=1}^{\infty } \alpha _k||x^i(k)-y(k)|| < \infty \text { for all } i\). \(\square \)
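The summability of the fourth term of (65) can also be illustrated numerically. The sketch below (our illustration, again with \(\alpha_r = 1/(r+1)\) chosen by us) shows that the partial sums of \(\sum_{k}\sum_{r=0}^{k-2}\beta^{k-r}\alpha_r^2\) remain bounded when \(\sum_k \alpha_k^2 < \infty\).

```python
# Illustrative check that the double sum in (65),
# sum_k sum_{r=0}^{k-2} beta^(k-r) alpha_r^2, has bounded partial sums when
# sum_k alpha_k^2 < infinity. Here alpha_r = 1/(r+1), chosen for illustration.
beta = 0.8
alpha = lambda r: 1.0 / (r + 1)

checkpoints = []
total = 0.0
for k in range(1, 2001):
    total += sum(beta ** (k - r) * alpha(r) ** 2 for r in range(k - 1))
    if k % 500 == 0:
        checkpoints.append(total)
# increments between checkpoints shrink and the running total stays bounded,
# consistent with the double sum being finite
```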


Cite this article

Blondin, M.J., Hale, M. A Decentralized Multi-objective Optimization Algorithm. J Optim Theory Appl 189, 458–485 (2021). https://doi.org/10.1007/s10957-021-01840-z

