Abstract
During the past few decades, multi-agent optimization problems have drawn increased attention from the research community. When multiple objective functions are present among agents, many works optimize the sum of these objective functions. However, this formulation implies a decision about the relative importance of each objective: optimizing the sum is a special case of a multi-objective problem in which all objectives are prioritized equally. To enable more general prioritizations, we present a distributed optimization algorithm that explores Pareto optimal solutions for non-homogeneously weighted sums of objective functions. This exploration is performed through a new rule, based on agents’ priorities, that generates the edge weights in the agents’ communication graph. These weights determine how agents update their decision variables with information received from other agents in the network. Agents initially disagree on the priorities of the objective functions, though they are driven to agree upon them as they optimize; as a result, agents still reach a common solution. The network-level weight matrix is (non-doubly) stochastic, in contrast with many works on the subject in which the network-level weight matrix is doubly stochastic. New theoretical analyses are therefore developed to ensure convergence of the proposed algorithm. This paper provides a gradient-based optimization algorithm, a proof of convergence to solutions, and convergence rates of the proposed algorithm. It is shown that agents’ initial priorities influence the convergence rate of the algorithm and that these initial choices affect its long-run behavior. Numerical results obtained with different numbers of agents illustrate the performance and effectiveness of the proposed algorithm.
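The setting described above can be illustrated with a minimal sketch. The quadratic local objectives \(f_i(x) = (x - c_i)^2\), the fixed weight matrix, and the step-size schedule below are assumptions made purely for illustration; in particular, the paper's priority-based weight rule is not reproduced here. The sketch only shows the general shape of a consensus-plus-gradient update with a row-stochastic (but not doubly stochastic) weight matrix:

```python
import numpy as np

# Illustrative sketch: two agents, local objectives f_i(x) = (x - c_i)^2,
# and a fixed row-stochastic weight matrix W (rows sum to 1, columns do not).
c = np.array([[0.0], [10.0]])    # minimizers of the local objectives
W = np.array([[0.7, 0.3],        # row-stochastic, NOT doubly stochastic
              [0.4, 0.6]])
X = np.array([[5.0], [-5.0]])    # agents' initial iterates

for k in range(2000):
    V = W @ X                    # weighted averaging with neighbors
    grads = 2.0 * (V - c)        # gradient of each f_i at the averaged point
    X = V - grads / (k + 2)      # local gradient step, diminishing step size

print(X.ravel())
```

With these illustrative weights, the agents approximately agree on a non-uniformly weighted compromise between the two local minimizers, near \(30/7 \approx 4.29\), which is the average of the \(c_i\) weighted by the stationary distribution of \(W\), rather than the midpoint 5 that equal prioritization would produce.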
Data Availability Statement
Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
References
Agarwal, A., Duchi, J.C.: Distributed delayed stochastic optimization. In: Advances in Neural Information Processing Systems, pp. 873–881 (2011)
Bianchi, P., Fort, G., Hachem, W., Jakubowicz, J.: Performance analysis of a distributed Robbins-Monro algorithm for sensor networks. In: European Signal Processing Conference, pp. 1030–1034 (2011)
Blondin, M.J., Hale, M.: An algorithm for multi-objective multi-agent optimization. In: American Control Conference (ACC), pp. 1489–1494. Denver, CO (2020). https://doi.org/10.23919/ACC45564.2020.9148017
Blondel, V.D., Hendrickx, J.M., Olshevsky, A., Tsitsiklis, J.N.: Convergence in multiagent coordination, consensus, and flocking. In: Proceedings of the 44th IEEE Conference on Decision and Control, pp. 2996–3000 (2005)
Kim, B., Lavrova, O.: Optimal power flow and energy-sharing among multi-agent smart buildings in the smart grid. In: IEEE Energytech, pp. 1–5 (2013)
Collette, Y., Siarry, P.: Multiobjective Optimization: Principles and Case Studies. Springer, Berlin (2004)
Duchi, J.C., Agarwal, A., Wainwright, M.J.: Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Trans. Autom. Control 57(3), 592–606 (2011)
Filotheou, A., Nikou, A., Dimarogonas, D.V.: Decentralized control of uncertain multi-agent systems with connectivity maintenance and collision avoidance. In: European Control Conference, pp. 8–13 (2018)
Khim, S.: The Frobenius–Perron theorem. PhD thesis, The University of Chicago (2007)
Liu, Q., Wang, J.: A second-order multi-agent network for bound-constrained distributed optimization. IEEE Trans. Autom. Control 60(12), 3310–3325 (2015)
Lobel, I., Ozdaglar, A., Feijer, D.: Distributed multi-agent optimization with state-dependent communication. Math. Program. 129(2), 255–284 (2011)
Miettinen, K.M.: Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, New York (1999)
Nedic, A., Bertsekas, D.P.: Incremental subgradient methods for nondifferentiable optimization. SIAM J. Optim. 12(1), 109–138 (2001)
Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)
Nedić, A., Ozdaglar, A.: Cooperative distributed multi-agent optimization. In: Palomar, D., Eldar, Y. (eds.) Convex Optimization in Signal Processing and Communications, pp. 340–386. Cambridge University Press, Cambridge (2010)
Nedić, A., Ozdaglar, A., Parrilo, P.: Constrained consensus. arXiv preprint arXiv:0802.3922 (2008)
Nedić, A., Ozdaglar, A., Parrilo, P.: Constrained consensus and optimization in multi-agent networks. IEEE Trans. Autom. Control 55(4), 922–938 (2010)
Oh, K.K., Park, M.C., Ahn, H.S.: A survey of multi-agent formation control. Automatica 53, 424–440 (2015)
Olfati-Saber, R., Fax, J.A., Murray, R.M.: Consensus and cooperation in networked multi-agent systems. Proc. IEEE. 95(1), 215–233 (2007)
Olshevsky, A., Tsitsiklis, J.N.: Convergence speed in distributed consensus and averaging. SIAM Rev. 53(4), 747–772 (2011)
Qin, J., Ma, Q., Shi, Y., Wang, L.: Recent advances in consensus of multi-agent systems: a brief survey. IEEE Trans. Ind. Electron. 64(6), 4972–4983 (2016)
Ram, S.S., Nedić, A., Veeravalli, V.V.: Distributed stochastic subgradient projection algorithms for convex optimization. J. Optim. Theory Appl. 147(3), 516–545 (2010)
Touri, B., Nedic, A.: On backward product of stochastic matrices. Automatica 48(8), 1477–1488 (2012)
Tsianos, K.I., Lawlor, S., Rabbat, M.G.: Consensus-based distributed optimization: practical issues and applications in large-scale machine learning. In: Annual Allerton Conference on Communication, Control, and Computing, pp. 1543–1550 (2012)
Wang, J., Elia, N.: Control approach to distributed optimization. In: Annual Allerton Conference on Communication, Control, and Computing, pp. 557–561 (2010)
Wang, X., Su, H., Wang, X., Chen, G.: An overview of coordinated control for multi-agent systems subject to input saturation. Perspect. Sci. 7, 133–139 (2016)
Xiao, L., Boyd, S., Kim, S.J.: Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 67(1), 33–46 (2007)
Zhang, Y., Lou, Y., Hong, Y.: An approximate gradient algorithm for constrained distributed convex optimization. IEEE/CAA J. Autom. Sinica 1, 61–67 (2014)
Acknowledgements
Maude J. Blondin gratefully acknowledges the support of a Fonds de recherche Nature et technologies postdoctoral fellowship. Matthew Hale was supported in part by AFOSR Grant No. FA9550-19-1-0169.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors have no conflicts of interest to declare that are relevant to the content of this article.
Additional information
Communicated by Xiaoqi Yang.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix
This appendix contains proofs for lemmas presented in the paper.
Proof of Lemma 3.1
Define \( \mu (k) := \mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(k)\). Then, \(W(k+1) = PW(k)\) can be expressed as
where \(\delta _{i}^j(k) = w_{i}^j(k)-\mu (k) \ge 0\) for \(i,j \in \{1, \ldots , m\} \). Then, we have
By definition, we know that \(\sum _{n=1}^m p_{i}^n =1\). Therefore, we get
Since \(\delta _{n}^j \ge 0 \) and \(p_i^n \ge 0\) for \(i,j,n \in \{1, \ldots ,m\}\), we have \(w_{i}^j(k+1) \ge \mu (k) =\mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(k)\) for \(i,j \in \{1, \ldots , m \}\) and all k. This establishes that the minimum entry of W(k) is non-decreasing: no entry can fall below the previous minimum at the next time step.
Therefore, since (10) defines A(k), the smallest non-zero element of A(k), denoted \(\mathop {\min ^+ \,}\nolimits _{i \in [m]}\mathop {\min ^+}\nolimits _{j \in [m]} [A(k)]^j_i\), is at least \(\mathop {\min \,}\nolimits _{i \in [m]} \mathop {\min }\nolimits _{j \in [m]} w_i^j(k)\). This directly implies that the lower bound can be set as \(\mathop {\min }\nolimits _{j \in [m]} \; \mathop {\min }\nolimits _{i \in [m]} \, w^j_i(0)\). \(\square \)
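The monotonicity argument above can be checked numerically. In the sketch below (the matrix size, the random entries, and the number of iterations are arbitrary choices for illustration), every entry of \(PW\) is a convex combination of a column of \(W\) whenever \(P\) is nonnegative and row-stochastic, so the minimum entry of \(W(k)\) never decreases under \(W(k+1) = PW(k)\):

```python
import numpy as np

rng = np.random.default_rng(0)

m = 5
P = rng.random((m, m))
P /= P.sum(axis=1, keepdims=True)   # normalize rows so P is row-stochastic
W = rng.random((m, m))

# Track the minimum entry of W(k) along the iteration W(k+1) = P W(k).
mins = [W.min()]
for _ in range(20):
    W = P @ W
    mins.append(W.min())

# Each entry of P @ W is a convex combination of entries of W,
# so the minimum can only stay the same or increase.
assert all(b >= a - 1e-12 for a, b in zip(mins, mins[1:]))
```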
Proof of Lemma 4.6
From Lemma 4.5, following [16], and using \(x^i(k+1)=P_{X}[v^i(k)-\alpha _k d^i(k)]\), we have
From the definition of \(\phi ^i(k)\) in (14), the previous relation becomes,
By expanding \(||v^i(k)-\alpha _k d^i(k)-z||^2\), we have
Because \(d^i(k)\) is the gradient of \(f_i(x)\) at \(x=v^i(k)\), we obtain from convexity that
By bringing together (49) and (50), we get
Given the definition of \(v^i(k)\), using the convexity of the norm squared function and the stochasticity of the \(a^i(w^i(k))\), we find that
It then follows from (51) and (52) that
By summing (53) over \(i=1, \ldots , m\), we obtain the following relation:
\(\square \)
Proof of Lemma 4.7
(a) From (15) and based on [16], we have
Using the transition matrix \(\varPhi (k,s) = A(W(k))A(W(k-1)) \cdots A(W(s))\) and following the same logic used to obtain (15), (19) can be re-written for all k and s with \(k > s\) as
By subtracting (55) from (54), we obtain
Taking the norm of (56), we get
Using Lemma 4.3 with \(s=0\) and \(k \rightarrow \infty \), the first right-hand term of (57) is
which can be simplified as
Similarly, using Lemma 4.3, the second right-hand term is
Using Lemma 4.2 and the gradient bound, the third right-hand term is
Using Lemmas 4.2 and 4.3 again, we obtain for the last two terms
We therefore obtain
Since \(0< \beta < 1\), \(\beta ^k \rightarrow 0\) as \(k\rightarrow \infty \). Assuming that \(\alpha _k \rightarrow 0\) and taking the limit superior, we have for all i,
By Lemma 4.4, we have
Therefore, \(\lim _{k\rightarrow \infty }||x^i(k)-y(k)||=0\) for all i.
(b) By multiplying (63) with \(\alpha _k\), we get
Using \(2\alpha _k\alpha _r \le \alpha _k^2+\alpha _r^2\) and \(\alpha _k\beta ^{k-1} \le \alpha _k^2+\beta ^{2(k-1)}\) for any k and r, we obtain
Since \(\sum _{r=0}^{k-2}\beta ^{k-r} \le \dfrac{1}{1-\beta }\), we have
By summing from \(k=1\) to \(k =\infty \), we obtain
In (65), the first term is summable since \(0< \beta < 1\). The second, third, and fifth terms are also summable since \(\sum _{k=1}^{\infty } \alpha _{k}^2 < \infty \). By Lemma 4.4, the fourth term is summable. Thus, \(\sum _{k=1}^{\infty } \alpha _k||x^i(k)-y(k)|| < \infty \text { for all } i\). \(\square \)
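The summability facts invoked in part (b) can be sanity-checked numerically. In the sketch below, \(\beta\) and the step size \(\alpha_k = 1/k\) are illustrative choices satisfying the standing assumptions (\(0< \beta <1\) and \(\sum _{k} \alpha _k^2 < \infty \)):

```python
import math

beta = 0.6

# Geometric tail bound used above: sum_{r=0}^{k-2} beta^{k-r} <= 1/(1-beta).
for k in range(2, 50):
    tail = sum(beta ** (k - r) for r in range(k - 1))
    assert tail <= 1.0 / (1.0 - beta)

# Square-summable steps: for alpha_k = 1/k, the partial sums of alpha_k^2
# stay below the known limit pi^2/6, so the series converges.
partial = sum(1.0 / k**2 for k in range(1, 10_000))
assert partial < math.pi ** 2 / 6
```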
Cite this article
Blondin, M.J., Hale, M. A Decentralized Multi-objective Optimization Algorithm. J Optim Theory Appl 189, 458–485 (2021). https://doi.org/10.1007/s10957-021-01840-z