<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
<head>
<meta name="generator" content="jemdoc, see http://jemdoc.jaboc.net/" />
<meta http-equiv="Content-Type" content="text/html;charset=utf-8" />
<link rel="stylesheet" href="jemdoc.css" type="text/css" />
<link rel="shortcut icon" type="image/x-icon" href="misc/ucla.ico" />
<title>Research overview</title>
</head>
<body>
<table summary="Table for page layout." id="tlayout">
<tr valign="top">
<td id="layout-menu">
<div class="menu-category">Xin Jiang</div>
<div class="menu-item"><a href="index.html">Home</a></div>
<div class="menu-item"><a href="research.html" class="current">Research</a></div>
<div class="menu-item"><a href="publications.html">Publications</a></div>
<div class="menu-item"><a href="talks.html">Talks</a></div>
<div class="menu-item"><a href="teaching.html">Teaching</a></div>
</td>
<td id="layout-content">
<div id="toptitle">
<h1>Research overview</h1>
</div>
<p>My research is rooted in the <b>mathematical foundations of data science</b>, with a primary focus on <b>theory and algorithms for large-scale optimization problems</b> arising in engineering and data science and on <b>machine learning for graphical data</b>. Most of my research falls into one of the following directions.</p>
<ol>
<li><p><b>First-order methods for large-scale systems</b> <br />
I design, analyze and implement proximal first-order methods for large-scale optimization problems.</p>
</li>
<li><p><b>Nonlinear and nonconvex optimization with benign structure</b> <br />
I identify exploitable problem and data structures and design efficient algorithms that take advantage of them.</p>
</li>
<li><p><b>Stochastic constrained optimization</b> <br />
For smooth nonlinear optimization problems, I study stochastic-gradient-based algorithms that can handle deterministic constraints.</p>
</li>
<li><p><b>Distributed and decentralized optimization</b> <br /> I study topology design in decentralized optimization and manage the trade-off between communication costs and convergence rate.</p>
</li>
<li><p><b>Machine learning for graphical data</b> <br />
I aim to build data-centric and robust graph representation models, especially with fairness and privacy concerns.</p>
</li>
</ol>
<h1>Proximal methods with Bregman distances</h1>
<p>Proximal methods have become a standard tool for solving nonsmooth, constrained, large-scale, or distributed optimization problems. To further improve the efficiency of computing proximal operators, I am particularly interested in generalized proximal operators based on Bregman distances. A carefully designed Bregman proximal operator can better match the structure of the problem, thus improving the convergence rate of the proximal algorithm. A Bregman proximal operator can also be easier to compute than its Euclidean counterpart, thereby reducing the per-iteration complexity of a proximal algorithm.</p>
<table class="imgtable"><tr><td>
<img src="research/breg-3op.jpg" alt="Illustration of Bregman three-operator splitting methods" width="850px" height="230px" /> </td>
<td align="left"></td></tr></table>
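<p>A minimal sketch of this idea (illustrative code, not taken from the papers below): on the probability simplex, the Bregman proximal step with the negative-entropy kernel reduces to a closed-form multiplicative update, while the corresponding Euclidean step requires a sort-based projection.</p>

```python
import numpy as np

def entropic_prox(x, g, t):
    """Bregman proximal step on the probability simplex with the
    negative-entropy kernel: argmin_u  t*<g, u> + D_KL(u, x).
    Closed form: multiplicative update followed by normalization."""
    u = x * np.exp(-t * g)
    return u / u.sum()

def euclidean_proj_simplex(v):
    """Euclidean projection onto the simplex (sort-based, O(n log n))."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

x = np.full(5, 0.2)                       # current iterate on the simplex
g = np.array([1.0, 2.0, 0.5, 3.0, 1.5])   # gradient at x
t = 0.1                                   # step size

xb = entropic_prox(x, g, t)               # one Bregman (mirror-descent) step
xe = euclidean_proj_simplex(x - t * g)    # one Euclidean projected step
```

<p>Both steps return a point on the simplex, but the entropic update costs O(n) per iteration versus O(n log n) for the sort-based Euclidean projection.</p>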
<p>Selected publications:</p>
<ul>
<li><p><a href="https://link.springer.com/content/pdf/10.1007/s10957-022-02125-9.pdf">Bregman three-operator splitting methods</a> <br />
<b>X. Jiang</b> and L. Vandenberghe, 2023</p>
</li>
<li><p><a href="https://link.springer.com/content/pdf/10.1007/s10589-021-00339-7.pdf">Bregman primal-dual first-order method and application to sparse semidefinite programming</a> <br />
<b>X. Jiang</b> and L. Vandenberghe, 2022</p>
</li>
</ul>
<h1>Structured nonlinear and nonconvex optimization</h1>
<p>Exploiting problem and data structure is important in both convex and nonconvex optimization. Convex conic optimization, for example, is the basis for general-purpose solvers as well as an important modeling tool for various applications. On the other hand, for certain nonconvex optimization problems, specific problem structure can be leveraged to find a global optimum. My research in this direction exploits structures such as the positive semidefinite (PSD) matrix cone, the monotone cone, and difference-of-convex (DC) programming.</p>
<table class="imgtable"><tr><td>
<img src="research/minrank.jpg" alt="Illustration of minimum-rank positive semidefinite matrix completion" width="800px" height="350px" /> </td>
<td align="left"></td></tr></table>
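<p>As one concrete example of exploiting DC structure, the basic difference-of-convex algorithm (DCA) linearizes the concave part at each iterate and solves the resulting convex subproblem. The decomposition below is purely illustrative:</p>

```python
def dca(x0, grad_h, solve_subproblem, iters=60):
    """Basic DCA for f = g - h with g, h convex:
    x_{k+1} = argmin_x  g(x) - grad_h(x_k) * x."""
    x = x0
    for _ in range(iters):
        x = solve_subproblem(grad_h(x))
    return x

# Illustrative DC decomposition of the nonconvex f(x) = x**4 - 2*x**2:
#   g(x) = x**4 (convex),  h(x) = 2*x**2 (convex)
grad_h = lambda x: 4.0 * x
# The subproblem argmin_x x**4 - s*x satisfies 4*x**3 = s.
solve_sub = lambda s: (1.0 if s >= 0 else -1.0) * (abs(s) / 4.0) ** (1.0 / 3.0)

x_star = dca(0.5, grad_h, solve_sub)  # iterates x_{k+1} = x_k**(1/3) -> 1.0
```

<p>Each DCA iteration decreases f; here the iterates converge to the stationary point x = 1, which for this toy objective is also a global minimizer.</p>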
<p>Selected publications:</p>
<ul>
<li><p><a href="publications/dcprox.pdf">A globally convergent difference-of-convex algorithmic framework and application to log-determinant optimization problems</a> <br />
C. Yao and <b>X. Jiang</b>, 2023</p>
</li>
<li><p><a href="https://orbit.dtu.dk/en/publications/minimum-rank-positive-semidefinite-matrix-completion-with-chordal">Minimum-rank positive semidefinite matrix completion with chordal patterns and applications to semidefinite relaxations</a> <br />
<b>X. Jiang</b>, Y. Sun, M. S. Andersen, and L. Vandenberghe, 2023</p>
</li>
<li><p><a href="https://link.springer.com/content/pdf/10.1007/s10589-021-00339-7.pdf">Bregman primal-dual first-order method and application to sparse semidefinite programming</a> <br />
<b>X. Jiang</b> and L. Vandenberghe, 2022</p>
</li>
</ul>
<h1>Distributed and decentralized optimization</h1>
<p>Distributed and decentralized methods allow computational agents to communicate over a network and collaboratively solve an optimization problem. This area has received increasing attention, partly due to recent interest in federated learning and its privacy considerations. My research involves the design and analysis of distributed algorithms as well as the design of network topologies in decentralized optimization.</p>
<table class="imgtable"><tr><td>
<img src="research/hypercuboid.jpg" alt="Illustration of hypercuboid network topologies for decentralized optimization" width="800px" height="175px" /> </td>
<td align="left"></td></tr></table>
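<p>A toy numerical illustration of the trade-off (function names and setup are illustrative): consensus speed is governed by the second-largest singular value of the mixing matrix, while communication cost per round is governed by node degree. A complete graph averages exactly in one round but costs n - 1 messages per node; a ring costs only 2 messages per node but mixes slowly.</p>

```python
import numpy as np

def ring_mixing(n):
    """Doubly stochastic mixing matrix for a ring: each node averages
    with its two neighbors (2 messages per node per round)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def complete_mixing(n):
    """Mixing matrix for the complete graph: exact averaging in one
    round, at the cost of n - 1 messages per node."""
    return np.full((n, n), 1.0 / n)

def mixing_rate(W):
    """Second-largest singular value of W; smaller means faster consensus."""
    return np.sort(np.linalg.svd(W, compute_uv=False))[-2]

n = 16
slow = mixing_rate(ring_mixing(n))      # close to 1: cheap rounds, slow mixing
fast = mixing_rate(complete_mixing(n))  # 0: one-shot consensus, costly rounds
```

<p>For the ring, the rate equals (1 + 2 cos(2&#960;/n))/3, which approaches 1 as n grows, so the number of rounds needed for a given accuracy grows with the network size.</p>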
<p>Selected publications:</p>
<ul>
<li><p><a href="publications/hierarchical-banded.pdf">Sparse factorization of the square all-ones matrix of arbitrary order</a> <br />
<b>X. Jiang</b>, E. D. H. Nguyen, C. A. Uribe, and B. Ying, 2024</p>
</li>
<li><p><a href="publications/finite-time-consensus.pdf">On graphs with finite-time consensus and their use in gradient tracking</a> <br />
E. D. H. Nguyen, <b>X. Jiang</b>, B. Ying, and C. A. Uribe, 2023</p>
</li>
</ul>
<h1>Stochastic constrained optimization</h1>
<p>My research focuses on the design, analysis and implementation of efficient and reliable algorithms for solving large-scale nonlinear optimization problems, especially when only stochastic estimates of objective gradients (rather than true gradients) are accessible.</p>
<table class="imgtable"><tr><td>
<img src="research/sqp-analysis.jpg" alt="Illustration of convergence analysis for stochastic SQP methods" width="800px" height="250px" /> </td>
<td align="left"></td></tr></table>
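<p>The sketch below illustrates the setting, not the stochastic SQP methods analyzed in the paper below: only noisy gradient estimates are available, while a deterministic (here affine) constraint is enforced exactly at every iterate via projection.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
a, b, c = rng.standard_normal(n), rng.standard_normal(n), 1.0

def project_affine(x):
    """Exact projection onto the deterministic constraint a @ x = c."""
    return x + (c - a @ x) / (a @ a) * a

# minimize 0.5 * ||x - b||**2  subject to  a @ x = c,
# with stochastic gradient estimates and diminishing step sizes
x = project_affine(np.zeros(n))
for k in range(1, 3001):
    g = (x - b) + 0.1 * rng.standard_normal(n)  # noisy gradient estimate
    x = project_affine(x - g / k)

x_star = b + (c - a @ b) / (a @ a) * a          # closed-form solution
```

<p>The constraint holds exactly at every iterate despite the noise in the gradients; only the optimality gap is stochastic and shrinks with the diminishing steps.</p>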
<p>Selected publications:</p>
<ul>
<li><p><a href="publications/sqp-analysis.pdf">Almost-sure convergence of iterates and multipliers in stochastic sequential quadratic optimization</a> <br />
F. E. Curtis, <b>X. Jiang</b>, and Q. Wang, 2023</p>
</li>
</ul>
<h1>Machine learning for graphical data</h1>
<p>My research in this direction aims at building data-centric, scalable, and robust graph representation models. My recent interests include graph representation models that address fairness and privacy concerns.</p>
<table class="imgtable"><tr><td>
<img src="research/apt.jpg" alt="Illustration of graph representation learning" width="800px" height="450px" /> </td>
<td align="left"></td></tr></table>
<p>Selected publications:</p>
<ul>
<li><p><a href="publications/better-w-less-neurips23.pdf">Better with less: A data-active perspective on pre-training graph neural networks</a> [NeurIPS’23] <br />
J. Xu, R. Huang, <b>X. Jiang</b>, Y. Cao, C. Yang, C. Wang, and Y. Yang <br /></p>
</li>
<li><p><a href="publications/stack-aaai22.pdf">Blindfolded attackers still threatening: Strict black-box adversarial attacks on graphs</a> [AAAI’22] <br />
J. Xu, Y. Sun, <b>X. Jiang</b>, Y. Wang, C. Wang, J. Lu, and Y. Yang <br /></p>
</li>
<li><p><a href="publications/robustgraph-aaai22.pdf">Unsupervised adversarially robust representation learning on graphs</a> [AAAI’22] <br />
J. Xu, Y. Yang, J. Chen, <b>X. Jiang</b>, C. Wang, J. Lu, and Y. Yang <br /></p>
</li>
</ul>
</td>
</tr>
</table>
</body>
</html>