Section 3.4: Stochastic Mirror Descent
The SGD and SPGD updates can be generalized by replacing the squared Euclidean distance with a Bregman divergence. Let \(\varphi\) be an \(\alpha\)-strongly convex function with respect to a general norm \(\|\cdot\|\), and recall the Bregman divergence \(D_{\varphi}(\mathbf{w}, \mathbf{w}')\) induced by \(\varphi\) from Definition 1.7. Due to the strong convexity of \(\varphi\), we have
\[\begin{align}\label{eqn:scvphi} D_{\varphi}(\mathbf{w}, \mathbf{w}')\geq \frac{\alpha}{2}\|\mathbf{w} - \mathbf{w}'\|^2. \end{align}\]
The stochastic mirror descent (SMD) update applied to the non-smooth convex optimization problem is given by
\[\begin{align}\label{eqn:smd1} \mathbf{w}_{t+1} = \arg\min_{\mathbf{w}\in\mathbb{R}^d} \mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{\eta_t}D_\varphi(\mathbf{w}, \mathbf{w}_{t}). \end{align}\]
The SMD update applied to the composite optimization problem is given by
\[\begin{align}\label{eqn:smd2} \mathbf{w}_{t+1} = \arg\min_{\mathbf{w}\in\mathbb{R}^d} \nabla g(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{\eta_t}D_\varphi(\mathbf{w}, \mathbf{w}_{t}) + r(\mathbf{w}). \end{align}\]
Algorithm 5: SMD
- Input: learning rate schedule \(\{\eta_t\}_{t=1}^{T}\), starting point \(\mathbf{w}_1\)
- For \(t=1,\dotsc,T\)
- Compute an unbiased gradient estimator \(\mathbf{z}_t = \nabla g(\mathbf{w}_t; \zeta_t)\)
- Update the model \(\mathbf{w}\) by \(\mathbf{w}_{t+1} = \arg\min_{\mathbf{w}\in\mathbb{R}^d} \mathbf{z}_t^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}, \mathbf{w}_t) + r(\mathbf{w})\).
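The following minimal sketch expresses Algorithm 5 as a generic loop. It is not from the text: the names `grad_estimator`, `bregman_prox`, and `etas` are illustrative, and the Bregman/proximal step is assumed to be supplied by the caller (Examples 3.7 and 3.8 below give two concrete instances of that step).

```python
import numpy as np

def smd(grad_estimator, bregman_prox, w1, etas):
    """Generic stochastic mirror descent loop (a sketch of Algorithm 5).

    grad_estimator(w) returns an unbiased stochastic (sub)gradient z_t at w.
    bregman_prox(w_t, z_t, eta) returns
        argmin_w  z_t^T (w - w_t) + (1/eta) * D_phi(w, w_t) + r(w),
    i.e. the mirror step induced by the chosen phi and r.
    """
    w = np.asarray(w1, dtype=float)
    iterates = []
    for eta in etas:
        z = grad_estimator(w)        # unbiased gradient estimate z_t
        w = bregman_prox(w, z, eta)  # mirror-descent update giving w_{t+1}
        iterates.append(w)
    # Return the averaged iterate (1/T) * sum_t w_{t+1}, as analyzed in Theorem 3.10.
    return np.mean(iterates, axis=0)
```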
Examples
Example 3.7: Using Euclidean distance
By choosing \(\varphi(\cdot) = \tfrac{1}{2}\|\cdot\|_2^2\), which is \(1\)-strongly convex with respect to \(\|\cdot\|_2\), the Bregman divergence reduces to the squared Euclidean distance \(D_{\varphi}(\mathbf{w}, \mathbf{w}') = \tfrac{1}{2}\|\mathbf{w} - \mathbf{w}'\|_2^2\), and the above updates simplify to the SGD and SPGD updates.
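Concretely, completing the square shows how the composite update (\(\ref{eqn:smd2}\)) collapses in this case (writing \(\mathrm{prox}_{\eta_t r}\) for the proximal mapping of \(\eta_t r\)):
\[\begin{align*}
\mathbf{w}_{t+1} &= \arg\min_{\mathbf{w}\in\mathbb{R}^d} \nabla g(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{2\eta_t}\|\mathbf{w} - \mathbf{w}_t\|_2^2 + r(\mathbf{w})\\
&= \arg\min_{\mathbf{w}\in\mathbb{R}^d} \frac{1}{2\eta_t}\big\|\mathbf{w} - \big(\mathbf{w}_t - \eta_t \nabla g(\mathbf{w}_t; \zeta_t)\big)\big\|_2^2 + r(\mathbf{w}) = \mathrm{prox}_{\eta_t r}\big(\mathbf{w}_t - \eta_t \nabla g(\mathbf{w}_t; \zeta_t)\big),
\end{align*}\]
and with \(r\equiv 0\) this is exactly the SGD step \(\mathbf{w}_{t+1} = \mathbf{w}_t - \eta_t \nabla g(\mathbf{w}_t; \zeta_t)\).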
Example 3.8: Using KL Divergence
Let us consider another example, where \(r(\mathbf{w}) = \mathbb{I}_{0-\infty}(\mathbf{w} \in \Delta_d)\) and \(\Delta_d = \{\mathbf{w} \in \mathbb{R}^d : \mathbf{w} \geq 0,\ \sum_{i=1}^d [\mathbf{w}]_i = 1\}\) is the probability simplex.
By choosing \(\varphi(\mathbf{w}) = \sum_{i=1}^d [\mathbf{w}]_i \log [\mathbf{w}]_i\), which is \(1\)-strongly convex w.r.t \(\|\cdot\|_1\) (cf. Lemma 1.10), the Bregman divergence reduces to the KL-divergence: \[ D_{\varphi}(\mathbf{w}, \mathbf{u}) = \sum_{i=1}^d [\mathbf{w}]_i \log \frac{[\mathbf{w}]_i}{[\mathbf{u}]_i}, \] and the SMD update (\(\ref{eqn:smd2}\)) simplifies to \[ [\mathbf{w}_{t+1}]_i = \frac{[\mathbf{w}_{t}]_i \exp(-\eta_t [\nabla g(\mathbf{w}_t; \zeta_t)]_i)} {\sum_{j=1}^d [\mathbf{w}_{t}]_j \exp(-\eta_t [\nabla g(\mathbf{w}_t; \zeta_t)]_j)}, \] which is also known as stochastic exponential gradient descent.
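As a quick sanity check of this closed form, here is a minimal sketch of one stochastic exponential gradient step on the simplex. The function name is ours, and the max-subtraction is only for numerical stability; it does not change the update.

```python
import numpy as np

def exp_gradient_step(w, z, eta):
    """One SMD step with the negative-entropy mirror map (Example 3.8).

    Implements [w_{t+1}]_i proportional to [w_t]_i * exp(-eta * [z]_i),
    renormalized to sum to one. Requires w > 0 componentwise.
    """
    log_w = np.log(w) - eta * z   # work in log space
    log_w -= log_w.max()          # subtract the max for numerical stability
    w_new = np.exp(log_w)
    return w_new / w_new.sum()

# Usage: start from the uniform distribution over d coordinates.
d = 5
rng = np.random.default_rng(0)
w = np.full(d, 1.0 / d)
w = exp_gradient_step(w, z=rng.standard_normal(d), eta=0.1)
```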
Convergence Analysis
The following lemma is similar to Lemma 1.7.
Lemma 3.9
If \(r(\cdot)\) is convex and \(\varphi\) is \(\alpha\)-strongly convex w.r.t a norm \(\|\cdot\|\), with \[\begin{align*} \mathbf{z}_1&=\arg\min_{\mathbf{w}} \mathbf{w}^\top \mathbf{a} + r(\mathbf{w}) + \frac{1}{\eta}D_{\varphi}(\mathbf{w}, \mathbf{z}),\\ \mathbf{z}_2&=\arg\min_{\mathbf{w}} \mathbf{w}^\top \mathbf{b} + r(\mathbf{w}) + \frac{1}{\eta}D_{\varphi}(\mathbf{w}, \mathbf{z}), \end{align*}\] we have \(\|\mathbf{z}_1 - \mathbf{z}_2 \| \leq \frac{\eta}{\alpha}\|\mathbf{a}- \mathbf{b}\|_*\).
Proof. By the optimality of \(\mathbf{z}_1\) and \(\mathbf{z}_2\), we have \[\begin{align*} \mathbf{u} := & \frac{\nabla\varphi(\mathbf{z}) - \nabla\varphi(\mathbf{z}_1)}{\eta} - \mathbf{a} \in \partial r(\mathbf{z}_1),\\ \mathbf{v} := & \frac{\nabla\varphi(\mathbf{z}) - \nabla\varphi(\mathbf{z}_2)}{\eta} - \mathbf{b} \in \partial r(\mathbf{z}_2). \end{align*}\] Since \(r\) is convex, we have \[\begin{align*} r(\mathbf{z}_1)\geq r(\mathbf{z}_2) + \mathbf{v}^{\top} (\mathbf{z}_1 - \mathbf{z}_2),\\ r(\mathbf{z}_2)\geq r(\mathbf{z}_1) + \mathbf{u}^{\top} (\mathbf{z}_2 - \mathbf{z}_1). \end{align*}\] Adding them together, we have \[\begin{align*} 0 \leq (\mathbf{u} - \mathbf{v})^{\top}(\mathbf{z}_1 - \mathbf{z}_2)= \frac{1}{\eta} (\eta\mathbf{b} - \eta\mathbf{a} + \nabla\varphi(\mathbf{z}_2) - \nabla\varphi(\mathbf{z}_1))^{\top}( \mathbf{z}_1 - \mathbf{z}_2), \end{align*}\] which implies \[\begin{align*} \frac{1}{\eta} (\nabla\varphi(\mathbf{z}_1) - \nabla\varphi(\mathbf{z}_2))^{\top}(\mathbf{z}_1 - \mathbf{z}_2) \leq (\mathbf{b} - \mathbf{a})^{\top}(\mathbf{z}_1 - \mathbf{z}_2) \leq \|\mathbf{b} - \mathbf{a}\|_* \| \mathbf{z}_1 - \mathbf{z}_2 \|. \end{align*}\] Since \(\varphi\) is \(\alpha\)-strongly convex, similar to Lemma 1.6 (c) we have \[\begin{align*} (\nabla\varphi(\mathbf{z}_1) - \nabla\varphi(\mathbf{z}_2))^{\top}(\mathbf{z}_1 - \mathbf{z}_2)\geq \alpha \|\mathbf{z}_1 - \mathbf{z}_2\|^2. \end{align*}\] Combining the above two inequalities, we have \(\|\mathbf{z}_1 - \mathbf{z}_2\| \leq \frac{\eta}{\alpha}\|\mathbf{a} - \mathbf{b}\|_*.\)
■
Lemma 3.10
(Generalized Three-point Equality) For any \(\mathbf{w}, \mathbf{w}_t, \mathbf{w}_{t+1}\), we have \[\begin{align*} (\nabla \varphi(\mathbf{w}_t) - \nabla \varphi(\mathbf{w}_{t+1}))^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) = D_\varphi(\mathbf{w}, \mathbf{w}_t) - D_\varphi(\mathbf{w}, \mathbf{w}_{t+1}) - D_\varphi(\mathbf{w}_{t+1}, \mathbf{w}_t). \end{align*}\]
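Proof. The identity follows by expanding each Bregman divergence from Definition 1.7 and canceling the \(\varphi\) terms: \[\begin{align*} &D_\varphi(\mathbf{w}, \mathbf{w}_t) - D_\varphi(\mathbf{w}, \mathbf{w}_{t+1}) - D_\varphi(\mathbf{w}_{t+1}, \mathbf{w}_t)\\ &= \big[\varphi(\mathbf{w}) - \varphi(\mathbf{w}_t) - \nabla\varphi(\mathbf{w}_t)^{\top}(\mathbf{w} - \mathbf{w}_t)\big] - \big[\varphi(\mathbf{w}) - \varphi(\mathbf{w}_{t+1}) - \nabla\varphi(\mathbf{w}_{t+1})^{\top}(\mathbf{w} - \mathbf{w}_{t+1})\big]\\ &\quad - \big[\varphi(\mathbf{w}_{t+1}) - \varphi(\mathbf{w}_t) - \nabla\varphi(\mathbf{w}_t)^{\top}(\mathbf{w}_{t+1} - \mathbf{w}_t)\big]\\ &= \nabla\varphi(\mathbf{w}_t)^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) - \nabla\varphi(\mathbf{w}_{t+1})^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) = (\nabla \varphi(\mathbf{w}_t) - \nabla \varphi(\mathbf{w}_{t+1}))^{\top}(\mathbf{w}_{t+1} - \mathbf{w}). \end{align*}\]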
■
The following lemma is similar to Lemma 3.6.
Lemma 3.11
Consider the update
\[\begin{align}\label{eqn:smd-g} \mathbf{w}_{t+1} = \arg\min_{\mathbf{w}\in\mathbb{R}^d} \mathbf{z}_t^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{\eta_t}D_{\varphi}(\mathbf{w},\mathbf{w}_t) + r(\mathbf{w}). \end{align}\]
If \(D_r(\mathbf{w}, \mathbf{w}')\geq \mu D_{\varphi}(\mathbf{w}, \mathbf{w}')\) for some \(\mu\geq 0\), where \(D_r\) denotes the generalized Bregman divergence induced by \(r\) (defined with a subgradient in place of the gradient), then for any \(\mathbf{w}\) we have \[\begin{align*} &\mathbf{z}_t^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) + r(\mathbf{w}_{t+1}) - r(\mathbf{w})\leq \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}, \mathbf{w}_t) - \left(\frac{1}{\eta_t}+ \mu\right)D_{\varphi}(\mathbf{w}, \mathbf{w}_{t+1})\\ &- \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}_{t+1}, \mathbf{w}_t). \end{align*}\]
Proof. By the first-order optimality condition of (\(\ref{eqn:smd-g}\)), we have
\[\begin{align}\label{eqn:1st} (\mathbf{z}_t + \partial r(\mathbf{w}_{t+1}) + \frac{1}{\eta_t}(\nabla\varphi(\mathbf{w}_{t+1}) - \nabla\varphi(\mathbf{w}_t)))^{\top}(\mathbf{w} - \mathbf{w}_{t+1})\geq 0. \end{align}\]
By the assumption on \(r\), we have \[\begin{align*} \mu D_{\varphi}(\mathbf{w}, \mathbf{w}_{t+1})\leq r(\mathbf{w}) - r(\mathbf{w}_{t+1}) - \partial r(\mathbf{w}_{t+1})^{\top}(\mathbf{w} - \mathbf{w}_{t+1}). \end{align*}\] Adding the above two inequalities, we have \[\begin{align*} &\mathbf{z}_t^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) + r(\mathbf{w}_{t+1}) - r(\mathbf{w})\\ &\leq \frac{1}{\eta_t}(\nabla\varphi(\mathbf{w}_t) - \nabla\varphi(\mathbf{w}_{t+1}))^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) - \mu D_{\varphi}(\mathbf{w}, \mathbf{w}_{t+1})\\ &=\frac{1}{\eta_t}D_\varphi(\mathbf{w}, \mathbf{w}_t) - \left(\frac{1}{\eta_t}+\mu\right)D_\varphi(\mathbf{w}, \mathbf{w}_{t+1}) - \frac{1}{\eta_t}D_\varphi(\mathbf{w}_{t+1}, \mathbf{w}_t), \end{align*}\] where the last equality uses Lemma 3.10. ■
Non-smooth Composite Problems
Let us first analyze the SMD update (\(\ref{eqn:smd2}\)) for the composite problem under the following assumption, which restates the smoothness and variance conditions in terms of the general norm \(\|\cdot\|\) and its dual norm \(\|\cdot\|_*\).
Assumption 3.8
Suppose the following conditions hold:
- \(g\) is convex and \(L\)-smooth with respect to the norm \(\|\cdot\|\), and \(r\) is convex.
- There exists \(\sigma>0\) such that \(\mathbb{E}_{\zeta}[\nabla g(\mathbf{w}; \zeta)]=\nabla g(\mathbf{w})\) and \(\mathbb{E}_{\zeta}[\|\nabla g(\mathbf{w}; \zeta)- \nabla g(\mathbf{w})\|_*^2]\leq \sigma^2\) for all \(\mathbf{w}\).
Theorem 3.10
Suppose Assumption 3.8 holds. Let \(\eta_t=\eta\leq \alpha/L\) and \(\bar{\mathbf{w}}_{T} = \frac{1}{T}\sum_{t=1}^T\mathbf{w}_{t+1}\). After \(T\) iterations of the SMD update (\(\ref{eqn:smd2}\)) for the composite problem, we have \[\begin{align*} \mathbb{E}[F(\bar{\mathbf{w}}_{T}) - F(\mathbf{w}_*)]& \leq \frac{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{\eta T}+ \frac{\eta \sigma^2}{\alpha}. \end{align*}\] If \(\eta= \min\left(\frac{\alpha}{L}, \frac{\sqrt{\alpha D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}\sigma}\right)\), then \[\begin{align*} \mathbb{E}\left[F(\bar{\mathbf{w}}_{T}) - F(\mathbf{w}_*)\right]\leq \frac{2\sigma\sqrt{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T\alpha }}+ \frac{2LD_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{T\alpha}. \end{align*}\]
💡 Why it matters
The key difference between the above result for SMD and that for SPGD in Theorem 3.5 lies in the divergence measure and in the variance bound, which is now measured in the dual norm. Consider \(r(\mathbf{w})= \mathbb{I}_{0-\infty}(\mathbf{w}\in\Delta_d)\). With the Euclidean setup, the convergence upper bound is dominated by \(O(\frac{\sigma_2\|\mathbf{w}_1-\mathbf{w}_*\|_2}{\sqrt{T}})\), where \(\sigma_2^2\geq \mathbb{E}_{\zeta}\|\nabla g(\mathbf{w}; \zeta) - \nabla g(\mathbf{w})\|_2^2\) for all \(\mathbf{w}\). In contrast, with the stochastic exponential gradient descent update, the convergence upper bound is dominated by \(O(\frac{\sigma_\infty\sqrt{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}})\), where \(\sigma_\infty^2\geq \mathbb{E}_{\zeta}\|\nabla g(\mathbf{w}; \zeta) - \nabla g(\mathbf{w})\|_\infty^2\) for all \(\mathbf{w}\). If we set \([\mathbf{w}_{1}]_i= \frac{1}{d}\) for all \(i\), then \(D_\varphi(\mathbf{w}_*, \mathbf{w}_1)\leq \log d\) for all \(\mathbf{w}_*\in \Delta_d\). In addition, \(\|\mathbf{w}_1 - \mathbf{w}_*\|_2\) could be \(O(1)\). However, the constant \(\sigma^2_\infty\) can be smaller than \(\sigma^2_2\) by a factor of \(d\). Hence \(\frac{\sigma_\infty\sqrt{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sigma_2\|\mathbf{w}_1-\mathbf{w}_*\|_2}= O(\frac{\sqrt{\log d}}{\sqrt{d}})\), which indicates that stochastic exponential gradient descent may converge faster than SGD in this setting.
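The factor-of-\(d\) gap between \(\sigma_2^2\) and \(\sigma_\infty^2\) can be illustrated numerically. The sketch below assumes a stylized noise model (gradient noise with i.i.d. standard normal coordinates), which is not implied by Assumption 3.8 but shows the typical behavior: \(\mathbb{E}\|\xi\|_2^2 = d\), while \(\mathbb{E}\|\xi\|_\infty^2\) grows only logarithmically in \(d\).

```python
import numpy as np

# Stylized illustration of the variance gap between the l2 and l_infinity norms,
# assuming gradient noise with i.i.d. standard normal coordinates.
rng = np.random.default_rng(0)
for d in (10, 100, 1000):
    xi = rng.standard_normal((5000, d))                      # 5000 noise samples
    sigma2_sq = np.mean(np.sum(xi**2, axis=1))               # estimates E||xi||_2^2 (= d)
    sigma_inf_sq = np.mean(np.max(np.abs(xi), axis=1)**2)    # estimates E||xi||_inf^2 (~ 2 log d)
    print(d, round(sigma2_sq, 1), round(sigma_inf_sq, 2), round(sigma2_sq / sigma_inf_sq, 1))
```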
Proof. From Lemma 3.11, applied with \(\mathbf{z}_t = \nabla g(\mathbf{w}_t; \zeta_t)\) and \(\mu=0\) (since \(r\) is merely convex, \(D_r\geq 0\)), we have \[\begin{align*} &\nabla g(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_{t+1} - \mathbf{w}) + r(\mathbf{w}_{t+1}) - r(\mathbf{w})\leq \frac{1}{\eta_t}(D_{\varphi}(\mathbf{w}, \mathbf{w}_t) - D_{\varphi}(\mathbf{w}, \mathbf{w}_{t+1}))\\ &- \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}_{t+1}, \mathbf{w}_t). \end{align*}\] As in Eqn. 6 in Sec. 3.1, by the convexity and \(L\)-smoothness of \(g\) we have \[\begin{align*} g(\mathbf{w}_{t+1})\leq g(\mathbf{w}) + \nabla g(\mathbf{w}_t)^{\top}(\mathbf{w}_t - \mathbf{w}) +\nabla g(\mathbf{w}_t)^{\top}(\mathbf{w}_{t+1} - \mathbf{w}_t) + \frac{L}{2}\|\mathbf{w}_{t+1} - \mathbf{w}_t\|^2. \end{align*}\] Adding the above two inequalities for \(\mathbf{w}=\mathbf{w}_*\), we have
\[\begin{align} &F(\mathbf{w}_{t+1})-F(\mathbf{w}_*)\leq \frac{1}{\eta_t}(D_{\varphi}(\mathbf{w}_*, \mathbf{w}_t) - D_{\varphi}(\mathbf{w}_*, \mathbf{w}_{t+1}))- \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}_{t+1}, \mathbf{w}_t)\notag\\ &+\frac{L}{2}\|\mathbf{w}_{t+1} - \mathbf{w}_t\|^2+ (\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\mathbf{w}_{t+1} - \mathbf{w}_*).\label{eqn:Fr2} \end{align}\]
Similar to the analysis of SPGD, we define: \[\begin{align*} \hat{\mathbf{w}}_{t+1} = \arg\min_{\mathbf{w}\in\mathbb{R}^d} \nabla g(\mathbf{w}_t)^{\top}(\mathbf{w} - \mathbf{w}_t) + \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}, \mathbf{w}_t) + r(\mathbf{w}), \end{align*}\] which uses the full gradient \(\nabla g(\mathbf{w}_t)\), making it independent of \(\zeta_t\). Then we have
\[\begin{align}\label{eqn:gr2} &(\nabla g(\mathbf{w}_t) - \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\mathbf{w}_{t+1} - \mathbf{w}_*)\\ &= (\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\mathbf{w}_{t+1} - \hat{\mathbf{w}}_{t+1}) + (\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\hat{\mathbf{w}}_{t+1} - \mathbf{w}_*)\notag. \end{align}\]
In addition,
\[\begin{align} &(\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\mathbf{w}_{t+1} - \hat{\mathbf{w}}_{t+1})\leq \|\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t)\|_*\|\mathbf{w}_{t+1} - \hat{\mathbf{w}}_{t+1}\|\notag\\ &\leq \frac{\eta_t}{\alpha} \|\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t)\|_*^2,\label{eqn:ggr2} \end{align}\] where the last inequality follows from Lemma 3.9, since \(\mathbf{w}_{t+1}\) and \(\hat{\mathbf{w}}_{t+1}\) solve the same problem with linear terms \(\nabla g(\mathbf{w}_t; \zeta_t)\) and \(\nabla g(\mathbf{w}_t)\), respectively. Combining (\(\ref{eqn:Fr2}\)), (\(\ref{eqn:gr2}\)), and (\(\ref{eqn:ggr2}\)) and using (\(\ref{eqn:scvphi}\)), we have \[\begin{align*} &F(\mathbf{w}_{t+1})-F(\mathbf{w}_*) \leq \frac{1}{\eta_t}(D_\varphi(\mathbf{w}_*, \mathbf{w}_t) - D_\varphi(\mathbf{w}_*, \mathbf{w}_{t+1})) - \left(\frac{\alpha}{2\eta_t} - \frac{L}{2}\right)\|\mathbf{w}_t - \mathbf{w}_{t+1}\|^2\\ &+ (\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\hat{\mathbf{w}}_{t+1} - \mathbf{w}_*) + \frac{\eta_t}{\alpha} \|\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t)\|_*^2. \end{align*}\] Taking expectation over \(\zeta_t\) on both sides, and noting that \(\mathbb{E}_{\zeta_t}[(\nabla g(\mathbf{w}_t)- \nabla g(\mathbf{w}_t; \zeta_t))^{\top}(\hat{\mathbf{w}}_{t+1} - \mathbf{w}_*)]=0\) because \(\hat{\mathbf{w}}_{t+1}\) does not depend on \(\zeta_t\), we have \[\begin{align*} \mathbb{E}_{\zeta_t}[F(\mathbf{w}_{t+1})- F(\mathbf{w}_*)] &\leq \mathbb{E}_{\zeta_t}\left[\frac{1}{\eta_t}(D_\varphi(\mathbf{w}_*, \mathbf{w}_t) - D_\varphi(\mathbf{w}_*, \mathbf{w}_{t+1})) - \left(\frac{\alpha}{2\eta_t} - \frac{L}{2}\right)\|\mathbf{w}_t - \mathbf{w}_{t+1}\|^2\right]+ \frac{\eta_t}{\alpha}\sigma^2. \end{align*}\] If \(\eta_t\leq \frac{\alpha}{L}\), we have \[\begin{align*} \mathbb{E}_{\zeta_t}[F(\mathbf{w}_{t+1})-F(\mathbf{w}_*)]& \leq \mathbb{E}_{\zeta_t}\left[\frac{1}{\eta_t}(D_\varphi(\mathbf{w}_*, \mathbf{w}_t) - D_\varphi(\mathbf{w}_*, \mathbf{w}_{t+1}))\right] + \frac{\eta_t}{\alpha}\sigma^2. \end{align*}\] Multiplying both sides by \(\eta_t\), taking the total expectation, summing over \(t=1,\ldots, T\), and dividing by \(\sum_{t=1}^T\eta_t\), we have \[\begin{align*} \mathbb{E}\left[\frac{1}{\sum_{t=1}^T\eta_t}\sum_{t=1}^T\eta_t(F(\mathbf{w}_{t+1}) - F(\mathbf{w}_*))\right]& \leq \frac{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{\sum_{t=1}^T\eta_t}+ \frac{\sum_{t=1}^T\eta_t^2 \sigma^2}{\alpha\sum_{t=1}^T\eta_t}. \end{align*}\] Setting \(\eta_t=\eta\), applying Jensen's inequality to the average iterate \(\bar{\mathbf{w}}_T\), and optimizing the upper bound over \(\eta\) finishes the proof.■
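To see how the second bound in Theorem 3.10 follows from the first, one can check the two cases of \(\eta = \min\big(\frac{\alpha}{L}, \frac{\sqrt{\alpha D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}\sigma}\big)\):
\[\begin{align*}
\frac{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{\eta T}+ \frac{\eta \sigma^2}{\alpha} \leq
\begin{cases}
\dfrac{2\sigma\sqrt{D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T\alpha}} & \text{if } \eta = \dfrac{\sqrt{\alpha D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}\sigma} \leq \dfrac{\alpha}{L},\\[3mm]
\dfrac{2L D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{T\alpha} & \text{if } \eta = \dfrac{\alpha}{L} \leq \dfrac{\sqrt{\alpha D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}\sigma},
\end{cases}
\end{align*}\]
where the second case uses \(\frac{\eta\sigma^2}{\alpha} = \frac{\sigma^2}{L} \leq \frac{L D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}{T\alpha}\), which follows from \(\frac{\alpha}{L}\leq \frac{\sqrt{\alpha D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}\sigma}\). In either case the value is at most the sum of the two expressions, which is the stated bound.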
Non-smooth Problems
Next, we present the convergence analysis of SMD (\(\ref{eqn:smd1}\)) for non-smooth convex objectives under the following assumption.
Assumption 3.9
For any \(\mathbf{w}\), we have \(\mathbb{E}_{\zeta}[\mathcal{G}(\mathbf{w}; \zeta)] \in \partial g(\mathbf{w})\) and \(\mathbb{E}_{\zeta}[\|\mathcal{G}(\mathbf{w}; \zeta)\|_*^2]\leq G^2\).
Theorem 3.11
Suppose Assumption 3.9 holds. Let \(\eta_t =\eta\) for all \(t\) and \(\bar{\mathbf{w}}_{T} = \frac{1}{T}\sum_{t=1}^T\mathbf{w}_{t}\). After \(T\) iterations of the SMD update (\(\ref{eqn:smd1}\)), we have \[\begin{align*} & \mathbb{E}\left[g(\bar{\mathbf{w}}_{T}) - g(\mathbf{w}_*)\right]\leq \frac{D_{\varphi}(\mathbf{w}_*,\mathbf{w}_1)}{\eta T} +\frac{\eta G^2}{2\alpha}. \end{align*}\] If \(\eta= \frac{\sqrt{2\alpha D_\varphi(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}G}\), then \[\begin{align*} \mathbb{E}\left[g(\bar{\mathbf{w}}_{T}) - g(\mathbf{w}_*)\right]\leq \frac{G\sqrt{2D_{\varphi}(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{\alpha T}}. \end{align*}\]
Proof. From Lemma 3.11, applied with \(\mathbf{z}_t = \mathcal{G}(\mathbf{w}_t; \zeta_t)\), \(r\equiv 0\), and \(\mu=0\), we have \[\begin{align*} &\mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_{t+1} - \mathbf{w})\leq \frac{1}{\eta_t}(D_{\varphi}(\mathbf{w}, \mathbf{w}_t) - D_{\varphi}(\mathbf{w}, \mathbf{w}_{t+1}))- \frac{1}{\eta_t}D_{\varphi}(\mathbf{w}_{t+1}, \mathbf{w}_t). \end{align*}\] Rearranging it, we get \[\begin{align*} \eta_t&\mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_t-\mathbf{w})\\ &\leq D_\varphi(\mathbf{w},\mathbf{w}_t)- D_\varphi(\mathbf{w},\mathbf{w}_{t+1})- D_\varphi(\mathbf{w}_{t+1}, \mathbf{w}_t)+ \eta_t \mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_t- \mathbf{w}_{t+1})\\ &\leq D_\varphi(\mathbf{w}, \mathbf{w}_t)- D_\varphi(\mathbf{w}, \mathbf{w}_{t+1})- D_\varphi(\mathbf{w}_{t+1}, \mathbf{w}_t)\\ &+ \frac{\eta_t^2}{2\alpha}\|\mathcal{G}(\mathbf{w}_t; \zeta_t)\|_*^2+ \frac{\alpha}{2}\|\mathbf{w}_t- \mathbf{w}_{t+1}\|^2, \end{align*}\] where the last inequality uses the definition of the dual norm (\(\mathbf{a}^{\top}\mathbf{b}\leq \|\mathbf{a}\|_*\|\mathbf{b}\|\)) together with Young's inequality \(ab\leq \frac{a^2}{2\alpha}+\frac{\alpha b^2}{2}\). Following (\(\ref{eqn:scvphi}\)), we have
\[\begin{equation}\label{eqn:smd-ns} \eta_t\mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_t- \mathbf{w})\leq D_\varphi(\mathbf{w}, \mathbf{w}_t)- D_\varphi(\mathbf{w}, \mathbf{w}_{t+1})+ \frac{\eta_t^2}{2\alpha}\|\mathcal{G}(\mathbf{w}_t; \zeta_t)\|_*^2. \end{equation}\]
The remaining proof is similar to that of Theorem 3.2.■
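For completeness, the remaining steps can be sketched as follows (paralleling Theorem 3.2, whose proof is not reproduced here): set \(\mathbf{w}=\mathbf{w}_*\) in (\(\ref{eqn:smd-ns}\)), let \(\eta_t=\eta\), take total expectations, and use Assumption 3.9 together with the convexity of \(g\) to obtain \[\begin{align*} \eta\,\mathbb{E}\left[g(\mathbf{w}_t) - g(\mathbf{w}_*)\right] \leq \eta\,\mathbb{E}\left[\mathcal{G}(\mathbf{w}_t; \zeta_t)^{\top}(\mathbf{w}_t - \mathbf{w}_*)\right] \leq \mathbb{E}\left[D_\varphi(\mathbf{w}_*, \mathbf{w}_t) - D_\varphi(\mathbf{w}_*, \mathbf{w}_{t+1})\right] + \frac{\eta^2 G^2}{2\alpha}, \end{align*}\] where the first inequality holds because \(\mathbb{E}_{\zeta_t}[\mathcal{G}(\mathbf{w}_t; \zeta_t)]\in\partial g(\mathbf{w}_t)\) and \(\mathbf{w}_t\) does not depend on \(\zeta_t\). Summing over \(t=1,\ldots,T\), telescoping the divergences, dividing by \(\eta T\), and applying Jensen's inequality to \(\bar{\mathbf{w}}_T = \frac{1}{T}\sum_{t=1}^T \mathbf{w}_t\) yields the first bound of Theorem 3.11; plugging in \(\eta = \frac{\sqrt{2\alpha D_\varphi(\mathbf{w}_*, \mathbf{w}_1)}}{\sqrt{T}G}\) yields the second.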