Key Takeaways
OBD: introduces second-order derivative information to prune a neural network with minimal loss.
OBS: introduces the idea of compensating for a deleted weight, reducing the pruning loss even further.
Problem Statement
As neural networks have developed, they solve more and more real-world problems, but the models have grown ever larger, their structures more complex, inference slower, and deployment more cumbersome. Moreover, as the number of parameters grows, so does the risk of overfitting. This leads to the following question:
How can we selectively delete some weights to shrink the network without introducing too much error?
Approach
Theoretical Derivation
To solve this problem, we first need to define the error introduced by deleting a weight.
The objective function plays a central role in machine learning, so we define the saliency of a parameter as the change in the objective function caused by deleting that parameter. However, evaluating saliency directly from this definition, by temporarily deleting each parameter and re-evaluating the objective, would be prohibitively expensive.
Instead, we can build a local model of the error function and analytically predict the effect of perturbing the parameter vector. Taking a Taylor expansion of the objective function $E$, the effect of a perturbation $\delta U$ of the parameter vector on the objective is:
\begin{equation} \delta E=\sum_i g_i \delta u_i+\frac{1}{2} \sum_i h_{ii} \delta u_i^2+\frac{1}{2} \sum_{i \neq j} h_{ij} \delta u_i \delta u_j+O\left(\|\delta U\|^3\right) \end{equation}
where $g_i=\frac{\partial E}{\partial u_i}$ and $h_{ij}=\frac{\partial^2 E}{\partial u_i \partial u_j}$: the $g_i$ are the components of the gradient of $E$ with respect to $U$, and the $h_{ij}$ are the elements of the Hessian of $E$ with respect to $U$.
With this error expression in hand, the goal is to find a set of parameters that minimizes $\delta E$. But because $\mathbf{H}$ is enormous, this problem is practically intractable, so we make some approximations.
a. Diagonal: we assume the increase in $E$ caused by deleting several parameters equals the sum of the increases caused by deleting each parameter individually. In other words, the errors from deleting parameters are independent and parameters do not interact, so we can drop the cross term $\frac{1}{2} \sum_{i \neq j} h_{ij} \delta u_i \delta u_j$.
b. Extremal: we assume the network has fully converged, so the parameter vector $U$ sits at a local minimum of $E$. In other words, the gradient $g_i$ of $E$ with respect to $U$ is zero, so we can drop the first term $\sum_i g_i \delta u_i$.
c. Quadratic: we assume $\delta E$ is a quadratic function and ignore higher-order terms, so we can drop the last term $O\left(\|\delta U\|^3\right)$.
After these simplifications, we arrive at the following approximation of the loss change:
\begin{equation} \delta E=\frac{1}{2} \sum_i h_{ii} \delta u_i^2 \end{equation}
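As a quick sanity check, we can compare the exact loss change of a toy quadratic objective at its minimum against the diagonal approximation above. This is a minimal sketch with made-up numbers, not part of the original papers:

```python
import numpy as np

# Toy quadratic objective E(u) = 1/2 u^T A u, whose minimum is at u = 0,
# so the "trained" parameters u0 = 0 satisfy the Extremal assumption.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)    # symmetric positive definite: A is the Hessian

def E(u):
    return 0.5 * u @ A @ u

u0 = np.zeros(4)
du = np.array([0.10, 0.00, -0.20, 0.05])   # perturbation delta U

exact = E(u0 + du) - E(u0)                             # true loss change
full_quad = 0.5 * du @ A @ du                          # full second-order term
diag_approx = 0.5 * float(np.sum(np.diag(A) * du**2))  # diagonal approximation

print(exact, full_quad, diag_approx)
```

For a quadratic objective at its minimum the full second-order term matches the true change exactly; the diagonal version differs only by the dropped cross terms.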
Implementation
How to compute the Hessian concretely is described in a later section.
For OBD, the procedure is shown in the figure below, where the saliency is $s_k = h_{kk} u_k^2 / 2$.
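The procedure can be sketched as follows. The array values are made up for illustration; in practice `hess_diag` would come from the Hessian computation described later:

```python
import numpy as np

def obd_prune(weights, hess_diag, n_prune):
    """Zero out the n_prune weights with the smallest OBD saliency
    s_k = h_kk * w_k^2 / 2. Both inputs are flat arrays of equal length."""
    saliency = 0.5 * hess_diag * weights**2
    prune_idx = np.argsort(saliency)[:n_prune]   # least salient first
    pruned = weights.copy()
    pruned[prune_idx] = 0.0
    return pruned, prune_idx

w = np.array([0.9, -0.01, 0.5, 0.02, -1.2])      # made-up trained weights
h = np.array([2.0, 1.0, 0.1, 3.0, 1.5])          # made-up Hessian diagonal
pruned, idx = obd_prune(w, h, n_prune=2)
print(pruned)   # the two smallest-saliency weights are zeroed
```

After pruning, OBD retrains the remaining weights and may repeat the whole cycle.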
A Further Question
Is simply zeroing out a weight good enough, or is there a better way to reduce the error?
A Further Solution
Introducing OBS (Optimal Brain Surgeon): in addition to cutting out a weight, OBS also adjusts the strengths of the remaining weights.
As before, we consider a network trained to a local minimum of the error. The Taylor expansion of the loss with respect to a weight perturbation $\delta \mathbf{w}$ is:
\begin{equation} \delta E=\underbrace{\left(\frac{\partial E}{\partial \mathbf{w}}\right)^{\mathrm{T}} \cdot \delta \mathbf{w}}_{\approx 0}+\frac{1}{2} \delta \mathbf{w}^{\mathrm{T}} \cdot \mathbf{H} \cdot \delta \mathbf{w}+\underbrace{O\left(\|\delta \mathbf{w}\|^3\right)}_{\approx 0} \end{equation}
where $\mathbf{H}=\frac{\partial^2 E}{\partial \mathbf{w}^2}$ is the Hessian of the loss with respect to the parameters. As before, we ignore the first and third terms. Deleting the weight $w_q$ can then be expressed as (where $\mathbf{e}_q$ is the unit vector in weight space corresponding to $w_q$):
\begin{equation} \delta w_q+w_q=0 \quad \text { or } \quad \mathbf{e}_q^{\mathrm{T}} \cdot \delta \mathbf{w}+w_q=0 \end{equation}
Our goal is therefore to solve the following optimization problem:
\begin{equation} \operatorname{Min}_q\left\{\operatorname{Min}_{\delta \mathbf{w}}\left\{\frac{1}{2} \delta \mathbf{w}^{\mathrm{T}} \cdot \mathbf{H} \cdot \delta \mathbf{w}\right\} \quad \text { such that } \mathbf{e}_q^{\mathrm{T}} \cdot \delta \mathbf{w}+w_q=0\right\} \end{equation}
Using a Lagrange multiplier, this becomes:
\begin{equation} L=\frac{1}{2} \delta \mathbf{w}^{\mathrm{T}} \cdot \mathbf{H} \cdot \delta \mathbf{w}+\lambda\left(\mathbf{e}_q^{\mathrm{T}} \cdot \delta \mathbf{w}+w_q\right) \end{equation}
Solving yields:
\begin{equation} \delta \mathbf{w}=-\frac{w_q}{\left[\mathbf{H}^{-1}\right]_{qq}} \mathbf{H}^{-1} \cdot \mathbf{e}_q \quad \text { and } \quad L=\frac{1}{2} \frac{w_q^2}{\left[\mathbf{H}^{-1}\right]_{qq}} \end{equation}
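The closed-form solution above translates directly into code. A minimal sketch on a made-up 2-parameter quadratic loss:

```python
import numpy as np

def obs_step(w, H_inv):
    """One OBS step: choose q minimizing L_q = w_q^2 / (2 [H^-1]_qq),
    then apply delta_w = -(w_q / [H^-1]_qq) * H^-1 e_q to all weights."""
    L = w**2 / (2.0 * np.diag(H_inv))
    q = int(np.argmin(L))
    dw = -(w[q] / H_inv[q, q]) * H_inv[:, q]
    return w + dw, q, float(L[q])

H = np.array([[2.0, 0.5],
              [0.5, 1.0]])        # made-up Hessian at the trained weights
w = np.array([1.0, 0.2])
w_new, q, Lq = obs_step(w, np.linalg.inv(H))
print(q, w_new)   # w_q is driven exactly to zero; the other weight compensates
```

Note that the component $q$ of $\delta \mathbf{w}$ equals $-w_q$ by construction, so the constraint is satisfied exactly while the remaining weights shift to minimize the loss change.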
This gives a $\delta \mathbf{w}$ that prunes $w_q$ while simultaneously adjusting the other weights so that the change in loss is minimal.
Why is OBS the better method?
Consider the example in the figure, comparing OBS, OBD, and magnitude-based weight pruning.
Starting from the local minimum at $\mathrm{w}^*$, the magnitude-based method deletes the wrong weight, weight 2, and retraining then increases weight 1. In contrast, OBD and OBS delete the correct weight, weight 1, and OBS additionally adjusts weight 2 to reach the local minimum. In this toy example the difference in final error may be small, but in a large network, deleting the wrong weights with a magnitude-based method can noticeably increase the final error, which is clearly undesirable.
Moreover, experiments show that the Hessian is not useful only on its diagonal: the off-diagonal entries matter almost as much. The method therefore does not restrict itself to the diagonal Hessian, but computes the full Hessian via an outer-product approximation.
Computing the Hessian
We now know how to prune and adjust the network, but how is the Hessian actually computed? Following PRML and the OBS paper, we describe how to compute the Hessian (and its diagonal, as needed by OBD).
Suppose we have the network shown in the figure, with two groups of parameters: the input-to-hidden weights $\mathbf{u}_{ji}$ and the hidden-to-output weights $\mathbf{v}_j$. Taking MSE as the example loss, we show how to compute the Hessian of the loss with respect to these parameters.
First, the MSE is:
\begin{equation} E=\frac{1}{2 \mathrm{P}} \sum_{k=1}^{\mathrm{P}}\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right)^2 \end{equation}
The second derivatives of the loss with respect to these parameters are:
\begin{equation} \frac{\partial^2 E}{\partial \mathrm{v}_j \partial \mathrm{v}_{j^{\prime}}}=\frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}}\left\{\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2-\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right) \mathrm{f}^{\prime \prime}\left(\mathrm{net}^{[k]}\right)\right\} \mathrm{o}_j^{[k]} \mathrm{o}_{j^{\prime}}^{[k]} \end{equation}
\begin{equation}
\begin{aligned}
\frac{\partial^2 E}{\partial \mathrm{v}_j \partial \mathrm{u}_{j^{\prime} i^{\prime}}}= & \frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}}\left\{\left\{\left\{\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2-\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right) \mathrm{f}^{\prime \prime}\left(\mathrm{net}^{[k]}\right)\right\} \mathrm{v}_{j^{\prime}} \mathrm{f}^{\prime}\left(\mathrm{net}_{j^{\prime}}^{[k]}\right) \mathrm{o}_{i^{\prime}}^{[k]} \mathrm{o}_j^{[k]}\right\}-\right. \\
& \left.\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_{j^{\prime}}^{[k]}\right) \delta_{j j^{\prime}} \mathrm{o}_{i^{\prime}}^{[k]}\right\}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
\frac{\partial^2 E}{\partial \mathrm{u}_{j i} \partial \mathrm{u}_{j^{\prime} i^{\prime}}}= & \frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}}\left\{\left\{\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2-\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right) \mathrm{f}^{\prime \prime}\left(\mathrm{net}^{[k]}\right)\right\} \mathrm{v}_j \mathrm{v}_{j^{\prime}} \mathrm{f}^{\prime}\left(\mathrm{net}_j^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_{j^{\prime}}^{[k]}\right) \mathrm{o}_i^{[k]} \mathrm{o}_{i^{\prime}}^{[k]}-\right. \\
& \left.\left(\mathrm{t}^{[k]}-\mathrm{o}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{v}_j \mathrm{f}^{\prime \prime}\left(\mathrm{net}_j^{[k]}\right) \delta_{j j^{\prime}} \mathrm{o}_i^{[k]} \mathrm{o}_{i^{\prime}}^{[k]}\right\}
\end{aligned}
\end{equation}
Since the network we prune has already been trained to a local minimum, i.e. $\mathrm{t}^{[k]}-\mathrm{o}^{[k]} \approx 0$, these expressions simplify to:
\begin{equation} \frac{\partial^2 E}{\partial \mathrm{v}_j \partial \mathrm{v}_{j^{\prime}}}=\frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}} \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2 \mathrm{o}_j^{[k]} \mathrm{o}_{j^{\prime}}^{[k]} \end{equation}
\begin{equation} \frac{\partial^2 E}{\partial \mathrm{v}_j \partial \mathrm{u}_{j^{\prime} i^{\prime}}}=\frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}} \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2 \mathrm{v}_{j^{\prime}} \mathrm{f}^{\prime}\left(\mathrm{net}_{j^{\prime}}^{[k]}\right) \mathrm{o}_{i^{\prime}}^{[k]} \mathrm{o}_j^{[k]} \end{equation}
\begin{equation} \frac{\partial^2 E}{\partial \mathrm{u}_{j i} \partial \mathrm{u}_{j^{\prime} i^{\prime}}}=\frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}} \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right)^2 \mathrm{v}_j \mathrm{v}_{j^{\prime}} \mathrm{f}^{\prime}\left(\mathrm{net}_j^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_{j^{\prime}}^{[k]}\right) \mathrm{o}_i^{[k]} \mathrm{o}_{i^{\prime}}^{[k]} \end{equation}
We can therefore approximate the Hessian by outer products of first-derivative vectors:
\begin{equation} \left[\mathbf{X}_{\mathrm{v}}^{[k]}\right]^{\mathrm{T}}=\left(\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{o}_{j=1}^{[k]}, \ldots, \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{o}_{n_j}^{[k]}\right) \end{equation}
\begin{equation}
\begin{aligned}
\left[\mathbf{X}_{\mathrm{u}}^{[k]}\right]^{\mathrm{T}}= & \left(\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_1^{[k]}\right) \mathrm{v}_1^{[k]} \mathrm{o}_{i=1}^{[k]}, \ldots, \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_1^{[k]}\right) \mathrm{v}_1^{[k]} \mathrm{o}_{n_i}^{[k]}, \ldots,\right. \\
& \left.\mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_{n_j}^{[k]}\right) \mathrm{v}_{n_j}^{[k]} \mathrm{o}_1^{[k]}, \ldots, \mathrm{f}^{\prime}\left(\mathrm{net}^{[k]}\right) \mathrm{f}^{\prime}\left(\mathrm{net}_{n_j}^{[k]}\right) \mathrm{v}_{n_j}^{[k]} \mathrm{o}_{n_i}^{[k]}\right)
\end{aligned}
\end{equation}
Next we concatenate these two vectors into a single first-derivative vector $\mathbf{X}^{[k]}$ whose dimension equals the number of parameters in the network:
\begin{equation} \mathbf{X}^{[k]}=\left(\begin{array}{c} \mathbf{X}_{\mathrm{v}}^{[k]} \\ \mathbf{X}_{\mathrm{u}}^{[k]} \end{array}\right) \end{equation}
The outer-product approximation of the Hessian is then:
\begin{equation} \mathbf{H}=\frac{\partial^2 E}{\partial \mathbf{w}^2}=\frac{1}{\mathrm{P}} \sum_{k=1}^{\mathrm{P}} \mathbf{X}^{[k]} \cdot \mathbf{X}^{[k] \mathrm{T}} \end{equation}
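In numpy this outer-product approximation is a single matrix product. A sketch with made-up per-sample vectors, where row $k$ of `Xs` plays the role of $\mathbf{X}^{[k]}$:

```python
import numpy as np

rng = np.random.default_rng(1)
P, n = 50, 6
Xs = rng.standard_normal((P, n))   # made-up per-sample first-derivative vectors

# H = (1/P) * sum_k X^{[k]} X^{[k]T}, written as one matrix product
H = Xs.T @ Xs / P

# OBD only needs the diagonal of H:
h_diag = np.einsum('ki,ki->i', Xs, Xs) / P
```

By construction this approximation is symmetric and positive semi-definite, which is convenient for the inversion step below.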
This also gives us the diagonal matrix of the Hessian needed by OBD:
\begin{equation} D=\left[\begin{array}{cccc} [H]_{1,1} & 0 & \cdots & 0 \\ 0 & [H]_{2,2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & [H]_{n, n} \end{array}\right] \end{equation}
Note that the expression for $\mathbf{H}$ above is a running sum over samples; each incremental step is computed as:
\begin{equation} \mathbf{H}_{\mathrm{m}+1}=\mathbf{H}_{\mathrm{m}}+\frac{1}{\mathrm{P}} \mathbf{X}^{[\mathrm{m}+1]} \cdot \mathbf{X}^{[\mathrm{m}+1] \mathrm{T}} \quad \text { with } \quad \mathbf{H}_0=\alpha \mathbf{I} \text { and } \mathbf{H}_{\mathrm{P}}=\mathbf{H} \end{equation}
The inverse of such a matrix sum can be computed with the following identity:
\begin{equation} (\mathbf{A}+\mathbf{B} \cdot \mathbf{C} \cdot \mathbf{D})^{-1}=\mathbf{A}^{-1}-\mathbf{A}^{-1} \cdot \mathbf{B} \cdot\left(\mathbf{C}^{-1}+\mathbf{D} \cdot \mathbf{A}^{-1} \cdot \mathbf{B}\right)^{-1} \cdot \mathbf{D} \cdot \mathbf{A}^{-1} \end{equation}
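This is the Woodbury matrix identity; a quick numerical check with made-up (invertible) matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 5, 2
A = np.diag(rng.uniform(1.0, 2.0, n))          # invertible n x n
B = 0.1 * rng.standard_normal((n, r))
C = np.diag(rng.uniform(1.0, 2.0, r))          # invertible r x r
D = 0.1 * rng.standard_normal((r, n))          # small B, D keep A + BCD invertible

lhs = np.linalg.inv(A + B @ C @ D)
Ai = np.linalg.inv(A)
rhs = Ai - Ai @ B @ np.linalg.inv(np.linalg.inv(C) + D @ Ai @ B) @ D @ Ai
print(np.allclose(lhs, rhs))
```

The payoff is that when $\mathbf{B}$ and $\mathbf{D}$ are single vectors, the inner inverse is a scalar division, which is exactly the recursion used next.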
The inverse Hessian $\mathbf{H}^{-1}$ can then be computed recursively:
\begin{equation} \mathbf{H}_{\mathrm{m}+1}^{-1}=\mathbf{H}_{\mathrm{m}}^{-1}-\frac{\mathbf{H}_{\mathrm{m}}^{-1} \cdot \mathbf{X}^{[\mathrm{m}+1]} \cdot \mathbf{X}^{[\mathrm{m}+1] \mathrm{T}} \cdot \mathbf{H}_{\mathrm{m}}^{-1}}{\mathrm{P}+\mathbf{X}^{[\mathrm{m}+1] \mathrm{T}} \cdot \mathbf{H}_{\mathrm{m}}^{-1} \cdot \mathbf{X}^{[\mathrm{m}+1]}} \quad \text { with } \quad \mathbf{H}_0^{-1}=\alpha^{-1} \mathbf{I} \text { and } \mathbf{H}_{\mathrm{P}}^{-1}=\mathbf{H}^{-1} \end{equation}
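A sketch of this recursive update, checked against a direct inverse (the sample vectors and $\alpha$ are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
P, n, alpha = 40, 5, 1e-4
Xs = rng.standard_normal((P, n))   # row m plays the role of X^{[m+1]}

H_inv = np.eye(n) / alpha          # H_0^{-1} = alpha^{-1} I
for x in Xs:
    Hx = H_inv @ x
    # Sherman-Morrison step: no matrix inversion, only a scalar division
    H_inv = H_inv - np.outer(Hx, Hx) / (P + x @ Hx)

# Direct inverse of H = alpha*I + (1/P) sum_k X^{[k]} X^{[k]T} for comparison
H_direct = alpha * np.eye(n) + Xs.T @ Xs / P
```

Each step costs $O(n^2)$ instead of the $O(n^3)$ of a full inversion, which is what makes OBS feasible in practice.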
Below are the outer-product approximations of the Hessian for the MSE and cross-entropy losses without biases; interested readers may derive them themselves.
\begin{equation} H_\theta=\nabla_\theta^2 MSE(\theta)=\frac{2}{n} \sum_{i=1}^n \nabla_\theta f_\theta\left(x_i\right) \nabla_\theta f_\theta\left(x_i\right)^{\mathrm{T}} \end{equation}
\begin{equation} H_\theta=\nabla_\theta^2 CE(\theta)=\sum_{i=1}^n f_\theta\left(x_i\right)\left(1-f_\theta\left(x_i\right)\right) \nabla_\theta f_\theta\left(x_i\right) \nabla_\theta f_\theta\left(x_i\right)^{\mathrm{T}} \end{equation}
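For a linear model $f_\theta(x)=\theta^{\top} x$ the second derivative of $f_\theta$ vanishes, so the MSE formula above is exact rather than approximate. A finite-difference check on made-up data (the linear model is chosen here only to make the check exact):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 30, 4
X = rng.standard_normal((n, d))    # made-up inputs x_i as rows
t = rng.standard_normal(n)         # made-up targets

def mse(theta):
    return np.mean((t - X @ theta) ** 2)

# Outer-product formula: grad_theta f_theta(x_i) = x_i, so H = (2/n) sum x_i x_i^T
H_outer = 2.0 / n * X.T @ X

# Finite-difference Hessian of the (exactly quadratic) MSE
theta, eps = rng.standard_normal(d), 1e-4
I = np.eye(d)
H_fd = np.zeros((d, d))
for i in range(d):
    for j in range(d):
        H_fd[i, j] = (mse(theta + eps * (I[i] + I[j])) - mse(theta + eps * I[i])
                      - mse(theta + eps * I[j]) + mse(theta)) / eps**2
```

For nonlinear $f_\theta$ the dropped term is proportional to the residual, which is again small at a well-trained local minimum.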