
AI Interpretability I | A Reading Guide to Adversarial Sample Papers (continuously updated)

Introduction

This article is the first part of the AI interpretability series. It collects and reads through papers on adversarial attacks (Adversarial Attack) and will be continuously updated. The second part of the series, on attribution methods (Attribution), is coming soon.
Intriguing properties of neural networks (Dec 2013)

Authors: Christian Szegedy et al.
Overview

Intriguing properties of neural networks is the seminal work on adversarial attacks. It was the first to discover adversarial examples and to name them (Adversarial Sample), and it identified two properties of neural networks:

  • There is no distinction between an individual high-level unit and a random linear combination of high-level units; it is the space of high-level units, rather than any individual unit, that carries the semantic information.
    there is no distinction between individual high level units and random linear combinations of high level units, ..., it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

  • The input-output mapping learned by a neural network is discontinuous to a significant extent. By applying an imperceptible perturbation (found by maximizing the network's prediction error) to a sample, the network can be made to misclassify it. Moreover, this perturbation is not a random artifact of learning: the same perturbation can also cause networks with different architectures, trained on different subsets of the data, to misclassify.
    we find that deep neural networks learn input-output mappings that are fairly discontinuous to a significant extent. Specifically, we find that we can cause the network to misclassify an image by applying a certain imperceptible perturbation, which is found by maximizing the network's prediction error. In addition, the specific nature of these perturbations is not a random artifact of learning: the same perturbation can cause a different network, that was trained on a different subset of the dataset, to misclassify the same input.

Neuron Activations

The paper shows experimentally that taking the cosine similarity between a whole layer's activations and a single unit's direction (a natural basis direction) versus a randomly chosen direction (a random basis) produces results that are completely indistinguishable.
(Figures: the images that respond most strongly to a single natural-basis unit versus a random direction.)
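A minimal sketch of this comparison, assuming a feature matrix `features` holding a layer's activations \(\phi(x)\) for a set of held-out images (random data stands in for real activations here, and the unit index and shapes are made up):

```python
import numpy as np

# Assumption: `features` holds a layer's activations phi(x) for N held-out
# images; random data stands in for the activations of a real trained model.
rng = np.random.default_rng(0)
N, D = 10_000, 512
features = rng.standard_normal((N, D))

def top_images(features, direction, k=8):
    """Indices of the images whose activation is largest along `direction`."""
    direction = direction / np.linalg.norm(direction)
    scores = features @ direction          # <phi(x), v> for every image x
    return np.argsort(scores)[-k:][::-1]

# Natural basis: the direction of a single unit (index 42 is arbitrary).
e_i = np.zeros(D)
e_i[42] = 1.0

# Random basis: a randomly chosen direction.
v = rng.standard_normal(D)

print("natural basis:", top_images(features, e_i))
print("random basis: ", top_images(features, v))
# The paper's observation: with real activations, the two retrieved image sets
# look equally semantically coherent, so no single unit is privileged.
```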

This shows that a single unit is no more interpretable than the layer as a whole, which calls into question the notion that "neural networks disentangle factors of variation across coordinates".
This suggests that the natural basis is not better than a random basis for inspecting the properties of \(\phi(x)\). Moreover, it puts into question the notion that neural networks disentangle variation factors across coordinates.
Although each layer appears to be invariant on some portion of the input distribution, there is clearly counterintuitive, ill-defined behavior in the neighborhoods of those portions.
Blind Spots in Neural Networks

The argument goes that the purpose of stacking many non-linear layers is precisely to let the model encode a non-local generalization prior over the input space. In other words, the output unit may assign non-significant (presumably non-\(\epsilon\)) probabilities to regions of the input space that contain no training examples in their vicinity (this is the assumption under which adversarial attacks become possible). The benefit is that the same image seen from different viewpoints may change in pixel space, yet the non-local generalization prior keeps the predicted probabilities unchanged.
In other words, it is possible for the output unit to assign non-significant (and, presumably, non-epsilon) probabilities to regions of the input space that contain no training examples in their vicinity.
From this a smoothness assumption can be derived: for a small enough radius \(\epsilon > 0\), any point \(x^\prime\) in the \(\epsilon\)-neighborhood of a training sample \(x\) (i.e. \(\|x^\prime - x\| < \epsilon\)) should be assigned a high probability of the correct class. The paper shows that this assumption does not hold for deep networks: for an input \(x\) and a target label \(l \neq f(x)\), an adversarial example is obtained from the box-constrained problem

Minimize \(\|r\|_2\) subject to \(f(x + r) = l\) and \(x + r \in [0, 1]^m\),

which the authors approximate with box-constrained L-BFGS, line-searching for the minimum c > 0 for which the minimizer r of \(c\|r\| + \text{loss}_f(x + r, l)\), subject to \(x + r \in [0, 1]^m\), satisfies f(x + r) = l.
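A hedged sketch of this optimization (not the authors' code): it assumes a PyTorch classifier `model` over inputs in \([0,1]\), a single input `x` with a batch dimension, and a one-element target label tensor `target`; plain Adam with clamping stands in for the box-constrained L-BFGS used in the paper, and the candidate values of `c` are made up.

```python
import torch
import torch.nn.functional as F

def adversarial_perturbation(model, x, target, c, steps=200, lr=0.01):
    """Approximately minimize  c*||r|| + loss_f(x + r, target)  while keeping
    x + r in [0, 1]; Adam stands in for box-constrained L-BFGS."""
    r = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([r], lr=lr)
    for _ in range(steps):
        adv = (x + r).clamp(0.0, 1.0)                       # box constraint
        loss = c * r.norm() + F.cross_entropy(model(adv), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + r).clamp(0.0, 1.0).detach()

def find_adversary(model, x, target, cs=(1e-3, 1e-2, 1e-1, 1.0, 10.0)):
    """Line-search over c: keep the first (smallest) c whose minimizer is
    actually classified as the target label, as described in the paper."""
    for c in sorted(cs):
        adv = adversarial_perturbation(model, x, target, c)
        if model(adv).argmax(dim=1).item() == target.item():
            return adv, c
    return None, None
```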
Experiments

The experiments lead to the following three conclusions:

  • For all the networks studied in the paper (MNIST, QuocNet, AlexNet) and for every sample, adversarial examples that are extremely close to and visually indistinguishable from the original could always be generated, and they were all misclassified by the original network.
  • Cross-model generalization: a relatively large fraction of adversarial examples are still misclassified by networks trained from scratch with different hyper-parameters (number of layers, regularization, or initial weights).
  • Cross-training-set generalization: networks trained from scratch on a completely different training set also misclassify a considerable fraction of adversarial examples.

  • For all the networks we studied (MNIST, QuocNet [10], AlexNet [9]), for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network (see figure 5 for examples).
  • Cross model generalization: a relatively large fraction of examples will be misclassified by networks trained from scratch with different hyper-parameters (number of layers, regularization or initial weights).
  • Cross training-set generalization: a relatively large fraction of examples will be misclassified by networks trained from scratch on a disjoint training set.
This demonstrates the universality of adversarial examples. A subtle but essential detail is that adversarial examples must be generated for each layer's output and used to train all the layers above it; experiments show that adversarial examples generated for higher layers are more valuable for training than those generated for the input or lower layers.
A subtle, but essential detail is that adversarial examples are generated for each layer output and are used to train all the layers above. Adversarial examples for the higher layers seem to be more useful than those on the input or lower layers.
However, this experiment still leaves open the question of dependence on the training set: does the hardness of the generated examples rely solely on the particular choice of our training set as a sample, or does the effect generalize to models trained on completely different training sets?
Still, this experiment leaves open the question of dependence over the training set. Does the hardness of the generated examples rely solely on the particular choice of our training set as a sample or does this effect generalize even to models trained on completely different training sets?
The authors therefore ran a transferability experiment: the MNIST training set was split into two parts \(P_1\) and \(P_2\) of 30,000 examples each, which were used to train three fully connected networks:
| Name | Architecture | Training data |
| --- | --- | --- |
| \(M_1\) | 100-100-10 | \(P_1\) |
| \(M_1^\prime\) | 123-456-10 | \(P_1\) |
| \(M_2\) | 100-100-10 | \(P_2\) |

Adversarial examples were then generated on the test set against each network and transferred to the other networks; they remained effective against those networks as well.
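Measuring transferability then amounts to re-scoring one model's adversarial examples on another model; a small helper along these lines would compute the misclassification rate described above (the names \(M_1\), \(M_1^\prime\), \(M_2\) refer to the table; everything else is an assumption):

```python
import torch

@torch.no_grad()
def transfer_rate(target_model, adv_examples, labels):
    """Fraction of adversarial examples, crafted against some *source* model
    (e.g. M1), that a different target model (e.g. M1' or M2) also gets wrong."""
    preds = target_model(adv_examples).argmax(dim=1)
    return (preds != labels).float().mean().item()
```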
Uzuki's comment: this property underpins the transferability of adversarial attacks. For a black-box network whose internals are unknown, a structurally similar white-box model can be used as a surrogate (surrogate model) to generate adversarial examples.
This leads to an intriguing conclusion: adversarial examples remain hard even for models trained on a disjoint training set, although their effectiveness decreases considerably.
The intriguing conclusion is that the adversarial examples remain hard for models trained even on a disjoint training set, although their effectiveness decreases considerably.
Spectral Analysis of Network Instability

The authors express the instability of these supervised networks with respect to the particular family of perturbations described in the previous section as follows:

Given a set of pairs \((x_i, n_i)\) such that \(\|n_i\|\) is small yet the network misclassifies \(x_i + n_i\), the instability of the network can be bounded layer by layer through each layer's Lipschitz (operator-norm) constant.
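The paper's spectral analysis bounds this instability by the operator (spectral) norm of each layer's weight matrix; a minimal sketch of that quantity for fully connected layers (the helper names are assumptions, and ReLU or convolutional layers need additional care) might look like:

```python
import torch

def layer_lipschitz_upper_bound(weight: torch.Tensor) -> float:
    """Operator (spectral) norm of a fully connected layer's weight matrix:
    an upper bound on ||W x - W (x + r)|| / ||r||, i.e. on how much the layer
    can amplify a small perturbation r."""
    return torch.linalg.matrix_norm(weight, ord=2).item()

def network_lipschitz_upper_bound(weights) -> float:
    """Product of the per-layer bounds: a (loose) upper bound on how much the
    whole network can amplify an input perturbation."""
    bound = 1.0
    for w in weights:
        bound *= layer_lipschitz_upper_bound(w)
    return bound
```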