Adversarial machine learning


Adversarial machine learning is the study of attacks that attempt to fool machine learning models by supplying deceptive input, and of defenses against such attacks. The most common goal of such an attack is to cause a malfunction in a machine learning model.
Most machine learning techniques were designed to work on specific problem sets in which the training and test data are generated from the same statistical distribution. When those models are applied to the real world, adversaries may supply data that violates that statistical assumption. This data may be arranged to exploit specific vulnerabilities and compromise the results.

History

In the 1992 novel Snow Crash, Neal Stephenson offered scenarios of technology that was vulnerable to adversarial attack. In William Gibson's 2010 novel Zero History, a character dons a t-shirt decorated in a way that renders him invisible to electronic surveillance.
In 2004, Nilesh Dalvi and others noted that linear classifiers used in spam filters could be defeated by simple "evasion attacks", as spammers inserted "good words" into their spam emails. In 2006, Marco Barreno and others published "Can Machine Learning Be Secure?", outlining a broad taxonomy of attacks. In 2012, deep neural networks began to dominate computer vision problems; as late as 2013, many researchers still hoped that non-linear classifiers might be robust to adversaries, but starting in 2014, Christian Szegedy and others demonstrated that deep neural networks could be fooled by adversaries.

Examples

Examples include attacks in spam filtering, where spam messages are obfuscated by misspelling "bad" words or inserting "good" words; attacks in computer security, such as obfuscating malware code within network packets or misleading signature detection; and attacks in biometric recognition, where fake biometric traits may be exploited to impersonate a legitimate user or to compromise the template galleries that adapt to users' updated traits over time.
Researchers showed that by changing only one pixel it was possible to fool deep learning algorithms. Others 3-D printed a toy turtle with a texture engineered to make Google's object detection AI classify it as a rifle regardless of the angle from which the turtle was viewed. Creating the turtle required only low-cost, commercially available 3-D printing technology.
A machine-tweaked image of a dog was shown to look like a cat to both computers and humans. A 2019 study reported that humans can guess how machines will classify adversarial images. Researchers discovered methods for perturbing the appearance of a stop sign such that an autonomous vehicle classified it as a merge or speed limit sign.
McAfee attacked Tesla's former Mobileye system, fooling it into driving 50 mph over the speed limit, simply by adding a two-inch strip of black tape to a speed limit sign.
Adversarial patterns on glasses or clothing designed to deceive facial-recognition systems or license-plate readers have led to a niche industry of "stealth streetwear".
An adversarial attack on a neural network can allow an attacker to inject algorithms into the target system. Researchers can also create adversarial audio inputs to disguise commands to intelligent assistants in benign-seeming audio.
Clustering algorithms are also used in security applications: malware and computer-virus analysis, for example, aims to identify malware families and to generate specific detection signatures.

Attack modalities

Taxonomy

Attacks against machine learning algorithms have been categorized along three primary axes: influence on the classifier, the type of security violation, and attack specificity.
This taxonomy has been extended into a more comprehensive threat model that allows explicit assumptions about the adversary's goal, knowledge of the attacked system, capability of manipulating the input data or system components, and attack strategy. Two of the main attack scenarios, evasion and poisoning, are described below.

Strategies

Evasion

Evasion attacks are the most prevalent type of attack. For instance, spammers and hackers often attempt to evade detection by obfuscating the content of spam emails and malware. Samples are modified to evade detection; that is, to be classified as legitimate. This does not involve influence over the training data. A clear example of evasion is image-based spam in which the spam content is embedded within an attached image to evade textual analysis by anti-spam filters. Another example of evasion is given by spoofing attacks against biometric verification systems.
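As a concrete illustration, the following is a minimal sketch in Python of a gradient-based evasion attack (the fast gradient sign method); the PyTorch classifier, the perturbation budget, and the toy inputs are illustrative assumptions rather than details taken from this article.

    import torch
    import torch.nn as nn

    def fgsm_evasion(model, x, y, epsilon=0.03):
        # Perturb input x within an L-infinity budget so that the classifier's
        # loss on the true label y increases (fast gradient sign method).
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    # Illustrative usage with a toy linear classifier on a random "image";
    # the model and input shapes are assumptions for this sketch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)       # benign input
    y = torch.tensor([3])              # its label
    x_adv = fgsm_evasion(model, x, y)  # evasive variant of the same input

The perturbed input remains close to the original but is nudged in the direction that most increases the classifier's loss, making a misclassification at test time more likely without any influence over the training data.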

Poisoning

Poisoning is adversarial contamination of training data. Machine learning systems can be re-trained using data collected during operations. For instance, intrusion detection systems are often re-trained using such data. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining.
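As an illustration, the following is a minimal sketch in Python of a label-flipping poisoning attack against a scikit-learn classifier trained on synthetic data; the model, the poisoned fraction, and the dataset are illustrative assumptions rather than details taken from this article.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic binary-classification data standing in for data collected during operation.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Accuracy after retraining on clean data.
    clean_acc = LogisticRegression(max_iter=1000).fit(X_train, y_train).score(X_test, y_test)

    # The attacker injects mislabeled copies of training points during operation.
    rng = np.random.default_rng(0)
    idx = rng.choice(len(X_train), size=len(X_train) // 5, replace=False)
    X_poisoned = np.vstack([X_train, X_train[idx]])
    y_poisoned = np.concatenate([y_train, 1 - y_train[idx]])  # flipped labels

    # Accuracy after retraining on the contaminated data.
    poisoned_acc = LogisticRegression(max_iter=1000).fit(X_poisoned, y_poisoned).score(X_test, y_test)
    print(f"clean: {clean_acc:.3f}  poisoned: {poisoned_acc:.3f}")

Comparing the two scores shows how malicious samples injected during operation can disrupt the retrained model.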

Defense

Researchers have proposed a multi-step approach to protecting machine learning systems.
A number of defense mechanisms against evasion, poisoning, and privacy attacks have been proposed, including adversarial training, in which adversarial examples are added to the training data.
Software libraries for adversarial machine learning are also available, mainly for testing and research.
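As one concrete example, the following is a minimal sketch in Python of adversarial training, a defense in which a model is trained on a mix of benign and adversarially perturbed inputs; the architecture, hyperparameters, and random stand-in data are illustrative assumptions rather than details taken from this article.

    import torch
    import torch.nn as nn

    # Toy classifier and optimizer; both are assumptions for this sketch.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(100):                   # toy training loop
        x = torch.rand(32, 1, 28, 28)      # stand-in for a batch of images
        y = torch.randint(0, 10, (32,))    # stand-in labels

        # Craft evasive variants of the batch with a fast-gradient-sign step.
        x_pert = x.clone().detach().requires_grad_(True)
        nn.functional.cross_entropy(model(x_pert), y).backward()
        x_adv = (x_pert + 0.03 * x_pert.grad.sign()).detach()

        # Train on benign and adversarial examples together.
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(torch.cat([x, x_adv])),
                                           torch.cat([y, y]))
        loss.backward()
        optimizer.step()

Training on adversarially perturbed examples alongside clean ones is intended to make the resulting model less sensitive to the evasion attacks described above.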