Systems and Security: Attacks, Threats, and Vulnerabilities
1.2 Malware attacks
Adversarial artificial intelligence (AI)
1. Tainted training data for machine learning (ML)
2. Security of machine learning algorithms
1. Tainted Training Data for Machine Learning (ML): Adversarial AI refers to the use of AI techniques to attack or manipulate machine learning models. Tainted (or poisoned) training data is an attack in which the adversary deliberately supplies incorrect or biased data to the model during the training phase. The model then learns from this corrupted data and makes wrong predictions or decisions, which can create security or privacy vulnerabilities.
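A minimal sketch of the idea, using a hypothetical toy example (the nearest-centroid "spam" classifier, feature names, and data are all invented for illustration): the same model is trained twice, once on clean labels and once on labels an attacker has flipped, and the poisoned model misclassifies an obviously malicious sample.

```python
# Toy illustration of training-data poisoning via label flipping.
# The classifier, features, and data here are hypothetical examples.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    # data: list of (features, label); returns one centroid per class
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    # Nearest-centroid decision rule (squared Euclidean distance)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(model, key=lambda y: dist(model[y]))

# Feature vectors: [link_count, exclamation_count]
clean = [([5, 4], "spam"), ([6, 5], "spam"),
         ([0, 1], "ham"),  ([1, 0], "ham")]

# The attacker flips labels in the training set (data poisoning)
poisoned = [(x, "ham" if y == "spam" else "spam") for x, y in clean]

sample = [6, 4]  # clearly spam-like input
print(predict(train(clean), sample))     # -> spam
print(predict(train(poisoned), sample))  # -> ham (poisoning succeeded)
```

The point of the sketch is that nothing in the algorithm changed; only the integrity of the training data did, which is why data provenance and validation matter.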
2. Security of Machine Learning Algorithms: Another aspect of adversarial AI is the security of the machine learning algorithms themselves. Attackers can craft inputs that cause an algorithm to make incorrect predictions or decisions, or compromise its training process so that it behaves in unexpected ways. Common attack classes include evasion attacks, in which the attacker submits data specifically designed to avoid detection at inference time, and poisoning attacks, in which the attacker introduces inaccuracies into the training data to manipulate the algorithm's behavior.
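An evasion attack can be sketched with a hypothetical linear detector (the weights, threshold, and greedy perturbation strategy below are assumptions for illustration, not a real product's logic): the attacker nudges the most influential feature just far enough that a malicious input slips under the detection threshold.

```python
# Toy evasion-attack sketch against a linear score-based detector.
# Weights and threshold are hypothetical and assumed known to the attacker.

WEIGHTS = [0.6, 0.4]
THRESHOLD = 3.0

def score(x):
    return sum(w * v for w, v in zip(WEIGHTS, x))

def is_flagged(x):
    return score(x) >= THRESHOLD

def evade(x, step=0.1):
    # Greedily shrink the highest-weighted feature until undetected
    x = list(x)
    i = WEIGHTS.index(max(WEIGHTS))
    while is_flagged(x) and x[i] > 0:
        x[i] -= step
    return x

malicious = [4.0, 3.0]
print(is_flagged(malicious))        # True: flagged as-is
adv = evade(malicious)
print(is_flagged(adv))              # False: evades detection
```

Real evasion attacks (e.g., gradient-based adversarial examples) are far more sophisticated, but the principle is the same: small, targeted input changes flip the model's decision.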
To defend against adversarial AI, security and robustness should be considered during the development of machine learning algorithms as well as during the selection and processing of training data. Techniques such as adversarial training and validation can also improve the robustness of machine learning models against attack.
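Adversarial training can be sketched in one dimension (everything here is a hypothetical toy: a detector that places its threshold at the midpoint of the class means). The attacker's perturbed samples initially evade the boundary; retraining on those same samples, correctly labeled as malicious, moves the boundary so they are caught.

```python
# Toy adversarial-training sketch: refit a 1-D threshold detector on
# adversarially perturbed samples. All values are illustrative.

def fit_threshold(benign, malicious):
    # Place the decision boundary at the midpoint of the class means
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(benign) + mean(malicious)) / 2

benign = [1.0, 2.0, 1.5]
malicious = [8.0, 9.0, 8.5]

t = fit_threshold(benign, malicious)      # boundary at 5.0
evasive = [m - 4.0 for m in malicious]    # attacker shifts samples down
print([x >= t for x in evasive])          # [False, True, False]: some evade

# Adversarial training: add the perturbed samples, still labeled
# malicious, and refit the boundary.
t_robust = fit_threshold(benign, malicious + evasive)
print([x >= t_robust for x in evasive])   # [True, True, True]: all detected
```

In practice, adversarial training iterates this loop at scale (generate adversarial examples against the current model, add them to the training set, retrain), but the mechanism is the same shift of the decision boundary.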