AI and ML are increasingly being used by hackers to evade cybersecurity defenses. According to Elham Tabassi, chief of staff of the Information Technology Laboratory at the National Institute of Standards and Technology (NIST), there are three main ways attackers can use AI and ML: (1) to evade detection, (2) to hide where they cannot be found, and (3) to adapt to countermeasures. The first line of defense, as Tim Bandos, chief information security officer at Digital Guardian, explains, is to rely on human minds to build stronger defenses.
So far, there are three main types of cybersecurity attack made possible by AI and ML: data poisoning, generative adversarial networks, and manipulating bots. Data poisoning works by corrupting the data used to train a model: a hacker alters the training dataset so that the model learns to make incorrect predictions. Tabassi argues that the answer is industry-wide policies to guarantee data quality and trustworthiness.
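To make the mechanism concrete, here is a minimal sketch of label-flipping data poisoning on a toy classifier, written in Python with scikit-learn. The synthetic dataset, the 40% flip rate, and the logistic-regression model are illustrative assumptions, not details from the article.

```python
# Minimal sketch of label-flipping data poisoning (hypothetical toy setup).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy binary classification dataset standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean model: trained on trustworthy data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poisoned model: the attacker flips the labels of 40% of the training rows,
# degrading the decision boundary the model learns.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.4 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Comparing the two accuracy scores shows how quietly corrupted training data translates into a model that makes worse predictions, even though nothing about the deployed code has changed.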
The next kind, generative adversarial networks (GANs), pits two networks against each other: a generator that creates content and a discriminator that tries to tell it apart from the real thing, until the generated content passes for the original. In terms of the threat they pose, GANs can be used to recreate normal internet traffic patterns and so cover the traces of a cyberattack. They have also been used for “password cracking, evading malware detection, and fooling facial recognition”. The main response available is to retrain detection algorithms frequently so they keep pace with evolving attacks.
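The generator-versus-discriminator setup can be sketched in a few lines of PyTorch. The toy 2-D “normal” data below is an assumption standing in for whatever an attacker would actually imitate (traffic logs, passwords, faces); nothing here comes from the article itself.

```python
# Compact sketch of the GAN framework: a generator and a discriminator
# trained against each other on toy data (hypothetical example).
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for the "normal" data the attacker wants to imitate.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # 1) Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, the generator's output is hard to distinguish from real_batch(),
# which is exactly the property an attacker exploits to pass off fake content as real.
```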
The last threat comes from manipulating bots. The primary weakness of AI algorithms is that once an attacker understands how a model makes its predictions, the model can be tricked or abused into making bad decisions, with the potential for wide-scale damage to society.
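As a hypothetical illustration of that weakness, the sketch below assumes white-box access to a simple linear classifier and nudges an input along the model’s own weight vector until its prediction flips; the model, data, and step size are all invented for the example and are not taken from the article.

```python
# Hypothetical "understand the model, then trick it" sketch:
# with white-box access to a linear classifier, push an input
# against the weight vector until the predicted class flips.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick one input and record the model's current prediction.
x = X[0].copy()
original = model.predict([x])[0]
print("original prediction:   ", original)

# Step the input in the direction that most reduces the decision score
# (or increases it, if the current score is negative) until the label flips.
w = model.coef_[0]
step = -np.sign(w) if model.decision_function([x])[0] > 0 else np.sign(w)
x_adv = x.copy()
for _ in range(100):
    if model.predict([x_adv])[0] != original:
        break
    x_adv += 0.1 * step
print("manipulated prediction:", model.predict([x_adv])[0])
```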