Article on the use of AI by malicious actors and by those working to stop them. It identifies key applications of AI in cybersecurity, including automating routine tasks, monitoring for and identifying potential threats, and discovering patterns in large volumes of data. It also points out that these same capabilities can be weaponized from the attacker's perspective.

The article discusses two main ways AI can harm the cybersecurity industry. The first is AI bias: a model is only as good as the data it is trained on, and as cyberattacks grow more complex, AI may struggle to adapt quickly to new types of threats. The second is adversarial AI: bad actors can build systems that learn the limits and baselines of defensive AIs and then craft new attacks that stay within those defenses' tolerance thresholds, which can lead to repeated escalation between the two parties. Researchers are currently working to reduce bias, but efforts to analyze adversarial AI remain largely academic.
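To make the adversarial-AI idea concrete, here is a minimal toy sketch (not from the article) of an attacker probing a defensive model's alert threshold and then operating just below it. The scoring function, threshold value, and traffic-rate framing are all hypothetical assumptions for illustration.

```python
def defensive_score(requests_per_second: float) -> float:
    """Hypothetical anomaly score: higher traffic looks more suspicious."""
    return requests_per_second / 100.0

ALERT_THRESHOLD = 0.8  # assumed detection cutoff for the defensive AI

def probe_threshold(step: float = 1.0) -> float:
    """Attacker raises traffic until the defense would fire, then stops one
    step short -- effectively learning the defense's tolerance baseline."""
    rate = 0.0
    while defensive_score(rate + step) < ALERT_THRESHOLD:
        rate += step
    return rate

evasion_rate = probe_threshold()
# The crafted attack stays within the defense's tolerance threshold:
assert defensive_score(evasion_rate) < ALERT_THRESHOLD
```

In practice the defender would retrain or tighten the threshold after observing such traffic, and the attacker would re-probe, producing the escalation cycle the article describes.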