
How to improve cybersecurity for artificial intelligence

Abstract:

The sixth principle of the Asilomar AI Principles states that "AI systems should be safe and secure throughout their operational lifetime, and verifiably so where applicable and feasible." But there is still much debate over what it means to make AI systems safe and secure. Artificial intelligence is a rapidly developing and deeply nuanced field, which complicates any effort to enumerate concrete ways of securing AI systems.
Machine learning and AI algorithms are increasingly relied upon for technical, business, and everyday tasks, including defending systems against cyberattacks. As AI and machine learning systems become more ubiquitous, particularly in the cyber sphere, attackers' incentives to target them grow, and so does the urgency of figuring out how to protect them.
Attacks on AI decision-making systems include directly manipulating a deployed system to produce a desired output and injecting malicious data into its training pipeline. One example of the former is defacing stop signs so that self-driving cars fail to detect them and crash (adversarial machine learning); an example of the latter is poisoning a model's training data so that it performs poorly once deployed, as in the sketch below. It is extremely difficult for AI developers to foresee all such scenarios and account for them in the model, and gathering the additional data needed to make models more robust can itself raise data-privacy concerns. Developers must therefore weigh these trade-offs and decide which path is most beneficial in the end.
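To make the data-poisoning attack above concrete, here is a minimal illustrative sketch (not from the article; the dataset and model are hypothetical stand-ins), showing how flipping a fraction of training labels degrades a simple classifier:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data standing in for any ML pipeline an attacker can reach.
    X = rng.normal(size=(1000, 10))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)

    # Poisoning: an attacker who controls part of the training set flips 20% of the labels.
    poisoned = y.copy()
    flip = rng.choice(len(y), size=len(y) // 5, replace=False)
    poisoned[flip] = 1 - poisoned[flip]

    # Evaluate both models on the same clean, held-out test set.
    X_test = rng.normal(size=(500, 10))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)

    clean = LogisticRegression().fit(X, y).score(X_test, y_test)
    dirty = LogisticRegression().fit(X, poisoned).score(X_test, y_test)
    print(f"accuracy trained on clean labels:    {clean:.3f}")
    print(f"accuracy trained on poisoned labels: {dirty:.3f}")

Note that the attacker never touches the model itself; corrupting a slice of the training data is enough to degrade its behavior, which is part of what makes such attacks hard for developers to foresee.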
Much public policy surrounding AI centers on funding more AI research and training more workers in the field, rather than on AI security. The United States has a National Security Commission on Artificial Intelligence, but the commission has yet to show how some of its ideas on AI security would be implemented in practice. For most governments, the next stage of AI security policy will involve working out transparency, auditing, and accountability.

Author: Josephine Wolff
Year: 2020
MIT Political Science
ECIR
GSS