
Trusting artificial intelligence in cybersecurity is a double-edged sword

Abstract:

Artificial intelligence can make a system more robust, respond to threats more quickly, and monitor a network effectively. However, the use of artificial intelligence could also give rise to attacks against the AI services themselves. Such attacks could be intended to change the behavior of machine learning models so that they deliberately miss potential threats, leaving the network open to more substantial attacks. Because it is difficult to gain insight into the inner workings of a machine learning model, identifying a compromised model before it is too late can be challenging. There are initiatives to create robust systems for testing machine learning models for these vulnerabilities, but they are all still in very early stages.

Authors:
Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi
Year:
2019
Domain:
Dimension:
Region:
Data Type:
Keywords:
MIT Political Science
ECIR
GSS