
Contextualizing Deepfake Threats to Organizations

Abstract:

Deepfakes, defined as synthetically created or manipulated media (often produced by means of artificial intelligence), are a growing threat as the cost of and barriers to their development continue to fall. Advances in machine learning techniques continue to enable the creation of partially and fully synthetic text, voice messages, and videos. Deepfakes threaten organizations around the world by impersonating executives, political figures, and employees with the goal of harming brand reputation, reaping financial gain, or gaining system access. Current methods use passive (detection) and active (authentication) forensic techniques to identify deepfakes. The authors propose that organizations extend current detection and authentication techniques while also developing or expanding their response protocols, placing equal emphasis on reducing false positive identifications. As part of prevention protocols, organizations should implement real-time verification using physics-based, compression-based, and content-based examinations of media. Organizations should also focus on minimizing impact by planning and rehearsing response plans, reporting deepfakes, and training personnel.
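To illustrate the flavor of the passive forensic examinations the abstract mentions, the sketch below shows one crude frequency-domain check: generative models often leave atypical high-frequency artifacts in images, so measuring how much spectral energy lies outside a low-frequency band is a common first-pass signal. This is a minimal, hypothetical example, not a method from the advisory; the function name, the cutoff fraction, and the synthetic test images are all assumptions made for illustration.

```python
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff_frac: float = 0.25) -> float:
    """Return the fraction of spectral energy outside a low-frequency disc.

    A disproportionately large ratio can flag synthetic high-frequency
    artifacts; real detectors combine many such signals.
    """
    # 2-D power spectrum, shifted so the DC component sits at the center.
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance of each frequency bin from the spectrum center.
    r = np.hypot(yy - h / 2, xx - w / 2)
    cutoff = cutoff_frac * min(h, w)
    return float(spec[r > cutoff].sum() / spec.sum())

rng = np.random.default_rng(0)
smooth = np.outer(np.hanning(64), np.hanning(64))      # low-frequency content only
noisy = smooth + 0.5 * rng.standard_normal((64, 64))   # high-frequency noise added
print(high_freq_energy_ratio(smooth) < high_freq_energy_ratio(noisy))  # True
```

In practice such a statistic would be one feature among many, compared against a baseline distribution for authentic media rather than used as a standalone threshold.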

Authors:
National Security Agency, Federal Bureau of Investigation, Cybersecurity and Infrastructure Security Agency
Year:
2023
MIT Political Science