
The hacking underworld has removed all of AI’s guardrails, but the good guys are closing in

Abstract:

As artificial intelligence becomes increasingly accessible and capable, cybercriminals are using AI to execute highly targeted attacks at scale, tricking people into unwittingly sending money and sensitive information to attackers. Business email compromise (BEC) attacks that use generative AI tools to create believable emails and social media posts grew from 1% of all cybersecurity threats in 2022 to 18.6% in 2023. Malvertising attacks, in which a malicious ad placed on Google impersonates a legitimate website, have also become significantly more common. Defenders, however, are also using AI to detect and stop these attacks. AI detection tools, such as McAfee's Project Mockingbird, aim to expose AI-generated text or altered audio. While these detection tools offer a promising direction, public education remains the most effective proactive method for mitigating these threats.

Author:
Rachel Curry
Year:
2024
MIT Political Science
ECIR
GSS