One of the main concerns of the current technological moment is the rapid rise of AI. Because these systems improve every day, it is becoming increasingly difficult to distinguish authentic content from generated content; with more and more training data, the models become better at mimicking human behavior. To combat this kind of misinformation, guidance on how to spot AI use is being shared more widely. One such example is Kevin Williams's recent piece for CNBC, titled "How to Identify an AI Imposter in Video, Audio, and Text as Deepfake Technology Goes Mainstream."
In this article, one of the key points the author shares is how to spot an AI imposter on screen: current models struggle to render a face in three dimensions, so if the person on the call cannot turn their head left or right without glitching, the image is likely AI-generated. More sophisticated methods can also be used, such as company-secured code words and QR codes. More reliably, companies can adopt a more deliberate division of labor, so that if an attack were to happen, it would in theory compromise only a portion of the organization and allow for a faster recovery. Multi-factor authentication is also brought in as an old but reliable tool for verifying identity.
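Since the article mentions multi-factor authentication only at a high level, here is a minimal sketch of what one common form, a time-based one-time-password (TOTP) check, might look like in practice. It assumes the pyotp library, and the account and issuer names are purely hypothetical:

```python
# Minimal TOTP sketch: one common form of multi-factor authentication.
# Requires the pyotp library (pip install pyotp); the enrollment flow
# shown here is simplified for illustration.
import pyotp

# At enrollment: generate a shared secret for the user, typically
# delivered as a QR code scanned into an authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the user's authenticator app:")
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))

# At login: after the password check, require the current 6-digit code.
submitted_code = input("Enter the code from your authenticator app: ")
if totp.verify(submitted_code):
    print("Second factor verified.")
else:
    print("Verification failed: code is wrong or expired.")
```

The relevant point for deepfakes is that the code proves possession of an enrolled device rather than relying on how someone looks or sounds, which is exactly what generated video and audio can fake.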
Data breaches are not limited to large companies; malicious actors also prey upon vulnerable populations, such as the elderly, to access sensitive personal information. Sharing this kind of guidance, and explaining where the technology is advancing so that people can recognize the changes, is critical to diminishing the power AI can have over our communications.