The innovative use of AI technologies, particularly large language models (LLMs), to craft malware that circumvents conventional cybersecurity measures is emerging as a potential threat to international security. Through systematic experimentation, this paper demonstrates how AI can generate sophisticated malicious tools, such as keyloggers, screenshot-capturing malware, and trojans, that are highly effective at evading detection by contemporary security solutions. The study aims to develop new malware exclusively using ChatGPT, to evaluate its effectiveness and evasion capabilities, and to offer recommendations for security vendors, end users, and AI developers. The experiments reveal ChatGPT’s unexpected willingness to follow complex malicious instructions without triggering the anticipated ethical or safety constraints. These findings underscore the urgent need for stricter control mechanisms and ethical guidelines in AI development to prevent misuse, and they argue for enhanced detection and defense strategies against AI-generated threats in cybersecurity. The paper also acknowledges the ongoing arms race between defenders and attackers who leverage AI for malicious purposes.