The article discusses different methods of AI poisoning and manipulating outputs. One way AI might be corrupted is by including malicious software in the training data. Since the output of AI models is often used to implement other systems, injecting malicious code into the training set might help corrupt future systems. For example, by including fake websites in a training dataset, hackers might perform fishing attacks by manipulating AI to redirect users to the fake websites. RAG models that combine external sources with LMM are especially vulnerable to data poisoning attacks since the LMM model has little information regarding the trustworthiness of the external datasets. Another method to attack AI would be by making manipulative requests, trying to get AI to disclose sensitive data it was trained on. In general, attacks on AI pose significant challenges to cybersecurity, since attackers have a large motivation and capabilities to perform those attacks.