This chapter highlights that machine learning (ML) AI struggles to produce accurate, trustworthy outputs when given novel inputs dissimilar to its training data. In some cases novelty detection is insufficient, and users trust that they are receiving accurate information when the ML system is in fact operating far outside its parameters. Lin points out that individuals have been shown to trust AI and to follow AI-given directions, based on data from controlled experiments. As AI becomes more prominent in daily life, it will “fade into the background,” and humans will inevitably come to trust the technology.

The chapter then discusses how AI could lead to or facilitate various forms of escalation: deliberate, inadvertent, accidental, and catalytic. AI can give one state a perceived technological advantage over another sufficient to invite deliberate escalation, supply misinformation that prompts an unexpected escalatory action or the accidental use of force, and generate deepfakes that a third-party adversary can exploit to catalyze conflict.

Finally, because AI can be a black box, Lin encourages readers to remain skeptical of a system they cannot fully understand and that may operate outside its intended parameters.
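To make the novelty-detection point concrete, the following is a minimal sketch, not drawn from the chapter itself: the synthetic data, the logistic-regression classifier, and the use of scikit-learn's LocalOutlierFactor as a novelty detector are all illustrative assumptions. It shows how a trained model can remain highly confident on an input far outside its training distribution, and how a separate novelty check can flag that input as untrustworthy.

```python
# Illustrative sketch (assumes numpy and scikit-learn are installed).
# A classifier trained on one region of input space still produces
# confident predictions on inputs far outside that region; only a
# separate novelty check flags them as out-of-distribution.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)

# Training data: two well-separated clusters near the origin.
X_train = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=2.0, scale=0.5, size=(100, 2)),
])
y_train = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X_train, y_train)

# Novelty detector fit on the same training inputs; novelty=True lets
# LocalOutlierFactor score previously unseen points.
detector = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(X_train)

# One familiar point and one point far outside the training distribution.
X_test = np.array([
    [2.1, 1.9],      # in-distribution
    [40.0, -35.0],   # far outside anything seen in training
])

confidence = clf.predict_proba(X_test).max(axis=1)
flags = detector.predict(X_test)  # +1 = inlier, -1 = novel

for x, p, f in zip(X_test, confidence, flags):
    status = "NOVEL: prediction untrustworthy" if f == -1 else "in-distribution"
    print(f"input={x}, model confidence={p:.2f}, {status}")
```

Without the separate detector, the far-out input would be accepted at near-total confidence, which is precisely the failure mode the chapter warns about: the model's confidence alone says nothing about whether the input resembles anything it was trained on.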