Rivera et al. analyze the behavior of AI agents in multiple wargame simulations. The study focuses on the agents' propensity to take escalatory actions that can exacerbate a conflict; the authors use "escalatory" to mean any action that turns a non-violent situation into a violent one. The paper examines the simulation results both quantitatively and qualitatively, focusing specifically on large language models (LLMs). The authors find that LLMs tend to engage in arms races, occasionally deploy nuclear weapons, and prefer first-strike tactics. As a work in progress, the paper acknowledges that LLM behavior is a complex area requiring continued evaluation. Rivera et al. recommend further research on the subject and greater caution before integrating generative AI systems into military and diplomatic decision-making, especially given the high stakes of such decisions.