GitHub released a tool called “Copilot,” a generative AI model that purportedly helps software developers by auto-completing their code. However, research indicates that under many circumstances Copilot suggests insecure code. Copilot was trained on large amounts of previously written code of varying quality, and like other generative AI models, it uses what the user has written so far to predict what comes next. Researchers at New York University found that Copilot often suggested vulnerable code, and that if the developer was already writing error-ridden code, Copilot was more likely to make bad suggestions. The paper concludes that while Copilot may be helpful in some cases, software developers should take extra steps to ensure that Copilot-written code doesn’t introduce security flaws.
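
To give a sense of the kind of flaw the researchers flagged, here is a hypothetical Python sketch (an illustration of the vulnerability class, not an actual Copilot output): an auto-completed database query that splices user input directly into a SQL string is open to SQL injection, while the parameterized version treats the input strictly as data.

```python
import sqlite3

def get_user_insecure(db: sqlite3.Connection, username: str):
    # Vulnerable pattern: interpolating user input into the SQL string.
    # A username like "x' OR '1'='1" changes the meaning of the query.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return db.execute(query).fetchone()

def get_user_secure(db: sqlite3.Connection, username: str):
    # Safer pattern: a parameterized query keeps the input as data only.
    query = "SELECT id, email FROM users WHERE username = ?"
    return db.execute(query, (username,)).fetchone()
```

Both versions look plausible in an editor, which is part of the problem: a developer accepting suggestions quickly may not notice which of the two patterns the model has produced.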