Federated Learning, a process by which neural networks can be trained on data from many sources without centralizing that data, is often assumed to be privacy-preserving. This post, published on NIST's official website, discusses circumstances in which confidential training data can be reconstructed by examining how the trained model's weights change over time. The linked page offers a brief discussion of current techniques and a briefer section on best practices for federated learning. The post's value lies in its succinct synthesis of published research on the topic and its links to those papers.
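To make the risk concrete, here is a minimal sketch of one published attack class: analytic gradient inversion against a single fully-connected layer. It assumes a simplified setting (one client, one sample, one local SGD step, a linear layer with softmax cross-entropy) and is illustrative only; it is not necessarily the specific scenario the NIST post analyzes. The key fact is that for such a layer the weight gradient is an outer product of the output-error vector and the input, so any observed weight update reveals the private input up to a per-row scalar that the bias gradient supplies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single linear layer: logits = W @ x + b
d_in, d_out = 8, 3
W = rng.normal(size=(d_out, d_in))
b = rng.normal(size=d_out)

# A client's private input and label (assumed values, for illustration)
x = rng.normal(size=d_in)
label = 1

# Forward pass with softmax cross-entropy
logits = W @ x + b
p = np.exp(logits - logits.max())
p /= p.sum()
dlogits = p.copy()
dlogits[label] -= 1.0          # dL/dlogits for cross-entropy

# Gradients a server could infer from a single-step weight update
dW = np.outer(dlogits, x)      # dL/dW = dL/dlogits · xᵀ (outer product)
db = dlogits                   # dL/db = dL/dlogits

# Reconstruction: any row of dW divided by the matching entry of db
# recovers the private input x exactly.
x_rec = dW[0] / db[0]
print(np.allclose(x_rec, x))
```

Real attacks on deep, batched models require iterative optimization rather than this closed-form trick, but the sketch shows why raw weight updates cannot be treated as inherently private.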