“Poison Ivy” is a research project dedicated to methods for preventing and detecting backdoors in AI applications. Learning-based systems are driven by large amounts of data and are therefore prone to attacks that stealthily manipulate their training data. We develop approaches to secure learning-based systems in practice, to monitor access so that attacks are detected early, and to inspect learned models for manipulations in order to prevent backdoors. The project is funded by the “Ministerium für Wirtschaft, Arbeit und Wohnungsbau Baden-Württemberg” within the scope of the AI innovation contest of Baden-Württemberg, Germany.
More details can be found at https://poison-ivy.de