“Poison Ivy” is dedicated to researching methods for preventing and detecting backdoors in AI applications. Learning-based systems are driven by large amounts of data and are therefore vulnerable to attacks that stealthily manipulate their training data. We develop approaches that secure learning-based systems in practice, monitor data access to detect attacks early, and support the inspection of learned models for manipulations in order to prevent backdoors.
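To illustrate the kind of stealthy training-data manipulation at stake, the following minimal sketch stamps a small trigger pattern onto a tiny fraction of a toy image dataset and relabels those samples with an attacker-chosen class. The function name, the 3×3 corner trigger, and the poisoning rate are our own illustrative choices, not a specific published attack or a method of this project.

```python
import numpy as np

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Illustrative backdoor poisoning: stamp a 3x3 white patch into the
    corner of a small fraction of the images and flip their labels to
    the attacker's target class. Returns poisoned copies plus the
    indices that were manipulated."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n_poison = max(1, int(rate * len(images)))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger: bright 3x3 patch, bottom-right
    labels[idx] = target_label    # label flip to the target class
    return images, labels, idx

# Toy data: 100 grayscale 8x8 "images" across 10 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 10
Xp, yp, poisoned = poison_dataset(X, y, target_label=7, rate=0.05)
```

A model trained on such data behaves normally on clean inputs but misclassifies any input carrying the trigger, which is why the manipulation is hard to spot by accuracy metrics alone.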