Machine learning has made tremendous progress during the past decade. While machine learning continues to improve our daily lives, recent research shows that machine learning models are vulnerable to various privacy attacks. In this talk, I'll cover our two recent works on quantifying the privacy risks of machine learning models. First, I will talk about recent developments in membership inference, including membership inference with only labels and attacks against machine unlearning. Second, I will present our work on the first link stealing attacks against graph neural networks.
Yang is a faculty member at the CISPA Helmholtz Center for Information Security, Germany. Previously, he was a group leader at CISPA. He obtained his Ph.D. degree from the University of Luxembourg in November 2016. Yang's research interests lie at the intersection of privacy and machine learning. Over the years, he has published multiple papers at top venues in computer science, including WWW, CCS, NDSS, and USENIX Security. His work received the NDSS 2019 Distinguished Paper Award. Yang has served on the technical program committees of USENIX Security 2021 and 2022, ACM CCS 2019, 2020, and 2021, WWW 2020 and 2021, AAAI 2021 and 2022, RAID 2020, ICWSM 2020, and PETS 2020, 2021, and 2022.