Ensuring the privacy of data used to train machine learning models is important for the safe and responsible deployment of these models. At the same time, models are expected to generalize across different data distributions for wide adoption in practice. Balancing this privacy-utility trade-off has been a key challenge in designing privacy-preserving ML solutions.
In this talk, I will focus on well-known privacy attacks such as membership inference and present how causal learning techniques can play an important role in enhancing the privacy guarantees of ML models. We will see that models learnt using causal structure generalize better than associational models while providing stronger privacy guarantees, both theoretically and empirically. Lastly, the talk will touch upon techniques for leveraging the benefits of causal learning in the domain generalization setting and examine their privacy implications.
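To make the connection concrete, here is a minimal sketch (not from the talk) of the classic loss-threshold membership inference attack: an adversary guesses that a sample was in the training set when the model's loss on it is low. The function name and toy loss values below are illustrative assumptions, not the speaker's implementation.

```python
import numpy as np

def membership_inference_loss_attack(losses, threshold):
    """Predict 'member' (1) when a sample's loss is below the threshold.

    Training points typically incur lower loss than held-out points,
    which is the gap this attack exploits.
    """
    return (np.asarray(losses) < threshold).astype(int)

# Toy illustration (hypothetical numbers): members tend to have small
# losses, non-members larger ones.
member_losses = [0.05, 0.10, 0.20]     # losses on training-set samples
nonmember_losses = [0.90, 1.20, 0.70]  # losses on held-out samples
preds = membership_inference_loss_attack(
    member_losses + nonmember_losses, threshold=0.5
)
# A model that generalizes well -- the claimed benefit of learning with
# causal structure -- shrinks this loss gap, degrading the attack's accuracy.
```

The attack's success rate is bounded by the model's generalization gap, which is why better out-of-distribution generalization translates into stronger membership privacy.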
Please email for a
Shruti is a senior researcher in the Confidential Computing group at Microsoft Research Cambridge. Her research broadly covers security and privacy in cloud computing, machine learning algorithms, blockchains, and trusted hardware techniques. Her current work aims at quantifying information leakage and evaluating the robustness of state-of-the-art deep neural networks, with a focus on topics such as collaborative learning, differential privacy, causal learning, and natural language models. Shruti's work has appeared in top-tier security and ML conferences such as CCS, NDSS, Usenix Security, and ICML, and she has served on the program committees of CCS'21, Usenix Security'22, and AISTATS'22. Prior to joining Microsoft, she received her Ph.D. from the School of Computing at the National University of Singapore, where she received the Dean's Graduate Research Excellence award for her thesis work.