Seminars

PPFL: Privacy-preserving Federated Learning with Trusted Execution Environments
Fan Mo, Imperial College London
Abstract
Although user data are not collected at a centralized location in federated learning (FL), adversaries can still execute various privacy attacks to retrieve sensitive information from the FL model parameters themselves, breaking the initial privacy promises behind FL. PPFL, a practical, privacy-preserving federated learning framework, is proposed to protect clients' private information against known privacy-related attacks. PPFL adopts greedy layer-wise FL training and always updates layers inside Trusted Execution Environments (TEEs) at both the server and the clients. We implemented PPFL with a mobile-like TEE (i.e., ARM TrustZone) and a server-like TEE (i.e., Intel SGX) and empirically tested its performance. For the first time, we showed that it is possible to fully guarantee privacy while achieving ML model utility comparable to regular end-to-end FL, without significant communication or system overhead.
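
For readers unfamiliar with greedy layer-wise federated training, the sketch below illustrates the overall control flow under simplified assumptions: each layer is trained across a few federated rounds and then frozen before the next layer begins. All names (train_layer_locally, federated_average, NUM_CLIENTS, etc.) are illustrative, and the TEE-specific parts (attestation, secure channels, in-enclave execution) are abstracted away entirely; this is not the PPFL implementation itself.

# Minimal sketch of greedy layer-wise federated averaging (illustrative only).
import numpy as np

NUM_CLIENTS = 4                            # hypothetical number of clients
LAYER_SHAPES = [(16, 8), (8, 4), (4, 2)]   # toy per-layer parameter shapes
ROUNDS_PER_LAYER = 3                       # FL rounds spent on each layer

def train_layer_locally(global_layer, rng):
    """Stand-in for a client's local update of the current layer (would run
    inside the client's TEE). Here we just perturb the global parameters to
    mimic a local training step."""
    return global_layer + 0.01 * rng.standard_normal(global_layer.shape)

def federated_average(client_layers):
    """Server-side aggregation of one layer's parameters (would run inside
    the server's TEE, e.g. an SGX enclave)."""
    return np.mean(client_layers, axis=0)

def greedy_layerwise_fl(seed=0):
    rng = np.random.default_rng(seed)
    frozen_layers = []                              # layers trained and fixed
    for shape in LAYER_SHAPES:                      # one layer at a time
        layer = 0.1 * rng.standard_normal(shape)    # server initialises layer
        for _ in range(ROUNDS_PER_LAYER):
            updates = [train_layer_locally(layer, rng)
                       for _ in range(NUM_CLIENTS)]
            layer = federated_average(updates)
        frozen_layers.append(layer)                 # freeze before next layer
    return frozen_layers

if __name__ == "__main__":
    model = greedy_layerwise_fl()
    print([layer.shape for layer in model])

The key design point this sketch tries to convey is that only the layer currently being trained is exposed outside the frozen stack, which is what makes it feasible to keep the sensitive updates confined to TEEs with limited memory.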
Date & Time
Thursday, February 9, 2023 - 14:00
Location
Imperial College London