Hardly a day goes by without news of machine learning (ML) success stories across
different application domains. Systems security is no exception, where ML's
tantalizing results leave one to wonder whether any unsolved problems remain.
However, machine learning has no clairvoyant abilities, and once the magic
wears off, we're left in uncharted territory.
My research vision focuses on understanding and improving the effectiveness
of machine learning methods for systems security in the presence of adversaries.
One of the core challenges is the representation of problem-space
objects (e.g., program binaries) in a numerical feature space: the semantic
gap makes it harder to reason about attacks and defences and often leaves room
for adversarial manipulation. Inevitably, the effectiveness of machine learning
methods for systems security is intertwined with the underlying abstractions
(e.g., program analyses) used to represent the objects. In this context, is
robust machine learning possible?
In this talk, I will first illustrate these challenges in the context of
adversarial ML evasion attacks against malware classifiers. The classic
formulation of evasion attacks is ill-suited for reasoning about how to generate
realizable evasive malware in the problem space. I'll take a deep dive into
our recent work, which reformulates the problem theoretically and
enables more principled attack designs. With this framework, we propose and
implement an end-to-end attack that generates real-world adversarial malware,
at scale, evading both vanilla and hardened classifiers.
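To make the contrast concrete, here is a rough sketch in illustrative notation (not the exact formalism of the work discussed in the talk): the classic formulation perturbs a feature vector directly, whereas a problem-space attack must transform the program itself under realizability constraints.

% Classic feature-space evasion (illustrative): find a minimal perturbation
% of the feature vector x that flips classifier f's decision.
\[
  \min_{\delta} \ \|\delta\|
  \quad \text{s.t.} \quad f(x + \delta) \neq f(x)
\]

% Problem-space evasion (illustrative): apply transformations T to the
% program z itself, subject to constraints \Omega (e.g., preserved malicious
% semantics, plausibility, robustness to preprocessing), where \varphi is
% the non-invertible feature mapping.
\[
  \text{find } T \quad \text{s.t.} \quad
  f\big(\varphi(T(z))\big) \neq f\big(\varphi(z)\big)
  \quad \text{and} \quad T(z) \text{ satisfies } \Omega
\]

Roughly speaking, because the feature mapping is neither invertible nor differentiable with respect to the program, a feature-space perturbation does not directly translate into a working binary; bridging that gap is what motivates the reformulation.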
Next, we'll broaden the conversation to include not just robustness against
specialized attacks, but also drifting scenarios, in which threats evolve and
change over time. Our work suggests that adversarial ML evasion attacks are
intrinsically linked with concept drift; we will discuss how drift affects
the performance of malware classifiers and what role the underlying
feature-space abstraction plays in the whole process.
Ultimately, these threats would not exist if the abstraction could capture the
'Platonic ideal' of the behaviour of interest (e.g., maliciousness); however, such a
solution is still out of reach. I'll conclude by outlining our current research
efforts to make this goal a reality, including robust feature development,
assessing vulnerability to universal perturbations, and forecasting future
drift, which together illustrate what robust machine learning for systems security
may eventually look like.
Please email for a Zoom link.
Lorenzo grew up on pizza, spaghetti, and Phrack, first. Underground and academic
research interests followed shortly thereafter. He is a Full Professor of
Computer Science in the Department of Informatics at King's College London,
where he holds the Chair in Cybersecurity (Systems Security). He leads the
Cybersecurity group's Systems Security Research Lab (https://s2lab.kcl.ac.uk),
which works at the intersection of program analysis and machine learning for
systems security. He speaks at, publishes at, and sits on the technical program
committees of top-tier and well-known international conferences, including IEEE
S&P, USENIX Security, ACM CCS, NDSS, USENIX Enigma, RAID, ACSAC, and DIMVA, as
well as emerging thematic workshops (e.g., Deep Learning and Security at IEEE
S&P and AISec at ACM CCS), and he received the USENIX WOOT Best Paper Award in
2017. Lorenzo is Program Co-Chair of Deep Learning and Security 2021, DIMVA
2021-22, and he was Program Co-Chair of ACM EuroSec 2019-20 and General Co-Chair
of ACM CCS 2019. He holds a PhD in Computer Science from the University of
Milan (2008), held Post-Doctoral and Visiting Scholar positions at Vrije
Universiteit Amsterdam (2010-2011), UC Santa Barbara (2008-2009), and Stony
Brook University (2006-2008), and worked in the Information Security Group at
Royal Holloway, University of London (Assistant Professor, 2012; Associate
Professor, 2016; Full Professor, 2018). He’s definitely never stopped wondering
and having fun throughout.