Professor Lorenzo Cavallaro

Professor of Computer Science at UCL

Lorenzo grew up on pizza, spaghetti, and Phrack, first. Underground and academic research interests followed shortly thereafter. He is a Full Professor of Computer Science at UCL, where he leads the Systems Security Research Lab within the Information Security Research Group.

He speaks, publishes at, and sits on the technical program committees of top-tier and well-known international conferences, including IEEE S&P, USENIX Security, ACM CCS, NDSS, USENIX Enigma, RAID, ACSAC, and DIMVA, as well as emerging thematic workshops (e.g., Deep Learning for Security at IEEE S&P and AISec at ACM CCS), and he received the USENIX WOOT Best Paper Award in 2017. Lorenzo is Program Co-Chair of Deep Learning and Security 2021 and DIMVA 2021-22, and he was Program Co-Chair of ACM EuroSec 2019-20 and General Co-Chair of ACM CCS 2019.

He holds a PhD in Computer Science from the University of Milan (2008), held Post-Doctoral and Visiting Scholar positions at Vrije Universiteit Amsterdam (2010-2011), UC Santa Barbara (2008-2009), and Stony Brook University (2006-2008), and worked in the Department of Informatics at King's College London (2018-2021), where he held the Chair in Cybersecurity (Systems Security), and in the Information Security Group at Royal Holloway, University of London (Assistant Professor, 2012; Associate Professor, 2016; Full Professor, 2018). He's definitely never stopped wondering and having fun throughout.





Trustworthy Machine Learning... for Systems Security

Keynote Talk

No day goes by without reading machine learning (ML) success stories across different application domains. Systems security is no exception, where ML's tantalizing results leave one to wonder whether there are any unsolved problems left. However, machine learning has no clairvoyant abilities, and once the magic wears off we're left in uncharted territory. We, as a community, need to understand and improve the effectiveness of machine learning methods for systems security in the presence of adversaries. One of the core challenges relates to the representation of problem-space objects (e.g., program binaries) in a numerical feature space: the semantic gap makes it harder to reason about attacks and defences and often leaves room for adversarial manipulation. Inevitably, the effectiveness of machine learning methods for systems security is intertwined with the underlying abstractions (e.g., program analyses) used to represent the objects. In this context, is trustworthy machine learning possible?

In this talk, I will first illustrate the challenges in the context of adversarial ML evasion attacks against malware classifiers. The classic formulation of evasion attacks is ill-suited for reasoning about how to generate realizable evasive malware in the problem space. I'll provide a deep dive into recent work that offers a theoretical reformulation of the problem and enables more principled attack designs. The implications are interesting, as the framework facilitates reasoning about end-to-end attacks that can generate real-world adversarial malware, at scale, evading both vanilla and hardened classifiers and thus calling for novel defences.

Next, I'll broaden the conversation to include not just robustness against specialized attacks, but also drifting scenarios, in which threats evolve and change over time. Prior work suggests adversarial ML evasion attacks are intrinsically linked with concept drift, and we will discuss how drift affects the performance of malware classifiers, hinting at the role the underlying feature-space abstraction plays in the whole process.

Ultimately, these threats would not exist if the abstraction could capture the 'Platonic ideal' of interesting behaviour (e.g., maliciousness); however, such a solution is still out of reach. I'll conclude by outlining current research efforts to make this goal a reality, including robust feature development, assessing vulnerability to universal perturbations, and forecasting future drift, which illustrate what trustworthy machine learning for systems security may eventually look like.
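To make the gap between the classic feature-space formulation and problem-space realizability concrete, below is a minimal, self-contained Python sketch. It is not the attack framework discussed in the talk: the synthetic data, the linear classifier, and the greedy feature-flipping heuristic are illustrative assumptions. It "evades" a toy malware classifier by flipping binary features directly, and nothing in it guarantees that the perturbed feature vector corresponds to a real, functional program.

# Minimal sketch of a classic feature-space evasion attack against a toy
# linear malware classifier over binary features (e.g., API-call presence).
# Purely illustrative: it ignores problem-space constraints such as keeping
# the underlying program valid and functional.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic dataset: 1,000 samples, 20 binary features; label 1 = "malicious".
n, d = 1000, 20
X = rng.integers(0, 2, size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

def feature_space_evasion(x, clf, budget=5):
    # Greedily flip the binary features that most reduce the malicious score.
    # Operates only in feature space: no guarantee the result maps back to a
    # realizable program.
    x = x.copy()
    w = clf.coef_[0]
    for _ in range(budget):
        if clf.predict([x])[0] == 0:      # already classified as benign
            break
        gains = (1 - 2 * x) * w           # score change from flipping each feature
        j = int(np.argmin(gains))         # flip with the largest score reduction
        if gains[j] >= 0:
            break                         # no flip reduces the score further
        x[j] = 1 - x[j]
    return x

x0 = X[y == 1][0]                         # a "malicious" sample
x_adv = feature_space_evasion(x0, clf)
print("original prediction:", clf.predict([x0])[0],
      "-> perturbed prediction:", clf.predict([x_adv])[0])

A problem-space attack, by contrast, would search over feasible transformations of the program itself (which must preserve its functionality and plausibility) and only then observe the resulting feature vector, which is exactly what makes principled attack design, and defence, harder.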