Recent News

Jun 04, 2025 I returned to SecAppDev 2025 with two talks for practitioners: “Navigating the Security Landscape of Modern AI” and “The Engineer’s Guide to Data Privacy”. The first talk was also given at the Cyber Security Coalition’s Application Security Experience Sharing Day.
May 19, 2025 Our team published a pre-print of our research on adversarial purification: our method FlowPure, based on continuous normalizing flows, outperforms state-of-the-art purifiers. We will present a poster at IEEE EuroS&P 2025 in Venice!
May 14, 2025 I had the pleasure of giving a guest lecture on “Privacy Engineering Technologies” for the Data Application and Security course at the University of Liechtenstein.
Jan 22, 2025 I participated in the NDC Security 2025 conference in Oslo, where I gave a talk on “Navigating the Security and Privacy Landscape of Modern AI”.
Dec 20, 2024 Our 4th Workshop on Rethinking Malware Analysis (WoRMA) has been accepted to appear at IEEE EuroS&P 2025 in Venice, Italy! Co-chaired with Fabio Pierazzi and Simone Aonzo.
Sep 27, 2024 I gave a keynote at The Security and Trustworthiness of AI workshop in the Netherlands on “The Ambivalence of Deep Learning in Cybersecurity: Balancing Promises and Pitfalls”.
Sep 13, 2024 We at KU Leuven organized a successful 3rd edition of the Summer School on Security & Privacy in the age of AI.
Jul 01, 2024 Fabio Pierazzi, Savino Dambra, and I organized the 3rd Workshop on Rethinking Malware Analysis (WoRMA) co-located with IEEE EuroS&P 2024 in Vienna!
Jul 01, 2024 Together with Lieven Desmet, I presented an overview of “Cybersecurity & AI” at the COSIC course in Leuven.
Jun 24, 2024 I will co-organize the Dagstuhl Seminar on Security and Privacy of Large Language Models in November 2025 together with Pavel Laskov, Emil Lupu, Stephan Günnemann and Nicholas Carlini.
Jun 01, 2024 After 7 years, I returned to SecAppDev, this time as a speaker, to give a talk on “Vulnerabilities of Large Language Model Applications” to practitioners.
Mar 01, 2024 I had the pleasure to give a lecture on “Vulnerabilities of Large Language Models” to Master’s students at the University of Edinburgh.
Dec 20, 2023 Our team published a pre-print of our research on adversarial machine learning: our reinforcement learning-based approach enables an adaptive arms race between attacks and defenses against AI.
Jul 01, 2022 I presented our Trace Oddity paper on traffic correlation attacks on Tor at PETS in Sydney (pre-recorded presentation).