University of Oxford, DPhil (PhD) in Computer Science
Privacy-preserving ML • AI privacy • Fairness & Compliance
I’m Nitin Agrawal, a Privacy Engineer at Snap, working at the intersection of privacy, trustworthiness, safety, and fairness in AI systems.
My work focuses on building robust privacy validation and risk-measurement frameworks that act as an internal line of defense for machine-learning systems. This includes identifying vulnerabilities, auditing models and data pipelines, and designing proactive mitigations that reduce risk before it reaches users.
I also work closely with privacy-enhancing technologies, such as secure multi-party computation and federated learning, to enable high-utility machine learning that remains compliant and preserves user trust. More broadly, I contribute to governance and assurance mechanisms that help ensure AI systems are not only effective, but also responsible and aligned with long-term regulatory, societal, and business goals.
Across everything I do, the aim is simple: move AI forward responsibly, with privacy, safety, and fairness built in by design.
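By way of illustration (a generic sketch, not the internal framework described above), one standard way to quantify privacy leakage in a trained model is a loss-threshold membership-inference audit: because models tend to fit training records more closely, per-record loss can act as a membership signal, and the AUC of separating training ("member") losses from held-out ("non-member") losses summarizes the leakage.

```python
def membership_inference_auc(member_losses, nonmember_losses):
    """Score a simple loss-threshold membership-inference audit.

    Each input is a list of per-record losses from the audited model.
    Returns the pairwise AUC of distinguishing members from non-members:
    0.5 means no measurable leakage, 1.0 means records are fully
    distinguishable (strong memorization).
    """
    pairs = 0
    favorable = 0.0
    for m in member_losses:
        for n in nonmember_losses:
            pairs += 1
            if m < n:            # lower loss suggests a memorized record
                favorable += 1.0
            elif m == n:         # ties count half, per the usual AUC rule
                favorable += 0.5
    return favorable / pairs
```

An AUC close to 0.5 is the desired outcome of such an audit; values well above it flag a model or pipeline for mitigation before release.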
Academic background in privacy-preserving machine learning and systems.
University of Oxford, DPhil (PhD) in Computer Science
Oxford, UK · 2017–2021
Thesis: Towards Effective, Efficient and Equitable Privacy-Preserving Machine Learning
Supervisor: Prof Sir Nigel Shadbolt
Oxford, UK · 2016–2017
Thesis: Synthesis of Realistic Quantified-Self Data Using Deep Neural Networks
Delhi, India · 2012–2016
Major: Information Technology & Mathematical Innovations
Santa Monica, CA · 2023–present
Working on privacy validation as an internal line of defense, through vulnerability detection and proactive privacy mitigation. In parallel, contributing to privacy, safety, fairness, and trustworthiness efforts for ML systems, including privacy risk measurement and auditing.
Seattle, WA · 2022–2023
Audited and quantified privacy leakage in ML systems and designed privacy-preserving ML solutions across audio, vision, and language models.
Oxford, UK · 2021–2022
Worked on decentralized privacy-preserving computation using secure multi-party computation and federated learning.
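To make the secure multi-party computation building block concrete, here is a minimal sketch (illustrative only, not the project's actual protocol) of additive secret sharing, the primitive underlying much MPC: a secret is split into random shares that individually reveal nothing, yet parties can add shares locally to compute a sum without seeing each other's inputs.

```python
import secrets

P = 2**61 - 1  # a public prime modulus; all arithmetic is mod P

def share(x, n=3):
    """Split secret x into n additive shares that sum to x mod P.

    Any n-1 shares are uniformly random and reveal nothing about x.
    """
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine shares to recover the secret."""
    return sum(shares) % P

# Two secrets can be summed share-wise, so no single party ever
# sees either input in the clear:
a_shares = share(20)
b_shares = share(22)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
# reconstruct(sum_shares) == 42
```

Federated learning pairs naturally with this idea: clients secret-share model updates so the server only ever reconstructs the aggregate.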
London, UK · 2021
Seattle, WA · 2020
Bangalore, India · 2019
London, UK · 2018
A few recent and representative papers. Full list on Google Scholar.
PACM-HCI, 2025
Libertas: Privacy-Preserving Computation for Decentralised Personal Data Stores
Zhao, R.; Goel, N.; Agrawal, N.; Zhao, J.; Stein, J.; Verborgh, R.; Binns, R.; Berners-Lee, T.; Shadbolt, N.
ICML Workshop (Foundation Models in the Wild), 2024
An Auditing Test to Detect Behavioral Shift in Language Models
Richter, L.; Agrawal, N.; He, X.; Minervini, P.; Kusner, M.
CHI, 2021 · Honorable Mention
Exploring Design and Governance Challenges in the Construction of Privacy-Preserving Computation
Agrawal, N.; Binns, R.; Van Kleek, M.; Laine, K.; Shadbolt, N.
CCS, 2021
MPC-Friendly Commitments for Publicly Verifiable Covert Security
Agrawal, N.; Bell, J.; Gascon, A.; Kusner, M.
CCS, 2019
QUOTIENT: Two-Party Secure Neural Network Training & Prediction
Agrawal, N.; Shamsabadi, A.; Kusner, M.; Gascon, A.
Chronological list of invited talks, panels, and conference presentations.
June 2025
USENIX PEPR 2025, Talk
Machine learning risk quantification for privacy and safety validation.
February 2024
Zoho Corporation, Invited External SME Talk & Panel
Invited speaker on AI and privacy, covering privacy validation, trustworthy AI, and deployment challenges in large-scale systems.
2023
IAPP Privacy. Security. Risk. (PSR) 2023, Panelist
Panelist on privacy workforce development, co-hosted with NIST.
2023
IAPP Privacy. Security. Risk. (PSR) 2023, Panelist
Privacy Design Patterns for AI Systems: Threats and Protections.
If you’d like to collaborate (industry ↔ academia), chat about privacy engineering, or discuss PETs and auditing, reach out.
© Nitin Agrawal · Built as a lightweight, static site.