About Me

Privacy-preserving ML • AI privacy • Fairness & Compliance

I’m Nitin Agrawal, a Privacy Engineer at Snap, working at the intersection of privacy, trustworthiness, safety, and fairness in AI systems.

My work focuses on building robust privacy validation and risk-measurement frameworks that act as an internal line of defense for machine-learning systems. This includes identifying vulnerabilities, auditing models and data pipelines, and designing proactive mitigations that reduce risk before it reaches users.

I have also worked closely with privacy-enhancing technologies such as cryptographic techniques and federated learning, enabling high-utility machine learning that remains compliant and preserves user trust. More broadly, I contribute to governance and assurance mechanisms that help ensure AI systems are not only effective, but also responsible and aligned with long-term regulatory, societal, and business goals.

Across everything I do, the aim is simple: move AI forward responsibly, with privacy, safety, and fairness built in by design.

Santa Monica, CA

Privacy Engineer @ Snap

AI Privacy & Fairness · Privacy Validation

PhD (DPhil) · University of Oxford

Current focus

  • Privacy risk measurement & auditing for ML systems
  • Privacy-preserving learning & inference (crypto)
  • Fairness, equitability & compliance-by-design
  • Human factors in security & privacy

Education

Academic background in privacy-preserving machine learning and systems.


University of Oxford, DPhil (PhD) in Computer Science

Oxford, UK · 2017–2021

Thesis: Towards Effective, Efficient and Equitable Privacy-Preserving Machine Learning

Supervisor: Prof Sir Nigel Shadbolt


University of Oxford, MS in Computer Science

Oxford, UK · 2016–2017

Thesis: Synthesis of Realistic Quantified-Self Data Using Deep Neural Networks


University of Delhi, B.Tech

Delhi, India · 2012–2016

Major: Information Technology & Mathematical Innovations

Work


Snap Inc., Privacy Engineer

Santa Monica, CA · 2023–present

Working on privacy validation as an internal line of defense, detecting vulnerabilities and designing proactive privacy mitigations. In parallel, contributing to privacy, safety, fairness, and trustworthy-AI efforts across ML systems, including privacy risk measurement and auditing.


Amazon, Applied Scientist II (Privacy & Machine Learning)

Seattle, WA · 2022–2023

Audited and quantified privacy leakage in ML systems and designed privacy-preserving ML solutions across audio, vision, and language models.


University of Oxford, Research Associate

Oxford, UK · 2021–2022

Worked on decentralized privacy-preserving computation using secure multi-party computation and federated learning.

Research Internships

Callsign, Consultant · AI Privacy & Authentication

London, UK · 2021


Amazon (Alexa AI), Privacy-Preserving On-Device AI

Seattle, WA · 2020


Microsoft Research, Privacy-Preserving AI

Bangalore, India · 2019

The Alan Turing Institute, Privacy-Preserving ML

London, UK · 2018

Publications

A few recent and representative papers. Full list on Google Scholar.

  1. PACM-HCI, 2025

    Libertas: Privacy-Preserving Computation for Decentralised Personal Data Stores

    Zhao, R.; Goel, N.; Agrawal, N.; Zhao, J.; Stein, J.; Verborgh, R.; Binns, R.; Berners-Lee, T.; Shadbolt, N.

  2. ICML Workshop (Foundation Models in the Wild), 2024

    An Auditing Test to Detect Behavioral Shift in Language Models

    Richter, L.; Agrawal, N.; He, X.; Minervini, P.; Kusner, M.

  3. CHI, 2021 · Honorable Mention

    Exploring Design and Governance Challenges in the Construction of Privacy-Preserving Computation

    Agrawal, N.; Binns, R.; Van Kleek, M.; Laine, K.; Shadbolt, N.

  4. CCS, 2021

    MPC-Friendly Commitments for Publicly Verifiable Covert Security

    Agrawal, N.; Bell, J.; Gascon, A.; Kusner, M.

  5. CCS, 2019

    QUOTIENT: Two-Party Secure Neural Network Training & Prediction

    Agrawal, N.; Shamsabadi, A.; Kusner, M.; Gascon, A.

Events & Presentations

Chronological list of invited talks, panels, and conference presentations.

  1. June 2025

    USENIX PEPR 2025, Talk

    Machine learning risk quantification for privacy and safety validation.

  2. February 2024

    Zoho Corporation, Invited External SME Talk & Panel

    Invited speaker on AI and privacy, covering privacy validation, trustworthy AI, and deployment challenges in large-scale systems.

  3. 2023

    IAPP Privacy. Security. Risk. (PSR) 2023, Panelist

    Panelist on privacy workforce development, co-hosted with NIST.

  4. 2023

    IAPP Privacy. Security. Risk. (PSR) 2023, Panelist

    Privacy Design Patterns for AI Systems: Threats and Protections.

Honors & awards

  • Enhanced Industrial Funding for DPhil at the University of Oxford (£135,000), 2017–2021
  • EPSRC International Doctoral Studentship + Departmental Premium Scholarship (Cambridge Computer Laboratory), 2017
  • Commonwealth Scholarship for MS at Oxford (£50,000), 2016–2017
  • Mitacs Globalink Research Internship Award, 2015

Peer reviewing

Selected venues:

IEEE S&P · IEEE TPDS · IEEE TDSC · Pattern Recognition Letters · IJSA · SOUPS · NeurIPS

Contact

If you’d like to collaborate (industry ↔ academia), chat privacy engineering, or discuss PETs and auditing, reach out.

© Nitin Agrawal · Built as a lightweight, static site.