Hongyan Chang


I am Hongyan Chang (常红燕), a postdoctoral researcher at MBZUAI, working with Prof. Ting Yu. I obtained my Ph.D. in Computer Science from the National University of Singapore, under the supervision of Prof. Reza Shokri. I received my Bachelor’s degree in 2018 from the University of Electronic Science and Technology of China (UESTC), and was an exchange student at National Chiao Tung University in 2016.

My research interests are in trustworthy machine learning, with a focus on the privacy and accountability of large language models (LLMs):

  • Evaluating the risks of LLM systems: Understanding and quantifying vulnerabilities in modern language models and the systems built around them
  • Machine-generated text detection: Developing methods to identify AI-generated content while preserving utility

Publications

  • Watermark Smoothing Attacks against Language Models
    Hongyan Chang, Hamed Hassani, and Reza Shokri
    Empirical Methods in Natural Language Processing (EMNLP) Findings, 2025
    WMARK@International Conference on Learning Representations (ICLR), 2025

  • Context-Aware Membership Inference Attacks Against Pre-Trained Large Language Models
    Hongyan Chang, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, and Reza Shokri
    Empirical Methods in Natural Language Processing (EMNLP) Main, 2025
    [PDF] [Code] [Blogpost]

  • Efficient Privacy Auditing in Federated Learning
    Hongyan Chang, Brandon Edwards, Anindya S. Paul, and Reza Shokri
    USENIX Security Symposium (USENIX), 2024
    [PDF] [Code]

  • On The Impact of Machine Learning Randomness on Group Fairness
    Prakhar Ganesh, Hongyan Chang, Martin Strobel, and Reza Shokri
    ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023
    🏆 Best Paper Award
    [PDF] [Code]

  • Bias Propagation in Federated Learning
    Hongyan Chang and Reza Shokri
    International Conference on Learning Representations (ICLR), 2023
    [PDF] [Code]

  • Cronus: Robust and Heterogeneous Collaborative Learning with Black-box Knowledge Transfer
    Hongyan Chang*, Virat Shejwalkar*, Reza Shokri, and Amir Houmansadr
    NFFL@Neural Information Processing Systems (NeurIPS), 2021
    (*Equal contribution)
    [PDF]

  • On the Privacy Risks of Algorithmic Fairness
    Hongyan Chang and Reza Shokri
    IEEE European Symposium on Security and Privacy (EuroS&P), 2021
    [PDF] [Slides]

  • On Adversarial Bias and the Robustness of Fair Machine Learning
    Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri
    2020
    [PDF] [Code]


PC Services

  • 2026: VLDB (External Reviewer), ICLR (Reviewer), NDSS (AE Reviewer)
  • 2025: ICLR (Reviewer), NeurIPS (Reviewer), NDSS (AE Reviewer), USENIX Security (AE Reviewer)
  • 2024: IEEE Security & Privacy (Reviewer, Certificate)
  • 2023: ACM FAccT (Reviewer)
  • 2022: ACM FAccT (Reviewer)

Selected Awards

  • Best Paper Award, ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023

Open-source Projects

  • Privacy Meter @ NUS: A system for evaluating the privacy risks of machine learning models. GitHub (500+ stars)
    My role: led the development team and spearheaded the initial 1.0.1 release.

  • OpenFL @ Intel: Auditing privacy risks in real time. GitHub. Blogpost.
    My role: led the integration of Privacy Meter into OpenFL.


Teaching Experience

  • Teaching Assistant for Trustworthy Machine Learning (Summer 2021, 2022, 2023)
  • Teaching Assistant for Computer Security (Spring and Summer 2020)
  • Teaching Assistant for Introduction to Artificial Intelligence (Spring 2019)

Talks

Watermarking in Large Language Models

  • WMARK@ICLR, 2025 (poster)

Privacy in Federated Learning

  • USENIX, 2024

Fairness in Federated Learning

  • FL@FM-Singapore, 2024
  • Brave, 2024
  • ICLR, 2024 (poster)
  • N-CRiPT, 2023

Trade-offs between Privacy and Fairness

  • Private AI Collaborative Research Institute (Intel), 2022
  • CyberSec&AI, 2021
  • PrivacyCon 2021, hosted by the US Federal Trade Commission (FTC)
