Hongyan Chang

I am Hongyan Chang (常红燕), a postdoctoral researcher at MBZUAI, working with Prof. Ting Yu. I obtained my Ph.D. from the National University of Singapore, where I was advised by Prof. Reza Shokri.

My research interests are in trustworthy machine learning, with a focus on the privacy and accountability of large language models (LLMs):

  • Evaluating the risks of LLMs: Understanding and quantifying vulnerabilities in modern language models and their applications
  • Machine-generated text detection: Developing methods to identify AI-generated content while preserving utility

Publications

  • Watermark Smoothing Attacks against Language Models
    Hongyan Chang, Hamed Hassani, and Reza Shokri
    WMARK@International Conference on Learning Representations (ICLR), 2025

  • Context-Aware Membership Inference Attacks Against Pre-Trained Large Language Models
    Hongyan Chang, Ali Shahin Shamsabadi, Kleomenis Katevas, Hamed Haddadi, and Reza Shokri
    2024

  • Efficient Privacy Auditing in Federated Learning
    Hongyan Chang, Brandon Edwards, Anindya S. Paul, and Reza Shokri
    USENIX Security Symposium (USENIX), 2024
    [PDF] [Code]

  • On The Impact of Machine Learning Randomness on Group Fairness
    Prakhar Ganesh, Hongyan Chang, Martin Strobel, and Reza Shokri
    ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023
    🏆 Best Paper Award
    [PDF] [Code]

  • Bias Propagation in Federated Learning
    Hongyan Chang and Reza Shokri
    International Conference on Learning Representations (ICLR), 2023
    [PDF] [Code]

  • Cronus: Robust and Heterogeneous Collaborative Learning with Black-box Knowledge Transfer
    Hongyan Chang*, Virat Shejwalkar*, Reza Shokri, and Amir Houmansadr
    NFFL@Neural Information Processing Systems (NeurIPS), 2021
    (*Equal contribution)
    [PDF]

  • On the Privacy Risks of Algorithmic Fairness
    Hongyan Chang and Reza Shokri
IEEE European Symposium on Security and Privacy (EuroS&P), 2021
    [PDF] [Slides]

  • On Adversarial Bias and the Robustness of Fair Machine Learning
    Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri
    2020
    [PDF] [Code]


Service

  • External Reviewer for VLDB 2026
  • PC member for ICLR 2024 and NeurIPS 2025
  • Artifact Evaluation PC member for NDSS 2025, USENIX 2025, and NDSS 2026
  • Reviewer for IEEE Security & Privacy 2024
  • PC member for ACM FAccT 2022 and 2023

Selected Awards

  • Notable Reviewer, ICLR 2025
  • Research Achievement Award, NUS School of Computing (2023)
  • National University of Singapore Research Scholarship (2018–2022)
  • First Prize in the 10th National Information Security Competition (2017), for fake news detection on Weibo
  • National Scholarship (2015 & 2016, top 1%)

Open-source Projects

  • Privacy Meter @ NUS: A system for evaluating the privacy risks of machine learning models. GitHub (500+ stars)
    My role: led the development team and spearheaded the initial 1.0.1 release.

  • OpenFL @ Intel: Auditing privacy risks in federated learning in real time. GitHub. Blogpost.
    My role: led the integration of Privacy Meter into OpenFL.


Teaching Experience

  • Teaching Assistant for Trustworthy Machine Learning (Summer 2021, 2022, 2023)
  • Teaching Assistant for Computer Security (Spring and Summer 2020)
  • Teaching Assistant for Introduction to Artificial Intelligence (Spring 2019)

Talks

Watermarking in Large Language Models

  • WMARK@ICLR, 2025 (poster)

Privacy in Federated Learning

  • USENIX, 2024

Fairness in Federated Learning

  • FL@FM-Singapore, 2024
  • Brave, 2024
  • ICLR, 2024 (poster)
  • N-CRiPT, 2023

Trade-offs between Privacy and Fairness

  • Private AI Collaborative Research Institute (hosted by Intel), 2022
  • CyberSec&AI, 2021
  • PrivacyCon 2021, hosted by the US Federal Trade Commission (FTC)