About Me

I am a Postdoctoral Scholar in the Computer Science Department at Stanford University, where I have the privilege of being advised by Prof. Sanmi Koyejo, in the Stanford Trustworthy AI Research (STAIR) Lab.

Research Interests

I strive to advance trustworthy and responsible AI. In particular, I conduct research on causal learning and reasoning to enhance the capabilities of intelligent systems, and on algorithmic fairness and computational justice to model and understand the social impact of computational technologies. My ultimate goal is to cultivate intelligence that is both safe and principled with the help of causal perspectives and methodologies, so that technology can improve our lives with transparent responsibility and clear purpose. I seek to foster a symbiotic dance between artificial and natural intelligence, where they inspire, collaborate with, and enhance each other to drive scientific discovery and support societal progress.

News

January 2026 We are organizing the Algorithmic Fairness Across Alignment Procedures and Agentic Systems (AFAA) Workshop, which will take place at ICLR 2026, April 26, 2026, in Rio de Janeiro, Brazil!
May 2025 Our paper “Reflection-Window Decoding: Text Generation with Selective Refinement” is accepted to ICML 2025. We propose a selective refinement framework, facilitated by a sliding reflection window, to address the sub-optimality of the purely autoregressive approach to LLM decoding.
January 2025 Our paper “Prompting Fairness: Integrating Causality to Debias Large Language Models” is accepted to ICLR 2025. We propose a causality-guided LLM debiasing framework, utilizing selection mechanisms to design various debiasing strategies.

Selected Publications

* denotes equal contribution

  1. arXiv Preprint
    Position: Beyond Sensitive Attributes, ML Fairness Should Quantify Structural Injustice via Social Determinants
    arXiv preprint arXiv:2508.08337, 2025.
  2. Reflection-Window Decoding: Text Generation with Selective Refinement
    In Forty-Second International Conference on Machine Learning, 2025.
  3. Prompting Fairness: Integrating Causality to Debias Large Language Models
    In Thirteenth International Conference on Learning Representations (preliminary version titled "Steering LLMs Towards Unbiased Responses: A Causality-Guided Debiasing Framework"), 2025.
  4. ICLR Spotlight
    Procedural Fairness Through Decoupling Objectionable Data Generating Components
    Zeyu Tang, Jialu Wang, Yang Liu, Peter Spirtes, and Kun Zhang
    In Twelfth International Conference on Learning Representations (preliminary version presented in NeurIPS 2023 AFT workshop), 2024.
  5. What-is and How-to for Fairness in Machine Learning: A Survey, Reflection, and Perspective
    Zeyu Tang, Jiji Zhang, and Kun Zhang
    ACM Computing Surveys, 2023.
  6. Tier Balancing: Towards Dynamic Fairness over Underlying Causal Factors
    Zeyu Tang, Yatong Chen, Yang Liu, and Kun Zhang
    In Eleventh International Conference on Learning Representations (preliminary version presented in NeurIPS 2022 AFCP workshop), 2023.
  7. CLeaR Spotlight
    Attainability and Optimality: The Equalized Odds Fairness Revisited
    Zeyu Tang and Kun Zhang
    In First Conference on Causal Learning and Reasoning, 2022.