Xiang Wang

Professor · Trustworthy AI · LLMs · Information Retrieval

xiangwang@ustc.edu.cn
100+ Publications
35k+ Citations
60 H-Index

Biography

I am a Professor at the University of Science and Technology of China (USTC) and a Jointly-Appointed Researcher at Shanghai AI Laboratory. My research focuses on Large Language Models & Agents, LLM & Agentic Safety, and LLMs & Agents for Recommender Systems. Together with my students and collaborators, I build reliable, adaptable, and trustworthy AI systems, spanning alignment, knowledge editing, multi-agent reasoning, and next-generation recommenders.

Our work spans 100+ top-tier publications with over 35,000 Google Scholar citations (h-index = 60). Two papers are the most-cited and second-most-cited SIGIR papers of the past decade; three topped the SIGIR citation charts in 2019, 2020, and 2021 consecutively, and they have been adopted in courses at Stanford and incorporated into PyTorch Geometric and the Deep Graph Library. I serve as an Area Chair for NeurIPS, ICML, and ICLR, an Associate Editor for TPAMI and TOIS, and a (Senior) PC member for SIGIR, WWW, KDD, and ACL.

Research Interests

Large Language Models

Alignment (DPO/RLHF), reasoning, and knowledge editing — building reliable and adaptable language models.

AI Agents

Building intelligent agents with memory, multi-step reasoning, and collaborative multi-agent architectures.

LLMs & Agents for RecSys

Leveraging LLMs and agentic frameworks to build next-generation personalized recommendation systems.

LLM & Agentic Safety

Safety alignment, backdoor defense, and multimodal safety for trustworthy AI systems and agents.

Research Highlights

Selected highly influential works  ·  View full publication list →

ICLR '25
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models
Junfeng Fang, Houcheng Jiang, Kun Wang, Yunshan Ma, Shi Jie, Xiang Wang*, Xiangnan He, Tat-Seng Chua
Outstanding Paper Award · 1 of 3 selected from 3,704 accepted papers
SIGIR '24
LLaRA: Aligning Large Language Models with Sequential Recommenders
Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang*, Xiangnan He
Best Paper Finalist · Most Influential Paper at SIGIR 2024
SIGIR '19
Neural Graph Collaborative Filtering
Xiang Wang, Xiangnan He, Meng Wang, Fuli Feng, Tat-Seng Chua
Most Cited Paper in SIGIR 2019 · 1000+ Google Citations
SIGIR '20
LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation
Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang
Most Cited Paper in SIGIR 2020
SIGIR '21
Self-Supervised Graph Learning for Recommendation
Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, Xing Xie
Most Cited Paper in SIGIR 2021
SIGIR '20
Disentangled Graph Collaborative Filtering
Xiang Wang, Hongye Jin, An Zhang, Xiangnan He, Tong Xu, Tat-Seng Chua
Top-3 Most Cited Paper in SIGIR 2020
KDD '19
KGAT: Knowledge Graph Attention Network for Recommendation
Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, Tat-Seng Chua
Top-2 Most Cited Paper in KDD 2019
CVPR '22
Invariant Grounding for Video Question Answering
Yicong Li, Xiang Wang, Junbin Xiao, Wei Ji, Tat-Seng Chua
Oral Presentation · Best Paper Finalist

Recent News

Dec 2025
NeurIPS 8 papers accepted at NeurIPS 2025!
Think before Recommendation: Autonomous Reasoning-enhanced Recommender
3D-GSRD: 3D Molecular Graph Auto-Encoder with Selective Re-mask Decoding
Search and Refine During Think: Facilitating Knowledge Refinement for Improved Retrieval-Augmented Reasoning
On Efficiency-Effectiveness Trade-off of Diffusion-based Recommenders
RePO: Understanding Preference Learning Through ReLU-Based Optimization
On Reasoning Strength Planning in Large Reasoning Models
Towards Unified and Lossless Latent Space for 3D Molecular Latent Diffusion Modeling
Fading to Grow: Growing Preference Ratios via Preference Fading Discrete Diffusion for Recommendation
Jul 2025
ICML 7 papers accepted at ICML 2025, including one Spotlight!
Multi-agent Architecture Search via Agentic Supernet  [Spotlight]
AnyEdit: Edit Any Knowledge Encoded in Language Models
Reinforced Lifelong Editing for Language Models
AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization
DAMO: Data- and Model-aware Alignment of Multi-modal LLMs
Larger or Smaller Reward Margins to Select Preferences for Alignment?
NExT-Mol: 3D Diffusion Meets 1D Language Modeling for 3D Molecule Generation
Apr 2025
Award Honoured with multiple prestigious awards in 2025: Wu Wenjun AI Science & Technology Award (First Prize, Natural Science), MIT Technology Review AI 100 Young Innovators, and Yunfan Award at the World Artificial Intelligence Conference (WAIC).
Jan 2025
Outstanding 4 papers accepted at ICLR 2025! AlphaEdit receives the Outstanding Paper Award, selected as 1 of only 3 outstanding papers among 3,704 accepted papers.
AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models  [Outstanding Paper]
Language Representations Can be What Recommenders Need  [Oral]
Unified Parameter-Efficient Unlearning for LLMs
Towards Robust Alignment of Language Models: Distributionally Robustifying DPO

Honors & Awards

2025

ICLR Outstanding Paper Award

1 of only 3 papers selected among 3,704 accepted papers, ICLR 2025

2025

Wu Wenjun AI Science & Technology Award

First Prize, Natural Science Category

2025

MIT Technology Review — AI 100 Young Innovators

MIT Technology Review

2025

Yunfan Award

World Artificial Intelligence Conference (WAIC)

2024

SIGIR Early Career Researcher Award

ACM SIGIR

2024

MIT Technology Review — Innovators Under 35 China

MIT Technology Review

2024

Elsevier Most Cited Chinese Researcher (2023)

Elsevier

2023

Frontiers of Science Award

1st International Congress of Basic Science

2022–25

AI 2000 Most Influential Scholar in AI

Ranked 6th in "Information Retrieval and Recommendation"

2022–25

Stanford "World's Top 2% Scientists"

Lifetime Scientific Impact Ranking

2024

Best Paper Finalist

SIGIR 2024

2022

Best Paper Finalist

CVPR 2022

2022

Best Paper Finalist

WWW 2022

2020

Best Paper Finalist

IJCAI 2020

Background

2022 – Present

Professor

School of Artificial Intelligence and Data Science, University of Science and Technology of China

Jointly-Appointed Researcher, Shanghai Artificial Intelligence Laboratory

2019 – 2022

Research Fellow / Senior Research Fellow

NExT++, National University of Singapore

Supervisor: Prof. Tat-Seng Chua

2014 – 2019

Ph.D. in Computer Science

NExT++, National University of Singapore

Supervisor: Prof. Tat-Seng Chua · Mentors: Prof. Xiangnan He, Prof. Liqiang Nie

2010 – 2014

B.Sc. in Computer Science

Beihang University

Prospective Students

I am looking for highly motivated Ph.D., Master's, and undergraduate students to work on cutting-edge topics, including Large Language Models (alignment, reasoning, knowledge editing), AI Agents (multi-agent systems, long-context reasoning), LLMs & Agents for Recommender Systems, and LLM & Agentic Safety. If you are interested, please send me your CV and transcripts. We also actively seek research partnerships and collaborations in data science and AI.