Hello everyone, my name is Zhongjian Zhang. I am currently a third-year Ph.D. student at Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Chuan Shi.

My research interests primarily focus on large language models and trustworthy graph machine learning. Specifically, my current research centers on the development of graph foundation models.

If you have any questions about my work or are interested in collaborating with me, please feel free to contact me via email.

Contact Information

  • Email: zhangzj@bupt.edu.cn
  • WeChat: zxc1957787636

🔥 News

  • [2025.12] 💼 I joined the Hong Kong University of Science and Technology (Guangzhou) as a research intern, supervised by Prof. Jia Li.
  • [2025.12] 🎉 My research is supported by the CAS Youth Talent Training Program for PhD Students.
  • [2025.10] 🎉 I received the National PhD Scholarship from the Ministry of Education of China.
  • [2025.04] 🎉 My research is supported by the BUPT Excellent PhD Students Foundation: CX20251005.
  • [2024.12] 🎉 Our paper Spattack was accepted to AAAI 2025.
  • [2024.11] 🎉 Our paper LLM4RGNN was accepted to KDD 2025.
  • [2024.06] 💼 I joined China Telecommunications Corporation as a research intern.
  • [2024.01] 🎉 Our paper GraphPAR was accepted to WWW 2024.

📝 Publications

  • Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks? (CCF-A)
    • Zhongjian Zhang, Xiao Wang, Huichi Zhou, Yue Yu, Mengmei Zhang, Cheng Yang, Chuan Shi
    • KDD’25 [Paper|Blog|Code]
  • Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective (CCF-A)
    • Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi
    • AAAI’25 [Paper|Blog|Code]
  • Endowing Pre-trained Graph Models with Provable Fairness (CCF-A)
    • Zhongjian Zhang, Mengmei Zhang, Yue Yu, Cheng Yang, Jiawei Liu, Chuan Shi
    • WWW’24 [Paper|Blog|Code]
  • Data-Centric Graph Learning: A Survey
    • Yuxin Guo, Deyu Bo, Cheng Yang, Zhiyuan Lu, Zhongjian Zhang, Jixi Liu, Yufei Peng, Chuan Shi
    • IEEE TBD’24 [Paper|Blog]

💬 Talks

  • 2025.01, Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective, Invited Online Talk at AITIME. [video]
  • 2024.12, Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?, Invited Online Talk at AITIME. [video]

💻 Experiences

  • 2025.12 - Now, Hong Kong University of Science and Technology (Guangzhou), China
    • Research Intern: Graph Foundation Model, Large Language Model
    • Mentor: Prof. Jia Li
  • 2024.06 - 2025.02, China Telecommunications Corporation, China
    • Research Intern