Hello everyone, my name is Zhongjian Zhang. I am currently a third-year Ph.D. student at Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Chuan Shi. My research interests primarily focus on large language models and trustworthy graph machine learning. Specifically, my current research centers on the development of graph foundation models. If you have any questions about my work or are interested in collaborating, please feel free to contact me via:

  • Email: zhangzj@bupt.edu.cn
  • WeChat: zxc1957787636

🔥 News

  • [2026.03] 🎉 Our papers FRiskGPT and RGLM are accepted to WWW 2026.
  • [2025.12] 💼 I joined the Hong Kong University of Science and Technology (Guangzhou) as a research intern, supervised by Prof. Jia Li.
  • [2025.12] 🎉 My research is supported by the CAS Youth Talent Training Program for PhD Students.
  • [2025.10] 🎉 I received the National PhD Scholarship from the Ministry of Education of China.
  • [2025.04] 🎉 My research is supported by the BUPT Excellent PhD Students Foundation: CX20251005.
  • [2024.12] 🎉 Our paper Spattack is accepted to AAAI 2025.
  • [2024.11] 🎉 Our paper LLM4RGNN is accepted to KDD 2025.
  • [2024.06] 💼 I joined China Telecommunications Corporation as a research intern.
  • [2024.01] 🎉 Our paper GraphPAR is accepted to WWW 2024.

📝 Publications

  1. FRiskGPT: A Generative Foundation Model for Financial Risk Detection (CCF-A)
    • Zhongjian Zhang, Mengmei Zhang, Dehua Xu, Rongjun Shi, Jianfeng Liu, Fuli Meng, Huajian Xu, Xiao Wang, Ruijia Wang, Junze Chen, Minwei Tang, Chuan Shi
    • WWW’26. Deployed in China Telecom “BestPay” Risk Control System.
  2. Toward Graph-Tokenizing Large Language Models with Reconstructive Graph Instruction Tuning (CCF-A)
    • Zhongjian Zhang, Xiao Wang, Mengmei Zhang, Jiarui Tan, Chuan Shi
    • WWW’26 [Paper|Code]
  3. Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks? (CCF-A)
    • Zhongjian Zhang, Xiao Wang, Huichi Zhou, Yue Yu, Mengmei Zhang, Cheng Yang, Chuan Shi
    • KDD’25 [Paper|Blog|Code]
  4. Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective (CCF-A)
    • Zhongjian Zhang, Mengmei Zhang, Xiao Wang, Lingjuan Lyu, Bo Yan, Junping Du, Chuan Shi
    • AAAI’25 [Paper|Blog|Code]
  5. Endowing Pre-trained Graph Models with Provable Fairness (CCF-A)
    • Zhongjian Zhang, Mengmei Zhang, Yue Yu, Cheng Yang, Jiawei Liu, Chuan Shi
    • WWW’24 [Paper|Blog|Code]
  6. Data-centric graph learning: A survey
    • Yuxin Guo, Deyu Bo, Cheng Yang, Zhiyuan Lu, Zhongjian Zhang, Jixi Liu, Yufei Peng, Chuan Shi
    • IEEE TBD’24 [Paper|Blog]

🏆 Honors and Awards

  1. CAST PhD Support Program, Youth Talent Development Initiative, 2025
  2. National PhD Scholarship, Ministry of Education of China, 2025
  3. BUPT Excellent Ph.D. Students Foundation (CX20251005), 2025
  4. Outstanding Graduate Student, Beijing University of Posts and Telecommunications, 2024
  5. First-class Scholarship, Beijing University of Posts and Telecommunications, 2023
  6. Outstanding Student of Shandong Province (Top 0.5%), 2022
  7. 2nd Prize, 13th Lanqiao Cup National Finals, Java Software Development, 2022
  8. 2nd Prize, National Final of Shandong Data Innovation Competition (Team Leader), 2021
  9. 1st Prize, Challenge Cup Shandong Province (Team Leader), 2021
  10. Shandong Provincial Government Scholarship (Top 0.5%), 2021

💬 Talks

  • 2025.01, Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective, Invited Online Talk at AITIME. [video]
  • 2024.12, Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?, Invited Online Talk at AITIME. [video]

💻 Experiences

  • 2025.12 - Now, Hong Kong University of Science and Technology (Guangzhou), China
    • Research intern: Graph Foundation Model, Large Language Model
    • Mentor: Jia Li
  • 2024.06 - 2025.02, China Telecommunications Corporation, China
    • Research intern