Hello everyone, my name is Zhongjian Zhang. I am currently a second-year Ph.D. student at Beijing University of Posts and Telecommunications (BUPT), supervised by Prof. Chuan Shi.
My research interests primarily focus on trustworthy graph machine learning and large language models. Specifically, my current research mainly concerns the development of graph foundation models.
If you have any questions regarding my work or are interested in collaborating with me, please contact me via email.
Contact Information
- Email: zhangzj@bupt.edu.cn
- WeChat: zxc1957787636
🔥 News
- [2024.12] 🎉 Our paper Spattack is accepted to AAAI 2025.
- [2024.11] 🎉 Our paper LLM4RGNN is accepted to KDD 2025.
- [2024.06] 💼 I joined China Telecommunications Corporation as a research intern.
- [2024.01] 🎉 Our paper GraphPAR is accepted to WWW 2024.
📝 Publications
- Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks? (CCF-A)
- Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective (CCF-A)
- Endowing Pre-trained Graph Models with Provable Fairness (CCF-A)
- Data-centric graph learning: A survey
💬 Talks
- 2025.01, Rethinking Byzantine Robustness in Federated Recommendation from Sparse Aggregation Perspective, Invited Online Talk at AITIME. [video]
- 2024.12, Can Large Language Models Improve the Adversarial Robustness of Graph Neural Networks?, Invited Online Talk at AITIME. [video]
💻 Experiences
- 2024.06 - 2025.01, China Telecommunications Corporation, China
  - Research Intern, Financial Risk Service
  - Mentor: Mengmei Zhang