About Me
Welcome to Jinyuan Jia’s homepage!
I am an Assistant Professor of Information Sciences and Technology at the Pennsylvania State University. My current research focuses on (1) identifying security and safety issues in LLM-powered AI systems, and (2) enhancing the trustworthiness (e.g., transparency) of these systems.
Previously, I was a postdoc at the University of Illinois Urbana-Champaign under the supervision of Prof. Bo Li. I received a B.E. from the University of Science and Technology of China (USTC) in 2016, an M.E. from Iowa State University in 2019, and a Ph.D. from the Department of Electrical and Computer Engineering at Duke University in 2022, under the supervision of Prof. Neil Zhenqiang Gong.
Research Interests
- Security/safety of LLM-powered AI systems
- Provably secure/robust machine learning systems
- Security and privacy vulnerabilities of other machine learning systems (federated learning, the foundation model ecosystem, graph neural networks, etc.)
Selected Publications (Full List)
Wei Zou*, Runpeng Geng*, Binghui Wang, and Jinyuan Jia. “PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models”. In USENIX Security Symposium, 2025. *Equal contribution
Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. “Formalizing and Benchmarking Prompt Injection Attacks and Defenses”. In USENIX Security Symposium, 2024.
Yanting Wang, Wei Zou, and Jinyuan Jia. “FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models”. In IEEE Symposium on Security and Privacy (IEEE S&P), 2024.
Zaishuo Xia*, Han Yang*, Binghui Wang, and Jinyuan Jia. “GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations”. In International Conference on Learning Representations (ICLR), 2024. *Equal contribution
Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, and Dawn Song. “TextGuard: Provable Defense against Backdoor Attacks on Text Classification”. In Network and Distributed System Security Symposium (NDSS), 2024.
Jinyuan Jia*, Yupei Liu*, Yuepeng Hu, and Neil Zhenqiang Gong. “PORE: Provably Robust Recommender Systems against Data Poisoning Attacks”. In USENIX Security Symposium, 2023. *Equal contribution
Jinyuan Jia*, Yupei Liu*, and Neil Zhenqiang Gong. “BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning”. In IEEE Symposium on Security and Privacy (IEEE S&P), 2022. *Equal contribution
Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. “Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks”. In AAAI Conference on Artificial Intelligence (AAAI), 2021.
Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. “Local Model Poisoning Attacks to Byzantine-Robust Federated Learning”. In USENIX Security Symposium, 2020. *Equal contribution
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. “MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples”. In ACM Conference on Computer and Communications Security (CCS), 2019.
Jinyuan Jia and Neil Zhenqiang Gong. “AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning”. In USENIX Security Symposium, 2018.
Current Ph.D. Students
- Runpeng Geng (08/2024 - Present)
- Yanting Wang (08/2023 - Present)
- Wei Zou (08/2023 - Present)