Publications
2025
Wei Zou*, Runpeng Geng*, Binghui Wang, and Jinyuan Jia. “PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models”. In USENIX Security Symposium, 2025. *Equal contribution code
2024
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bo Li, and Radha Poovendran. “ACE: A Model Poisoning Attack on Contribution Evaluation Methods in Federated Learning”. In USENIX Security Symposium, 2024.
Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, and Neil Zhenqiang Gong. “Formalizing and Benchmarking Prompt Injection Attacks and Defenses”. In USENIX Security Symposium, 2024. code
Yuxin Yang, Qiang Li, Jinyuan Jia, Yuan Hong, and Binghui Wang. “Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses”. In ACM Conference on Computer and Communications Security (CCS), 2024. code
Distinguished Paper Award: Zhengyuan Jiang, Moyang Guo, Yuepeng Hu, Jinyuan Jia, and Neil Zhenqiang Gong. “Certifiably Robust Image Watermark”. In European Conference on Computer Vision (ECCV), 2024. code
Lingyu Du, Jinyuan Jia, Xucong Zhang, and Guohao Lan. “PrivateGaze: Preserving User Privacy in Black-box Mobile Gaze Tracking Services”. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (UbiComp), 2024. code
Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, and Radha Poovendran. “SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding”. In Annual Meeting of the Association for Computational Linguistics (ACL), 2024. code
Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, Jinyuan Jia, Jinghui Chen, and Dinghao Wu. “Jailbreak Open-Sourced Large Language Models via Enforced Decoding”. In Annual Meeting of the Association for Computational Linguistics (ACL), 2024. code
Jiate Li, Meng Pang, Yun Dong, Jinyuan Jia, and Binghui Wang. “Graph Neural Network Explanations are Fragile”. In International Conference on Machine Learning (ICML), 2024. code
Zhuowen Yuan, Wenbo Guo, Jinyuan Jia, Bo Li, and Dawn Song. “SHINE: Shielding Backdoors in Deep Reinforcement Learning”. In International Conference on Machine Learning (ICML), 2024.
Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. “Data Poisoning based Backdoor Attacks to Contrastive Learning”. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. code
Yuan Xiao, Shiqing Ma, Juan Zhai, Chunrong Fang, Jinyuan Jia, and Zhenyu Chen. “Towards General Robustness Verification of MaxPool-based Convolutional Neural Networks via Tightening Linear Approximation”. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
Yanting Wang, Hongye Fu, Wei Zou, and Jinyuan Jia. “MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models”. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024. code
Yanting Wang, Wei Zou, and Jinyuan Jia. “FCert: Provably Robust Few-Shot Classification in the Era of Foundation Model”. In IEEE Symposium on Security and Privacy (IEEE S&P), 2024.
Zaishuo Xia*, Han Yang*, Binghui Wang, and Jinyuan Jia. “GNNCert: Deterministic Certification of Graph Neural Networks against Adversarial Perturbations”. In International Conference on Learning Representations (ICLR), 2024. *Equal contribution code
Oral Presentation: Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, and Dawn Song. “TextGuard: Provable Defense against Backdoor Attacks on Text Classification”. In Network and Distributed System Security Symposium (NDSS), 2024. code
2023
Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, and Jinghui Chen. “IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI”. In Conference on Neural Information Processing Systems (NeurIPS), 2023.
Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin, and Dinghao Wu. “A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning”. In Conference on Neural Information Processing Systems (NeurIPS), 2023.
Jinyuan Jia*, Zhuowen Yuan*, Dinuka Sahabandu, Luyao Niu, Arezoo Rajabi, Bhaskar Ramasubramanian, Bo Li, and Radha Poovendran. “FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning”. In Conference on Neural Information Processing Systems (NeurIPS), 2023. *Equal contribution
Hanting Ye, Guohao Lan, Jinyuan Jia, and Qing Wang. “Screen Perturbation: Adversarial Attack and Defense on Under-Screen Camera”. In International Conference on Mobile Computing and Networking (MobiCom), 2023.
Jinyuan Jia*, Yupei Liu*, Yuepeng Hu, and Neil Zhenqiang Gong. “PORE: Provably Robust Recommender Systems against Data Poisoning Attacks”. In USENIX Security Symposium, 2023. *Equal contribution
Hangfan Zhang, Jinghui Chen, Lu Lin, Jinyuan Jia, and Dinghao Wu. “Graph Contrastive Backdoor Attacks”. In International Conference on Machine Learning (ICML), 2023.
Jinghuai Zhang, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. “PointCert: Point Cloud Classification with Deterministic Certified Robustness Guarantees”. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023.
Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, and Neil Zhenqiang Gong. “FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information”. In IEEE Symposium on Security and Privacy (IEEE S&P), 2023.
Wenjie Qu, Jinyuan Jia, and Neil Zhenqiang Gong. “REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service”. In Network and Distributed System Security Symposium (NDSS), 2023.
2022
Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, and Neil Zhenqiang Gong. “FLCert: Provably Secure Federated Learning against Poisoning Attacks”. IEEE Transactions on Information Forensics and Security (TIFS), 2022.
Jinyuan Jia*, Wenjie Qu*, and Neil Zhenqiang Gong. “MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples”. In Conference on Neural Information Processing Systems (NeurIPS), 2022. *Equal contribution code
Yupei Liu, Jinyuan Jia, Hongbin Liu, and Neil Zhenqiang Gong. “StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning”. In ACM Conference on Computer and Communications Security (CCS), 2022.
Zaixi Zhang, Xiaoyu Cao, Jinayuan Jia, and Neil Zhenqiang Gong. “FLDetector: Defending Federated Learning Against Model Poisoning Attacks via Detecting Malicious Clients”. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2022. code
Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. “PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning”. In USENIX Security Symposium, 2022.
Yongji Wu, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. “Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data”. In USENIX Security Symposium, 2022.
Jinyuan Jia*, Yupei Liu*, and Neil Zhenqiang Gong. “BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning”. In IEEE Symposium on Security and Privacy (IEEE S&P), 2022. *Equal contribution code
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, and Neil Zhenqiang Gong. “Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations”. In International Conference on Learning Representations (ICLR), 2022.
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, and Neil Zhenqiang Gong. “Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks”. In AAAI Conference on Artificial Intelligence (AAAI), 2022.
2021
Hongbin Liu*, Jinyuan Jia*, Wenjie Qu, and Neil Zhenqiang Gong. “EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning”. In ACM Conference on Computer and Communications Security (CCS), 2021. *Equal contribution
Binghui Wang, Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. “Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation”. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2021.
Hongbin Liu, Jinyuan Jia, and Neil Zhenqiang Gong. “On the Intrinsic Differential Privacy of Bagging”. In International Joint Conference on Artificial Intelligence (IJCAI), 2021.
Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. “Data Poisoning Attacks to Local Differential Privacy Protocols”. In USENIX Security Symposium, 2021.
Xinlei He, Jinyuan Jia, Michael Backes, Neil Zhenqiang Gong, and Yang Zhang. “Stealing Links from Graph Neural Networks”. In USENIX Security Symposium, 2021. code
Hongbin Liu*, Jinyuan Jia*, and Neil Zhenqiang Gong. “PointGuard: Provably Robust 3D Point Cloud Classification”. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. *Equal contribution
Zaixi Zhang*, Jinyuan Jia*, Binghui Wang, and Neil Zhenqiang Gong. “Backdoor Attacks to Graph Neural Networks”. In ACM Symposium on Access Control Models and Technologies (SACMAT), 2021. *Equal contribution code
Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. “IPGuard: Protecting Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary”. In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. “Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes”. In ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2021.
Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. “Provably Secure Federated Learning against Malicious Clients”. In AAAI Conference on Artificial Intelligence (AAAI), 2021.
Jinyuan Jia, Xiaoyu Cao, and Neil Zhenqiang Gong. “Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks”. In AAAI Conference on Artificial Intelligence (AAAI), 2021. code
Binghui Wang, Jinyuan Jia, and Neil Zhenqiang Gong. “Semi-Supervised Node Classification on Graphs: Markov Random Fields vs. Graph Neural Networks”. In AAAI Conference on Artificial Intelligence (AAAI), 2021.
2020
Zaixi Zhang, Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. “Backdoor Attacks to Graph Neural Networks”. NeurIPS 2020 Workshop on Dataset Curation and Security, 2020.
Binghui Wang, Xiaoyu Cao, Jinyuan Jia, and Neil Zhenqiang Gong. “On Certifying Robustness against Backdoor Attacks via Randomized Smoothing”. CVPR 2020 Workshop on Adversarial Machine Learning in Computer Vision, 2020.
DeepMind Best Extended Abstract Award: Jinyuan Jia*, Binghui Wang*, Xiaoyu Cao, and Neil Zhenqiang Gong. “Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing”. In The Web Conference (WWW), 2020. *Equal contribution
Jinyuan Jia, Xiaoyu Cao, Binghui Wang, and Neil Zhenqiang Gong. “Certified Robustness for Top-k Predictions against Adversarial Perturbations via Randomized Smoothing”. In International Conference on Learning Representations (ICLR), 2020. code
Minghong Fang*, Xiaoyu Cao*, Jinyuan Jia, and Neil Zhenqiang Gong. “Local Model Poisoning Attacks to Byzantine-Robust Federated Learning”. In USENIX Security Symposium, 2020. *Equal contribution
Jinyuan Jia and Neil Zhenqiang Gong. “Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges”. In Adaptive Autonomous Secure Cyber Systems, Springer, Cham, 2020.
2019
Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, and Neil Zhenqiang Gong. “MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples”. In ACM Conference on Computer and Communications Security (CCS), 2019. code
Jinyuan Jia and Neil Zhenqiang Gong. “Calibrate: Frequency Estimation and Heavy Hitter Identification with Local Differential Privacy via Incorporating Prior Knowledge”. In IEEE International Conference on Computer Communications (INFOCOM), 2019.
Binghui Wang, Jinyuan Jia, and Neil Zhenqiang Gong. “Graph-based Security and Privacy Analytics via Collective Classification with Joint Weight Learning and Propagation”. In Network and Distributed System Security Symposium (NDSS), 2019.
Distinguished Paper Award Honorable Mention: Binghui Wang, Jinyuan Jia, Le Zhang, and Neil Zhenqiang Gong. “Structure-based Sybil Detection in Social Networks via Local Rule-based Propagation”. IEEE Transactions on Network Science and Engineering (TNSE), 6(3), 2019.
2018
Jinyuan Jia and Neil Zhenqiang Gong. “AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning”. In USENIX Security Symposium, 2018. code
Featured by WIRED and Boing Boing.
2017
Jinyuan Jia, Binghui Wang, and Neil Zhenqiang Gong. “Random Walk based Fake Account Detection in Online Social Networks”. In IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2017.
Jinyuan Jia, Binghui Wang, Le Zhang, and Neil Zhenqiang Gong. “AttriInfer: Inferring User Attributes in Online Social Networks Using Markov Random Fields”. In International Conference on World Wide Web (WWW), 2017.