Yimeng (Damon) Zhang
About me
I received the B.Eng. degree in Electrical and Electronic Engineering (EEE) from The University of Sheffield, UK, in 2018, and the M.Sc. degree in Electrical Engineering (EE) from Columbia University, USA, in 2020. Currently, I am a Ph.D. candidate in Computer Science under the supervision of
Prof. Sijia Liu.
Research Focuses
My research centers on improving the efficiency of machine learning from multiple perspectives, including data optimization, model architecture, and parameter-efficient fine-tuning. I aim to make both training and inference less demanding in computation and resources while maintaining or improving performance. A key aspect of my work is model safety, with a focus on robustness against adversarial attacks. By integrating efficiency and safety, my research strives to build scalable, reliable, and secure AI systems that perform well in diverse real-world applications.
Deep Learning: Computer Vision (generative models, multi-modality), AI Safety (adversarial attack & defense)
Optimization: Zeroth-Order Optimization, Dataset/Model pruning
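For readers unfamiliar with zeroth-order (ZO) optimization, which appears throughout my work below, here is a minimal illustrative sketch of the randomized two-point gradient estimator that underlies ZO methods. This is not code from any of the papers; the function and parameter names are my own, chosen for clarity.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, num_queries=10):
    """Two-point zeroth-order gradient estimator.

    Approximates the gradient of a black-box loss f at x using only
    function evaluations, averaged over random Gaussian probe directions.
    """
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = np.random.randn(*x.shape)              # random probe direction
        grad += (f(x + mu * u) - f(x)) / mu * u    # finite-difference slope along u
    return grad / num_queries

# Example: minimize a simple quadratic without ever touching its true gradient.
f = lambda x: np.sum((x - 1.0) ** 2)
x = np.zeros(5)
for _ in range(200):
    x -= 0.05 * zo_gradient(f, x)
print(np.round(x, 2))  # approaches the minimizer [1, 1, 1, 1, 1]
```

Because the estimator needs only function queries, the same loop works when f is a black-box model accessible purely through its predictions, which is the setting that motivates ZO methods for black-box robustification and memory-efficient LLM fine-tuning.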
Recent Publications
(* denotes equal contribution) [Research Summary]
Y. Zhang, X. Chen, J. Jia, Y. Zhang, C. Fan, J. Liu, M. Hong, K. Ding, S. Liu, Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models, NeurIPS’24 [Code] [HF Model] [Demo]
Y. Zhang*, J. Jia*, X. Chen, A. Chen, Y. Zhang, J. Liu, K. Ding, S. Liu, To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images … For Now, ECCV’24 [Code] [Demo] [Poster] [Unlearned DM Benchmark]
Y. Zhang, C. Fan, Y. Zhang, Y. Yao, J. Jia, J. Liu, X. Liu, S. Liu, UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models, NeurIPS’24 [Code] [Benchmark] [Dataset]
J. Jia, Y. Zhang, Y. Zhang, J. Liu, B. Runwal, J. Diffenderfer, B. Kailkhura, S. Liu, SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning, EMNLP’24 [Code]
Y. Zhang*, P. Li*, J. Hong*, J. Li, Y. Zhang, W. Zheng, P.-Y. Chen, J. D. Lee, W. Yin, M. Hong, Z. Wang, S. Liu, T. Chen, Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark, ICML’24 [Code]
A. Chen*, Y. Zhang*, J. Jia, J. Diffenderfer, J. Liu, K. Parasyris, Y. Zhang, Z. Zhang, B. Kailkhura, S. Liu, DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training, ICLR’24 [Code] [Slide] [Poster]
M. Jafari, Y. Zhang, Y. Zhang, S. Liu, The Power of Few: Accelerating and Enhancing Data Reweighting with Coreset Selection, ICASSP’24
Y. Zhang*, Y. Zhang*, A. Chen*, J. Jia, J. Liu, G. Liu, M. Hong, S. Chang, S. Liu, Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning, NeurIPS’23 [Paper] [Code] [Poster] [Slide]
Y. Zhang, X. Chen, J. Jia, S. Liu, K. Ding, Text-Visual Prompting for Efficient 2D Temporal Video Grounding, CVPR’23 [Code] [HF Model] [Poster] [Slide]
Y. Zhang*, A.K. Kamath*, Q. Wu*, Z. Fan*, W. Chen, Z. Wang, S. Chang, S. Liu, C. Hao, Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices, ASP-DAC’23
Y. Zhang, Y. Yao, J. Jia, J. Yi, M. Hong, S. Chang, S. Liu, How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective, ICLR’22 (Spotlight, acceptance rate 5%) [Code] [Poster]
Y. Gong, Y. Yao, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu, Reverse Engineering of Imperceptible Adversarial Image Perturbations, ICLR’22 [Code]
Y. Zhang, X. Y. Liu, B. Wu, A. Walid, Video Synthesis via Transform-Based Tensor Neural Network, ACM MM’20
X. Han, B. Wu, Z. Shou, X. Y. Liu, Y. Zhang, L. Kong, Tensor FISTA-Net for Real-Time Snapshot Compressive Imaging, AAAI’20
Full list of publications.