Recent progress in accelerating text-to-image diffusion models enables high-fidelity synthesis in a single denoising step. However, customizing these fast one-step models remains challenging: existing personalization methods consistently fail to produce acceptable results, underscoring the need for new methodologies tailored to one-step models. We therefore propose One-step Personalized Adversarial Distillation (OPAD), a framework that combines teacher–student distillation with adversarial supervision. A multi-step diffusion model serves as the teacher and is trained jointly with a one-step student. The student learns from alignment losses that keep it consistent with the teacher and from adversarial losses that match its outputs to the real-image distribution. Beyond one-step personalization, we further observe that the student's efficient generation and adversarially enriched representations provide valuable feedback for improving the teacher, forming a collaborative learning stage. Extensive experiments demonstrate that OPAD delivers reliable, high-quality personalization for one-step diffusion models while preserving single-step efficiency, whereas prior methods largely fail and produce severe failure cases.
Figure 1. Overview of OPAD. The student and teacher jointly learn the new concept with a shared text encoder. The teacher learns from real images (green), and the text encoder is updated accordingly. The student is optimized with two objectives (gold): an adversarial loss to match the real data distribution and alignment losses to match the denoised outputs of the teacher. The discriminators are trained to distinguish the student's outputs from real images.
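The student objective described above can be sketched as a weighted sum of an alignment term (against the teacher's denoised output) and an adversarial term (from the discriminator's logits on the student's samples). This is a minimal toy sketch, not the released implementation: the function name, the use of MSE for alignment, the non-saturating GAN formulation, and the loss weights `lam_align` / `lam_adv` are all illustrative assumptions.

```python
import numpy as np

def student_loss(student_out, teacher_out, disc_logits,
                 lam_align=1.0, lam_adv=0.5):
    """Toy combined objective for the one-step student.

    student_out : student's one-step denoised sample (array)
    teacher_out : teacher's multi-step denoised target (array)
    disc_logits : discriminator logits on the student's samples (array)
    lam_align, lam_adv : hypothetical loss weights
    """
    # Alignment: keep the student consistent with the teacher's output.
    align = np.mean((student_out - teacher_out) ** 2)
    # Adversarial (non-saturating): softplus(-d) = -log sigmoid(d),
    # small when the discriminator rates the student's samples as real.
    adv = np.mean(np.log1p(np.exp(-disc_logits)))
    return lam_align * align + lam_adv * adv
```

When the student matches the teacher and the discriminator is fooled, both terms shrink toward zero; the discriminator itself would be trained with the opposite objective on real versus generated images.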
Figure 2. Our method compared with existing methods.
Figure 3. Qualitative results on CustomConcept101 dataset (Part 1).
Figure 4. Qualitative results on CustomConcept101 dataset (Part 2).
@inproceedings{yang2026adversarial,
title = {Adversarial Concept Distillation for One-Step Diffusion Personalization},
author = {Yixiong Yang and Tao Wu and Senmao Li and Shiqi Yang and Yaxing Wang and Joost van de Weijer and Kai Wang},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Findings},
year = {2026},
}
If you have any questions, please feel free to reach out at yangyxwork@gmail.com.