
Unlearning diffusion models


Abstract:

This paper introduces Single Layer Unlearning Gradient (SLUG), a new method for efficiently removing unwanted information from trained models. Unlike traditional unlearning approaches that require costly updates across many layers, SLUG updates only one carefully chosen layer using a single gradient step. The method relies on layer importance and gradient alignment to identify the optimal layer, preserving model performance while unlearning the targeted content. Experiments show that SLUG works effectively across models such as CLIP, Stable Diffusion, and vision-language models, handling both concrete concepts (e.g., objects, identities) and abstract ones (e.g., artistic styles). Compared to existing approaches, SLUG achieves comparable unlearning performance at a much lower computational cost, making it a practical solution for efficient and precise targeted unlearning.
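The selection-then-update idea from the abstract can be sketched in a few lines. The toy example below is only illustrative and is not the paper's implementation: a two-layer linear "model", a forget set and a retain set, a hypothetical score that prefers layers where the forget gradient is large (relative to the weight norm) but poorly aligned with the retain gradient, and a single gradient-ascent step applied to the selected layer only. The exact importance and alignment formulas in SLUG may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": two weight matrices standing in for two layers.
layers = {"layer1": rng.normal(size=(4, 4)), "layer2": rng.normal(size=(4, 4))}

x_forget = rng.normal(size=(8, 4))   # inputs whose influence should be removed
y_forget = rng.normal(size=(8, 4))
x_retain = rng.normal(size=(8, 4))   # inputs whose behaviour must be preserved
y_retain = rng.normal(size=(8, 4))

def loss_and_grads(layers, x, y):
    """MSE loss of the 2-layer linear net, with per-layer gradients."""
    h = x @ layers["layer1"]
    out = h @ layers["layer2"]
    err = out - y
    loss = float(np.mean(err ** 2))
    d_out = 2 * err / err.size                      # dL/d out
    g2 = h.T @ d_out                                # dL/d layer2
    g1 = x.T @ (d_out @ layers["layer2"].T)         # dL/d layer1
    return loss, {"layer1": g1, "layer2": g2}

def select_layer(layers, g_forget, g_retain):
    """Pick the layer with high forget-gradient importance and low
    alignment with the retain gradient (illustrative scoring rule)."""
    best, best_score = None, -np.inf
    for name, w in layers.items():
        gf, gr = g_forget[name].ravel(), g_retain[name].ravel()
        importance = np.linalg.norm(gf) / (np.linalg.norm(w) + 1e-12)
        cos = abs(gf @ gr) / (np.linalg.norm(gf) * np.linalg.norm(gr) + 1e-12)
        score = importance / (cos + 1e-3)           # high impact, low conflict
        if score > best_score:
            best, best_score = name, score
    return best

loss_before, g_forget = loss_and_grads(layers, x_forget, y_forget)
_, g_retain = loss_and_grads(layers, x_retain, y_retain)

w_before = {k: v.copy() for k, v in layers.items()}
target = select_layer(layers, g_forget, g_retain)

# Single gradient-ascent step on the forget loss, on one layer only.
layers[target] = layers[target] + 0.5 * g_forget[target]

loss_after, _ = loss_and_grads(layers, x_forget, y_forget)
print(target, loss_after > loss_before)
```

Because only the selected layer moves, every other layer is left byte-for-byte intact, which is the source of SLUG's efficiency claim; the ascent step raises the loss on the forget data while the untouched layers limit damage to retained behaviour.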

📄 Paper: Targeted Unlearning with Single Layer Unlearning Gradient

Session Details:

  • 📅 Date: Sunday
  • 🕒 Time: 4:00 - 5:00 PM
  • 🌐 Location: Online at vc.sharif.edu/ch/rohban (http://vc.sharif.edu/ch/rohban)

We look forward to your participation! ✌️