How Are Artists Protecting Their Unique Styles from Imitation in AI Art?

Written by torts | Published 2024/12/14
Tech Story Tags: ai-forgery | generative-ai | ai-style-mimicry | image-theft-by-ai | protecting-art-from-ai | glaze-protection-tool | black-box-ai-access | user-study-on-ai-art-mimicry

TLDR: Style mimicry protections focus on encoder and denoiser methods. Tools like Glaze, Mist, and Anti-DreamBooth aim to defend against style forgery in AI-generated art.

Table of Links

Abstract and 1. Introduction

  2. Background and Related Work

  3. Threat Model

  4. Robust Style Mimicry

  5. Experimental Setup

  6. Results

    6.1 Main Findings: All Protections are Easily Circumvented

    6.2 Analysis

  7. Discussion and Broader Impact, Acknowledgements, and References

A. Detailed Art Examples

B. Robust Mimicry Generations

C. Detailed Results

D. Differences with Glaze Finetuning

E. Findings on Glaze 2.0

F. Findings on Mist v2

G. Methods for Style Mimicry

H. Existing Style Mimicry Protections

I. Robust Mimicry Methods

J. Experimental Setup

K. User Study

L. Compute Resources

H Existing Style Mimicry Protections

Naming convention. Depending on the context, style mimicry protections may be viewed either as attacks or as the targets of attacks. In an artistic setting, artists see style mimicry as an attack and utilize methods like Glaze as a defense. Conversely, in the context of adversarial robustness, Glaze can be seen as an attack against style mimicry methods through adversarial perturbations. The research community has not reached a consensus on terminology: Glaze’s authors consider style mimicry an attack and label Glaze as a defense, while the authors of Mist and Anti-DreamBooth describe their approaches as attacks. In our work, we distance ourselves from the attack/defense terminology and instead refer to these mechanisms as protections, and to the party performing mimicry as the “style forger”.

Existing protections can target either the encoder or the denoiser of text-to-image models. We classify them accordingly.
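To make these two attack surfaces concrete, here is a minimal sketch of one fine-tuning step of a latent diffusion model, with comments marking where encoder and denoiser protections intervene. The names (vae_encoder, eps_theta, noise_schedule) are illustrative placeholders, not code from the paper.

```python
import torch
import torch.nn.functional as F

def mimicry_finetune_step(image, vae_encoder, eps_theta, noise_schedule):
    """One illustrative fine-tuning step of a latent diffusion model,
    showing the two components that protections can target."""
    # (1) Encoder: maps the artwork into latent space.
    #     Encoder protections perturb the image so this mapping lands
    #     far from the latent of the clean artwork.
    z = vae_encoder(image)

    # (2) Denoiser: trained to predict the noise added to the latent.
    #     Denoiser protections perturb the image so this prediction
    #     error stays high during fine-tuning.
    t = torch.randint(0, 1000, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    a, b = noise_schedule(t)  # assumed forward-diffusion coefficients for step t
    z_t = a * z + b * noise
    return F.mse_loss(eps_theta(z_t, t), noise)
```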

H.1 Encoder Protections

H.2 Denoiser Protections

Denoiser protections use the prediction error of the denoiser ϵθ as a proxy for the quality of style mimicry, making it a feasible target for adversarial optimization. Current denoiser protections, such as Mist (Liang et al., 2023) and Anti-DreamBooth (Van Le et al., 2023), assume that poorly reconstructed images will fail to mimic style.
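As a concrete illustration of this kind of adversarial optimization, below is a minimal PGD-style sketch that searches for a small pixel perturbation maximizing the denoiser's prediction error on the protected artwork. The function names, the L∞ budget, and the step counts are assumptions for illustration, not values taken from Mist or Anti-DreamBooth.

```python
import torch
import torch.nn.functional as F

def craft_denoiser_protection(x, vae_encoder, eps_theta, noise_schedule,
                              steps=100, budget=8 / 255, step_size=1 / 255):
    """Sketch of a denoiser-targeted protection: a PGD-style loop that finds
    a small perturbation delta maximizing the denoiser's prediction error,
    on the assumption that high error degrades later style mimicry."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        t = torch.randint(0, 1000, (x.shape[0],), device=x.device)
        z = vae_encoder(x + delta)          # latent of the perturbed artwork
        noise = torch.randn_like(z)
        a, b = noise_schedule(t)            # assumed forward-diffusion coefficients
        z_t = a * z + b * noise
        # The denoiser's prediction error serves as the proxy objective.
        loss = F.mse_loss(eps_theta(z_t, t), noise)
        (grad,) = torch.autograd.grad(loss, delta)
        # Gradient *ascent* on the error, projected onto an L_inf ball.
        delta = (delta + step_size * grad.sign()).clamp(-budget, budget)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0, 1)          # assumes pixel values in [0, 1]
```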

Authors:

(1) Robert Honig, ETH Zurich;

(2) Javier Rando, ETH Zurich;

(3) Nicholas Carlini, Google DeepMind;

(4) Florian Tramer, ETH Zurich.


This paper is available on arXiv under a CC BY 4.0 license.

[7] The Mist project also contains a denoiser attack that we fail to reproduce as a robust protection.

