The Key to Better Art Style Mimicry

Written by torts | Published 2024/12/13
Tech Story Tags: ai-forgery | generative-ai | ai-style-mimicry | image-theft-by-ai | protecting-art-from-ai | glaze-protection-tool | black-box-ai-access | genai-finetuning

TL;DR: We compare our finetuning approach with Glaze's, showing that our setup, using Stable Diffusion 2.1, produces better mimicry from unprotected art.

Table of Links

Abstract and 1. Introduction

  2. Background and Related Work

  3. Threat Model

  4. Robust Style Mimicry

  5. Experimental Setup

  6. Results

    6.1 Main Findings: All Protections are Easily Circumvented

    6.2 Analysis

  7. Discussion and Broader Impact, Acknowledgements, and References

A. Detailed Art Examples

B. Robust Mimicry Generations

C. Detailed Results

D. Differences with Glaze Finetuning

E. Findings on Glaze 2.0

F. Findings on Mist v2

G. Methods for Style Mimicry

H. Existing Style Mimicry Protections

I. Robust Mimicry Methods

J. Experimental Setup

K. User Study

L. Compute Resources

D. Differences with Glaze Finetuning

In Section 4.1 and Figure 2, we discussed the brittleness of Glaze protections against small changes in the finetuning script. We also found that our finetuning setup achieves better baseline style mimicry from unprotected art than the setup used by Glaze (see Figure 19); a sketch of what such a finetuning step looks like follows below.
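To make concrete what a "finetuning script" of this kind involves, here is a minimal sketch of one denoising-loss training step for Stable Diffusion 2.1 with Hugging Face diffusers. The checkpoint id, learning rate, and `training_step` helper are illustrative assumptions for exposition, not the authors' exact configuration.

```python
# Minimal sketch of a Stable Diffusion 2.1 finetuning step (diffusers/transformers).
# Checkpoint id, learning rate, and structure are assumptions, not the paper's script.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "stabilityai/stable-diffusion-2-1"  # assumed checkpoint

tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet is finetuned; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # assumed learning rate

def training_step(pixel_values, captions):
    """One denoising-loss step on a batch of (image, caption) pairs."""
    # Encode images into the VAE latent space and add noise at random timesteps.
    with torch.no_grad():
        latents = vae.encode(pixel_values).latent_dist.sample()
        latents = latents * vae.config.scaling_factor
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device,
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition the UNet on the caption embeddings.
    tokens = tokenizer(captions, padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        encoder_hidden_states = text_encoder(tokens.input_ids)[0]

    pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    # The 768-resolution SD 2.1 checkpoint uses v-prediction, so the regression
    # target must match the scheduler's configured prediction type.
    if noise_scheduler.config.prediction_type == "v_prediction":
        target = noise_scheduler.get_velocity(latents, noise, timesteps)
    else:
        target = noise

    loss = F.mse_loss(pred, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

This illustrates one detail of the kind such scripts can silently differ on: Stable Diffusion 2.1's 768-resolution checkpoint is trained with v-prediction rather than noise (epsilon) prediction, so a script that hard-codes the wrong target will degrade mimicry quality regardless of any protection on the art.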

Authors:

(1) Robert Honig, ETH Zurich ([email protected]);

(2) Javier Rando, ETH Zurich ([email protected]);

(3) Nicholas Carlini, Google DeepMind;

(4) Florian Tramer, ETH Zurich ([email protected]).


This paper is available on arXiv under a CC BY 4.0 license.

