REED-VAE: RE-Encode Decode Training for Iterative Image Editing with Diffusion Models

Reichman University

REED-VAE preserves image quality across multiple editing iterations, allowing users to chain edit operations drawn from a combination of frameworks and techniques.

Teaser image

Abstract

While latent diffusion models achieve impressive image editing results, their application to iterative editing of the same image is severely restricted. When consecutive edit operations are applied with current models, images accumulate artifacts and noise due to repeated transitions between pixel and latent space. Some methods attempt to address this limitation by performing the entire edit chain within the latent space, sacrificing flexibility by supporting only a limited, predetermined set of diffusion editing operations. We present a re-encode/decode (REED) training scheme for variational autoencoders (VAEs), which promotes image quality preservation even after many iterations. Our work enables multi-method iterative image editing: users can perform a variety of iterative edit operations, each building on the output of the previous one, using both diffusion-based operations and conventional editing techniques. We demonstrate the advantage of REED-VAE across a range of image editing scenarios, including text-based and mask-based editing frameworks. In addition, we show that REED-VAE enhances the overall editability of images, increasing the likelihood of successful and precise edit operations. We hope this work will serve as a benchmark for the newly introduced task of multi-method image editing.
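The abstract does not spell out the REED training objective, but one plausible formulation is to unroll several encode/decode round trips during training and penalize drift from the original image at every cycle, so reconstruction errors cannot compound. The sketch below illustrates this idea with a tiny stand-in autoencoder; the `TinyAE` module, `reed_style_loss` function, cycle count, and equal per-cycle weighting are all illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class TinyAE(nn.Module):
    """Illustrative stand-in for a VAE (deterministic, no KL term)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 4, kernel_size=3, stride=2, padding=1)
        self.dec = nn.ConvTranspose2d(4, 3, kernel_size=4, stride=2, padding=1)

    def encode(self, x):
        return self.enc(x)

    def decode(self, z):
        return self.dec(z)

def reed_style_loss(model, image, num_cycles=3):
    # Hypothetical multi-cycle objective: unroll encode/decode round trips
    # and compare each intermediate reconstruction to the ORIGINAL image,
    # so the model is trained to be stable under repeated re-encoding.
    current = image
    loss = image.new_zeros(())
    for _ in range(num_cycles):
        current = model.decode(model.encode(current))
        loss = loss + torch.mean((current - image) ** 2)
    return loss / num_cycles

model = TinyAE()
img = torch.rand(2, 3, 32, 32)
loss = reed_style_loss(model, img)
loss.backward()  # gradients flow through all unrolled cycles
```

Because the loss is anchored to the original image rather than to the previous cycle's output, the optimum is a codec whose round trip is (approximately) idempotent, which is exactly the property needed for iterative editing.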

Multi-Method Editing

We evaluate REED-VAE across a variety of image editing scenarios:

Iterative Encode/Decode Cycles

We evaluate the effects of iterative encode/decode cycles on image quality and editability, even without a diffusion model in the pipeline, displaying the progression of n=25 iterative encode/decode cycles for both models. The Vanilla-VAE (left) rapidly accumulates artifacts and exhibits significant distortion over successive encode/decode iterations: the images lose their distinct shapes and edges, appearing more globular and less defined, with a noticeable color shift. REED-VAE (right) produces successive images that are robust to such artifacts and distortions; the images retain their shape, color, and surface details, demonstrating remarkably high fidelity to the original image.
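The evaluation loop above is straightforward to reproduce: repeatedly push an image through the codec and measure its drift from the original. The sketch below uses a toy convolutional autoencoder as a placeholder (the `ToyVAE` class and `iterative_cycles` helper are illustrative names, not part of the released code); in practice one would substitute the Vanilla-VAE or REED-VAE checkpoint.

```python
import torch
import torch.nn as nn

class ToyVAE(nn.Module):
    """Placeholder codec; swap in a real VAE (e.g. a pretrained SD VAE)."""
    def __init__(self, latent_channels=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, latent_channels, 3, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 16, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def encode(self, x):
        return self.encoder(x)

    def decode(self, z):
        return self.decoder(z)

@torch.no_grad()
def iterative_cycles(vae, image, n=25):
    """Apply n encode/decode round trips, tracking per-cycle MSE drift
    from the original image."""
    errors, current = [], image
    for _ in range(n):
        current = vae.decode(vae.encode(current))
        errors.append(torch.mean((current - image) ** 2).item())
    return current, errors

vae = ToyVAE()
img = torch.rand(1, 3, 64, 64)
out, errs = iterative_cycles(vae, img, n=25)
```

Plotting `errs` against the cycle index makes the qualitative comparison quantitative: a codec that compounds artifacts shows steadily growing error, while a cycle-robust codec's curve flattens early.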