
LUMEN: Label-Free Neural Semantic Image Synthesis

[Teaser figure]


Our proposed neural layout conditioning enables the concept of neural semantic image synthesis. Neural layouts allow the simultaneous specification of both semantic and spatial concepts, such as scene geometry, object semantics, and orientation, all without requiring expensive pixel-wise label annotations for training. This is in contrast to existing conditioning types, which, as shown here in the red boxes, can introduce spatial (Sem.Seg.) or semantic (MiDaS, Canny, HED) ambiguity.

Abstract

Recent work has shown great progress in integrating spatial conditioning to control large, pre-trained text-to-image diffusion models. Despite these advances, existing methods describe the spatial image content using hand-crafted conditioning inputs, which are either semantically ambiguous (e.g., edges) or require expensive manual annotations (e.g., semantic segmentation). To address these limitations, we propose a new label-free way of conditioning diffusion models to enable fine-grained spatial control. We introduce the concept of neural semantic image synthesis, which uses neural layouts extracted from pre-trained foundation models as conditioning. Neural layouts are advantageous as they provide rich descriptions of the desired image, containing both semantics and detailed geometry of the scene. We experimentally show that images synthesized via neural semantic image synthesis achieve similar or superior pixel-level alignment of semantic classes compared to those created using expensive semantic label maps. At the same time, they capture better semantics, instance separation, and object orientation than other label-free conditioning options, such as edges or depth. Moreover, we show that images generated by neural layout conditioning can effectively augment real data for training various perception tasks.

How does it work?

[Method overview figure]

LUMEN makes use of the rich spatial and semantic features within large-scale pre-trained foundation models (FMs) to extract neural layouts as conditioning input. Further, we incorporate a semantic separation step, i.e., applying PCA, to remove nuisance appearance variations and enable diverse synthesis.
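As a minimal sketch of this pipeline, the snippet below extracts dense patch features from a foundation model and applies PCA as the semantic separation step. It assumes DINOv2 (ViT-S/14, loaded via torch.hub) as the foundation model and scikit-learn's PCA; extract_neural_layout and every other name here is illustrative, not taken from the LUMEN codebase.

# Minimal sketch of neural-layout extraction (assumptions: DINOv2 as the
# foundation model, scikit-learn PCA for semantic separation; all names
# here are illustrative, not from the LUMEN codebase).
import torch
from sklearn.decomposition import PCA

def extract_neural_layout(image: torch.Tensor, n_components: int = 8) -> torch.Tensor:
    """image: (3, H, W) tensor, H and W divisible by 14, ImageNet-normalized."""
    fm = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()
    with torch.no_grad():
        # Dense per-patch features: (1, num_patches, feat_dim).
        feats = fm.forward_features(image.unsqueeze(0))["x_norm_patchtokens"]
    feats = feats.squeeze(0).cpu().numpy()            # (num_patches, feat_dim)

    # Semantic separation: the leading principal components keep coarse
    # semantics and geometry while discarding nuisance appearance detail.
    layout = PCA(n_components=n_components).fit_transform(feats)

    # Fold the patch tokens back into a 2D grid, e.g. 16x16 for a 224x224
    # input with patch size 14, giving an (n_components, h, w) layout map.
    h, w = image.shape[1] // 14, image.shape[2] // 14
    return torch.from_numpy(layout).reshape(h, w, n_components).permute(2, 0, 1).float()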

Comparison of Different Conditioning Types

Neural layouts provide a rich description of the desired images, while other inputs contain limited information and are semantically ambiguous.

[Figure: comparison of conditioning types]

Visual Results of Text Editability

[Figure: text editability results]

Flexible Control Over Diversity-Fidelity Tradeoff

Varying the number of PCA components provides flexible control over the diversity-fidelity tradeoff. More specifically, using fewer PCA components trades fidelity for diversity (see the sketch after the figure below).

[Figure: effect of the number of PCA components]
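As a hypothetical usage note building on the sketch above, sweeping n_components is the single knob behind this tradeoff; generate below stands in for the neural-layout-conditioned diffusion sampler and is not a real API.

# Hypothetical sweep of the PCA knob, reusing extract_neural_layout from
# the sketch above; `generate` stands in for the conditioned sampler.
for k in (2, 4, 8, 16):                       # fewer components -> more diversity
    layout = extract_neural_layout(image, n_components=k)
    sample = generate("a photo of a city street", layout)  # more -> higher fidelity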

Cross-Dataset Generalization

LUMEN, when trained on a sufficiently diverse dataset, e.g., COCO-Stuff, readily generalizes to other datasets, e.g., Cityscapes, ADE20K, and BDD100K, without requiring any finetuning.

[Figure: cross-dataset generalization results]

BibTeX

@inproceedings{wang2024lumen,
  title     = {Label-free Neural Semantic Image Synthesis},
  author    = {Jiayi Wang and Kevin Alexander Laube and Yumeng Li and Jan Hendrik Metzen and Shin-I Cheng and Julio Borges and Anna Khoreva},
  booktitle = {ECCV},
  year      = {2024},
}