Recent work has shown great progress in integrating spatial conditioning to control large, pre-trained text-to-image diffusion models. Despite these advances, existing methods describe the spatial image content using hand-crafted conditioning inputs, which are either semantically ambiguous (e.g., edges) or require expensive manual annotations (e.g., semantic segmentation). To address these limitations, we propose a new label-free way of conditioning diffusion models to enable fine-grained spatial control. We introduce the concept of neural semantic image synthesis, which uses neural layouts extracted from pre-trained foundation models as conditioning. Neural layouts are advantageous as they provide rich descriptions of the desired image, containing both semantics and detailed geometry of the scene. We experimentally show that images synthesized via neural semantic image synthesis achieve similar or superior pixel-level alignment of semantic classes compared to those created using expensive semantic label maps. At the same time, they capture better semantics, instance separation, and object orientation than other label-free conditioning options, such as edges or depth. Moreover, we show that images generated by neural layout conditioning can effectively augment real data for training various perception tasks.
LUMEN makes use of the rich spatial and semantic features within large-scale pre-trained foundation models (FMs) to extract neural layouts as conditioning input. Further, we incorporate a semantic separation step, implemented via PCA, to remove nuisance appearance variations and enable diverse synthesis.
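As a rough illustration, below is a minimal sketch of how such a neural layout could be extracted. The choice of DINOv2 as the foundation model, scikit-learn's PCA for the separation step, and the function extract_neural_layout are all illustrative assumptions, not the paper's exact pipeline.

    # Minimal sketch of neural-layout extraction (assumptions: DINOv2 backbone,
    # sklearn PCA; the paper's actual pipeline may differ).
    import torch
    from sklearn.decomposition import PCA

    def extract_neural_layout(image, n_components=16):
        """image: (1, 3, H, W) tensor, H == W and divisible by the patch size (14)."""
        fm = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()
        with torch.no_grad():
            # Dense patch features carry both semantics and scene geometry.
            feats = fm.forward_features(image)["x_norm_patchtokens"]  # (1, N, C)
        tokens = feats[0].cpu().numpy()                               # (N, C)
        # Semantic separation: keep only the leading PCA directions, which
        # retain semantics/geometry while dropping appearance nuisances.
        layout = PCA(n_components=n_components).fit_transform(tokens) # (N, k)
        h = w = int(tokens.shape[0] ** 0.5)  # square patch grid assumed
        return torch.from_numpy(layout).reshape(h, w, n_components)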
Neural layouts provide a rich description of the desired images, while other inputs contain limited information and are semantically ambiguous.
Varying the number of PCA components gives flexible control over the diversity-fidelity tradeoff. More specifically, using fewer PCA components trades fidelity for diversity.
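In terms of the hypothetical extract_neural_layout sketch above, this knob is simply the number of retained components; the specific values here are illustrative.

    # Fewer components discard more appearance detail, leaving the generator
    # more freedom (diversity); more components pin the layout down (fidelity).
    layout_diverse  = extract_neural_layout(image, n_components=4)   # looser conditioning
    layout_faithful = extract_neural_layout(image, n_components=64)  # tighter conditioning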
LUMEN trained on a sufficiently diverse dataset, e.g., COCO-Stuff, readily generalizes to other datasets, e.g., Cityscapes, ADE20k, and BDD100k, without requiring any finetuning.
@inproceedings{wang2024lumen,
title = {Label-free Neural Semantic Image Synthesis},
author = {Jiayi Wang and Kevin Alexander Laube and Yumeng Li and Jan Hendrik Metzen and Shin-I Cheng and Julio Borges and Anna Khoreva},
booktitle = {ECCV},
year = {2024},
}