Diffusion models have been recognized for their ability to generate images that are not only visually appealing but also of high artistic quality. As a result, Layout-to-Image (L2I) generation has been proposed to leverage region-specific positions and descriptions to enable more precise and controllable generation. However, previous methods primarily focus on UNet-based models (e.g., SD1.5 and SDXL), and limited effort has been devoted to Multimodal Diffusion Transformers (MM-DiTs), which have demonstrated powerful image generation capabilities. Enabling MM-DiT for layout-to-image generation seems straightforward but is challenging due to the complexity of how layout is introduced, integrated, and balanced among multiple modalities. To this end, we explore various network variants to efficiently incorporate layout guidance into MM-DiT, and ultimately present SiamLayout. To inherit the advantages of MM-DiT, we use a separate set of network weights to process the layout, treating it as equally important as the image and text modalities. Meanwhile, to alleviate competition among modalities, we decouple the image-layout interaction into a siamese branch alongside the image-text one and fuse them at a later stage. Moreover, we contribute a large-scale layout dataset, named LayoutSAM, which includes 2.7 million image-text pairs and 10.7 million entities, each annotated with a bounding box and a detailed description. We further construct the LayoutSAM-Eval benchmark as a comprehensive tool for evaluating L2I generation quality. Finally, we introduce LayoutDesigner, which taps into the potential of large language models in layout planning, transforming them into experts in layout generation and optimization.
An overview of the proposed pipeline, SiamLayout. Layout tokens are derived from the layout encoder based on spatial locations and region descriptions. SiamLayout employs separate transformer parameters to process the layout, treating it as a modality as important as image and text. Layout and text guide the image independently through siamese branches and are then fused at a later stage. We also experiment with two additional network variants that incorporate layout via cross-attention and M3-Attention; SiamLayout performs best.
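To make the siamese-branch design concrete, below is a minimal PyTorch sketch of a SiamLayout-style block. The module names, dimensions, and simple additive fusion are illustrative assumptions and omit parts of a real MM-DiT block (adaptive layer norms, feed-forward layers, per-modality Q/K/V projections); the sketch only shows the separate weights for the layout branch and the late fusion of the two image-side updates.

```python
import torch
import torch.nn as nn


class JointAttention(nn.Module):
    """Joint attention over the concatenation of two token streams.
    Each stream gets its own input projection (a simplification of
    MM-DiT's per-modality Q/K/V projections)."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.proj_a = nn.Linear(dim, dim)  # stream A: image tokens
        self.proj_b = nn.Linear(dim, dim)  # stream B: text or layout tokens
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, a: torch.Tensor, b: torch.Tensor):
        tokens = torch.cat([self.proj_a(a), self.proj_b(b)], dim=1)
        out, _ = self.attn(tokens, tokens, tokens)
        return out[:, : a.size(1)], out[:, a.size(1):]  # split back per stream


class SiamLayoutBlock(nn.Module):
    """Image-text and image-layout interactions run as siamese branches
    with separate weights; their image-side outputs are fused afterwards."""

    def __init__(self, dim: int, layout_scale: float = 1.0):
        super().__init__()
        self.image_text = JointAttention(dim)    # MM-DiT-style image-text branch
        self.image_layout = JointAttention(dim)  # separate weights for layout
        self.layout_scale = layout_scale         # balances the layout guidance

    def forward(self, image, text, layout):
        img_t, text = self.image_text(image, text)
        img_l, layout = self.image_layout(image, layout)
        # Late fusion: both branches update the image tokens independently.
        image = image + img_t + self.layout_scale * img_l
        return image, text, layout


# Usage with hypothetical token shapes: (batch, tokens, dim).
block = SiamLayoutBlock(dim=64)
image = torch.randn(2, 256, 64)   # latent image tokens
text = torch.randn(2, 77, 64)     # text tokens
layout = torch.randn(2, 10, 64)   # layout tokens (one per boxed entity)
image, text, layout = block(image, text, layout)
```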
We contribute a large-scale layout dataset based on SAM, named LayoutSAM, which includes 2.7 million image-text pairs and 10.7 million entities. Each entity is annotated with a bounding box and a detailed description. We further construct the LayoutSAM-Eval benchmark as a comprehensive tool for evaluating layout-to-image generation quality. To build the dataset, we design an automatic annotation pipeline that produces layout annotations for any given image.
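As a concrete illustration of this annotation format, here is a minimal sketch of how a LayoutSAM-style record could be represented. The field names, the normalized xyxy box convention, and the example values are assumptions for illustration, not the released schema.

```python
# Hypothetical record layout for a LayoutSAM-style sample: a global caption
# plus per-entity bounding boxes and detailed descriptions.
from dataclasses import dataclass
from typing import List


@dataclass
class Entity:
    bbox: List[float]       # [x_min, y_min, x_max, y_max], normalized to [0, 1] (assumed convention)
    description: str        # detailed description of the boxed region


@dataclass
class LayoutSample:
    image_path: str         # path or URL of the source image
    caption: str            # global image caption
    entities: List[Entity]  # per-region annotations


sample = LayoutSample(
    image_path="images/000001.jpg",
    caption="A golden retriever playing with a red ball on a sunny lawn.",
    entities=[
        Entity(bbox=[0.12, 0.30, 0.58, 0.95],
               description="a golden retriever with fluffy fur, mid-jump"),
        Entity(bbox=[0.60, 0.70, 0.78, 0.88],
               description="a bright red rubber ball casting a small shadow"),
    ],
)
```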
To support diverse user inputs beyond entity bounding boxes, we turn a large language model into a layout planner named LayoutDesigner. This model can convert and optimize various user inputs, such as center points, masks, scribbles, or even a rough idea, into a harmonious and aesthetically pleasing layout.
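As an illustration of this idea, the sketch below shows how a large language model could be prompted to act as such a layout planner. The prompt wording and the input/output formats are assumptions for illustration, not the prompts used by LayoutDesigner.

```python
# A rough sketch of prompting an LLM to turn coarse user input into a layout.
def build_layout_planner_prompt(caption: str, coarse_inputs: str) -> str:
    # The instruction text and output format below are hypothetical.
    return (
        "You are a layout designer for image generation.\n"
        f"Global caption: {caption}\n"
        f"Coarse user input (center points / scribbles / rough idea): {coarse_inputs}\n"
        "Convert this into a harmonious layout. Output one line per entity as\n"
        "'description: [x_min, y_min, x_max, y_max]' with coordinates in [0, 1],\n"
        "avoiding heavy overlap and keeping plausible relative sizes."
    )


print(build_layout_planner_prompt(
    caption="A cat sitting on a windowsill at sunset",
    coarse_inputs="cat at (0.4, 0.6); potted plant somewhere on the right",
))
```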
@article{zhang2024creatilayout,
  title={CreatiLayout: Siamese Multimodal Diffusion Transformer for Creative Layout-to-Image Generation},
  author={Zhang, Hui and Hong, Dexiang and Gao, Tingwei and Wang, Yitong and Shao, Jie and Wu, Xinglong and Wu, Zuxuan and Jiang, Yu-Gang},
  journal={arXiv preprint arXiv:2412.03859},
  year={2024}
}