Li, B., & Lukasiewicz, T. (2022). Learning to Model Multimodal Semantic Alignment for Story Visualization. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 4741–4747). Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.findings-emnlp.346
Findings of the Association for Computational Linguistics: EMNLP 2022
Date (published):
2022
Event name:
The 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022)
Event date:
7–11 December 2022
Event place:
Abu Dhabi, United Arab Emirates
Number of Pages:
7
Publisher:
Association for Computational Linguistics
Peer reviewed:
Yes
Keywords:
Story visualization; Multimodal semantic alignment
Abstract:
Story visualization aims to generate a sequence of images that narrates each sentence of a multi-sentence story, where the images should be realistic and maintain global consistency across dynamic scenes and characters. Current works suffer from semantic misalignment because of their fixed architectures and the diversity of input modalities. To address this problem, we explore the semantic alignment between text and image representations by learning to match their semantic levels in a GAN-based generative model. More specifically, we introduce dynamic interactions that learn to explore various semantic depths and fuse the different-modal information at a matched semantic level, which relieves the text-image semantic misalignment problem. Extensive experiments on different datasets demonstrate that our approach, which uses neither segmentation masks nor auxiliary captioning networks, improves image quality and story consistency compared with state-of-the-art methods.
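The core idea in the abstract is fusing text and image features at a learned, matched semantic depth rather than at one fixed layer. The following is a minimal, hypothetical PyTorch sketch of that idea only; it is not the authors' implementation, and all names (DepthMatchedFusion, depth_gate, mod_heads) are assumptions made for illustration.

```python
# Illustrative sketch: a text embedding softly selects which image-feature
# depth it should be fused into, then modulates that depth's features.
# Hypothetical code, not the paper's released model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthMatchedFusion(nn.Module):
    """Learn a soft gate over candidate semantic depths and apply a
    text-conditioned affine modulation weighted by that gate."""

    def __init__(self, text_dim: int, channels: list[int]):
        super().__init__()
        self.depth_gate = nn.Linear(text_dim, len(channels))  # one logit per depth
        # One (scale, shift) modulation head per candidate depth.
        self.mod_heads = nn.ModuleList(
            nn.Linear(text_dim, 2 * c) for c in channels
        )

    def forward(self, text_emb: torch.Tensor, feats: list[torch.Tensor]):
        # text_emb: (B, text_dim); feats[i]: (B, C_i, H_i, W_i)
        weights = F.softmax(self.depth_gate(text_emb), dim=-1)  # (B, num_depths)
        fused = []
        for i, f in enumerate(feats):
            scale, shift = self.mod_heads[i](text_emb).chunk(2, dim=-1)
            scale = scale.unsqueeze(-1).unsqueeze(-1)  # (B, C_i, 1, 1)
            shift = shift.unsqueeze(-1).unsqueeze(-1)
            w = weights[:, i].view(-1, 1, 1, 1)
            # Blend the untouched features with the text-modulated ones
            # according to how strongly this depth was selected.
            fused.append(f + w * (scale * f + shift))
        return fused


if __name__ == "__main__":
    B, text_dim = 2, 256
    feats = [torch.randn(B, c, s, s) for c, s in [(512, 8), (256, 16), (128, 32)]]
    fusion = DepthMatchedFusion(text_dim, [512, 256, 128])
    out = fusion(torch.randn(B, text_dim), feats)
    print([o.shape for o in out])
```

The soft gate keeps the depth selection differentiable, which is one plausible way to realize "learning to match semantic levels"; the paper's actual mechanism may differ.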
Project (external):
EPSRC
Project ID:
EP/N510129/1; EP/R013667/1; EP/P020275/1
Research Areas:
Visual Computing and Human-Centered Technology (50%); Information Systems Engineering (50%)