EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering
AAAI 2024

Abstract

[Figure: overview]

This is a follow-up to our LoveDA work (NeurIPS 2021).

Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects and comprehensive reasoning. Based on city planning needs, we develop a multi-modal multi-task VQA dataset (EarthVQA) to advance relational reasoning-based judging, counting, and comprehensive analysis. The EarthVQA dataset contains 6000 images, corresponding semantic masks, and 208,593 QA pairs with urban and rural governance requirements embedded. As objects are the basis for complex relational reasoning, we propose a Semantic OBject Awareness framework (SOBA) to advance VQA in an object-centric way. To preserve refined spatial locations and semantics, SOBA leverages a segmentation network for object semantics generation. The object-guided attention aggregates object interior features via pseudo masks, and bidirectional cross attention further models object external relations hierarchically. To optimize object counting, we propose a numerical difference loss that dynamically adds difference penalties, unifying the classification and regression tasks. Experimental results show that SOBA outperforms both advanced general and remote sensing methods. We believe this dataset and framework provide a strong benchmark for Earth vision's complex analysis.
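
For intuition, below is a minimal PyTorch-style sketch of two ideas from the abstract: pooling per-object features under the segmentation pseudo masks, and a counting loss that adds a numerical-difference penalty on top of classification. The function names, tensor shapes, and the expected-count surrogate are illustrative assumptions, not the released SOBA implementation; see the paper for the actual design.

import torch
import torch.nn.functional as F

def object_guided_pooling(features, pseudo_mask, num_classes):
    # Hypothetical sketch: average backbone features inside each pseudo-mask
    # region, yielding one descriptor per object class.
    # features: (B, D, H, W); pseudo_mask: (B, H, W) integer class map
    one_hot = F.one_hot(pseudo_mask, num_classes).permute(0, 3, 1, 2).float()  # (B, K, H, W)
    area = one_hot.sum(dim=(2, 3)).clamp(min=1.0)                              # (B, K)
    pooled = torch.einsum('bdhw,bkhw->bkd', features, one_hot)                 # (B, K, D)
    return pooled / area.unsqueeze(-1)

def numerical_difference_loss(logits, target_counts, penalty_weight=1.0):
    # Hypothetical sketch: counting as classification over count bins, plus a
    # penalty that grows with the numerical distance between the predicted
    # and true counts, so "off by 5" costs more than "off by 1".
    # logits: (B, C) scores over count classes 0..C-1; target_counts: (B,) long
    ce = F.cross_entropy(logits, target_counts)
    probs = logits.softmax(dim=-1)
    bins = torch.arange(logits.size(-1), device=logits.device, dtype=probs.dtype)
    expected_count = (probs * bins).sum(dim=-1)  # differentiable count estimate
    diff = (expected_count - target_counts.to(probs.dtype)).abs().mean()
    return ce + penalty_weight * diff

# Toy usage with illustrative shapes:
feats = torch.randn(2, 64, 32, 32)
mask = torch.randint(0, 8, (2, 32, 32))
obj_tokens = object_guided_pooling(feats, mask, num_classes=8)   # (2, 8, 64)
loss = numerical_difference_loss(torch.randn(2, 16), torch.randint(0, 16, (2,)))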

Experiments on EarthVQA

[Figure: overview]

BibTeX

@article{wang2024earthvqa,
    title={EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering},
    author={Junjue Wang and Zhuo Zheng and Zihang Chen and Ailong Ma and Yanfei Zhong},
    journal={Proceedings of the AAAI Conference on Artificial Intelligence},
    volume={38},
    number={6},
    pages={5481--5489},
    year={2024},
    month={Mar.},
    doi={10.1609/aaai.v38i6.28357},
    url={https://ojs.aaai.org/index.php/AAAI/article/view/28357}}
@article{earthvqanet,
    title={EarthVQANet: Multi-task visual question answering for remote sensing image understanding},
    author={Junjue Wang and Ailong Ma and Zihang Chen and Zhuo Zheng and Yuting Wan and Liangpei Zhang and Yanfei Zhong},
    journal={ISPRS Journal of Photogrammetry and Remote Sensing},
    volume={212},
    pages={422--439},
    year={2024},
    issn={0924-2716},
    doi={10.1016/j.isprsjprs.2024.05.001},
    url={https://www.sciencedirect.com/science/article/pii/S0924271624001990}}

Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grant Nos. 42325105, 42071350, and 42171336.

The website template was borrowed from Bowen Cheng and Michaël Gharbi.