
Transformer-based Semantic Segmentation for Large-Scale Building Footprint Extraction from Very-High Resolution Satellite Images

Result identifiers

  • Result code in IS VaVaI

    <a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216305%3A26220%2F24%3APU151061" target="_blank" >RIV/00216305:26220/24:PU151061 - isvavai.cz</a>

  • Result on the web

    <a href="https://www.sciencedirect.com/science/article/pii/S0273117724002205" target="_blank" >https://www.sciencedirect.com/science/article/pii/S0273117724002205</a>

  • DOI - Digital Object Identifier

    <a href="http://dx.doi.org/10.1016/j.asr.2024.03.002" target="_blank" >10.1016/j.asr.2024.03.002</a>

Alternative languages

  • Result language

    English

  • Original language name

    Transformer-based Semantic Segmentation for Large-Scale Building Footprint Extraction from Very-High Resolution Satellite Images

  • Original language description

    Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), based on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN models (PSPNet, DeepLabV3+, UpperNet-ConvNext, and SegNeXt) and two transformer-based models (UpperNet-Swin and SegFormer) featuring different complexities. Results reveal superior performance of transformer-based models over CNN-based counterparts, showcasing exceptional generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin transformer backbone achieves a mean intersection over union between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes.

  • Czech name

  • Czech description
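
The original-language description above reports accuracy as mean intersection over union (mIoU) and mean F-score (mF-score). As a quick illustration of what those figures measure, the sketch below computes both per class from a confusion matrix and averages them, in plain NumPy. This is a generic re-implementation for illustration, not the paper's evaluation code; the function name and the toy two-class building/background example are invented here.

    import numpy as np

    def miou_and_mfscore(pred: np.ndarray, gt: np.ndarray, num_classes: int):
        """Mean IoU and mean F-score from integer label maps of equal shape."""
        # Confusion matrix: rows = ground-truth class, columns = predicted class.
        valid = (gt >= 0) & (gt < num_classes)
        cm = np.bincount(
            num_classes * gt[valid].astype(int) + pred[valid].astype(int),
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)

        tp = np.diag(cm).astype(float)
        fp = cm.sum(axis=0) - tp  # predicted as the class, but wrong
        fn = cm.sum(axis=1) - tp  # belongs to the class, but missed

        iou = tp / np.maximum(tp + fp + fn, 1)        # per-class IoU
        f = 2 * tp / np.maximum(2 * tp + fp + fn, 1)  # per-class F-score (Dice)
        return iou.mean(), f.mean()

    # Toy 2x2 tile, two classes: 0 = background, 1 = building.
    gt = np.array([[0, 1], [1, 1]])
    pred = np.array([[0, 1], [0, 1]])
    miou, mf = miou_and_mfscore(pred, gt, num_classes=2)
    print(f"mIoU = {miou:.3f}, mF-score = {mf:.3f}")  # mIoU = 0.583, mF-score = 0.733

An mIoU of 88% to 93%, as reported in the description, therefore means that, averaged over classes, the predicted and reference masks overlap on roughly nine-tenths of their union.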

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    20202 - Communication engineering and systems

Result continuities

  • Project

  • Continuities

    S - Specific research at universities

Others

  • Publication year

    2024

  • Confidentiality

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    ADVANCES IN SPACE RESEARCH

  • ISSN

    0273-1177

  • e-ISSN

    1879-1948

  • Volume of the periodical

    73

  • Issue of the periodical within the volume

    10

  • Country of publishing house

    GB - UNITED KINGDOM

  • Number of pages

    17

  • Pages from-to

    4937-4954

  • UT code for WoS article

    001226582700001

  • EID of the result in the Scopus database

    2-s2.0-85188559920