
CoInNet: A Convolution-Involution Network with a Novel Statistical Attention for Automatic Polyp Segmentation

The result's identifiers

  • Result code in IS VaVaI

    RIV/62690094:18450/23:50020726 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F62690094%3A18450%2F23%3A50020726)

  • Result on the web

    https://ieeexplore.ieee.org/document/10266385

  • DOI - Digital Object Identifier

    10.1109/TMI.2023.3320151 (http://dx.doi.org/10.1109/TMI.2023.3320151)

Alternative languages

  • Result language

    English

  • Original language name

    CoInNet: A Convolution-Involution Network with a Novel Statistical Attention for Automatic Polyp Segmentation

  • Original language description

    Polyps are very common abnormalities in the human gastrointestinal tract. Their early diagnosis may help reduce the risk of colorectal cancer. Vision-based computer-aided diagnostic systems automatically identify polyp regions to assist surgeons in their removal. Due to their varying shape, color, size, texture, and unclear boundaries, polyp segmentation in images is a challenging problem. Existing deep learning segmentation models mostly rely on convolutional neural networks, which have certain limitations in learning the diversity of visual patterns at different spatial locations and fail to capture inter-feature dependencies. Vision transformer models have also been deployed for polyp segmentation owing to their powerful global feature extraction capabilities, but they too must be supplemented by convolution layers to learn contextual local information. In the present paper, a polyp segmentation model, CoInNet, is proposed with a novel feature extraction mechanism that leverages the strengths of convolution and involution operations and learns to highlight polyp regions in images by considering the relationships between different feature maps through a statistical feature attention unit. To further aid the network in learning polyp boundaries, an anomaly boundary approximation module is introduced that uses recursively fed feature fusion to refine segmentation results. Remarkably, even tiny polyps covering only 0.01% of the image area can be precisely segmented by CoInNet. This is crucial for clinical applications, as small polyps can easily be overlooked even in manual examination due to the voluminous size of wireless capsule endoscopy videos. CoInNet outperforms thirteen state-of-the-art methods on five benchmark polyp segmentation datasets.

  • Czech name

  • Czech description
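
The description above contrasts convolution, whose kernels are shared across all spatial positions, with involution, whose kernels are generated from the input at each position and shared across channel groups. As an illustration only (not the authors' implementation; the kernel-generating map `w_gen` and all dimensions are assumptions), a minimal NumPy sketch of the basic involution operation:

```python
import numpy as np

def involution(x, w_gen, kernel_size=3, groups=1):
    """Minimal involution sketch: a per-pixel KxK kernel is generated
    from the channel vector at that pixel (here via one linear map,
    w_gen) and applied to the KxK neighbourhood, shared within each
    channel group. x has shape (channels, height, width)."""
    c, h, w = x.shape
    k, pad = kernel_size, kernel_size // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))  # zero-pad borders
    # Generate position-specific kernels: (groups*k*k, h, w), one KxK
    # kernel per group per spatial location, from the channels there.
    kern = np.einsum('oc,chw->ohw', w_gen, x).reshape(groups, k, k, h, w)
    out = np.zeros_like(x)
    cpg = c // groups  # channels per group share one kernel
    for i in range(h):
        for j in range(w):
            patch = xp[:, i:i + k, j:j + k]      # (c, k, k) neighbourhood
            for g in range(groups):
                kg = kern[g, :, :, i, j]         # kernel for this pixel
                out[g * cpg:(g + 1) * cpg, i, j] = (
                    patch[g * cpg:(g + 1) * cpg] * kg).sum(axis=(1, 2))
    return out

# Toy example: 4 channels, 5x5 feature map, 2 groups, 3x3 kernels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 5))
w_gen = rng.standard_normal((2 * 3 * 3, 4))  # 4 channels -> 2 kernels of 3x3
y = involution(x, w_gen, kernel_size=3, groups=2)
print(y.shape)  # (4, 5, 5)
```

Unlike a convolution, where one weight tensor is reused everywhere, here the kernel at each pixel depends on the feature vector at that pixel, which is what lets involution adapt to visual patterns that vary across spatial locations.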

Classification

  • Type

    Jimp - Article in a specialist periodical, which is included in the Web of Science database

  • CEP classification

  • OECD FORD branch

    10201 - Computer sciences, information science, bioinformatics (hardware development belongs to 2.2, social aspects to 5.8)

Result continuities

  • Project

  • Continuities

    I - Institutional support for the long-term conceptual development of a research organization

Others

  • Publication year

    2023

  • Confidentiality

    S - Complete and truthful data about the project are not subject to protection under special legal regulations

Data specific for result type

  • Name of the periodical

    IEEE Transactions on Medical Imaging

  • ISSN

    0278-0062

  • e-ISSN

    1558-254X

  • Volume of the periodical

    42

  • Issue of the periodical within the volume

    12

  • Country of publishing house

    US - UNITED STATES

  • Number of pages

    14

  • Pages from-to

    3987-4000

  • UT code for WoS article

    001122030500041

  • EID of the result in the Scopus database

    2-s2.0-85173076286