Fusion Transformer with Object Mask Guidance for Image Forgery Analysis

Result identifiers

  • Result code in IS VaVaI

    RIV/68407700:21230/24:00379884 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F24%3A00379884)

  • Result on the web

    https://doi.org/10.1109/CVPRW63382.2024.00438

  • DOI - Digital Object Identifier

    10.1109/CVPRW63382.2024.00438 (http://dx.doi.org/10.1109/CVPRW63382.2024.00438)

Alternative languages

  • Result language

    English

  • Title in the original language

    Fusion Transformer with Object Mask Guidance for Image Forgery Analysis

  • Result description in the original language

    In this work, we introduce OMG-Fuser, a fusion transformer-based network designed to extract information from various forensic signals to enable robust image forgery detection and localization. Our approach can operate with an arbitrary number of forensic signals and leverages object information for their analysis, unlike previous methods that rely on fusion schemes with few signals and often disregard image semantics. To this end, we design a forensic signal stream composed of a transformer guided by an object attention mechanism, associating patches that depict the same objects. In that way, we incorporate object-level information from the image. Each forensic signal is processed by a different stream that adapts to its peculiarities. A token fusion transformer efficiently aggregates the outputs of an arbitrary number of network streams and generates a fused representation for each image patch. These representations are finally processed by a long-range dependencies transformer that captures the intrinsic relations between the image patches. We assess two fusion variants on top of the proposed approach: (i) score-level fusion that fuses the outputs of multiple image forensics algorithms, and (ii) feature-level fusion that fuses low-level forensic traces directly. Both variants exceed state-of-the-art performance on seven datasets for image forgery detection and localization, with a relative average improvement of 12.1% and 20.4% in terms of F1. Our model is robust against traditional and novel forgery attacks and can be expanded with new signals without training from scratch. Our code is publicly available at: https://github.com/mever-team/omgfuser

  • Title in English

    Fusion Transformer with Object Mask Guidance for Image Forgery Analysis

  • Result description in English

    In this work, we introduce OMG-Fuser, a fusion transformer-based network designed to extract information from various forensic signals to enable robust image forgery detection and localization. Our approach can operate with an arbitrary number of forensic signals and leverages object information for their analysis, unlike previous methods that rely on fusion schemes with few signals and often disregard image semantics. To this end, we design a forensic signal stream composed of a transformer guided by an object attention mechanism, associating patches that depict the same objects. In that way, we incorporate object-level information from the image. Each forensic signal is processed by a different stream that adapts to its peculiarities. A token fusion transformer efficiently aggregates the outputs of an arbitrary number of network streams and generates a fused representation for each image patch. These representations are finally processed by a long-range dependencies transformer that captures the intrinsic relations between the image patches. We assess two fusion variants on top of the proposed approach: (i) score-level fusion that fuses the outputs of multiple image forensics algorithms, and (ii) feature-level fusion that fuses low-level forensic traces directly. Both variants exceed state-of-the-art performance on seven datasets for image forgery detection and localization, with a relative average improvement of 12.1% and 20.4% in terms of F1. Our model is robust against traditional and novel forgery attacks and can be expanded with new signals without training from scratch. Our code is publicly available at: https://github.com/mever-team/omgfuser
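
A minimal, hypothetical PyTorch sketch of the architecture the description outlines: one transformer stream per forensic signal guided by an object mask, a token fusion transformer across streams, and a long-range transformer over patch tokens. All class names, dimensions, and the way the object mask restricts attention are assumptions made for illustration only; the authors' actual implementation is at https://github.com/mever-team/omgfuser.

import torch
import torch.nn as nn


class SignalStream(nn.Module):
    """One transformer stream per forensic signal, operating on patch tokens.

    Object guidance is approximated here by letting a patch attend only to
    patches that share its object-mask label (an assumption, for illustration).
    """

    def __init__(self, dim: int = 256, depth: int = 2, heads: int = 4):
        super().__init__()
        self.heads = heads
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, tokens: torch.Tensor, object_mask: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim); object_mask: (B, N) integer object label per patch.
        same_object = object_mask.unsqueeze(2) == object_mask.unsqueeze(1)   # (B, N, N)
        attn_mask = (~same_object).repeat_interleave(self.heads, dim=0)      # True = blocked
        return self.encoder(tokens, mask=attn_mask)


class TokenFusion(nn.Module):
    """Fuses per-patch tokens from an arbitrary number of streams, then models
    long-range relations between the fused patch tokens."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        fuse_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.fuse = nn.TransformerEncoder(fuse_layer, num_layers=1)
        long_layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.long_range = nn.TransformerEncoder(long_layer, num_layers=2)
        self.head = nn.Linear(dim, 1)  # per-patch forgery score

    def forward(self, stream_tokens: list[torch.Tensor]) -> torch.Tensor:
        b, n, d = stream_tokens[0].shape
        # View the S stream outputs of each patch as a short sequence and fuse
        # them, so the number of streams can vary without changing the model.
        stacked = torch.stack(stream_tokens, dim=2).reshape(b * n, len(stream_tokens), d)
        fused = self.fuse(stacked).mean(dim=1).reshape(b, n, d)
        fused = self.long_range(fused)          # relations between image patches
        return self.head(fused).squeeze(-1)     # (B, N) localization scores


# Toy usage with three imaginary forensic signals on 14x14 = 196 patches.
streams = nn.ModuleList([SignalStream() for _ in range(3)])
fusion = TokenFusion()
signals = [torch.randn(2, 196, 256) for _ in range(3)]    # per-signal patch tokens
obj_mask = torch.randint(0, 5, (2, 196))                   # object label per patch
scores = fusion([s(t, obj_mask) for s, t in zip(streams, signals)])
print(scores.shape)  # torch.Size([2, 196])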

Classification

  • Type

    D - Article in proceedings

  • CEP field

  • OECD FORD field

    10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)

Result continuities

  • Project

    GM21-28830M: Learning a Universal Visual Representation with Limited Supervision

  • Continuities

    P - Research and development project financed from public sources (with a link to CEP)

Others

  • Publication year

    2024

  • Data confidentiality code

    S - Complete and true data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Proceedings title

    2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)

  • ISBN

    979-8-3503-6547-4

  • ISSN

    2160-7508

  • e-ISSN

    2160-7516

  • Number of pages

    11

  • Pages from-to

    4345-4355

  • Publisher name

    IEEE Computer Society

  • Place of publication

    Los Alamitos

  • Event location

    Seattle

  • Event date

    16. 6. 2024

  • Event type by nationality

    WRD - Worldwide event

  • UT WoS code of the article

    001327781704051