Explainable AI in the military domain

Result identifiers

  • Result code in IS VaVaI

    RIV/67985955:_____/24:00586451 (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F67985955%3A_____%2F24%3A00586451)

  • Result on the web

    https://doi.org/10.1007/s10676-024-09762-w

  • DOI - Digital Object Identifier

    10.1007/s10676-024-09762-w (a sketch for resolving this DOI programmatically follows this list)
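A minimal sketch (not part of the original record, standard library only) of one way to retrieve citation metadata for the DOI listed above, using doi.org content negotiation; for Crossref-registered DOIs such as this one, the resolver returns a BibTeX entry when the request carries an Accept: application/x-bibtex header:

    # Fetch a BibTeX entry for the article via DOI content negotiation.
    import urllib.request

    DOI = "10.1007/s10676-024-09762-w"  # DOI from the record above

    req = urllib.request.Request(
        f"https://doi.org/{DOI}",
        headers={"Accept": "application/x-bibtex"},  # ask for BibTeX, not HTML
    )
    # urllib keeps the Accept header across the doi.org redirect chain.
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # prints the BibTeX entry

The same request with Accept: application/vnd.citationstyles.csl+json would return CSL JSON instead; both are standard content types supported by the DOI resolver.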

Alternative languages

  • Result language

    English

  • Title in the original language

    Explainable AI in the military domain

  • Result description in the original language

    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the military domain, numerous bodies have argued that autonomous and AI-enabled weapon systems ought not incorporate unexplainable AI, with the International Committee of the Red Cross and the United States Department of Defense both explicitly including explainability as a relevant factor in the development and use of such systems. In this article, I present a cautiously critical assessment of this view, arguing that explainability will be irrelevant for many current and near-future autonomous systems in the military (which do not incorporate any AI), that it will be trivially incorporated into most military systems which do possess AI (as these generally possess simpler AI systems), and that for those systems with genuinely opaque AI, explainability will prove to be of more limited value than one might imagine. In particular, I argue that explainability, while indeed a virtue in design, is a virtue aimed primarily at designers and troubleshooters of AI-enabled systems, but is far less relevant for users and handlers actually deploying these systems. I further argue that human-machine teaming is a far more important element of responsibly using AI for military purposes, adding that explainability may undermine efforts to improve human-machine teaming by creating a prima facie sense that the AI, due to its explainability, may be utilized with little (or less) potential for mistakes. I conclude by clarifying that the arguments are not against XAI in the military, but are instead intended as a caution against over-inflating the value of XAI in this domain, or ignoring the limitations and potential pitfalls of this approach.

Classification

  • Result type

    Jimp - Article in a journal indexed in the Web of Science database

  • CEP field

  • OECD FORD field

    60302 - Ethics (except ethics related to specific subfields)

Result linkages

  • Project

    GN24-12638I: Ethical and legal aspects of autonomous weapon systems

  • Linkages

    I - Institutional support for the long-term conceptual development of a research organisation

Other

  • Year of publication

    2024

  • Data confidentiality code

    S - Complete and accurate data on the project are not subject to protection under special legal regulations

Data specific to the result type

  • Journal title

    Ethics and Information Technology

  • ISSN

    1388-1957

  • e-ISSN

    1572-8439

  • Volume

    26

  • Issue within the volume

    2

  • Country of the publisher

    NL - Netherlands

  • Number of pages

    13

  • Pages from-to

    29

  • UT WoS article code

    001203946100001

  • Scopus result EID

    2-s2.0-85190401777