AI Life Cycle and Human Rights: Risks and Remedies
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00372857" target="_blank" >RIV/68407700:21230/23:00372857 - isvavai.cz</a>
Result on the web
<a href="https://prg.ai/projekty/ai-lidska-prava/" target="_blank" >https://prg.ai/projekty/ai-lidska-prava/</a>
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in the original language
AI Life Cycle and Human Rights: Risks and Remedies
Description in the original language
This interdisciplinary chapter, based on a collaboration between international lawyers and artificial intelligence (AI) experts, introduces the various phases of the AI life cycle in light of the possible human rights violations that may arise from each of them. It identifies the root causes of the risks to human rights and analyses possible remedies that are common to all AI systems despite their great diversity and range of domains of use today. The risk of human rights violations arises notably from unbalanced or biased data, insufficiently identified system boundary conditions or a modified context, the black-box nature of many systems, and malicious use or abuse of AI. The key point is to introduce human rights risk assessment throughout the whole AI life cycle and to integrate it into the user requirements and system specifications in the initial phase. This ensures, inter alia, that the AI system will be developed, tested, and monitored in light of the applicable human rights limitations. Requirements related to transparency, explainability, certification, and the selection of development data are all highly relevant for the protection of human rights.
Title in English
AI Life Cycle and Human Rights: Risks and Remedies
Description in English
This interdisciplinary chapter, based on a collaboration between international lawyers and artificial intelligence (AI) experts, introduces the various phases of the AI life cycle in light of the possible human rights violations that may arise from each of them. It identifies the root causes of the risks to human rights and analyses possible remedies that are common to all AI systems despite their great diversity and range of domains of use today. The risk of human rights violations arises notably from unbalanced or biased data, insufficiently identified system boundary conditions or a modified context, the black-box nature of many systems, and malicious use or abuse of AI. The key point is to introduce human rights risk assessment throughout the whole AI life cycle and to integrate it into the user requirements and system specifications in the initial phase. This ensures, inter alia, that the AI system will be developed, tested, and monitored in light of the applicable human rights limitations. Requirements related to transparency, explainability, certification, and the selection of development data are all highly relevant for the protection of human rights.
Classification
Type
O - Other results
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspects to be 5.8)
Result linkages
Project
<a href="/cs/project/TL05000484" target="_blank" >TL05000484: Artificial intelligence and human rights: risks, opportunities and regulation</a><br>
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Other
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations