Vigilance System for High-Risk Autonomous Devices
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216224%3A14220%2F20%3A00117984" target="_blank" >RIV/00216224:14220/20:00117984 - isvavai.cz</a>
Alternative codes found
RIV/00216224:14220/21:00121200
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Vigilance System for High-Risk Autonomous Devices
Description in original language
Current EU regulatory discourse regarding autonomous products and systems focuses on ensuring the safety of high-risk autonomous technologies. This priority was established in the White Paper on AI and further discussed at the AI Alliance Assembly this October. However, both the means to ensure safety and the definition of high-risk technologies remain unclear. This is a proposition paper on what some aspects of the proposed regulation could look like. The level of risk should be evaluated on the basis of three criteria: 1) the level of interaction between a user and a device (the level of human control); 2) the probability of inflicting harm on privacy, health, life or other human rights; 3) sector-specific conditions for autonomous operations. The means to address these high-risk devices shall include a vigilance system: a framework for monitoring the safety of devices throughout their whole life cycle through supervision and testing. This framework shall be horizontal across all autonomous devices and shall be complemented by sector-specific regulation and technical standards that will allow compliance with standards of care to be verified. The framework should also provide for the certification of persons handling the technology. A visual aid showing the proposed system is introduced.
Title in English
Vigilance System for High-Risk Autonomous Devices
Description in English
Current EU regulatory discourse regarding autonomous products and systems focuses on ensuring the safety of high-risk autonomous technologies. This priority was established in the White Paper on AI and further discussed at the AI Alliance Assembly this October. However, both the means to ensure safety and the definition of high-risk technologies remain unclear. This is a proposition paper on what some aspects of the proposed regulation could look like. The level of risk should be evaluated on the basis of three criteria: 1) the level of interaction between a user and a device (the level of human control); 2) the probability of inflicting harm on privacy, health, life or other human rights; 3) sector-specific conditions for autonomous operations. The means to address these high-risk devices shall include a vigilance system: a framework for monitoring the safety of devices throughout their whole life cycle through supervision and testing. This framework shall be horizontal across all autonomous devices and shall be complemented by sector-specific regulation and technical standards that will allow compliance with standards of care to be verified. The framework should also provide for the certification of persons handling the technology. A visual aid showing the proposed system is introduced.
Classification
Type
O - Other results
CEP field
—
OECD FORD field
50500 - Law
Result linkages
Project
<a href="/cs/project/EF16_019%2F0000822" target="_blank" >EF16_019/0000822: Centrum excelence pro kyberkriminalitu, kyberbezpečnost a ochranu kritických informačních infrastruktur</a><br>
Linkages
P - Research and development project financed from public sources (with a link to CEP)
Others
Year of application
2020
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations