Openness and Safety in the Development of Large Language Models
Result identifiers
Result code in IS VaVaI
RIV/00216208:11230/24:10484333 - isvavai.cz (https://www.isvavai.cz/riv?ss=detail&h=RIV%2F00216208%3A11230%2F24%3A10484333)
Result on the web
—
DOI - Digital Object Identifier
—
Alternative languages
Result language
English
Title in original language
Openness and Safety in the Development of Large Language Models
Result description in original language
The development of state-of-the-art LLMs (Large Language Models) and other types of generative AI is increasingly carried out behind the closed doors of private AI laboratories belonging to competing technology companies. Concerns over the risks associated with LLMs have led to proposals for regulatory models, many of them motivated by the technology companies themselves. Some then argue that reducing the risks associated with private, versatile, and powerful LLMs should remain in the hands of their owners because other actors are not equipped to handle the task. This approach makes regulation ineffective by failing to guarantee a fair and unbiased representation of social preferences regarding the capabilities of LLMs and by denying the accountability of private LLM owners in the domains affected by LLMs. We argue that alternative regulatory models are possible and outline what needs to change to make them viable. Instead of versatile and private LLMs, regulation should encourage domain-specific and open models, allowing oversight by the domain regulators best equipped to reduce the risk. The decentralized regulation of domain-specific and open LLMs also enables a better representation of the values of marginalized groups, something that is harder to achieve with opaque and versatile models developed by private companies.
Title in English
Openness and Safety in the Development of Large Language Models
Result description in English
The development of state-of-the-art LLMs (Large Language Models) and other types of generative AI is increasingly carried out behind the closed doors of private AI laboratories belonging to competing technology companies. Concerns over the risks associated with LLMs have led to proposals for regulatory models, many of them motivated by the technology companies themselves. Some then argue that reducing the risks associated with private, versatile, and powerful LLMs should remain in the hands of their owners because other actors are not equipped to handle the task. This approach makes regulation ineffective by failing to guarantee a fair and unbiased representation of social preferences regarding the capabilities of LLMs and by denying the accountability of private LLM owners in the domains affected by LLMs. We argue that alternative regulatory models are possible and outline what needs to change to make them viable. Instead of versatile and private LLMs, regulation should encourage domain-specific and open models, allowing oversight by the domain regulators best equipped to reduce the risk. The decentralized regulation of domain-specific and open LLMs also enables a better representation of the values of marginalized groups, something that is harder to achieve with opaque and versatile models developed by private companies.
Classification
Type
O - Other results
CEP field
—
OECD FORD field
50601 - Political science
Result linkages
Project
LX22NPO5101: National Institute for Research on the Socioeconomic Impacts of Diseases and Systemic Risks (/cs/project/LX22NPO5101)
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Other
Year of implementation
2024
Data confidentiality code
S - Complete and accurate data on the project are not subject to protection under special legal regulations