Fairness in Forecasting of Observations of Linear Dynamical Systems
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F23%3A00368296" target="_blank" >RIV/68407700:21230/23:00368296 - isvavai.cz</a>
Result on the web
<a href="https://doi.org/10.1613/jair.1.14050" target="_blank" >https://doi.org/10.1613/jair.1.14050</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1613/jair.1.14050" target="_blank" >10.1613/jair.1.14050</a>
Alternative languages
Result language
English
Title in the original language
Fairness in Forecasting of Observations of Linear Dynamical Systems
Description in the original language
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. This behaviour can often be modelled as observations of an unknown dynamical system with an unobserved state. When the training data for the subgroups are not controlled carefully, however, under-representation bias arises. To counter under-representation bias, we introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness. These notions extend predictive parity to the learning of dynamical systems. We present globally convergent methods for the fairness-constrained learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. We also show that by exploiting sparsity in the convexifications, we can reduce the run time of our methods considerably. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
Title in English
Fairness in Forecasting of Observations of Linear Dynamical Systems
Description in English
In machine learning, training data often capture the behaviour of multiple subgroups of some underlying human population. This behaviour can often be modelled as observations of an unknown dynamical system with an unobserved state. When the training data for the subgroups are not controlled carefully, however, under-representation bias arises. To counter under-representation bias, we introduce two natural notions of fairness in time-series forecasting problems: subgroup fairness and instantaneous fairness. These notions extend predictive parity to the learning of dynamical systems. We present globally convergent methods for the fairness-constrained learning problems using hierarchies of convexifications of non-commutative polynomial optimisation problems. We also show that by exploiting sparsity in the convexifications, we can reduce the run time of our methods considerably. Our empirical results on a biased data set motivated by insurance applications and the well-known COMPAS data set demonstrate the efficacy of our methods.
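The sketch below is a minimal, hypothetical illustration of the fairness idea in the description, not the method from the paper: it replaces the latent-state linear dynamical system with a simple shared autoregressive forecaster and approximates subgroup fairness by minimising the worst per-subgroup forecast error, so the problem stays convex without the non-commutative polynomial optimisation hierarchy. All names (make_series, group_loss, the AR order p, the simulated subgroups) are illustrative assumptions.

```python
# Hypothetical sketch: a shared AR(p) forecaster fitted across subgroups with a
# minimax (worst-subgroup) loss, used here as a convex stand-in for the paper's
# fairness-constrained learning of a latent-state linear dynamical system.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)

def make_series(n, a=0.8, noise=0.1):
    """Simulate a scalar AR(1) trajectory (illustrative data, not the paper's)."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = a * y[t - 1] + noise * rng.standard_normal()
    return y

# Under-representation bias: many majority trajectories, few minority ones.
groups = {
    "majority": [make_series(200, a=0.8) for _ in range(20)],
    "minority": [make_series(200, a=0.5) for _ in range(2)],
}

p = 3                  # AR order of the shared forecaster
w = cp.Variable(p)     # shared AR coefficients

def group_loss(series_list):
    """Mean squared one-step-ahead forecast error over one subgroup."""
    errs = []
    for y in series_list:
        # Lag matrix: column k holds y shifted back by k + 1 steps.
        X = np.column_stack([y[p - 1 - k: len(y) - 1 - k] for k in range(p)])
        target = y[p:]
        errs.append(cp.sum_squares(target - X @ w) / len(target))
    return sum(errs) / len(series_list)

losses = {g: group_loss(s) for g, s in groups.items()}

# Minimising the worst per-subgroup loss is a convex surrogate for subgroup
# fairness; it keeps the minority's forecast error from being sacrificed.
problem = cp.Problem(cp.Minimize(cp.maximum(*losses.values())))
problem.solve()

print("shared AR coefficients:", np.round(w.value, 3))
print({g: round(float(l.value), 5) for g, l in losses.items()})
```

Running the script fits one set of coefficients for both subgroups; comparing the printed per-subgroup losses against an ordinary pooled least-squares fit illustrates how the minimax surrogate narrows the gap caused by under-representation.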
Classification
Type
J<sub>imp</sub> - Article in a periodical in the Web of Science database
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/EF16_019%2F0000765" target="_blank" >EF16_019/0000765: Výzkumné centrum informatiky</a><br>
Linkages
P - Research and development project financed from public funds (with a link to CEP)
Others
Year of implementation
2023
Data confidentiality code
S - Complete and true data on the project are not subject to protection under special legal regulations
Data specific to the result type
Journal title
Journal of Artificial Intelligence Research
ISSN
1076-9757
e-ISSN
1943-5037
Journal volume
76
Issue of the journal within the volume
April
Country of the journal publisher
US - United States of America
Number of pages of the result
34
Pages from-to
1247-1280
UT WoS code of the article
000982549100001
EID of the result in the Scopus database
2-s2.0-85160287419