Learning not to regret
Result identifiers
Result code in IS VaVaI
<a href="https://www.isvavai.cz/riv?ss=detail&h=RIV%2F68407700%3A21230%2F24%3A00378617" target="_blank" >RIV/68407700:21230/24:00378617 - isvavai.cz</a>
Alternative codes found
RIV/00216208:11320/24:10490822
Result on the web
<a href="https://doi.org/10.1609/aaai.v38i14.29443" target="_blank" >https://doi.org/10.1609/aaai.v38i14.29443</a>
DOI - Digital Object Identifier
<a href="http://dx.doi.org/10.1609/aaai.v38i14.29443" target="_blank" >10.1609/aaai.v38i14.29443</a>
Alternative languages
Result language
English
Title in original language
Learning not to regret
Description in original language
The literature on game-theoretic equilibrium finding predominantly focuses on single games or their repeated play. Nevertheless, numerous real-world scenarios feature playing a game sampled from a distribution of similar, but not identical games, such as playing poker with different public cards or trading correlated assets on the stock market. As these similar games feature similar equilibria, we investigate a way to accelerate equilibrium finding on such a distribution. We present a novel "learning not to regret" framework, enabling us to meta-learn a regret minimizer tailored to a specific distribution. Our key contribution, Neural Predictive Regret Matching, is uniquely meta-learned to converge rapidly for the chosen distribution of games, while having regret minimization guarantees on any game. We validated our algorithms' faster convergence on a distribution of river poker games. Our experiments show that the meta-learned algorithms outpace their non-meta-learned counterparts, achieving more than tenfold improvements.
Title in English
Learning not to regret
Description in English
The literature on game-theoretic equilibrium finding predominantly focuses on single games or their repeated play. Nevertheless, numerous real-world scenarios feature playing a game sampled from a distribution of similar, but not identical games, such as playing poker with different public cards or trading correlated assets on the stock market. As these similar games feature similar equilibria, we investigate a way to accelerate equilibrium finding on such a distribution. We present a novel "learning not to regret" framework, enabling us to meta-learn a regret minimizer tailored to a specific distribution. Our key contribution, Neural Predictive Regret Matching, is uniquely meta-learned to converge rapidly for the chosen distribution of games, while having regret minimization guarantees on any game. We validated our algorithms' faster convergence on a distribution of river poker games. Our experiments show that the meta-learned algorithms outpace their non-meta-learned counterparts, achieving more than tenfold improvements.
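The abstract builds on regret matching, the classical no-regret dynamic whose averaged self-play converges to equilibrium in zero-sum games; the paper's Neural Predictive Regret Matching is a meta-learned variant of it. Below is a minimal sketch of plain regret matching only, not the paper's method; the 2x2 payoff matrix and iteration count are illustrative assumptions, not taken from the paper.

```python
# Plain regret matching (Hart & Mas-Colell) in self-play on a small
# zero-sum matrix game. The paper's contribution meta-learns a neural
# predictive variant of this dynamic, which is NOT reproduced here.
import numpy as np

def regret_matching(payoff, iters=10_000):
    """Run regret-matching self-play; `payoff` holds the row player's
    payoffs. Returns both players' time-averaged strategies."""
    n, m = payoff.shape
    regret_row, regret_col = np.zeros(n), np.zeros(m)
    avg_row, avg_col = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # Play proportionally to positive cumulative regret (uniform if none).
        pos = np.maximum(regret_row, 0.0)
        row = pos / pos.sum() if pos.sum() > 0 else np.full(n, 1.0 / n)
        pos = np.maximum(regret_col, 0.0)
        col = pos / pos.sum() if pos.sum() > 0 else np.full(m, 1.0 / m)
        # Instantaneous regrets: per-action values minus the realized value.
        u_row = payoff @ col            # row player's per-action values
        u_col = -(row @ payoff)         # column player's per-action values
        regret_row += u_row - row @ u_row
        regret_col += u_col - col @ u_col
        avg_row += row
        avg_col += col
    return avg_row / iters, avg_col / iters

# Illustrative game: the unique equilibrium mixes actions as (0.4, 0.6)
# for both players; the average strategies approach it as iterations grow.
game = np.array([[2.0, -1.0], [-1.0, 1.0]])
row_avg, col_avg = regret_matching(game)
```

In zero-sum games the average strategy of this dynamic approaches equilibrium at the standard O(1/sqrt(T)) no-regret rate; the paper's meta-learning targets exactly this convergence speed on a distribution of similar games.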
Classification
Type
D - Article in proceedings
CEP field
—
OECD FORD field
10201 - Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Result linkages
Project
<a href="/cs/project/GA22-26655S" target="_blank" >GA22-26655S: Algorithms for playing massive games with imperfect information</a><br>
Linkages
P - R&D project financed from public funds (with a link to CEP)
Others
Year of implementation
2024
Data confidentiality code
S - Complete and truthful data on the project are not subject to protection under special legal regulations
Data specific to the result type
Article name in the proceedings
Proceedings of the 38th AAAI Conference on Artificial Intelligence
ISBN
978-1-57735-887-9
ISSN
2159-5399
e-ISSN
2374-3468
Number of pages
9
Pages from-to
15202-15210
Publisher name
AAAI Press
Place of publication
Menlo Park
Event location
Vancouver
Event date
Feb 20, 2024
Event type by nationality
WRD - Worldwide event
UT WoS code of the article
001239983500003