Resource description: This paper investigates the evaluation of learned multiagent strategies in the incomplete information setting, which plays a critical role in the ranking and training of agents. Traditionally, researchers have relied on Elo ratings for this purpose, with recent works also using methods based on Nash equilibria. Unfortunately, Elo is unable to handle intransitive agent interactions, and other techniques are restricted to zero-sum, two-player settings or are limited by the fact that the Nash equilibrium is intractable to compute. Recently, a ranking method called α-Rank, relying on a new graph-based game-theoretic solution concept, was shown to apply tractably to general games. However, evaluations based on Elo or α-Rank typically assume noise-free game outcomes, despite the data often being collected from noisy simulations, making this assumption unrealistic in practice. This paper investigates multiagent evaluation in the incomplete information regime, involving general-sum, many-player games with noisy outcomes. We derive the sample complexity guarantees required to confidently rank agents in this setting. We propose adaptive algorithms for accurate ranking, provide correctness and sample complexity guarantees, and then introduce a means of connecting uncertainties in noisy match outcomes to uncertainties in rankings. We evaluate the performance of these approaches in several domains, including Bernoulli games, a soccer meta-game, and Kuhn poker.
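As an illustration of the kind of sample-complexity reasoning the abstract refers to, the sketch below estimates pairwise win probabilities from noisy Bernoulli match outcomes and uses Hoeffding's inequality with a union bound to choose a per-entry sample count that makes every empirical win rate ε-accurate with probability at least 1 − δ. This is only a minimal sketch under those assumptions, not the paper's adaptive ranking algorithm or α-Rank itself; the function names, the three-agent intransitive win matrix, and the (ε, δ) values are hypothetical.

```python
import numpy as np

def hoeffding_samples(epsilon, delta, num_entries):
    # Per-entry sample count so that, by Hoeffding's inequality and a union
    # bound over num_entries payoff entries, every empirical win rate lies
    # within epsilon of its true value with probability at least 1 - delta.
    return int(np.ceil(np.log(2 * num_entries / delta) / (2 * epsilon ** 2)))

def estimate_win_matrix(true_probs, n_samples, rng):
    # Simulate n_samples noisy Bernoulli match outcomes per agent pair and
    # return the matrix of empirical win rates.
    k = true_probs.shape[0]
    est = np.full((k, k), 0.5)  # self-play treated as a coin flip
    for i in range(k):
        for j in range(k):
            if i != j:
                est[i, j] = rng.binomial(n_samples, true_probs[i, j]) / n_samples
    return est

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 3-agent win-probability matrix with an intransitive cycle
    # (A beats B, B beats C, C beats A) -- the structure Elo cannot capture.
    P = np.array([[0.5, 0.8, 0.3],
                  [0.2, 0.5, 0.8],
                  [0.7, 0.2, 0.5]])
    eps, delta = 0.05, 0.05
    n = hoeffding_samples(eps, delta, num_entries=P.size)
    P_hat = estimate_win_matrix(P, n, rng)
    print(f"samples per entry: {n}")
    print(f"max estimation error: {np.abs(P_hat - P).max():.3f}")
```

With ε = δ = 0.05 and nine payoff entries, this uniform budget works out to about 1,180 simulated matches per entry, which gives a sense of the scale of sampling that non-adaptive estimation requires.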
