# Geiringer-like Theorems for Decision Making in Environments with Randomness and Incomplete Information

Algebra/Geometry/Topology Seminar

#### by Boris Mitavskiy

Institution: Aberystwyth University
Date: Tue 22nd January 2013
Time: 3:15 PM
Location: JH Michell Theatre, Richard Berry

Abstract: In recent years a computer game intelligence method known as Monte-Carlo Tree search (MCT) has gained significant popularity after beating a top human Go player in 2006. Unlike traditional techniques such as the mini-max method (which, incidentally, has not succeeded for Go), MCT is based on running simulated self-plays, called "rollouts", until the end of the game, when a terminal state with a known payoff is encountered. A position in a game is represented by a state-action pair $$\vec{s} = (s, \, \vec{\alpha})$$, where $$\vec{\alpha} = (\alpha_1, \, \alpha_2, \ldots, \alpha_{l(s)})$$ is the collection of actions (or moves) available in position $$s$$. The goal is to evaluate actions through back-propagation and averaging.

A rather clever technique based on mathematical considerations, known as upper confidence bounds for trees (UCB), exists for action selection during the simulation stage; it preserves a delicate balance between exploration and exploitation when choosing which actions to evaluate. The action-evaluation policies themselves, however, particularly for games involving randomness and/or incomplete information, can be drastically improved using a classical theorem that originated in population genetics, was later adopted and widely extended in evolutionary computation theory, and has now been extended even further and adapted to the setting of MCT. The result motivates novel action-evaluation policies that, by exploiting intrinsic similarities within the state-action space of a specific application, evaluate actions over a sample of rollouts exponentially larger than the one actually simulated, at relatively little additional computational cost. In this talk I will explain the theorem and how this can be done. The new method should be rather valuable for coping with partially observable Markov decision processes in general and is not limited to computer game intelligence or MCT alone.
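As a rough illustration of the exploration/exploitation balance the abstract refers to, the following is a minimal sketch of a UCB1-style selection rule applied to a toy multi-armed bandit. This is not the speaker's method, and the payoff probabilities below are hypothetical; it only shows the generic score `mean reward + c * sqrt(ln(total visits) / visits)` used to pick which action to simulate next:

```python
import math
import random

def ucb1_select(stats, total_visits, c=math.sqrt(2)):
    """Return the action maximising the UCB1 score.

    stats: dict mapping action -> (visit_count, total_reward).
    Unvisited actions are tried first (their score is treated as infinite).
    """
    best_action, best_score = None, -math.inf
    for action, (visits, reward) in stats.items():
        if visits == 0:
            return action  # explore every action at least once
        # Exploitation term (empirical mean) plus exploration bonus.
        score = reward / visits + c * math.sqrt(math.log(total_visits) / visits)
        if score > best_score:
            best_action, best_score = action, score
    return best_action

# Toy bandit with hypothetical payoff probabilities for three actions.
probs = {"a": 0.2, "b": 0.8, "c": 0.5}
stats = {action: (0, 0.0) for action in probs}

random.seed(0)
for t in range(1, 2001):
    action = ucb1_select(stats, t)
    reward = 1.0 if random.random() < probs[action] else 0.0
    visits, total_reward = stats[action]
    stats[action] = (visits + 1, total_reward + reward)

# After enough trials the most-visited action is the best one.
most_visited = max(stats, key=lambda a: stats[a][0])
```

In full MCT the same score is computed at every tree node rather than for a single bandit, and the reward is the payoff back-propagated from the terminal state of each rollout.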