Author: Routley, Kurt Douglas
Evaluating player actions is very important for general managers and coaches in the National Hockey League. Researchers have developed a variety of advanced statistics to assist general managers and coaches in evaluating player actions, but these advanced statistics fail to account for the context in which an action occurs or to look ahead to the long-term effects of an action. I apply the Markov Game formalism to play-by-play events recorded in the National Hockey League to develop a novel approach to valuing player actions. The Markov Game formalism incorporates context and lookahead across play-by-play sequences. A dynamic programming algorithm for value iteration learns the values of Q-functions in different states of the Markov Game model. These Q-values quantify the impact of actions on goal scoring, receiving penalties, and winning games. Learning is based on a massive dataset that contains over 2.8 million events in the National Hockey League. The impact of player actions varies widely depending on the context, with possible positive and negative effects for the same action. My results show that using context features and lookahead makes a substantial difference to the action impact scores, and that accounting for context and lookahead also increases the information in the model. Players are ranked according to the aggregate impact of their actions and compared with previous player metrics, such as plus-minus, total points, and salary, as well as recent advanced statistics metrics.
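The dynamic programming approach described above can be sketched in miniature. The snippet below runs value iteration to convergence on a hypothetical two-state, two-action model; the state names, action names, transition probabilities, and rewards are all illustrative assumptions, not the thesis's actual state space, which is built from millions of NHL play-by-play events and rich context features.

```python
# A minimal value iteration sketch on a toy model, assuming hypothetical
# states, actions, transitions, and rewards (NOT the thesis's real model).

states = ["home_zone_faceoff", "away_zone_faceoff"]
actions = ["shot", "pass"]

# transitions[s][a] -> list of (next_state, probability);
# rewards[s][a] -> immediate expected reward for taking action a in state s
# (e.g., the probability the action directly leads to a goal).
transitions = {
    "home_zone_faceoff": {
        "shot": [("home_zone_faceoff", 0.3), ("away_zone_faceoff", 0.7)],
        "pass": [("home_zone_faceoff", 0.6), ("away_zone_faceoff", 0.4)],
    },
    "away_zone_faceoff": {
        "shot": [("home_zone_faceoff", 0.5), ("away_zone_faceoff", 0.5)],
        "pass": [("home_zone_faceoff", 0.2), ("away_zone_faceoff", 0.8)],
    },
}
rewards = {
    "home_zone_faceoff": {"shot": 0.08, "pass": 0.0},
    "away_zone_faceoff": {"shot": 0.02, "pass": 0.0},
}

def value_iteration(gamma=0.9, tol=1e-8):
    """Iterate the Bellman optimality update until the state values
    converge, then return the Q-function Q[state][action]."""
    V = {s: 0.0 for s in states}
    while True:
        # Q(s, a) = r(s, a) + gamma * sum over s' of P(s'|s, a) * V(s')
        Q = {
            s: {
                a: rewards[s][a]
                + gamma * sum(p * V[s2] for s2, p in transitions[s][a])
                for a in actions
            }
            for s in states
        }
        V_new = {s: max(Q[s].values()) for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return Q
        V = V_new

Q = value_iteration()
```

The resulting Q-values play the role of the action impact scores in the abstract: the same action (here, `"shot"`) receives a different value depending on the state it is taken in, which is the sense in which context changes an action's worth.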
Copyright is held by the author.
The author granted permission for the file to be printed and for the text to be copied and pasted.
Thesis advisor: Schulte, Oliver