
Applications of Reinforcement Learning to Routing and Virtualization in Computer Networks

Resource type
Thesis type
(Dissertation) Ph.D.
Date created
2016-03-17
Authors/Contributors
Abstract
Computer networks and reinforcement learning algorithms have substantially advanced over the past decade. The Internet is a complex collection of inter-connected networks with numerous inter-operable technologies and protocols. The current trend of decoupling network intelligence from network devices, enabled by Software-Defined Networking (SDN), provides a centralized implementation of network intelligence. This offers substantial computational power and memory to the network logic processing units where the network intelligence is implemented. Hence, reinforcement learning algorithms become viable options for addressing a variety of computer networking challenges. In this dissertation, we propose two applications of reinforcement learning algorithms in computer networks.

We first investigate the application of reinforcement learning to deflection routing in buffer-less networks. Deflection routing is employed to ameliorate packet loss caused by contention in buffer-less architectures such as optical burst-switched (OBS) networks. We present a framework that introduces intelligence to deflection routing (iDef). The iDef framework decouples the design of the signaling infrastructure from the underlying learning algorithm. It is implemented in the ns-3 network simulator and is made publicly available. We propose the predictive Q-learning deflection routing (PQDR) algorithm, which enables path recovery and reselection and thereby improves the decision-making ability of a node under high load. We also introduce the Node Degree Dependent (NDD) signaling algorithm. The complexity of an NDD-compliant node depends only on the degree of that node, whereas the complexity of currently available reinforcement learning-based deflection routing algorithms depends on the size of the network. Therefore, NDD is better suited for larger networks. Simulation results show that NDD-based deflection routing algorithms scale well with the size of the network and outperform existing algorithms. We also propose a feed-forward neural network (NN) and a feed-forward neural network with episodic updates (ENN). They employ a single hidden layer and update their weights using an associative learning algorithm. Current reinforcement learning-based deflection routing algorithms employ Q-learning, which does not efficiently utilize the received feedback signals. We introduce the NN and ENN decision-making algorithms to address this deficiency of Q-learning. The NN-based deflection routing algorithms achieve better results than Q-learning-based algorithms in networks with low to moderate loads.

The second application of reinforcement learning that we consider in this dissertation is modeling the Virtual Network Embedding (VNE) problem. We develop a VNE simulator (VNE-Sim), which is also made publicly available. We define a novel VNE objective function and prove its upper bound. We then formulate VNE as a reinforcement learning problem using the Markov Decision Process (MDP) framework and propose two algorithms (MaVEn-M and MaVEn-S) that employ Monte Carlo Tree Search (MCTS) for solving the VNE problem. To further improve performance, we parallelize the algorithms by employing MCTS root parallelization. The advantage of the proposed algorithms is that, time permitting, they search for more profitable embeddings than the available algorithms, which find only a single embedding solution. Simulation results show that the proposed algorithms achieve superior performance.
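The deflection routing algorithms summarized above build on reinforcement learning agents that pick an alternate output port when a burst's preferred port is busy. As a rough illustration of the underlying idea only, the sketch below shows a per-node Q-learning deflection decision in Python; the state/action encoding, reward handling, and hyper-parameters are illustrative assumptions and do not reproduce the PQDR, NDD, or NN-based algorithms described in the abstract.

```python
import random
from collections import defaultdict


class QDeflectionAgent:
    """Minimal sketch of a per-node Q-learning deflection decision.

    Illustrative only: the state/action encoding, reward values, and
    hyper-parameters are assumptions, not the thesis's PQDR or NDD
    algorithms.
    """

    def __init__(self, ports, alpha=0.1, gamma=0.9, epsilon=0.05):
        self.ports = ports              # candidate output ports of this node
        self.alpha = alpha              # learning rate
        self.gamma = gamma              # discount factor
        self.epsilon = epsilon          # exploration probability
        self.q = defaultdict(float)     # Q[(destination, port)] -> value

    def choose_port(self, destination, blocked):
        """Pick an output port for a burst whose preferred port is blocked."""
        candidates = [p for p in self.ports if p not in blocked]
        if not candidates:
            return None                 # no free port: the burst is dropped
        if random.random() < self.epsilon:
            return random.choice(candidates)   # explore
        return max(candidates, key=lambda p: self.q[(destination, p)])

    def update(self, destination, port, reward, next_best_q=0.0):
        """Apply the Q-learning update when a feedback signal arrives."""
        key = (destination, port)
        target = reward + self.gamma * next_best_q
        self.q[key] += self.alpha * (target - self.q[key])


# Hypothetical usage: deflect a burst headed to destination "d7" when port 1 is busy.
agent = QDeflectionAgent(ports=[1, 2, 3])
port = agent.choose_port(destination="d7", blocked={1})
agent.update(destination="d7", port=port, reward=1.0)
```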
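The MaVEn algorithms cast virtual node mapping as a sequential decision process and search it with MCTS, combining several independent searches through root parallelization. The toy sketch below illustrates those two ideas on a made-up substrate: a plain UCT search over partial node mappings and a visit-count merge across independent root trees. The capacities, the all-or-nothing reward, and the omission of link mapping are simplifying assumptions, not the thesis's MDP formulation or the VNE-Sim implementation.

```python
import math
import random

# Toy inputs (assumed for illustration): substrate node -> CPU capacity,
# and the CPU demands of the virtual nodes to embed, in order.
SUBSTRATE_CPU = {0: 10, 1: 8, 2: 6, 3: 12}
VIRTUAL_CPU = [4, 5, 3]


def legal_actions(mapping):
    """Substrate nodes that can still host the next virtual node."""
    demand = VIRTUAL_CPU[len(mapping)]
    return [s for s, cap in SUBSTRATE_CPU.items()
            if s not in mapping and cap >= demand]


def is_terminal(mapping):
    return len(mapping) == len(VIRTUAL_CPU) or not legal_actions(mapping)


def reward(mapping):
    """Toy objective: 1 if every virtual node was mapped, else 0."""
    return 1.0 if len(mapping) == len(VIRTUAL_CPU) else 0.0


class Node:
    def __init__(self, mapping):
        self.mapping = mapping      # partial node mapping (list of substrate ids)
        self.children = {}          # action -> Node
        self.visits = 0
        self.value = 0.0


def uct_search(root, iterations=200, c=1.4):
    for _ in range(iterations):
        node, path = root, [root]
        # Selection and expansion.
        while not is_terminal(node.mapping):
            actions = legal_actions(node.mapping)
            untried = [a for a in actions if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(node.mapping + [a])
                node = node.children[a]
                path.append(node)
                break
            a = max(actions, key=lambda x: node.children[x].value / node.children[x].visits
                    + c * math.sqrt(math.log(node.visits) / node.children[x].visits))
            node = node.children[a]
            path.append(node)
        # Random rollout to a terminal mapping.
        mapping = list(node.mapping)
        while not is_terminal(mapping):
            mapping.append(random.choice(legal_actions(mapping)))
        r = reward(mapping)
        # Backpropagation.
        for n in path:
            n.visits += 1
            n.value += r
    return root


def best_action(roots):
    """Root parallelization: merge visit counts of independent trees."""
    totals = {}
    for root in roots:
        for a, child in root.children.items():
            totals[a] = totals.get(a, 0) + child.visits
    return max(totals, key=totals.get)


# Run a few independent searches from the empty mapping and merge them.
roots = [uct_search(Node([])) for _ in range(4)]
print("first virtual node -> substrate node", best_action(roots))
```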
Document
Identifier
etd9526
Copyright statement
Copyright is held by the author.
Permissions
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Scholarly level
Supervisor or Senior Supervisor
Thesis advisor: Trajkovic, Ljiljana
Member of collection
Download file
etd9526_SHaeri.pdf (1.58 MB)
