Learning efficiency in the Inverse Ising Problem

Author: 
Peer reviewed: 
No, item is not peer reviewed.
Scholarly level: 
Undergraduate student
Date created: 
2018-04
Keywords: 
Ising model
Machine learning
Fisher information
Optimization
Abstract: 

In recent years, the amount of data available on biological systems such as genetic regulatory networks and neural networks has increased exponentially, thanks to improvements in experimental methods such as Drop-seq [1], which enables biologists to simultaneously analyze RNA expression in thousands of cells. Keeping pace with this data requires efficient machine-learning methods for turning it into predictive models of the natural world. Using a canonical statistical physics example, the inverse Ising problem, we ask how physical factors such as temperature affect learning efficiency. In a network governed by a Hamiltonian with spin-spin interactions, we construct a linear system of equations based on equilibrium observations of spin states, and use linear algebra to solve for the underlying spin-spin couplings. We show that there exists an optimal temperature T_opt for which learning is most efficient. Furthermore, we discuss several physical correlates for the scaling of T_opt with network size for a simple uniform-coupling network and consider the extension to more general coupling distributions. The Fisher information, which depends strongly on the variance of the spin-spin alignment, is shown to predict this scaling most accurately.
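Illustrative code sketch:
The inversion step described above (building a linear system from equilibrium spin observations and solving for the couplings) can be illustrated with a standard mean-field stand-in. The sketch below is not the thesis's exact construction: the Metropolis sampler, the naive mean-field inversion formula J ≈ -T C^{-1} (off-diagonal entries), and all parameter values (network size, coupling strength, temperature) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def metropolis_samples(J, T, n_sweeps=6000, burn_in=1000):
    # Draw equilibrium configurations from H = -sum_{i<j} J_ij s_i s_j
    # with single-spin-flip Metropolis dynamics at temperature T.
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    samples = []
    for sweep in range(n_sweeps):
        for _ in range(n):
            i = rng.integers(n)
            dE = 2.0 * s[i] * (J[i] @ s)   # energy cost of flipping spin i
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                s[i] = -s[i]
        if sweep >= burn_in:
            samples.append(s.copy())
    return np.array(samples)

def infer_couplings(samples, T):
    # Naive mean-field inversion: for i != j, J_ij ~ -T * (C^{-1})_ij,
    # where C is the connected spin-spin correlation (covariance) matrix.
    C = np.cov(samples, rowvar=False)
    J_est = -T * np.linalg.inv(C)
    np.fill_diagonal(J_est, 0.0)
    return J_est

# Toy network: uniform couplings, the simplest case considered in the thesis.
n = 8
J_true = np.full((n, n), 0.3)
np.fill_diagonal(J_true, 0.0)
T = 3.0                      # placeholder temperature, not the thesis's T_opt
samples = metropolis_samples(J_true, T)
J_est = infer_couplings(samples, T)
print("RMS reconstruction error:", np.sqrt(np.mean((J_est - J_true) ** 2)))

Sweeping T in such a sketch and recording the reconstruction error (or the number of samples needed to reach a target error) is one way to probe numerically for an optimal learning temperature like the T_opt discussed in the abstract.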

Language: 
English
Document type: 
Thesis
Rights: 
Rights remain with the author.
Senior supervisor: 
David Sivak
Department: 
Science: Department of Physics
Thesis type: 
Honours Bachelor of Science