Learning efficiency in the Inverse Ising Problem

Thesis type
Honours Bachelor of Science
Date created
2018-04
Abstract
In recent years, the amount of data available on biological systems such as genetic regulatory networks and neural networks has grown exponentially, thanks to improvements in experimental methods such as Drop-seq [1], which lets biologists simultaneously analyze RNA expression in thousands of cells. Keeping pace with this data requires efficient machine-learning methods for turning it into predictive models of the natural world. Using a canonical statistical-physics example, the Inverse Ising problem, we ask how physical factors such as temperature affect learning efficiency. In a network governed by a Hamiltonian with spin-spin interactions, we construct a linear system of equations from equilibrium observations of the spin states and use linear algebra to solve for the underlying spin-spin couplings. We show that there exists an optimal temperature T_opt at which learning is most efficient. Furthermore, we examine several physical correlates for the scaling of T_opt with network size for a simple uniform-coupling network and consider the extension to more general distributions of couplings. The Fisher information, which depends strongly on the variance of the spin-spin alignment, is shown to predict this scaling most accurately.
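
The inference step described in the abstract can be illustrated in code. The Python sketch below is not the thesis's exact construction; it assumes one common linear-algebra route to inverse Ising inference: for each spin i, the Boltzmann distribution gives <s_i | s_others> = tanh(beta * (h_i + sum_j J_ij s_j)), so the arctanh of conditional spin averages estimated from equilibrium samples is linear in the fields and couplings, which can then be recovered by a least-squares solve. The network size, coupling strength, temperature, and sample counts are arbitrary demonstration values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth model: small uniform-coupling network (J_ij = J0 for i != j).
# N, J0, T and the sample counts below are arbitrary demonstration values.
N, J0, T = 5, 1.0, 2.0
beta = 1.0 / T
J_true = J0 * (np.ones((N, N)) - np.eye(N))
h_true = np.zeros(N)

def energy(s):
    """Ising energy H(s) = -1/2 s.J.s - h.s (the 1/2 avoids double counting)."""
    return -0.5 * s @ J_true @ s - h_true @ s

def sample(n_samples, n_equil=1000, thin=10):
    """Single-spin-flip Metropolis sampling of the Boltzmann distribution at temperature T."""
    s = rng.choice([-1, 1], size=N)
    out = []
    for step in range(n_equil + n_samples * thin):
        i = rng.integers(N)
        flipped = s.copy()
        flipped[i] *= -1
        dE = energy(flipped) - energy(s)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s = flipped
        if step >= n_equil and (step - n_equil) % thin == 0:
            out.append(s.copy())
    return np.array(out)

samples = sample(20000)

# For each spin i, build the linear system
#   arctanh( <s_i | configuration of the other spins> ) = beta * (h_i + sum_j J_ij s_j)
# from the equilibrium samples, then solve it by least squares.
J_est = np.zeros((N, N))
for i in range(N):
    others = [j for j in range(N) if j != i]
    contexts = samples[:, others]
    rows, rhs = [], []
    for ctx in np.unique(contexts, axis=0):           # group samples by context
        mask = np.all(contexts == ctx, axis=1)
        m = samples[mask, i].mean()                    # conditional magnetization
        if abs(m) < 1.0:                               # arctanh diverges at |m| = 1
            rows.append(np.concatenate(([1.0], ctx)))  # unknowns: [h_i, J_i,others]
            rhs.append(np.arctanh(m) / beta)
    A, b = np.array(rows), np.array(rhs)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    J_est[i, others] = coef[1:]

J_est = 0.5 * (J_est + J_est.T)                        # enforce symmetry of the estimate
print("true off-diagonal coupling:", J0)
print("mean estimated off-diagonal coupling:",
      round(float(J_est[~np.eye(N, dtype=bool)].mean()), 3))
```

Repeating this estimate across temperatures with a fixed sampling budget would give one way to probe how temperature affects learning efficiency, in the spirit of the abstract; again, this is illustrative rather than the thesis's actual method.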
Copyright statement
Copyright is held by the author.
Supervisor or Senior Supervisor
Thesis advisor: Sivak, David
Language
English
Download file
BSheldan-UGThesis2018.pdf (2.06 MB)
