# Mathematics - Theses, Dissertations, and other Required Graduate Degree Essays

## Lattice walks in cones: Combinatorial and probabilistic aspects

Lattice walks in cones have many applications in combinatorics and probability theory. While walks restricted to the first quadrant have been studied extensively, non-convex cones and three-dimensional walks have only recently been approached systematically. In this thesis, we extend the analytic method for studying walks and their discrete harmonic functions from the quarter plane to the three-quarter plane, applying the strategy of splitting the domain into two symmetric convex cones. The method consists of three main steps: write a system of functional equations satisfied by the generating function, which may be simplified into a single equation under symmetry conditions; transform the functional equation into a boundary value problem; and solve this problem using conformal mappings. We obtain explicit expressions for the generating functions of walks and their associated harmonic functions. The advantage of this method is its uniform treatment of models corresponding to different step sets. In the second part of this thesis, we explore the asymptotic enumeration of three-dimensional excursions confined to the positive octant. The critical exponent is related to the smallest eigenvalue of a Dirichlet problem in a spherical triangle. Combinatorial properties of the step set are related to geometric and analytic properties of the associated spherical triangle.
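The cones in question can be made concrete in a few lines. The dynamic-programming sketch below is not part of the thesis (and the simple steps N, S, E, W are an arbitrary sample step set); it simply counts walks confined to the convex quarter plane and to the non-convex three-quarter plane.

```python
# Count n-step walks with simple steps N, S, E, W starting at the origin
# and confined to a cone given by an `allowed` predicate.
STEPS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

def count_walks(n, allowed):
    counts = {(0, 0): 1}          # number of walks ending at each position
    for _ in range(n):
        new = {}
        for (x, y), c in counts.items():
            for dx, dy in STEPS:
                pos = (x + dx, y + dy)
                if allowed(*pos):
                    new[pos] = new.get(pos, 0) + c
        counts = new
    return sum(counts.values())

quarter = lambda x, y: x >= 0 and y >= 0       # convex cone
three_quarter = lambda x, y: x >= 0 or y >= 0  # non-convex cone

print([count_walks(n, quarter) for n in (1, 2, 3)])  # [2, 6, 18]
print([count_walks(n, three_quarter) for n in (1, 2, 3)])
```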

## Neural style transfer with primitive features

Neural Style Transfer (NST) is an algorithm that creates an image by combining the stylistic features of a piece of artwork with the content features of a photograph. The defining characteristic of NST, which sets it apart from other image stylization techniques, is the use of deep neural networks trained for image recognition. First derived in 2016 by Leon Gatys and Matthias Bethge, the algorithm uses a single neural network to extract and recombine the content of one image with the artistic style of another. Now a field unto itself, NST has shown an uncanny ability to produce approximations to human artistic style. However, the exact process by which style is represented in the model is highly unintuitive and still not well understood. Moreover, little reasoning is given for the choice of network architecture, size, or training environment. This leads to questions such as: can any network designed for image recognition perform NST? Does the network's depth or training set have an effect on NST? To explore these questions, this thesis presents several experiments on Neural Style Transfer using networks that have been trained on small image recognition tasks. In this simplified setting we assess various aspects of the NST algorithm with these primitive networks. The results of our experiments show that certain architectures do not have the capacity for NST, while other networks can produce NST-like results but suffer in visual quality.
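For context on how style is represented: in the Gatys-style algorithm, the style of an image at a given network layer is commonly summarised by the Gram matrix of its feature maps, and the style loss compares these matrices between the artwork and the generated image. A minimal sketch, with random arrays standing in for real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map: the
    channel-wise correlations of the activations, normalised by the
    spatial size of the map."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)          # shape (c, c)

# Random "activations" standing in for a real network's feature maps.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16, 16))
G = gram_matrix(feats)
print(G.shape)   # (8, 8)
```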

## Cops and robbers with speed restrictions

The game of Cops and Robbers is a pursuit-evasion game played on graphs with two players, the cops and the robber, who take turns moving on the graph. In each turn, a player may move to a vertex adjacent to their current position or stay where they are. The cops’ objective is to reach the robber’s position, which we refer to as capturing the robber, and the robber’s goal is to evade capture indefinitely. The basic question is to find the minimum number of cops that can guarantee capturing the robber in a given graph. A very fruitful research area has developed around the idea of modifying the way in which the cops or the robber move and analysing how these changes affect the strategies and outcome of the game. In this thesis we study the game when additional speed restrictions are imposed on the players, variants of the game popularly known as “lazy-cops and robbers” and “active cops and robbers”. To do so, we introduce the concept of the wide shadow, aiming to improve known results and to obtain new tools and techniques that may provide further insight into other open problems in the area.
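For the unrestricted one-cop game, the classical characterisation of cop-win graphs (a graph is cop-win if and only if it is dismantlable) yields a simple algorithm. The sketch below illustrates only this baseline game, not the speed-restricted variants studied in the thesis.

```python
def is_cop_win(adj):
    """One-cop game: a finite graph is cop-win iff it is dismantlable,
    i.e. it can be reduced to a single vertex by repeatedly deleting a
    vertex u whose closed neighbourhood is contained in that of some
    other vertex v (u is "dominated" by v)."""
    verts = set(adj)
    nbhd = {u: set(adj[u]) | {u} for u in adj}   # closed neighbourhoods
    changed = True
    while len(verts) > 1 and changed:
        changed = False
        for u in list(verts):
            if any((nbhd[u] & verts) <= (nbhd[v] & verts)
                   for v in verts if v != u):
                verts.remove(u)      # u is dominated: delete it
                changed = True
                break
    return len(verts) == 1

path3 = {0: [1], 1: [0, 2], 2: [1]}                    # a path: cop-win
cycle4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # a 4-cycle: not
print(is_cop_win(path3), is_cop_win(cycle4))           # True False
```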

## A new bivariate Hensel lifting algorithm for n factors

We present a new algorithm for performing linear Hensel lifting of bivariate polynomials over the finite field F_p, for a prime p. Our algorithm lifts n monic univariate polynomials to recover the factors of a polynomial A(x,y) in F_p[x,y] which is monic in x and bounded by degrees d_x = deg(A,x) and d_y = deg(A,y). Our algorithm improves upon Bernardin's algorithm in [2], reducing the number of arithmetic operations in F_p from O(n d_x^2 d_y^2) to O(d_x^2 d_y + d_x d_y^2) for p >= d_x. Experimental results from our C implementation verify that our algorithm compares favorably with Bernardin's for polynomials of large degree.
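A minimal sketch of the setting, restricted to two monic coprime factors (the thesis lifts n factors and achieves the sharper operation count stated above). Univariate polynomials in x are coefficient lists mod p in ascending powers, and a bivariate A(x,y) is a list of such lists indexed by powers of y; the degree-1 example at the end is made up for illustration.

```python
p = 7

def trim(a):
    while a and a[-1] % p == 0:
        a.pop()
    return a

def padd(a, b):
    n = max(len(a), len(b))
    return trim([((a[i] if i < len(a) else 0) +
                  (b[i] if i < len(b) else 0)) % p for i in range(n)])

def pneg(a):
    return [(-c) % p for c in a]

def pmul(a, b):
    if not a or not b:
        return []
    r = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] = (r[i + j] + ai * bj) % p
    return trim(r)

def pdivmod(a, b):
    """Quotient and remainder of a by b over F_p."""
    inv = pow(b[-1], p - 2, p)
    a = a[:]
    q = [0] * max(1, len(a) - len(b) + 1)
    while a and len(a) >= len(b):
        c = a[-1] * inv % p
        k = len(a) - len(b)
        q[k] = c
        for i, bi in enumerate(b):
            a[i + k] = (a[i + k] - c * bi) % p
        trim(a)
    return trim(q), a

def pegcd(a, b):
    """Monic gcd g and s, t with s*a + t*b = g (extended Euclid)."""
    r0, s0, t0 = a[:], [1], []
    r1, s1, t1 = b[:], [], [1]
    while r1:
        q, r = pdivmod(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, padd(s0, pneg(pmul(q, s1)))
        t0, t1 = t1, padd(t0, pneg(pmul(q, t1)))
    inv = pow(r0[-1], p - 2, p)
    return ([c * inv % p for c in r0],
            [c * inv % p for c in s0],
            [c * inv % p for c in t0])

def hensel_lift(A, f0, g0):
    """Lift A(x,y) = f0(x)*g0(x) mod y to F, G with A = F*G mod y^len(A)."""
    g, s, t = pegcd(f0, g0)
    assert g == [1], "f0 and g0 must be coprime mod p"
    F, G = [f0[:]], [g0[:]]
    for k in range(1, len(A)):
        e = A[k][:]                       # y^k coefficient of A - F*G
        for i in range(k + 1):
            j = k - i
            if i < len(F) and j < len(G):
                e = padd(e, pneg(pmul(F[i], G[j])))
        if not e:
            F.append([]); G.append([])
            continue
        # Solve f0*w + g0*u = e with deg u < deg f0 via the Bezout identity.
        _, u = pdivmod(pmul(e, t), f0)
        w, rem = pdivmod(padd(e, pneg(pmul(g0, u))), f0)
        assert not rem
        F.append(u); G.append(w)
    return F, G

# A(x,y) = (x + y)(x + 2 + 3y) = x^2 + 2x + (4x + 2)y + 3y^2 over F_7
A = [[0, 2, 1], [2, 4], [3]]
F, G = hensel_lift(A, [0, 1], [2, 1])
print(F, G)   # recovers x + y and x + 2 + 3y
```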

## Computation of mountain wave clouds in a moist Boussinesq fluid model

This thesis presents computations of time-steady 2D clouds forming over mountain topography, with an interest in resolving the fine-scale physics at the cloud edge. The underlying model takes the incompressible Euler equations as its basis, and couples the atmospheric fluid flows to the physics of phase change in order to compute cloud-edge boundary locations in a vertically-stratified atmosphere. This coupling gives rise to a free-boundary problem for the cloud edge. This model has been employed here to successfully recover mountain cloud behaviour for a variety of atmospheric conditions, including computations of the well-known lens shape of the lenticular cloud. By Long's theory, the problem of time-steady, density-stratified flow over ground topography can be reduced to a 2D Helmholtz problem for the streamfunction. The domain of this PDE is a perturbed 2D half-space, and the streamfunction is specified on a bottom boundary that follows localized mountain terrain. This thesis presents a new extension of Long's theory to include cloudy air, where the derived Helmholtz equation now includes localized forcing associated with small regions of cloud. The nature of the PDE domain makes the problem well-suited for a boundary method application. The scheme used in this work utilizes the method of fundamental solutions (MFS), a numerical approach related to the boundary integral equation method, which approximates the solutions to elliptic problems by finite sums of fundamental solutions of the PDE operator. The MFS is coupled with an iterative solver to resolve the free-boundary problem for the cloud geometry, and the numerical performance of this scheme is analyzed.
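The core of the MFS is easy to sketch: place fundamental solutions outside the domain and fit their strengths to the boundary data by least squares. The toy example below uses the Laplace kernel on the unit disk, purely for illustration, rather than the Helmholtz operator and perturbed half-space of the thesis.

```python
import numpy as np

M, N = 80, 40                        # collocation points, source points
th_c = 2 * np.pi * np.arange(M) / M
th_s = 2 * np.pi * np.arange(N) / N
bdry = np.column_stack([np.cos(th_c), np.sin(th_c)])        # unit circle
srcs = 2.0 * np.column_stack([np.cos(th_s), np.sin(th_s)])  # radius 2

def phi(pts, src):
    """Fundamental solution of the 2D Laplacian: -log|r| / (2 pi)."""
    return -np.log(np.linalg.norm(pts - src, axis=-1)) / (2 * np.pi)

# Fit source strengths so the finite sum matches the boundary data.
A = np.stack([phi(bdry, s) for s in srcs], axis=1)   # (M, N) system
g = bdry[:, 0]                       # boundary data u = x (harmonic)
coef, *_ = np.linalg.lstsq(A, g, rcond=None)

# Evaluate the MFS approximation at an interior point; exact u(x,y) = x.
pt = np.array([0.3, 0.2])
u = sum(c * phi(pt, s) for c, s in zip(coef, srcs))
print(u)   # approximately 0.3
```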

## Simultaneous prime values of binary forms

The twin prime conjecture asserts that there are infinitely many positive integers x such that x and x+2 are simultaneously prime. In this thesis, we consider a two-variable analogue of this problem. Let F(x, y) be a positive definite quadratic form and G(x, y) a linear form, both with integer coefficients. Suppose that for every prime p there exist l, m such that p does not divide F(l, m)G(l, m). Then we prove that there are infinitely many l, m in Z such that F(l, m) and G(l, m) are both prime. In fact, our proof extends to primes in arithmetic progressions F(l, m) ≡ a (mod q) and G(l, m) ≡ b (mod q). The main result (when q=1) was first obtained independently by the author and by another team of researchers, D. Schindler and S. Y. Xiao. The extension is joint work with them.
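The flavour of the statement can be checked numerically. The two forms below are a sample choice satisfying the local condition, not forms taken from the thesis; the snippet just counts small pairs at which both are prime.

```python
def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

F = lambda l, m: l * l + m * m   # positive definite quadratic form
G = lambda l, m: l + m + 1       # linear form

pairs = [(l, m) for l in range(1, 50) for m in range(1, 50)
         if is_prime(F(l, m)) and is_prime(G(l, m))]
print(len(pairs), pairs[:3])     # e.g. (1, 1) gives F = 2, G = 3
```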

## An implementation of two-cover descent on plane quartic curves

We gather experimental evidence related to the question of deciding whether a smooth plane quartic curve has a rational point. Smooth plane quartics describe curves of genus 3, the first genus in which non-hyperelliptic curves occur. We present an algorithm that determines a set of unramified covers of a given plane quartic curve, with the property that any rational point lifts to one of the covers. In particular, if the algorithm returns the empty set, then the curve has no rational points. We apply our algorithm to a total of 1000 isomorphism classes of randomly generated plane quartic curves.

## Low rank quadratic assignment problem: Formulations and experimental analysis

In this thesis, we study the quadratic assignment problem (QAP) with special emphasis on the case where the associated cost matrix is of rank r (QAP(r)), for small values of r. We first consider different representations of the cost matrix Q which were shown to be beneficial for the quadratic set covering problem (QSCP). Unlike for the QSCP, formulations based on these representations were unable to solve QAP instances of size n >= 20, and they behaved differently from their QSCP counterparts. To confirm this, additional experiments were carried out using the quadratic knapsack problem (QKP). We did observe statistically significant preferred representations for QKP and QAP, but they differed from those observed and known for QSCP. Next we consider four different mixed integer linear programming (MILP) formulations of QAP(r), extending the known case r=1. Extensive experimental results are provided for r=2,3,4. One of our new formulations proved very effective in solving large QAP(r) instances for r=2,3,4, although the performance of the model deteriorates as the rank increases. Finally, we present theoretical and experimental comparisons of the linear programming relaxations of our MILP formulations of QAP(r). Our MILP formulations for QAP(r) could also be used as a heuristic for general QAP by computing a low-rank approximation of the data matrix Q.
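In the Koopmans-Beckmann form of the QAP, a rank-1 flow matrix gives the simplest instance of the low-rank structure. The brute-force sketch below uses made-up data and is illustrative only; the thesis solves QAP(r) via MILP formulations rather than enumeration.

```python
import itertools
import numpy as np

# Koopmans-Beckmann QAP: minimise sum_{i,j} F[i,j] * D[pi(i),pi(j)]
# over permutations pi, with a rank-1 flow matrix F = a b^T (QAP(1)).
n = 5
rng = np.random.default_rng(1)
a = rng.integers(1, 5, n)
b = rng.integers(1, 5, n)
F = np.outer(a, b)                       # rank-1 flow matrix
D = rng.integers(1, 10, (n, n))          # distance matrix
np.fill_diagonal(D, 0)

def cost(pi):
    pi = np.array(pi)
    return int((F * D[np.ix_(pi, pi)]).sum())

best = min(itertools.permutations(range(n)), key=cost)
print(best, cost(best))
```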

## Formulating Quadratic Traveling Salesman Problems for computation

The Traveling Salesman Problem (TSP) is a fundamental combinatorial optimization problem. Adding costs associated with pairs of edges included in a tour gives the Quadratic Traveling Salesman Problem (QTSP). This increases modeling power by allowing, for example, the inclusion of transfer costs between edges. We consider a general version of this problem, where costs are attached to all pairs of edges. We give a brief history of computational solvers, especially in relation to the TSP. For the QTSP, we consider modifying the structure of the quadratic cost input and linearizing the quadratic objective function, with detail on how to generate the modifications and linearizations. We study the impact of these structures on computational efficiency for randomly generated instances, using the Gurobi solver. We find that by making the quadratic cost matrix negative semidefinite, we improve solve times, and that solving the problem as a quadratic minimization problem outperforms linearization approaches.
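The diagonal shift that makes a quadratic cost matrix negative semidefinite rests on a standard identity for binary variables: since x_i^2 = x_i, subtracting the largest eigenvalue lam of Q from the diagonal gives x^T Q x = x^T (Q - lam*I) x + lam * sum(x). The sketch below, on a random symmetric matrix made up for illustration, verifies the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 6))
Q = (Q + Q.T) / 2                        # symmetrise
lam = np.linalg.eigvalsh(Q).max()
Q_nsd = Q - lam * np.eye(6)              # all eigenvalues now <= 0

x = rng.integers(0, 2, 6).astype(float)  # a binary incidence vector
orig = x @ Q @ x
shifted = x @ Q_nsd @ x + lam * x.sum()  # shifted linear term restores it
print(np.isclose(orig, shifted))         # True
```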

## Compressive imaging with total variation regularization and application to auto-calibration of parallel magnetic resonance imaging

An important task in imaging is to recover an image from samples of its Fourier transform. With compressed sensing, this is done by applying a sparsifying transform and solving an l1 minimization problem. One possible transform is the discrete gradient operator, in which case penalizing the l1 norm leads to Total Variation (TV) minimization. We present new recovery guarantees for TV regularization in arbitrary dimensions using two sampling strategies: uniform random Fourier sampling and variable density Fourier sampling. In particular, we determine a near-optimal choice of sampling density in any dimension. Our theoretical and numerical results show that variable density Fourier sampling increases the stability and robustness of TV regularization over uniform random Fourier sampling. As an application, we consider auto-calibration in parallel magnetic resonance imaging (pMRI). We develop a two-step algorithm: first, sparse regularization is used to recover the coil images; second, linear least squares is used to obtain the overall image.
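The two sampling strategies can be contrasted in a few lines of 1D code: uniform random sampling picks each frequency with equal probability, while variable density sampling favours low frequencies. The inverse power-law density below is an illustrative choice, not the near-optimal density derived in the thesis.

```python
import numpy as np

N, m = 256, 64                             # grid size, number of samples
k = np.arange(N) - N // 2                  # centred frequency indices
density = 1.0 / np.maximum(np.abs(k), 1)   # decays away from k = 0
density /= density.sum()                   # normalise to a probability

rng = np.random.default_rng(0)
uniform_idx = rng.choice(N, size=m, replace=False)
variable_idx = rng.choice(N, size=m, replace=False, p=density)

# Variable density sampling concentrates samples at low frequencies.
print(np.abs(k[variable_idx]).mean(), np.abs(k[uniform_idx]).mean())
```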