Mathematics, Department of

COVID-19 in Schools: Mitigating Classroom Clusters in the Context of Variable Transmission

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-07-08
Abstract: 

Widespread school closures occurred during the COVID-19 pandemic. Because closures are costly and damaging, many jurisdictions have since reopened schools with control measures in place. Early evidence indicated that schools were low risk and children were unlikely to be very infectious, but it is becoming clear that children and youth can acquire and transmit COVID-19 in school settings and that transmission clusters and outbreaks can be large. We describe the contrasting literature on school transmission and argue that the apparent discrepancy can be reconciled by heterogeneity, or “overdispersion,” in transmission: many exposures yield little to no risk of onward transmission, but some unfortunate exposures cause sizeable onward transmission. In addition, respiratory viral loads are as high in children and youth as in adults, pre- and asymptomatic transmission occur, and the possibility of aerosol transmission has been established. We use a stochastic individual-based model to explore the implications of these combined observations for cluster sizes and control measures. We consider both individual and environment/activity contributions to the transmission rate, as both are known to contribute to variability in transmission. We find that even small heterogeneities in these contributions result in highly variable transmission cluster sizes in the classroom setting, with clusters ranging from 1 to 20 individuals in a class of 25. None of the mitigation protocols we modeled that are initiated by a positive test in a symptomatic individual can prevent large transmission clusters unless the transmission rate is low (in which case large clusters do not occur anyway). Among the measures we modeled, only rapid universal monitoring (for example, by regular, onsite, pooled testing) accomplishes this prevention. We suggest approaches, and the rationale, for mitigating these larger clusters, even if they are expected to be rare.
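
This picture can be reproduced with a minimal stochastic sketch (illustrative only, not the paper's model): per-individual infectiousness is drawn from a gamma distribution to create overdispersion, a shared lognormal factor adds day-to-day environment/activity variability, and infection spreads in a class of 25. All parameter values below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_cluster(n_class=25, n_days=10, base_rate=0.02,
                     k_individual=0.4, env_sd=0.5):
    """One classroom cluster: an index case plus onward transmission.

    Infectiousness varies across individuals (gamma with shape k and
    mean 1; small k means strong overdispersion) and across days via
    a shared lognormal environment/activity multiplier.
    """
    infectiousness = rng.gamma(k_individual, 1.0 / k_individual, n_class)
    infected = np.zeros(n_class, dtype=bool)
    infected[0] = True                      # index case
    for _ in range(n_days):
        env = rng.lognormal(0.0, env_sd)    # shared daily environment factor
        hazard = base_rate * env * infectiousness[infected].sum()
        p = 1.0 - np.exp(-hazard)           # per-susceptible infection probability
        infected |= (~infected) & (rng.random(n_class) < p)
    return infected.sum()                   # final cluster size, incl. index

sizes = np.array([simulate_cluster() for _ in range(2000)])
print("mean cluster size:", sizes.mean())
print("fraction of clusters with 10+ cases:", (sizes >= 10).mean())
```

With a small gamma shape parameter, most simulated clusters stay at size 1 while a minority grow large, matching the heavy-tailed cluster sizes described above.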

Document type: 
Article

A Data Assimilation Framework that Uses the Kullback-Leibler Divergence

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-08-26
Abstract: 

The process of integrating observations into a numerical model of an evolving dynamical system, known as data assimilation, has become an essential tool in computational science. These methods, however, are computationally expensive, as they typically involve large matrix multiplications and inversions. Furthermore, it is challenging to incorporate constraints into the procedure, such as requiring a positive state vector. Here we introduce an entirely new approach to data assimilation, one based on an information measure: the unnormalized Kullback-Leibler divergence, rather than the standard choice of Euclidean distance. Two sequential data assimilation algorithms are presented within this framework and demonstrated numerically. These new methods are solved iteratively and do not require an adjoint. We find them to be computationally more efficient than Optimal Interpolation (the 3D-Var solution) and the Kalman filter whilst maintaining similar accuracy. Furthermore, these Kullback-Leibler data assimilation (KL-DA) methods naturally embed constraints, unlike Kalman filter approaches. They are ideally suited to systems that require positive-valued solutions, as the KL-DA guarantees this without the need for transformations, projections, or any additional steps. This Kullback-Leibler framework presents an interesting new direction of development in data assimilation theory. The techniques introduced here could be developed further and may hold potential for applications in the many disciplines that utilize data assimilation, especially where there is a need to evolve variables of large-scale systems that must obey physical constraints.
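
As a toy illustration of why a Kullback-Leibler objective preserves positivity (a generic Richardson-Lucy-style multiplicative scheme, not the paper's KL-DA algorithms): minimizing the unnormalized KL divergence between observations y and the mapped state Hx yields multiplicative updates, so a positive first guess remains positive at every iteration.

```python
import numpy as np

def kl_analysis(x_b, y, H, n_iter=200):
    """Toy positivity-preserving analysis step.

    Iteratively decreases the unnormalized KL divergence between the
    observations y and the mapped state H @ x, starting from the
    background x_b, via Richardson-Lucy-style multiplicative updates.
    Iterates stay positive automatically: no projection is needed.
    """
    x = x_b.copy()
    norm = H.T @ np.ones(len(y))            # column sums of H
    for _ in range(n_iter):
        x *= (H.T @ (y / (H @ x))) / norm   # multiplicative KL update
    return x

# tiny demo: 3 positive state variables seen through 2 smoothing observations
H = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.5, 0.5]])
x_true = np.array([2.0, 1.0, 3.0])
y = H @ x_true
x_b = np.ones(3)                            # positive background guess
print(kl_analysis(x_b, y, H))               # stays positive, fits y
```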

Document type: 
Article

A Tight Local Algorithm for the Minimum Dominating Set Problem in Outerplanar Graphs

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-10-04
Abstract: 

We show that there is a deterministic local algorithm (constant-time distributed graph algorithm) that finds a 5-approximation of a minimum dominating set on outerplanar graphs. We show there is no such algorithm that finds a (5-ε)-approximation, for any ε > 0. Our algorithm only requires knowledge of the degree of a vertex and of its neighbors, so that large messages and unique identifiers are not needed.
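
The algorithm itself is not reproduced in the abstract; as a loose illustration of the computational model (each vertex decides using only its own degree and its neighbours' degrees, with no unique identifiers), here is a toy two-round local rule. It is not the paper's 5-approximation, and no approximation ratio is claimed.

```python
def toy_local_dominating_set(adj):
    """Two-round, identifier-free local rule (illustration only)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    # round 1: join if no neighbour has a strictly larger degree
    in_set = {v for v in adj if all(deg[v] >= deg[u] for u in adj[v])}
    # round 2: any vertex still undominated after round 1 adds itself
    in_set |= {v for v in adj
               if v not in in_set and not any(u in in_set for u in adj[v])}
    return in_set

# small outerplanar example: a 6-cycle with one chord
adj = {0: [1, 5], 1: [0, 2, 4], 2: [1, 3],
       3: [2, 4], 4: [1, 3, 5], 5: [0, 4]}
D = toy_local_dominating_set(adj)
assert all(v in D or any(u in D for u in adj[v]) for v in adj)
print(sorted(D))  # a (possibly non-optimal) dominating set, here [1, 4]
```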

Document type: 
Article

Modelling Extracellular Matrix and Cellular Contributions to Whole Muscle Mechanics

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-04-02
Abstract: 

Skeletal muscle tissue has a highly complex and heterogeneous structure comprising several physical length scales. In the simplest model, muscle tissue can be represented as a one-dimensional nonlinear spring in the direction of the muscle fibres. At the finest level, however, muscle tissue includes a complex network of collagen fibres, actin and myosin proteins, and other cellular materials. This study derives an intermediate physical model that encapsulates the major contributions of the muscle components to the elastic response, apart from activation-related along-fibre responses. The micro-mechanical factors in skeletal muscle tissue (e.g., connective tissue, fluid, and fibres) can be homogenized into one material aggregate that captures the behaviour of the combination of material components. To do this, the corresponding volume fractions for each type of material need to be determined by comparing the stress-strain relationship for a volume containing each material. The result is a model that accounts for the micro-mechanical features found in muscle and can therefore be used to analyze the effects of neuromuscular diseases such as cerebral palsy or muscular dystrophies. The purpose of this study is to construct a model of muscle tissue that, with material parameters chosen from experimental data, accurately captures the mechanical behaviour of whole muscle. This model is then used to examine the impact of the bulk modulus and material parameters on muscle deformation and strain energy-density distributions.
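
The volume-fraction step can be sketched as a small fitting problem. The component stress-stretch curves and "measured" data below are invented for illustration, not the study's data; only the fitting pattern is the point: find non-negative fractions whose weighted combination of component responses best matches the aggregate response.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical component responses: stress (kPa) versus stretch for a
# collagen network, muscle fibres, and fluid/ground substance.
stretch = np.linspace(1.0, 1.3, 30)
sigma_collagen = 40.0 * (np.exp(8.0 * (stretch - 1.0)) - 1.0)  # stiff, exponential
sigma_fibre    = 120.0 * (stretch - 1.0)                       # roughly linear
sigma_fluid    = 5.0 * (stretch - 1.0)                         # nearly inert

A = np.column_stack([sigma_collagen, sigma_fibre, sigma_fluid])

# "measured" aggregate curve for the demo: a known mixture plus noise
true_f = np.array([0.15, 0.75, 0.10])
rng = np.random.default_rng(0)
sigma_measured = A @ true_f + rng.normal(0.0, 0.5, stretch.size)

# non-negative least squares, then renormalize so the fractions sum to 1
# (a shortcut; a full model would impose the sum as a hard constraint)
f, _ = nnls(A, sigma_measured)
f /= f.sum()
print("recovered volume fractions:", np.round(f, 3))
```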

Document type: 
Article

Quantifying the Impact of COVID-19 Control Measures Using a Bayesian Model of Physical Distancing

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-12-03
Abstract: 

Extensive non-pharmaceutical and physical distancing measures are currently the primary interventions against coronavirus disease 2019 (COVID-19) worldwide. It is therefore urgent to estimate the impact such measures are having. We introduce a Bayesian epidemiological model in which a proportion of individuals are willing and able to participate in distancing, with the timing of distancing measures informed by survey data on attitudes to distancing and COVID-19. We fit our model to reported COVID-19 cases in British Columbia (BC), Canada, and five other jurisdictions, using an observation model that accounts for both underestimation and the delay between symptom onset and reporting. We estimated the impact that physical distancing (social distancing) has had on the contact rate and examined the projected impact of relaxing distancing measures. We found that, as of April 11, 2020, distancing had a strong impact in BC, consistent with declines in reported cases and in hospitalization and intensive care unit numbers; individuals practising physical distancing experienced approximately 0.22 (0.11–0.34 90% CI [credible interval]) of their normal contact rate. The threshold above which prevalence was expected to grow was 0.55. We define the “contact ratio” to be the ratio of the estimated contact rate to the threshold rate at which cases are expected to grow; we estimated this contact ratio to be 0.40 (0.19–0.60) in BC. We developed an R package ‘covidseir’ to make our model available, and used it to quantify the impact of distancing in five additional jurisdictions. As of May 7, 2020, we estimated that New Zealand was well below its threshold value (contact ratio of 0.22 [0.11–0.34]), New York (0.60 [0.43–0.74]), Washington (0.84 [0.79–0.90]) and Florida (0.86 [0.76–0.96]) were progressively closer to theirs yet still below, but California (1.15 [1.07–1.23]) was above its threshold overall, with cases still rising. Accordingly, we found that BC, New Zealand, and New York may have had more room to relax distancing measures than the other jurisdictions, though this would need to be done cautiously and with total case volumes in mind. Our projections indicate that intermittent distancing measures—if sufficiently strong and robustly followed—could control COVID-19 transmission. This approach provides a useful tool for jurisdictions to monitor and assess current levels of distancing relative to their threshold, which will continue to be essential through subsequent waves of this pandemic.
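
The headline quantity is straightforward to reproduce schematically. The posterior draws below are synthetic numbers echoing the BC estimates quoted above; this is not output of, or code from, the covidseir package.

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic posterior draws for the contact fraction among distancers,
# centred on the BC estimate quoted above (truncated at zero)
contact_rate = np.clip(rng.normal(0.22, 0.07, 10_000), 0.0, None)
threshold = 0.55            # contact fraction above which cases grow

ratio = contact_rate / threshold
lo, med, hi = np.percentile(ratio, [5, 50, 95])   # 90% credible interval
print(f"contact ratio: {med:.2f} ({lo:.2f}-{hi:.2f})")
print("cases expected to grow" if med > 1 else "cases expected to decline")
```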

Document type: 
Article

The Distance and Median Problems in the Single-Cut-Or-Join Model with Single-Gene Duplications

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-05-04
Abstract: 

Background.

In the field of genome rearrangement algorithms, models accounting for gene duplication often lead to hard problems. For example, while computing the pairwise distance is tractable in most duplication-free models, the problem is NP-complete for most extensions of these models that account for duplicated genes. Moreover, problems involving more than two genomes, such as the genome median and the Small Parsimony problem, are intractable for most duplication-free models, with some exceptions, such as the Single-Cut-or-Join (SCJ) model.

Results.

We introduce a variant of the SCJ distance that accounts for duplicated genes, in the context of directed evolution from an ancestral genome to a descendant genome where orthology relations between ancestral genes and their descendants are known. Our model includes two duplication mechanisms: single-gene tandem duplication and the creation of single-gene circular chromosomes. We prove that in this model, computing the directed distance and a parsimonious evolutionary scenario in terms of SCJ and single-gene duplication events can be done in linear time. We also show that the directed median problem is tractable for this distance, while the rooted median problem, where we assume that one of the given genomes is ancestral to the median, is NP-complete. We also describe an Integer Linear Program for solving this problem. We evaluate the directed distance and rooted median algorithms on simulated data.
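
For orientation, in the duplication-free SCJ model the distance between two genomes is simply the size of the symmetric difference of their adjacency sets; the model above extends this with single-gene duplications (the extension itself is not reproduced here).

```python
def scj_distance(genome_a, genome_b):
    """Duplication-free SCJ distance: each genome is a set of adjacencies
    (frozensets of two gene extremities); every adjacency present in one
    genome but not the other costs one cut or one join."""
    return len(genome_a ^ genome_b)

# toy example: two circular orders of the genes a, b, c
# (extremities: "a_h" = head of a, "a_t" = tail of a, etc.)
A = {frozenset(p) for p in [("a_h", "b_t"), ("b_h", "c_t"), ("c_h", "a_t")]}
B = {frozenset(p) for p in [("a_h", "c_t"), ("c_h", "b_t"), ("b_h", "a_t")]}
print(scj_distance(A, B))  # 6: cut all three adjacencies of A, join B's
```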

Conclusion.

Our results provide a simple genome rearrangement model, extending the SCJ model to account for single-gene duplications, for which we prove a mix of tractability and hardness results. For the NP-complete rooted median problem, we design a simple Integer Linear Program. Our publicly available implementation of the algorithms for the directed distance and median problems allows these problems to be solved efficiently on large instances.

Document type: 
Article

A Fast Integral Equation Method for the Two-dimensional Navier-Stokes Equations

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-02-24
Abstract: 

The integral equation approach to partial differential equations (PDEs) provides significant advantages in the numerical solution of the incompressible Navier-Stokes equations. In particular, the divergence-free condition and boundary conditions are handled naturally, and the ill-conditioning caused by high order terms in the PDE is preconditioned analytically. Despite these advantages, the adoption of integral equation methods has been slow due to a number of difficulties in their implementation. This work describes a complete integral equation-based flow solver that builds on recently developed methods for singular quadrature and the solution of PDEs on complex domains, in combination with several more well-established numerical methods. We apply this solver to flow problems on a number of geometries, both simple and challenging, studying its convergence properties and computational performance. This serves as a demonstration that it is now relatively straightforward to develop a robust, efficient, and flexible Navier-Stokes solver, using integral equation methods.
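
A toy example of the analytic preconditioning mentioned above, using a generic second-kind Fredholm equation solved by the Nyström method (not the paper's Navier-Stokes solver): the identity term keeps the discretized system well-conditioned at any resolution.

```python
import numpy as np

# Discretize u(x) + \int_0^1 K(x,y) u(y) dy = f(x) on [0, 1] with the
# trapezoid rule (Nystrom method).  Because the identity dominates, the
# matrix stays well-conditioned however fine the grid is.
n = 200
x = np.linspace(0.0, 1.0, n)
w = np.full(n, 1.0 / (n - 1))                 # trapezoid weights
w[0] = w[-1] = 0.5 / (n - 1)

K = np.exp(-np.abs(x[:, None] - x[None, :]))  # a smooth, generic kernel
A = np.eye(n) + K * w[None, :]                # second-kind operator I + K
f = np.sin(np.pi * x)

u = np.linalg.solve(A, f)
print("condition number of I + K:", np.linalg.cond(A))  # stays modest as n grows
```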

Document type: 
Article

Breaking the Coherence Barrier: A New Theory for Compressed Sensing

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2017-02-15
Abstract: 

This paper presents a framework for compressed sensing that bridges a gap between existing theory and the current use of compressed sensing in many real-world applications. In doing so, it also introduces a new sampling method that yields substantially improved recovery over existing techniques. In many applications of compressed sensing, including medical imaging, the standard principles of incoherence and sparsity are lacking. Whilst compressed sensing is often used successfully in such applications, it is done largely without mathematical explanation. The framework introduced in this paper provides such a justification. It does so by replacing these standard principles with three more general concepts: asymptotic sparsity, asymptotic incoherence and multilevel random subsampling. Moreover, not only does this work provide such a theoretical justification, it explains several key phenomena witnessed in practice. In particular, and unlike the standard theory, this work demonstrates the dependence of optimal sampling strategies on both the incoherence structure of the sampling operator and on the structure of the signal to be recovered. Another key consequence of this framework is the introduction of a new structured sampling method that exploits these phenomena to achieve significant improvements over current state-of-the-art techniques.
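
The multilevel subsampling idea can be sketched in a few lines. The dyadic level boundaries and per-level fractions below are illustrative assumptions, not the paper's optimized choices: low-frequency bands are sampled densely and high-frequency bands sparsely.

```python
import numpy as np

def multilevel_mask(n, fractions=(1.0, 0.5, 0.25, 0.1)):
    """Boolean sampling mask over n frequencies, split into dyadic bands
    sampled at the given (illustrative) per-band fractions."""
    rng = np.random.default_rng(0)
    mask = np.zeros(n, dtype=bool)
    edges = np.r_[0, n // 2 ** np.arange(len(fractions) - 1, -1, -1)]
    # e.g. n=256 with 4 levels -> bands [0,32), [32,64), [64,128), [128,256)
    for (a, b), frac in zip(zip(edges[:-1], edges[1:]), fractions):
        band = np.arange(a, b)
        keep = rng.choice(band, size=max(1, int(frac * band.size)), replace=False)
        mask[keep] = True
    return mask

mask = multilevel_mask(256)
print("overall sampling rate:", mask.mean())  # well below 1, dense at low k
```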

Document type: 
Article

Social and Structural Factors Associated with Substance Use within the Support Network of Adults Living in Precarious Housing in a Socially Marginalized Neighborhood of Vancouver, Canada

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2019-09-23
Abstract: 

Background.

The structure of a social network and peer behaviours are thought to affect personal substance use. Where substance use may create health risks, understanding the contribution of social networks to substance use may be valuable for the design and implementation of harm reduction or other interventions. We examined the social support network of people living in precarious housing in a socially marginalized neighborhood of Vancouver, and analysed associations between social network structure, personal substance use, and supporters’ substance use.

Methods.

An ongoing, longitudinal study recruited 246 participants from four single-room-occupancy hotels, with 201 providing social network information aligned with a 6-month observation period. Use of tobacco, alcohol, cannabis, cocaine (crack and powder), methamphetamine, and heroin was recorded at monthly visits. Ego- and graph-level measures were calculated; the dispersion and prevalence of substances in the network were described. Logistic mixed-effects models were used to estimate the association between ego substance use and peer substance use. A permutation analysis was done to test whether the dispersion of substance use on the social network was random, as sketched below.
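
A schematic version of such a permutation test (toy network and labels, not the study's data): shuffle the binary use labels across nodes and compare the observed number of user-user edges with the permutation distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# toy support network (edge list) and binary use labels, one per node
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
uses = np.array([1, 1, 1, 0, 0, 0])   # 1 = uses the substance

def concordant_edges(labels):
    """Number of edges joining two users: the clustering statistic."""
    return sum(labels[u] & labels[v] for u, v in edges)

observed = concordant_edges(uses)
null = np.array([concordant_edges(rng.permutation(uses))
                 for _ in range(10_000)])
p = (null >= observed).mean()         # one-sided permutation p-value
print(f"observed user-user edges: {observed}, permutation p = {p:.3f}")
```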

Results.

The network topology corresponded to residence (hotel), with two clusters differing in demographic characteristics (Cluster 1 – Hotel A: 94% of members; Cluster 2 – Hotel B: 95% of members). The dispersion of substance use across the network differed according to network topology and specific substance. Methamphetamine use (overall 12%) was almost entirely limited to Cluster 1 and absent from Cluster 2. Different patterns were observed for other substances. Overall, ego substance use did not change over the six-month period of observation. Ego heroin, cannabis, or crack cocaine use was associated with alter use of the same substance. Ego methamphetamine, powder cocaine, or alcohol use was not associated with alter use, with the exception of methamphetamine in a densely using part of the network. For alters using multiple substances, cannabis use was associated with lower ego heroin use and lower ego crack cocaine use. Permutation analysis also provided evidence that the dispersion of substance use, and the association between ego and alter use, was non-random for all substances.

Conclusions.

In a socially marginalized neighborhood, social network topology was strongly influenced by residence and was in turn associated with the type(s) of substance used. Associations between personal use and a supporter's use of a substance differed across substances. These complex associations may merit consideration in the design of interventions to reduce the risks and harms associated with substance use among people living in precarious housing.

Document type: 
Article

A Parallel Non-Uniform Fast Fourier Transform Library Based on an "Exponential of Semicircle" Kernel

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2019-09-19
Abstract: 

The nonuniform fast Fourier transform (NUFFT) generalizes the FFT to off-grid data. Its many applications include image reconstruction, data analysis, and the numerical solution of differential equations. We present FINUFFT, an efficient parallel library for type 1 (nonuniform to uniform), type 2 (uniform to nonuniform), or type 3 (nonuniform to nonuniform) transforms, in dimensions 1, 2, or 3. It uses minimal RAM, requires no precomputation or plan steps, and has a simple interface to several languages. We perform the expensive spreading/interpolation between nonuniform points and the fine grid via a simple new kernel, the “exponential of semicircle” $e^{\beta\sqrt{1-x^2}}$ on $x \in [-1, 1]$, in a cache-aware, load-balanced, multithreaded implementation. The deconvolution step requires the Fourier transform of the kernel, for which we propose an efficient numerical quadrature. For types 1 and 2, rigorous error bounds asymptotic in the kernel width approach the fastest known exponential rate, namely that of the Kaiser–Bessel kernel. We benchmark against several popular CPU-based libraries, showing favorable speed and memory footprint, especially in three dimensions when high accuracy and/or clustered point distributions are desired.
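
The kernel and the transform it accelerates are easy to state directly. Below is a naive O(NM) reference implementation for checking, not the library's fast algorithm; FINUFFT instead spreads with this kernel onto a fine grid, applies an FFT, and deconvolves.

```python
import numpy as np

def es_kernel(x, beta):
    """'Exponential of semicircle' kernel exp(beta*sqrt(1-x^2)) on [-1, 1]."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(beta * np.sqrt(1.0 - x[inside] ** 2))
    return out

def direct_nufft1d1(xj, cj, n_modes):
    """Naive O(N*M) type-1 transform: f_k = sum_j c_j exp(i k x_j)."""
    k = np.arange(-(n_modes // 2), (n_modes + 1) // 2)
    return np.exp(1j * k[:, None] * xj[None, :]) @ cj

rng = np.random.default_rng(0)
xj = rng.uniform(-np.pi, np.pi, 50)    # nonuniform points
cj = rng.normal(size=50) + 0j          # their strengths
print(direct_nufft1d1(xj, cj, 8))
print(es_kernel(np.linspace(-1.0, 1.0, 5), beta=10.0))
```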

Document type: 
Article