
# Computing Science - Theses, Dissertations, and other Required Graduate Degree Essays

## ExquiMo: An Exquisite Corpse Tool for Co-creative 3D Shape Modeling

We introduce a shape modeling tool, ExquiMo, which is guided by the idea of improving the creativity of 3D shape designs through collaboration. Inspired by the game of Exquisite Corpse, our tool allocates distinct parts of a shape to multiple players who model the assigned parts in a sequence. Our approach is motivated by the understanding that effective surprise leads to creative outcomes. Hence, to maintain the surprise factor of the output, we conceal the previously modeled parts from the most recent player. Part designs from individual players are fused together to produce an often unexpected, hence creative, end result. We demonstrate the effectiveness of collaborative modeling for both man-made and natural shapes. Our results show that, when compared to models designed by individual users, multi-user collaborative modeling via ExquiMo tends to lead to more creative designs in terms of the most common criteria used to identify creative artifacts.

## SimBoost: A Read-Across Approach for Drug-Target Interaction Prediction Using Gradient Boosting Machines

Computational prediction of the interaction between drugs and targets is a long-standing challenge in drug discovery. A number of methods have reported high performance on binary drug-target benchmark datasets. A possible drawback of binary data is that missing values and non-interacting drug-target pairs are not differentiated. In this paper, we present a method called SimBoost that predicts the continuous binding affinities of drugs and targets and thus incorporates the whole interaction spectrum, from true negative to true positive interactions, in the learning phase. Additionally, we propose a variant called SimBoostQuant which computes a prediction interval in order to assess the confidence of the predicted affinity. We evaluate SimBoost and SimBoostQuant on three continuous drug-target datasets and show that our methods outperform the state-of-the-art models.

## Continuous Conditional Random Fields for Drug Target Interaction Prediction

Knowledge about the interaction between drugs and proteins is essential in the drug discovery process. Understanding the relationship between compounds and proteins through wet-lab experiments alone is time-consuming and costly. To support the experimental work by prioritizing the most potent compounds for a target, numerous methods for the in-silico prediction of drug-target interaction have been proposed, and high performance on binary datasets has been reported. A drawback of binary datasets is that missing values and non-interacting drug-target pairs are not differentiated. In this thesis, a model is developed that predicts the drug-target binding strengths as continuous values and thus incorporates the whole interaction spectrum from true negative to true positive interactions. The developed model combines two previously used approaches for the problem, Matrix Factorization and Conditional Random Fields. The model is evaluated on three datasets, and a slight performance improvement is observed when compared to the state-of-the-art method.
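The matrix factorization half of the combined model can be sketched in a few lines: represent each drug and each target by a low-dimensional latent vector and fit the vectors by gradient descent on the observed affinities only. All dimensions, rates, and data below are illustrative, not the thesis's actual setup.

```python
# Sketch: matrix factorization for continuous drug-target affinities,
# fit on observed entries only. Synthetic data for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_drugs, n_targets, k = 30, 20, 4
true_U = rng.normal(size=(n_drugs, k))
true_V = rng.normal(size=(n_targets, k))
Y = true_U @ true_V.T                     # synthetic affinity matrix
mask = rng.random(Y.shape) < 0.6          # which entries are observed

U = rng.normal(scale=0.1, size=(n_drugs, k))
V = rng.normal(scale=0.1, size=(n_targets, k))
lr, lam = 0.01, 0.01                      # learning rate, L2 regularization
for _ in range(2000):
    E = mask * (Y - U @ V.T)              # residual on observed entries only
    U += lr * (E @ V - lam * U)
    V += lr * (E.T @ U - lam * V)

rmse = float(np.sqrt(((mask * (Y - U @ V.T)) ** 2).sum() / mask.sum()))
print(f"training RMSE on observed entries: {rmse:.3f}")
```

The CRF component then smooths these predictions using drug and target similarities, which pure factorization ignores.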

## Leveraging Compiler Alias Analysis To Free Accelerators from Load-Store Queues

Hardware accelerators are an energy-efficient alternative to general-purpose processors for specific program regions. They have relied on the compiler to extract instruction-level parallelism, but may waste significant energy on memory disambiguation and on discovering memory-level parallelism (MLP). Currently, accelerators either i) define the problem away and rely on massively parallel programming models [1, 48] to extract MLP, ii) reuse the out-of-order (OoO) processor [7, 28] and rely on power-hungry load-store queues (LSQs) for memory disambiguation, or iii) serialize: some accelerators [47] focus on program regions where MLP is not important and simply serialize memory operations. We present NACHOS, a compiler-assisted, energy-efficient approach to memory disambiguation that completely eliminates the need for an LSQ. NACHOS classifies memory operations pairwise into those that do not alias (i.e., independent memory operations), must alias (i.e., ordering is required between the memory operations), and may alias (i.e., the compiler is unsure). To enforce program order between must-alias memory operations, the compiler inserts ordering edges that are enforced as def-use data dependencies. When the compiler is unsure (i.e., may alias) about a pair of memory operations, the hardware checks whether they are independent. We demonstrate that compiler alias analysis with additional refinement can achieve high accuracy for hardware-accelerated regions. In our workload suite, comprising SPEC2k, SPEC2k6, and PARSEC workloads, NACHOS imposes no energy overhead over the functional units in 15 applications (i.e., the compiler resolves all dependencies), and in another 12 applications NACHOS consumes ≤17% of functional-unit energy (max: 53% in povray). Overall, NACHOS achieves performance similar to an optimized LSQ and adds an overhead equal to 2.3× the compute energy.
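The pairwise three-way classification at the heart of NACHOS can be illustrated with a toy base-plus-constant-offset model of addresses. Real compiler alias analysis (e.g., in LLVM) is far more involved; the types and rules below are invented for illustration only.

```python
# Toy sketch of no-alias / must-alias / may-alias classification.
# Real alias analysis reasons about allocations, loops, and pointer
# arithmetic; this model only handles constant offsets from a base.
from dataclasses import dataclass

@dataclass(frozen=True)
class MemOp:
    base: str      # symbolic base pointer
    offset: int    # constant byte offset from the base
    size: int      # access width in bytes

def classify(a: MemOp, b: MemOp) -> str:
    if a.base != b.base:
        return "may"     # unrelated bases: this toy analysis is unsure
    # Same base with constant offsets: we can decide precisely.
    if a.offset + a.size <= b.offset or b.offset + b.size <= a.offset:
        return "no"      # disjoint byte ranges: independent, no ordering edge
    return "must"        # overlapping ranges: compiler inserts ordering edge

print(classify(MemOp("p", 0, 4), MemOp("p", 8, 4)))   # disjoint accesses
print(classify(MemOp("p", 0, 4), MemOp("p", 2, 4)))   # overlapping accesses
print(classify(MemOp("p", 0, 4), MemOp("q", 0, 4)))   # unknown relation
```

Only the "may" pairs fall through to a hardware check, which is why high compiler accuracy translates directly into energy savings over an LSQ.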

## Algorithms for Scheduling and Routing Problems

Optimization has been a central topic in most scientific disciplines for centuries. Continuous optimization has long benefited from well-established techniques of calculus. Discrete optimization, on the other hand, has risen to prominence quite recently. Advances in combinatorial optimization and integer programming in the past few decades, together with improvements in computer hardware, have enabled computer scientists to approach the problems in this area both theoretically and computationally. However, obtaining the exact solution for many discrete optimization problems remains a challenging task, mainly because most of these problems are NP-hard. Under the widespread assumption that P ≠ NP, these problems are intractable from a computational complexity standpoint. Therefore, we should settle for near-optimal solutions. In this thesis, we develop techniques to obtain solutions that are provably close to optimal for different indivisible resource allocation problems. Indivisible resource allocation encompasses a large class of problems in discrete optimization which can appear in disguise in various theoretical or applied settings. Specifically, we consider two indivisible resource allocation problems. The first one is a variant of the vehicle routing problem known as the Skill Vehicle Routing problem, in which the aim is to obtain optimal tours for a fleet of vehicles that provides service to a set of customers. Each of the vehicles possesses a particular set of skills suitable for a subset of the tasks. Each customer, based on the type of service required, can only be served by a subset of the vehicles. We study this problem computationally and find either the optimal solution or a relatively tight bound on the optimal solution for fairly large problem instances. The second problem involves approximation algorithms for two versions of the classic scheduling problem, the restricted $R||C_{max}$ and the restricted Santa Claus problem. The objective is to design a polynomial time approximation scheme (PTAS) for ordered instances of the two problems. Finally, we consider the class of precedence hierarchies in which the neighborhoods of the processors form laminar families. We show similar results for a generalization of this model.
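The restricted $R||C_{max}$ setting can be made concrete with a tiny instance: each job has a size and a set of machines permitted to run it, and we seek the assignment minimizing the makespan (the restricted Santa Claus problem instead maximizes the minimum load). The brute force below is for illustration only; the thesis designs a PTAS for structured instances.

```python
# Toy sketch of restricted assignment (R||C_max with machine restrictions).
# Exhaustive search is exponential and only viable for this tiny demo.
from itertools import product

def min_makespan(sizes, allowed, n_machines):
    best = float("inf")
    # Enumerate every feasible machine choice for every job.
    for choice in product(*[sorted(allowed[j]) for j in range(len(sizes))]):
        loads = [0] * n_machines
        for job, machine in enumerate(choice):
            loads[machine] += sizes[job]
        best = min(best, max(loads))    # makespan = most loaded machine
    return best

sizes = [4, 3, 2, 2]
allowed = [{0, 1}, {0}, {1, 2}, {1, 2}]   # restricted machine sets per job
print(min_makespan(sizes, allowed, 3))    # → 4
```

Here job 1 is pinned to machine 0, which forces job 0 onto machine 1 and the two small jobs onto machine 2, giving makespan 4; general instances of this problem are NP-hard to approximate within a factor better than 3/2.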

## Fundamentals of Learning Algorithms in Boltzmann Machines

Boltzmann learning underlies an artificial neural network model known as the Boltzmann machine, which extends and improves upon the Hopfield network model. The Boltzmann machine model uses stochastic binary units and allows for the existence of hidden units to represent latent variables. By reducing noise via simulated annealing while allowing uphill steps via the Metropolis algorithm, the training algorithm increases the chances that, at thermal equilibrium, the network settles on the best distribution of parameters. The existence of an equilibrium distribution for an asynchronous Boltzmann machine is analyzed with respect to temperature. Two families of learning algorithms, which correspond to two different approaches to compute the statistics required for learning, are presented. The learning algorithms based only on stochastic approximations are traditionally slow. When variational approximations of the free energy are used, like the mean field approximation or the Bethe approximation, the performance of learning improves considerably. The principal contribution of the present study is to provide, from a rigorous mathematical perspective, a unified framework for these two families of learning algorithms in asynchronous Boltzmann machines.
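The slow, purely stochastic family of algorithms mentioned above follows the classic Boltzmann learning rule $\Delta W = \eta(\langle s s^\top\rangle_{\text{data}} - \langle s s^\top\rangle_{\text{model}})$, with the model statistics estimated by Gibbs sampling. The sketch below uses a fully visible machine with ±1 units; all sizes, rates, and sweep counts are illustrative.

```python
# Sketch of stochastic Boltzmann learning for a fully visible machine.
# The model expectation <s s^T> is estimated by short Gibbs chains,
# which is exactly why this family of algorithms is slow in practice.
import numpy as np

rng = np.random.default_rng(2)
n = 5
data = rng.integers(0, 2, size=(200, n)) * 2 - 1   # ±1 training patterns

W = np.zeros((n, n))                               # symmetric, zero diagonal

def gibbs_sample(W, steps=100):
    s = rng.choice([-1, 1], size=n)
    for _ in range(steps):
        i = rng.integers(n)
        # P(s_i = +1) for ±1 units with local field W[i] @ s (diag is zero)
        p = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ s)))
        s[i] = 1 if rng.random() < p else -1
    return s

eta = 0.05
pos = data.T @ data / len(data)                    # <s s^T> under the data
for _ in range(50):
    chains = np.array([gibbs_sample(W) for _ in range(50)])
    neg = chains.T @ chains / len(chains)          # <s s^T> under the model
    W += eta * (pos - neg)                         # Boltzmann learning rule
    np.fill_diagonal(W, 0.0)                       # no self-connections
```

The variational family replaces the sampled `neg` term with a mean-field or Bethe estimate of the model statistics, which is where the speedup comes from.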

## Algorithms for Problems in Voting and Scheduling

In this dissertation, we study the voting problem and the ranking problem in computational social choice, as well as a matching problem in a restricted graph. We present our results for these problems in two parts.

Part I: Election, Ranking, and Heuristics. Voting is commonly used to reach consensus among a group of people. Voting models often deal with a set of voters, each of whom has a preference over a set of alternatives. Each voter submits a ranking of the alternatives, and the outcome is decided by a voting rule. Computational voting theory is an interdisciplinary research area which considers the computational problems that arise in voting. Selecting the winner(s) of an election is one such problem. The problem of computing the winner(s) under most voting rules is easy. However, there are a few rules for which this problem becomes computationally hard. In the first part of this thesis, we study two important voting and ranking rules under which computing the winner(s) is hard. The first voting procedure we study is the Chamberlin-Courant system, a proportional representation system that does not require candidates to have a minimum number of votes to be selected in an assembly. We consider domination analysis of a 2-Opt heuristic for the winner determination problem under the Chamberlin-Courant system. We show that the 2-Opt heuristic produces solutions no worse than the average solution in polynomial time. The next problem we consider is the Linear Ordering Problem, a classic optimization problem which can be used to model problems in graph theory, machine scheduling, and ranking. Relatively recently, there has been some success in using Mixed Integer Programming (MIP) heuristics for NP-hard optimization problems. We report our experience with using a MIP heuristic for the problem.
Part II: Matching. The first problem we consider in this part is the Linear Ordering Problem. We show how the linear program of this problem can be solved by using a primal-dual based combinatorial algorithm instead of the Simplex method. Next, we address the cyclical scheduling problem, which is used to schedule shifts for workers in a factory. Given a set of n work periods, each worker is assigned a shift in which they work for n-2 consecutive periods and take off the remaining two periods. Thus, for n=7, a typical shift may be to work from Monday to Friday and take off Saturday and Sunday. Each shift may also have a cost associated with it. Furthermore, the factory requires that a given number of workers be available for each period (this requirement may vary from period to period). The objective is to assign a shift to each worker such that the daily requirement is fulfilled and the total cost of the shifts is minimized. We use the primal-dual method to solve the (n-2,n) cyclical scheduling problem by solving a series of b-matching problems on a cycle of n vertices.
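The (n-2, n) cyclical scheduling problem can be stated concretely: shift s covers the n-2 consecutive periods starting at period s (cyclically), and period p must be covered by at least req[p] workers. The brute force below is only for illustration on the n=7 example from the abstract; the thesis solves the problem efficiently via the primal-dual reduction to b-matching.

```python
# Toy sketch of (n-2, n) cyclical scheduling: pick how many workers take
# each of the n cyclic shifts so every period meets its requirement at
# minimum total cost. Exhaustive search, viable only for this tiny demo.
from itertools import product

def cheapest_staffing(n, req, cost, max_per_shift=3):
    # Shift s works periods s, s+1, ..., s+n-3 (mod n); rests the other two.
    shifts = [{(s + k) % n for k in range(n - 2)} for s in range(n)]
    best = None
    for counts in product(range(max_per_shift + 1), repeat=n):
        covered = [sum(c for c, sh in zip(counts, shifts) if p in sh)
                   for p in range(n)]
        if all(covered[p] >= req[p] for p in range(n)):
            total = sum(c * w for c, w in zip(counts, cost))
            if best is None or total < best:
                best = total
    return best

n = 7
req = [2, 2, 2, 2, 2, 1, 1]        # workers needed Mon..Sun
cost = [3, 4, 4, 4, 4, 4, 4]       # only shift 0 (Mon-Fri) avoids the weekend
print(cheapest_staffing(n, req, cost))
```

The optimum here is 10: two workers on the Monday-Friday shift plus one worker on a weekend-covering shift. Total demand is 12 period-slots and each worker covers 5, so at least three workers are needed.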

## Specialized Macro-Instructions for Von-Neumann Accelerators

In the last few decades, Von-Neumann super-scalar processors have been the dominant approach for improving general-purpose processing, and hardware specialization was used as a complementary approach. However, the imminent end of Moore's law indicates that voltage scaling and per-transistor switching power cannot scale down at the pace Moore's law predicts. As a result, there is renewed interest in hardware specialization to improve performance, power, and energy efficiency on specific tasks. This dissertation proposes a Von-Neumann based accelerator, Chainsaw, and demonstrates that many of the fundamental overheads (e.g., fetch-decode) can be amortized by adopting the appropriate instruction abstraction. We have developed a complete LLVM-based compiler prototype and simulation infrastructure and demonstrated that an 8-lane Chainsaw is within 73% of the performance of an ideal dataflow architecture while reducing the energy consumption by 45% compared to a 4-way out-of-order processor.

## A data-driven approach to automatic tweet generation about traffic incidents

Traffic congestion continues to be a major problem in large cities around the world and a source of frustration for drivers. Previous studies show that providing drivers with real-time traffic information helps them plan better routes and avoid congestion. In this research, we examine the use of data-driven natural language generation (NLG) techniques to automatically generate tweets from traffic incident data. Building on the task of automatic tweet generation, we discuss and propose the design of a traffic notification system that can deliver personalized, location-relevant, real-time traffic information to drivers. The domain of our NLG work is novel with respect to previous work in other domains, including weather forecasts, educational reports, and clinical reports. We evaluate the automatically generated tweets using BLEU-4. Our experimental results show that a well-prepared training corpus is important for better quality output; however, such corpora are currently limited in traffic-related domains.
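BLEU-4, the evaluation metric used above, is the geometric mean of clipped 1- to 4-gram precisions times a brevity penalty. A from-scratch sketch for a single hypothesis/reference pair (real evaluations, e.g., NLTK's `sentence_bleu`, add smoothing and multi-reference support; the tweets below are invented):

```python
# Sketch of BLEU-4 for one hypothesis against one reference.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(hypothesis, reference):
    hyp, ref = hypothesis.split(), reference.split()
    log_precisions = []
    for n in range(1, 5):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        clipped = sum(min(c, r[g]) for g, c in h.items())  # clipped matches
        if clipped == 0:
            return 0.0            # unsmoothed: any zero precision gives 0
        log_precisions.append(math.log(clipped / sum(h.values())))
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))       # brevity penalty
    return bp * math.exp(sum(log_precisions) / 4)

ref = "accident on highway 1 near main st all lanes blocked"
hyp = "accident on highway 1 near main st lanes blocked"
print(f"BLEU-4 = {bleu4(hyp, ref):.3f}")
```

Because every missing word breaks up to four n-grams and shortens the hypothesis, BLEU-4 rewards generated tweets that reproduce the reference phrasing closely, which is why corpus quality matters so much for this task.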

## Multi-Relational Learning with SQL All the Way

Which doctors prescribe which drugs to which patients? Who upvotes which answers on what topics on Quora? Who has followed whom on Twitter/Weibo? These relationships are all visible in data, and they all contain a wealth of information that can be extracted as knowledge. Statistical Relational Learning (SRL) is a recent and growing field which extends traditional machine learning from single-table to multiple inter-related tables. It aims to provide integrated statistical analysis of heterogeneous and interdependent complex data. In this thesis, I focus on modelling the interactions between different attributes and the link itself for such complex, heterogeneous, and richly interconnected data. First, I describe the FactorBase system, which combines advanced analytics from statistical-relational machine learning with database systems. Within FactorBase, statistical objects are stored as first-class citizens alongside the raw data. This new SQL-based framework pushes multi-relational model discovery into a relational database management system. Secondly, to solve the scalability issue of computing cross-table sufficient statistics, a new Virtual Join algorithm is proposed and implemented in FactorBase. Bayesian networks (BNs) and Dependency Networks (DNs) are two major classes of SRL models. Thirdly, I utilize FactorBase to extend the state-of-the-art BN learning algorithm to generative modelling with link uncertainty. The learned model simultaneously captures correlations between link types, link features, and attributes of nodes. Finally, a fast hybrid approach is proposed for instance-level discriminative learning of DNs with competitive predictive power but substantially better scalability.
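The kind of cross-table sufficient statistic that FactorBase pushes into the DBMS is, at its core, a GROUP BY count over a join of entity and relationship tables. A minimal sketch using SQLite, with an invented doctor/drug schema echoing the opening question:

```python
# Sketch: a cross-table sufficient statistic as a SQL join-and-count.
# The schema and data are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Doctor(d_id INTEGER PRIMARY KEY, specialty TEXT);
    CREATE TABLE Drug(r_id INTEGER PRIMARY KEY, class TEXT);
    CREATE TABLE Prescribes(d_id INTEGER, r_id INTEGER);
    INSERT INTO Doctor VALUES (1,'cardio'), (2,'cardio'), (3,'neuro');
    INSERT INTO Drug VALUES (10,'statin'), (11,'betablocker'), (12,'ssri');
    INSERT INTO Prescribes VALUES (1,10),(1,11),(2,10),(3,12),(3,10);
""")

# Joint counts over (specialty, class) for linked doctor-drug pairs:
# the sufficient statistics for a model relating the two attributes
# through the Prescribes link.
rows = conn.execute("""
    SELECT d.specialty, g.class, COUNT(*) AS n
    FROM Prescribes p
    JOIN Doctor d ON d.d_id = p.d_id
    JOIN Drug g ON g.r_id = p.r_id
    GROUP BY d.specialty, g.class
    ORDER BY d.specialty, g.class
""").fetchall()
for specialty, drug_class, n in rows:
    print(specialty, drug_class, n)
```

Counts over non-linked pairs are what make these statistics expensive at scale, which is the problem the Virtual Join algorithm addresses.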