Engineering Science, School of


Efficiently Finding Poses for Multiple Grasp Types with Partial Point Clouds by Uncoupling Grasp Shape and Scale

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2022-06-22
Abstract: 

We present an algorithm that discovers grasp pose solutions for multiple grasp types for a multi-fingered mechanical gripper using partially-sensed point clouds of unknown objects. The algorithm introduces two key ideas: 1) a histogram of finger contact normals is used to represent a grasp “shape” to guide a gripper orientation search in a histogram of object(s) surface normals, and 2) voxel grid representations of gripper contacts and object(s) are cross-correlated to match finger contact points, i.e. grasp “scale”, to discover a grasp pose. Collision constraints are incorporated in the cross-correlation computation. We show via simulations and preliminary experiments that 1) grasp poses for three grasp types (i.e. lateral, power, and tripodal) are found quickly without interrupting the robot’s motion, 2) the quality of grasp pose solutions is consistent with respect to voxel resolution changes for both partial and complete point cloud scans, 3) grasp type definitions are scalable for n-contacts and can incorporate constraints for collision checks in one integrated step, and 4) planned grasp poses are successfully executed with a mechanical gripper, demonstrating the robustness of the grasp pose solutions.
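The second idea above can be sketched numerically. The toy grids, template weights, and FFT-based correlation below are illustrative assumptions, not the paper's implementation: a voxelized gripper-contact template is cross-correlated with an object occupancy grid, with collision constraints folded in as negative template weights.

```python
import numpy as np

# Occupancy grid for a toy "object": a thin wall of surface voxels.
obj = np.zeros((8, 8, 8))
obj[4, 2:6, 2:6] = 1.0

# Two-finger template at a fixed orientation: +2 where a contact voxel must
# be occupied, -1 where the finger volumes must stay collision-free.
tmpl = np.zeros_like(obj)
tmpl[0, 1, 1] = -1.0
tmpl[1, 1, 1] = 2.0
tmpl[2, 1, 1] = -1.0

# Circular cross-correlation via FFT: score[s] = sum_t tmpl[t] * obj[t + s].
score = np.real(np.fft.ifftn(np.fft.fftn(obj) * np.conj(np.fft.fftn(tmpl))))

# The best offset places the contact voxel on the wall with free fingers.
best = np.unravel_index(score.argmax(), score.shape)
contact = tuple(b + 1 for b in best)   # template centre voxel at best offset
```

A collision with an occupied voxel subtracts from the score, so collision checking and contact matching happen in the same correlation pass, which is the integrated step the abstract refers to.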

Detailed Analysis of the Effects of Biodiesel Fraction Increase on the Combustion Stability and Characteristics of a Reactivity-Controlled Compression Ignition Diesel-Biodiesel/Natural Gas Engine

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2022-02-01
Abstract: 

A single-cylinder marine diesel engine was modified to be operated in reactivity controlled compression ignition (RCCI) combustion mode. The engine fueling system was upgraded to a common rail fuel injection system. Natural gas (NG) was used for port fuel injection, and a diesel/sunflower methyl ester biodiesel mixture was used for direct fuel injection. The fraction of biodiesel in the direct fuel injection was changed from 0% (B0; 0% biodiesel and 100% diesel) to 5% (B5) and 20% (B20) while keeping the total energy input into the engine constant. The objective was to understand the impacts of the increased biodiesel fraction on the combustion characteristics and stability, emissions, and knocking/misfiring behavior, keeping all other influential parameters constant. The results showed that nitrogen oxides (NOx) emissions of B5 and B20, without the need for any after-treatment devices, were lower than the NOx emission limit of the Euro VI stationary engine regulation. B5 and B20 NOx emissions decreased by more than 70% compared to the baseline. Significantly more unburned hydrocarbons (UHCs) and carbon monoxide (CO) emissions were produced when biodiesel was used in the direct fuel injection (DFI). The results also showed that using B5 and B20 instead of B0 led to increases of 18% and 13.5% in UHCs and increases of 88.5% and 97% in CO emissions, respectively. Increasing the biodiesel fraction to B5 and B20 reduced the maximum in-cylinder pressure by 3% and 10.2%, respectively, compared to B0. Combustion instability is characterized by the coefficient of variation (COV) of the indicated mean effective pressure (IMEP), which was measured as 4.2% for B5 and 4.8% for B20 compared to 1.8% for B0. Using B20 and B5 therefore increased combustion instability by up to 34.9% and 18.5%, respectively, compared to the baseline case. The tendency for knocking decreased from 13.7% for B0 to 4.3% for B20. The baseline case (B0) had no misfiring cycle.
The B5 case had some misfiring cycles, but no knocking cycle was observed. Moreover, the cyclic analysis history showed greater data dispersion as the biodiesel fraction in the DFI increased. This study shows the potential of biodiesel replacement in NG/diesel RCCI combustion engines: biodiesel can effectively reduce the NOx emissions and knocking intensity of RCCI combustion, although combustion instability needs to be monitored.
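The combustion-stability metric used above is straightforward to compute: the COV of IMEP is the sample standard deviation of per-cycle IMEP divided by its mean. The IMEP values below are illustrative, not measured data from this engine.

```python
import numpy as np

# Per-cycle indicated mean effective pressure (bar) over consecutive cycles.
imep_bar = np.array([7.9, 8.1, 8.0, 8.2, 7.8])

# COV of IMEP (%) = sample std / mean * 100; higher means less stable combustion.
cov_imep = imep_bar.std(ddof=1) / imep_bar.mean() * 100.0
```

On this toy series the COV is about 2%, i.e. between the B0 (1.8%) and B5 (4.2%) values reported in the abstract.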

Document type: 
Article

Force Myography-Based Human Robot Interactions via Deep Domain Adaptation and Generalization

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-12-27
Abstract: 

Estimating applied force using the force myography (FMG) technique can be effective in human-robot interactions (HRI) using data-driven models. A model predicts well when adequate training and evaluation are performed in the same session, which is sometimes time consuming and impractical. In real scenarios, a pretrained transfer learning model that predicts forces quickly once fine-tuned to the target distribution would be a favorable choice and hence needs to be examined. Therefore, in this study a unified supervised FMG-based deep transfer learner (SFMG-DTL) model using a CNN architecture was pretrained with multi-session FMG source data (Ds, Ts) and evaluated in estimating forces in separate target domains (Dt, Tt) via supervised domain adaptation (SDA) and supervised domain generalization (SDG). For SDA, case (i) intra-subject evaluation (Ds ≠ Dt-SDA, Ts ≈ Tt-SDA) was examined, while for SDG, case (ii) cross-subject evaluation (Ds ≠ Dt-SDG, Ts ≠ Tt-SDG) was examined. Fine-tuning with few “target training data” calibrated the model effectively towards target adaptation. The proposed SFMG-DTL model performed better, with higher estimation accuracies and lower errors (R2 ≥ 88%, NRMSE ≤ 0.6), in both cases. These results reveal that interactive force estimation via transfer learning will improve daily HRI experiences where “target training data” are limited or faster adaptation is required.
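The fine-tuning step can be sketched in miniature: keep a pretrained feature extractor fixed and refit only the force-regression head on a few target-session samples. Every name, shape, and the least-squares head below are illustrative assumptions, not the paper's SFMG-DTL architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_frozen = rng.normal(size=(16, 8))   # stands in for frozen pretrained CNN weights

def features(x):
    # Frozen feature extractor: no update during fine-tuning.
    return np.tanh(x @ W_frozen)

# A few "target training data" samples from a new session (synthetic here).
x_target = rng.normal(size=(20, 16))  # FMG channel readings
y_target = rng.normal(size=(20, 1))   # measured applied forces

# Fine-tune only the regression head on the small calibration set.
phi = features(x_target)
head, *_ = np.linalg.lstsq(phi, y_target, rcond=None)
y_hat = phi @ head                    # force estimates on the target session
```

Refitting only the head is what keeps calibration fast when target data are scarce; the full model would need far more samples to retrain.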

Document type: 
Article

Critical Overview of Visual Tracking with Kernel Correlation Filter

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-11-30
Abstract: 

With the development of new methodologies for faster training on datasets, there is a need to provide an in-depth explanation of the workings of such methods. This paper attempts to provide an understanding of one such correlation filter-based tracking technology, the Kernelized Correlation Filter (KCF), which exploits implicit properties of tracked images (circulant matrices) for training and tracking in real time. Unlike data-intensive deep learning approaches, KCF uses implicit dynamic properties of the scene and movements of image patches to form an efficient representation based on the circulant structure, using properties such as diagonalization in the Fourier domain. The computational efficiency of KCF, which makes it ideal for low-power heterogeneous computational processing technologies, lies in its ability to compute data in a high-dimensional feature space without explicitly invoking the computation on this space. Despite its strong practical potential in visual tracking, there is a need for an in-depth critical understanding of the method and its performance, which this paper aims to provide. Here we present a survey of KCF and its method, along with an experimental study that highlights its novel approach and some of the future challenges associated with the method, through observations on standard performance metrics in an effort to make the algorithm easy to investigate. We further compare the method against state-of-the-art trackers on public benchmarks such as OTB-50, VOT-2015, and VOT-2019. We observe that KCF is a simple-to-understand tracking algorithm that does well on popular benchmarks and has potential for further improvement. The paper aims to provide researchers with a base for understanding and comparing KCF with other tracking technologies, and to explore the possibility of an improved KCF tracker.
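The Fourier-domain diagonalization mentioned above can be shown in one dimension with a linear kernel and a toy signal (illustrative, not a benchmark sequence): circular shifts of the base sample form a circulant matrix, so the ridge regression underlying KCF has a closed-form solution computable with FFTs in O(n log n).

```python
import numpy as np

x = np.array([0., 1., 2., 1., 0., 0., 0., 0.])   # base sample (toy 1-D patch)
y = np.zeros(8); y[0] = 1.0                      # desired response: peak at zero shift
lam = 1e-3                                       # ridge regularizer

# Training diagonalizes in the Fourier domain thanks to the circulant structure.
X, Y = np.fft.fft(x), np.fft.fft(y)
W = np.conj(X) * Y / (np.conj(X) * X + lam)      # closed-form ridge solution

# Detection on a circularly shifted copy: the response peaks at the shift.
z = np.roll(x, 3)
resp = np.real(np.fft.ifft(W * np.fft.fft(z)))
shift = int(np.argmax(resp))                     # recovers the displacement, 3
```

The full KCF additionally maps patches into a nonlinear kernel space, but the same element-wise division in the Fourier domain carries over, which is why no explicit high-dimensional computation is ever performed.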

Document type: 
Article

Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-09-16
Abstract: 

Tracking a person's body movements in their natural living environment is a challenging undertaking. Such tracking information can be used to detect the onset of anomalies in movement patterns or as part of a remote monitoring environment, and can be mapped and visualized using a virtual avatar model of the tracked person. This paper presents an initial, novel experimental study of using a commercially available deep-learning body tracking system based on an RGB-D sensor for virtual human model reconstruction. We carried out our study in an indoor environment under natural conditions. To assess the tracker, we experimentally study its output, a skeleton (stick-figure) data structure, under several conditions in order to observe its robustness and identify its drawbacks. In addition, we show and study how the generic model can be mapped for virtual human model reconstruction. It was found that the deep-learning tracking approach using an RGB-D sensor is susceptible to various environmental factors, which result in missing joints and noise in the estimated locations of skeleton joints. This in turn introduces challenges for further virtual model reconstruction. We present an initial approach for compensating for such noise, resulting in better temporal variation of the joint coordinates in the captured skeleton data, and explore how the extracted joint position information can be used as part of the virtual human model reconstruction.
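One simple way to realize the noise-compensation idea is to smooth each joint's 3-D coordinates over time and hold the last estimate when the tracker drops a joint. The exponential filter and the data below are an illustrative sketch, not the paper's compensation method.

```python
import numpy as np

def smooth_joints(frames, alpha=0.3):
    """Exponentially smooth per-joint 3-D coordinates; NaN marks a dropped joint."""
    est, out = None, []
    for joints in frames:                 # joints: (J, 3) array of positions
        if est is None:
            est = np.array(joints, float)
        else:
            obs = np.where(np.isnan(joints), est, joints)  # fill dropouts
            est = alpha * obs + (1 - alpha) * est
        out.append(est.copy())
    return out

frames = [
    np.array([[0.0, 1.0, 2.0]]),
    np.array([[0.1, 1.2, 2.0]]),
    np.array([[np.nan, np.nan, np.nan]]),  # tracker lost the joint this frame
]
smoothed = smooth_joints(frames)
```

Holding the previous estimate through dropout frames keeps the avatar's joints from snapping to spurious positions, at the cost of a small lag controlled by `alpha`.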

Document type: 
Article

Toward Long-Term FMG Model-Based Estimation of Applied Hand Force in Dynamic Motion During Human–Robot Interactions

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-07-07
Abstract: 

Physical human-robot interaction (pHRI) is reliant on human actions and can be addressed by studying human upper-limb motions during interactions. Force myography (FMG) signals, which detect muscle contractions, can be useful in developing machine learning algorithms as controls. In this paper, a novel long-term calibrated FMG-based trained model is presented to estimate applied force in dynamic motion during real-time interactions between a human and a linear robot. The proposed FMG-based pHRI framework was investigated in new, unseen, real-time scenarios for the first time. Initially, a long-term reference dataset (multiple source distributions) of upper-limb FMG data was generated as five participants interacted with the robot, applying force in five different dynamic motions. Ten other participants interacted with the robot in two intended motions to evaluate the out-of-distribution (OOD) target data (new, unlearned), which differed from the population data. Two practical scenarios were considered for assessment: i) a participant applied force in a new, unlearned motion (scenario 1), and ii) a new, unlearned participant applied force in an intended motion (scenario 2). In each scenario, a few long-term FMG-based models were trained using a baseline dataset [the reference dataset (scenarios 1, 2) and/or a learnt participant dataset (scenario 1)] and a calibration dataset (collected during evaluation). Real-time evaluation showed that the proposed long-term calibrated FMG-based models (LCFMG) could achieve estimation accuracies of 80%-94% in all scenarios. These results are useful towards integrating and generalizing human activity data in a robot control scheme by avoiding an extensive HRI training phase in regular applications.

Document type: 
Article

Cloud and Cloud Shadow Segmentation for Remote Sensing Imagery via Filtered Jaccard Loss Function and Parametric Augmentation

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-04-02
Abstract: 

Cloud and cloud shadow segmentation are fundamental processes in optical remote sensing image analysis. Current methods for cloud/shadow identification in geospatial imagery are not as accurate as they should be, especially in the presence of snow and haze. This paper presents a deep learning-based framework for the detection of cloud/shadow in Landsat 8 images. Our method benefits from a convolutional neural network, Cloud-Net+ (a modification of our previously proposed Cloud-Net), that is trained with a novel loss function (Filtered Jaccard Loss). The proposed loss function is more sensitive to the absence of foreground objects in an image and penalizes/rewards the predicted mask more accurately than other common loss functions. In addition, a sunlight direction-aware data augmentation technique is developed for the task of cloud shadow detection to extend the generalization ability of the proposed model by expanding existing training sets. The combination of Cloud-Net+, the Filtered Jaccard Loss function, and the proposed augmentation algorithm delivers superior results on four public cloud/shadow detection datasets. Our experiments on the Pascal VOC dataset exemplify the applicability and quality of the proposed network and loss function in other computer vision applications.
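For context, the soft (differentiable) Jaccard loss that the Filtered Jaccard Loss builds on is one minus intersection over union of predicted probabilities and ground truth. The filtering rule for foreground-free images is specific to the paper, so only the base loss is sketched here on a toy mask.

```python
import numpy as np

def soft_jaccard_loss(pred, target, eps=1e-7):
    """1 - IoU computed on soft predictions; eps guards the empty-mask case."""
    inter = np.sum(pred * target)
    union = np.sum(pred) + np.sum(target) - inter
    return 1.0 - (inter + eps) / (union + eps)

target = np.array([[0, 1], [1, 1]], float)     # toy 2x2 cloud mask
perfect = soft_jaccard_loss(target, target)    # ≈ 0: perfect overlap
```

The abstract's point is that a plain IoU-based loss behaves poorly when an image contains no foreground at all (union approaches zero), which is exactly the case the filtered variant is designed to handle.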

Document type: 
Article

Developing a Community of Practice Around an Open Source Energy Modelling Tool

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-04-02
Abstract: 

Energy modelling is critical for addressing challenges such as integrating variable renewable energy and mitigating climate impacts. This paper describes the updated code management structure and code updates, the revised community forum, and the outreach activities that have built a vibrant community of practice around OSeMOSYS. The code management structure has allowed code improvements to be incorporated into the model, the community forum provides users with a place to ask and answer questions, and the outreach activities connect members of the community. Overall, these three pillars show how a community of practice can be built around an open source tool and provide an example for other developers and users of open source software wanting to build a community of practice.

Document type: 
Article

Toward Design of a Drip-Stand Patient Follower Robot

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-03-09
Abstract: 

A person following robot is an application of service robotics that primarily focuses on human-robot interaction, for example, in security and health care. This paper explores some of the design and development challenges of a patient follower robot. Our motivation stemmed from the common mobility challenges associated with patients holding onto and pulling a medical drip stand. Unlike other designs for person following robots, the proposed design objectives need to preserve as much patient privacy as possible while accommodating the operational challenges of the hospital environment. We placed a single camera close to the ground, whose narrower field of view helps preserve patient privacy. Through a unique design of artificial markers placed on various hospital clothing, we have shown how the visual tracking algorithm can determine the spatial location of the patient with respect to the robot. The robot control algorithm is implemented in three parts: (a) patient detection; (b) distance estimation; and (c) trajectory control. For patient detection, the proposed algorithm utilizes two complementary tools for target detection, namely, template matching and colour histogram comparison. We applied a pinhole camera model to estimate the distance from the robot to the patient. We proposed a novel movement trajectory planner that maintains the dynamic tipping stability of the robot by adjusting the peak acceleration. The paper further demonstrates the practicality of the proposed design through several experimental case studies.
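The pinhole-model distance estimation reduces to one similar-triangles relation: for a marker of known physical height H, focal length f in pixels, and measured marker height h in pixels, the range is Z ≈ f·H/h. The calibration numbers below are illustrative, not the robot's actual parameters.

```python
def pinhole_distance(f_px, real_h_m, image_h_px):
    """Range (m) from camera to a marker of known size, pinhole camera model."""
    return f_px * real_h_m / image_h_px

# A 0.30 m marker appearing 90 px tall through a 600 px focal-length camera.
z = pinhole_distance(f_px=600.0, real_h_m=0.30, image_h_px=90.0)  # → 2.0 m
```

Because the markers are of fixed, known size on the hospital clothing, a single low-mounted camera suffices for range estimation without a depth sensor.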

Document type: 
Article

A Dataset of Labelled Objects on Raw Video Sequences

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-12-26
Abstract: 

We present an object-labelled dataset called SFU-HW-Objects-v1, which contains object labels for a set of raw video sequences. The dataset can be useful for cases where both object detection accuracy and video coding efficiency need to be evaluated on the same dataset. Object ground truths for 18 of the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences have been labelled. The object categories used for the labelling are based on the Common Objects in Context (COCO) labels. A total of 21 object classes, out of the 80 original COCO label classes, are found in the test sequences. Brief descriptions of the labelling process and the structure of the dataset are presented.

Document type: 
Article