Engineering Science, School of

Experimental Study of a Deep-Learning RGB-D Tracker for Virtual Remote Human Model Reconstruction

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-09-16
Abstract: 

Tracking a person's body movements in their natural living environment is a challenging undertaking. Such tracking information can be used as part of detecting the onset of anomalies in movement patterns or as part of a remote monitoring environment. The tracking information can be mapped and visualized using a virtual avatar model of the tracked person. This paper presents an initial experimental study of a commercially available deep-learning body tracking system based on an RGB-D sensor for virtual human model reconstruction. We carried out our study in an indoor environment under natural conditions. To assess the tracker's performance, we experimentally study its output, a skeleton (stick-figure) data structure, under several conditions in order to observe its robustness and identify its drawbacks. In addition, we show how the generic skeleton model can be mapped for virtual human model reconstruction. We found that the deep-learning tracking approach using an RGB-D sensor is susceptible to various environmental factors, which introduce noise into the estimated locations of the skeleton joints. This in turn introduces challenges for further virtual model reconstruction. We present an initial approach for compensating for such noise, yielding smoother temporal variation of the joint coordinates in the captured skeleton data, and we explore how the extracted joint position information can be used as part of the virtual human model reconstruction.
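
The noise-compensation idea described above can be illustrated with a simple temporal filter. The snippet below is a minimal sketch, not the authors' implementation; it assumes the tracker delivers per-frame joint coordinates as a NumPy array, applies exponential smoothing, and carries the last valid estimate forward when a joint is missing.

    import numpy as np

    def smooth_joints(frames, alpha=0.3):
        # frames: (n_frames, n_joints, 3) array of joint positions; missing joints are NaN
        # alpha:  smoothing factor in (0, 1]; smaller values smooth more strongly
        smoothed = np.empty_like(frames)
        state = np.full_like(frames[0], np.nan)        # running per-joint estimate
        for t, joints in enumerate(frames):
            seen = ~np.isnan(joints).any(axis=-1)      # joints detected in this frame
            new = seen & np.isnan(state).any(axis=-1)  # first time a joint is observed
            state[new] = joints[new]
            upd = seen & ~new
            state[upd] = alpha * joints[upd] + (1 - alpha) * state[upd]
            smoothed[t] = state                        # undetected joints carry forward
        return smoothed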

Document type: 
Article

Toward Long-Term FMG Model-Based Estimation of Applied Hand Force in Dynamic Motion During Human–Robot Interactions

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-07-07
Abstract: 

Physical human-robot interaction (pHRI) is reliant on human actions and can be addressed by studying human upper-limb motions during interactions. Force myography (FMG) signals, which detect muscle contractions, can be useful in developing machine learning algorithms for control. In this paper, a novel long-term calibrated FMG-based trained model is presented to estimate applied force in dynamic motion during real-time interactions between a human and a linear robot. The proposed FMG-based pHRI framework was investigated in new, unseen, real-time scenarios for the first time. Initially, a long-term reference dataset (multiple source distributions) of upper-limb FMG data was generated as five participants interacted with the robot, applying force in five different dynamic motions. Ten other participants interacted with the robot in two intended motions to evaluate the out-of-distribution (OOD) target data (new, unlearned), which differed from the population data. Two practical scenarios were considered for assessment: i) a participant applied force in a new, unlearned motion (scenario 1), and ii) a new, unlearned participant applied force in an intended motion (scenario 2). In each scenario, several long-term FMG-based models were trained using a baseline dataset [reference dataset (scenarios 1 and 2) and/or a learnt participant dataset (scenario 1)] and a calibration dataset (collected during evaluation). Real-time evaluation showed that the proposed long-term calibrated FMG-based models (LCFMG) could achieve estimation accuracies of 80%-94% in all scenarios. These results are useful towards integrating and generalizing human activity data in a robot control scheme by avoiding an extensive HRI training phase in regular applications.
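
As a rough illustration of the calibration idea described above (not the authors' LCFMG pipeline), the sketch below combines a pre-recorded reference dataset with a small calibration set collected from the new user or motion, then fits a single regressor to estimate applied force from FMG samples. The file names, weighting, and choice of regressor are assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Assumed arrays: FMG feature vectors (one row per sample) and applied-force labels.
    X_ref, y_ref = np.load("reference_fmg.npy"), np.load("reference_force.npy")
    X_cal, y_cal = np.load("calibration_fmg.npy"), np.load("calibration_force.npy")

    # Weight the small calibration set more heavily so it can shift the model
    # toward the new participant/motion without discarding the reference data.
    X_train = np.vstack([X_ref, X_cal])
    y_train = np.concatenate([y_ref, y_cal])
    weights = np.concatenate([np.ones(len(y_ref)), 5.0 * np.ones(len(y_cal))])

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train, sample_weight=weights)

    # Real-time use: estimate force from a newly acquired FMG sample.
    # force_estimate = model.predict(new_fmg_sample.reshape(1, -1))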

Document type: 
Article

Cloud and Cloud Shadow Segmentation for Remote Sensing Imagery via Filtered Jaccard Loss Function and Parametric Augmentation

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-04-02
Abstract: 

Cloud and cloud shadow segmentation are fundamental processes in optical remote sensing image analysis. Current methods for cloud/shadow identification in geospatial imagery are not as accurate as they should be, especially in the presence of snow and haze. This paper presents a deep learning-based framework for the detection of cloud/shadow in Landsat 8 images. Our method benefits from a convolutional neural network, Cloud-Net+ (a modification of our previously proposed Cloud-Net), that is trained with a novel loss function (Filtered Jaccard Loss). The proposed loss function is more sensitive to the absence of foreground objects in an image and penalizes/rewards the predicted mask more accurately than other common loss functions. In addition, a sunlight direction-aware data augmentation technique is developed for the task of cloud shadow detection to extend the generalization ability of the proposed model by expanding existing training sets. The combination of Cloud-Net+, the Filtered Jaccard Loss function, and the proposed augmentation algorithm delivers superior results on four public cloud/shadow detection datasets. Our experiments on the Pascal VOC dataset exemplify the applicability and quality of the proposed network and loss function in other computer vision applications.
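
For readers unfamiliar with Jaccard-style segmentation losses, the sketch below shows a generic soft Jaccard (IoU) loss in PyTorch. This is not the paper's Filtered Jaccard Loss, only the baseline it modifies, and the tensor shapes are assumptions; the filtered variant additionally changes how images with no foreground pixels are penalized, as detailed in the paper.

    import torch

    def soft_jaccard_loss(pred, target, eps=1e-6):
        # pred:   predicted mask probabilities, shape (batch, H, W), values in [0, 1]
        # target: ground-truth binary mask,     shape (batch, H, W)
        pred, target = pred.flatten(1), target.flatten(1)
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1) - intersection
        return (1.0 - (intersection + eps) / (union + eps)).mean()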

Document type: 
Article

Developing a Community of Practice Around an Open Source Energy Modelling Tool

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-04-02
Abstract: 

Energy modelling is critical for addressing challenges such as integrating variable renewable energy and addressing climate impacts. This paper describes the updated code management structure and code updates, the revised community forum, and the outreach activities that have built a vibrant community of practice around OSeMOSYS. The code management structure has allowed code improvements to be incorporated into the model, the community forum provides users with a place to ask and answer questions, and the outreach activities connect members of the community. Overall, these three pillars show how a community of practice can be built around an open source tool and provide an example for other developers and users of open source software wanting to build a community of practice.

Document type: 
Article

Toward Design of a Drip-Stand Patient Follower Robot

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-03-09
Abstract: 

A person-following robot is an application of service robotics that primarily focuses on human-robot interaction, for example, in security and health care. This paper explores some of the design and development challenges of a patient follower robot. Our motivation stemmed from the mobility challenges commonly associated with patients holding on to and pulling a medical drip stand. Unlike other designs for person-following robots, the proposed design must preserve patient privacy as much as possible while accommodating the operational constraints of the hospital environment. We placed a single camera close to the ground, which results in a narrower field of view and helps preserve patient privacy. Through a unique design of artificial markers placed on various hospital clothing, we show how the visual tracking algorithm can determine the spatial location of the patient with respect to the robot. The robot control algorithm is implemented in three parts: (a) patient detection; (b) distance estimation; and (c) trajectory control. For patient detection, the proposed algorithm utilizes two complementary tools for target detection, namely, template matching and colour histogram comparison. We applied a pinhole camera model to estimate the distance from the robot to the patient. We proposed a novel movement trajectory planner that maintains the dynamic tipping stability of the robot by adjusting the peak acceleration. The paper further demonstrates the practicality of the proposed design through several experimental case studies.
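
The distance-estimation step can be illustrated with the standard pinhole relation. The snippet below is a minimal sketch; the marker height and focal length are placeholder values, not the paper's calibration.

    def pinhole_distance(focal_length_px, marker_height_m, marker_height_px):
        # Standard pinhole model: distance = focal_length * real_height / image_height.
        return focal_length_px * marker_height_m / marker_height_px

    # Example: a 0.20 m marker imaged at a height of 85 px with a 700 px focal length
    # places the patient roughly 1.65 m from the robot.
    print(pinhole_distance(700.0, 0.20, 85.0))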

Document type: 
Article

A Dataset of Labelled Objects on Raw Video Sequences

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-12-26
Abstract: 

We present an object-labelled dataset called SFU-HW-Objects-v1, which contains object labels for a set of raw video sequences. The dataset can be useful in cases where both object detection accuracy and video coding efficiency need to be evaluated on the same content. Object ground truths have been labelled for 18 of the High Efficiency Video Coding (HEVC) v1 Common Test Conditions (CTC) sequences. The object categories used for the labelling are based on the Common Objects in Context (COCO) labels. A total of 21 object classes, out of the 80 original COCO label classes, are found in the test sequences. Brief descriptions of the labelling process and the structure of the dataset are presented.

Document type: 
Article

Scanning and Actuation Techniques for Cantilever-Based Fiber Optic Endoscopic Scanners—A Review

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-01-02
Abstract: 

Endoscopes are used routinely in modern medicine for in-vivo imaging of luminal organs. Technical advances in micro-electro-mechanical systems (MEMS) and optics have enabled the further miniaturization of endoscopes, resulting in the ability to image previously inaccessible small-caliber luminal organs and enabling the early detection of lesions and other abnormalities in these tissues. The development of scanning fiber endoscopes supports the fabrication of small cantilever-based imaging devices without compromising image resolution. The size of an endoscope is highly dependent on the actuation and scanning method used to illuminate the target image area. Different actuation methods used in the design of small-sized cantilever-based endoscopes are reviewed in this paper, along with their working principles, advantages and disadvantages, generated scanning patterns, and applications.

Document type: 
Article

Embedding the United Nations Sustainable Development Goals Into Energy Systems Analysis: Expanding the Food–Energy–Water Nexus

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-01-05
Abstract: 

Background

There have been numerous studies that consider the nexus interactions between energy systems, land use, water use, and climate adaptation and impacts. These studies have filled a gap in the literature, allowing for more effective policymaking by considering the trade-offs between land use, energy infrastructure, and the use of water for agriculture and for providing energy services. Although these studies fill a significant gap in the modelling literature, we argue that more work is needed to effectively consider policy trade-offs between the 17 United Nations Sustainable Development Goals (SDGs) to avoid missing important interactions.

 

Results

We examine the 17 SDGs individually to determine whether each should be included in a modelling framework and what the challenges of doing so are. We show that the nexus of climate, land, energy and water needs to be expanded to consider the economic well-being of both individuals and the greater economy, health benefits and impacts, as well as land use in terms of both food production and sustaining ecological diversity and natural capital. Such an expansion will allow energy systems models to better address the trade-offs and synergies inherent in the SDGs. Although there are some challenges with expanding the nexus in this way, we feel the challenges are generally modest and that many model structures can already incorporate many of these factors without significant modification.

 

Finally, we argue that SDGs 16 and 17 cannot be met without open-source models and open data to allow for transparent analysis that can be used and reused with a low cost of entry for modellers from less well-off nations.

 

Conclusions

To effectively address the SDGs, there is a need to expand the common definition of the nexus of climate, land, energy, and water to include the synergies and trade-offs of health impacts, ecological diversity, and the system requirements for human and environmental well-being. In most cases, expanding models to incorporate these factors will be relatively straightforward, but open models and analysis are needed to fully support the SDGs.

Document type: 
Article

Fabrication of a Stepped Optical Fiber Tip for Miniaturized Scanner

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2021-01-01
Abstract: 

Advancements in the fabrication of miniaturized optical scanners would benefit from micrometer-sized optical fiber tips. A change in the cross section of an optical fiber tip is often accompanied by a long tapered region. This paper considers the reduction of the cross section of double-clad optical fibers (DCFs) with a flat interface surface at the region where the cross section changes (i.e., with an abrupt change in the cross section). Various methods, such as heating and pulling, wet etching using hydrofluoric acid (HF), and etching in a vaporous state, were explored. The etching rate and its dependence on the temperature of the etchant solution were also determined. Optical fibers etch linearly with time, and the etching speed depends on the temperature of the etchant solution, following a parabolic trend. The flatness of the surface at the cross-section change is an important parameter in the fabrication of submillimeter-sized scanners, where the light is transmitted through the core of the DCF and the reflected light is collected through the inner cladding of the same fiber, or vice versa. The surface flatness at the interface was compared among fiber samples prepared using the aforementioned techniques. This research illustrates that wet chemical etching performed while blocking the capillary rise of the etchant solution along the fiber provides advantages over the heating-and-pulling technique in terms of the light intensity transmitted to the target sample and the reflected light collected through the interface of the etched cladding.
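
A minimal way to express the etch behaviour described above is a rate that is constant in time but varies parabolically with temperature. The coefficients below are placeholders for illustration only, not the paper's measured values.

    def etch_depth_um(time_min, temp_c, a=0.05, b=1.0, t0=25.0):
        # Linear-in-time etching with a parabolic temperature dependence:
        #   rate(T) = b + a * (T - t0)**2   [um/min]   (placeholder coefficients)
        rate = b + a * (temp_c - t0) ** 2
        return rate * time_min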

Document type: 
Article

FMG- and RNN-Based Estimation of Motor Intention of Upper-Limb Motion in Human-Robot Collaboration

Peer reviewed: 
Yes, item is peer reviewed.
Date created: 
2020-12-03
Abstract: 

Research on human-robot interactions has been driven by the increasing employment of robotic manipulators in manufacturing and production. Toward developing more effective human-robot collaboration during shared tasks, this paper proposes an interaction scheme that employs machine learning algorithms to interpret biosignals acquired from the human user and accordingly plan the robot's reaction. More specifically, a force myography (FMG) band was wrapped around the user's forearm and used to collect information about muscle contractions during a set of collaborative tasks between the user and an industrial robot. A recurrent neural network model was trained to estimate the user's hand movement pattern based on the collected FMG data and to determine whether the performed motion was random or intended as part of the predefined collaborative tasks. Experimental evaluation during two practical collaboration scenarios demonstrated that the trained model could successfully estimate the category of hand motion, i.e., intended or random, such that the robot either assisted with performing the task or changed its course of action to avoid collision. Furthermore, proximity sensors were mounted on the robotic arm to investigate whether monitoring the distance between the user and the robot affected the outcome of the collaborative effort. While further investigation is required to rigorously establish the safety of the human worker, this study demonstrates the potential of FMG-based wearable technologies to enhance human-robot collaboration in industrial settings.
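
A minimal sketch of the kind of recurrent classifier the abstract describes (intended vs. random motion from FMG sequences) is shown below. The layer sizes, window length, channel count, and use of PyTorch are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class IntentClassifier(nn.Module):
        # Classifies a window of FMG samples as intended (1) or random (0) motion.
        def __init__(self, n_sensors=16, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):                         # x: (batch, timesteps, n_sensors)
            _, (h, _) = self.lstm(x)                  # h: (1, batch, hidden) final state
            return torch.sigmoid(self.head(h[-1]))   # (batch, 1) probability of intent

    model = IntentClassifier()
    demo = torch.randn(8, 50, 16)                     # 8 windows of 50 FMG samples
    print(model(demo).shape)                          # torch.Size([8, 1])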

Document type: 
Article