Recognition of human actions is crucial in several fields, including medical applications, human-computer interaction, and video surveillance. In this thesis, we develop a joint edge-cloud system for recognizing human actions, comprising embedded devices, a cloud server, and a mobile app. One key requirement of the application is privacy protection, which prohibits transmitting raw video from the devices. At the same time, the computational resources of embedded devices are limited and inflexible due to constraints on CPU/GPU capability, memory, and power supply, so these devices cannot execute overly complex algorithms. To address this limitation, we propose a joint edge-cloud computing approach in which the embedded device runs a relatively lightweight human pose estimation algorithm to convert the captured human actions into skeleton data. The skeleton data are then transmitted to the cloud, preserving user privacy and reducing transmission costs. The cloud, in turn, leverages its more powerful computational resources to run advanced algorithms that recognize more complex human activities from the input skeleton data. We explore various implementation options on Amazon Web Services (AWS) as the cloud platform, analyzing their feasibility and costs to select the best solution, and conduct experiments to evaluate the proposed system. The results of this thesis can help developers and researchers deploy their algorithms on the cloud, and the findings may also benefit other artificial intelligence applications.
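The privacy and bandwidth argument above can be sketched in miniature: the edge device sends only skeleton keypoints, not video frames. The snippet below is a minimal illustration, not the thesis implementation; the payload schema, device ID, and COCO-style 17-keypoint skeleton are assumptions for the sake of the example.

```python
import json

def frame_to_payload(keypoints, device_id, timestamp):
    """Serialize one frame's skeleton into a compact JSON payload.

    keypoints: list of (x, y, confidence) tuples as produced by a
    pose estimator running on the embedded device (hypothetical here).
    """
    return json.dumps({
        "device": device_id,
        "ts": timestamp,
        "skeleton": [[round(x, 1), round(y, 1), round(c, 2)]
                     for x, y, c in keypoints],
    })

# Example: 17 dummy keypoints for a single frame (COCO-style skeleton).
kps = [(100.0 + i, 200.0 + i, 0.9) for i in range(17)]
payload = frame_to_payload(kps, "edge-01", 1700000000)

# The skeleton payload is a few hundred bytes, versus roughly 2.7 MB
# for one raw 1280x720 RGB frame -- illustrating why transmitting
# skeletons instead of video cuts cost and avoids sending imagery.
print(len(payload))
```

On the cloud side, such payloads could be accumulated into a sequence and fed to a skeleton-based action recognition model; the exact AWS service mix is discussed in the thesis itself.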
Copyright is held by the author(s).
This thesis may be printed or downloaded for non-commercial research and scholarly purposes.
Supervisor: Liang, Jie