In this thesis, we have developed a semi-autonomous behavior that allows a drone to be controlled with less operator effort. We have also presented a technique that enables autonomous repetition of a previously traversed route using a visual navigation system. Our first application demonstrates flying a drone using a vision-based control (visual servoing) method, specifically by tracking selected targets in the image view. In the second application, a drone equipped with a monocular camera has been driven manually along a path. Invariant semantic features (i.e., objects) have been extracted using an object detection neural network, YOLO. Using these features, we show that the drone can repeat the traversed route autonomously, independent of lighting conditions and even appearance changes.
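The visual servoing idea described above can be sketched as a simple proportional controller that maps a tracked target's pixel offset from the image center to velocity commands. This is a minimal illustrative sketch, not the thesis implementation; the function name, gain value, and command mapping are assumptions for demonstration.

```python
def servo_command(target_px, image_size, gain=0.005):
    """Illustrative image-based visual servoing step (assumed form).

    Maps the tracked target's pixel offset from the image centre to a
    yaw-rate and vertical-velocity command via proportional control.
    """
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    ex = target_px[0] - cx   # horizontal error in pixels
    ey = target_px[1] - cy   # vertical error in pixels
    yaw_rate = -gain * ex    # turn the drone toward the target
    v_z = -gain * ey         # climb or descend toward the target
    return yaw_rate, v_z
```

With the target centered, both commands are zero; a target right of center yields a negative (rightward-correcting) yaw rate under this sign convention.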