DJI Tello Hand Gesture control


🏆 This project was featured in the official Google Developers Blog


The main goal of this project is to control the drone using hand gestures, without gloves or any additional equipment. All you need is the camera on the drone (or, soon, your smartphone), a laptop, and a human hand.

demo_gif

Index

  1. Introduction
  2. Setup
    1. Install pip packages
    2. Connect and test Tello
  3. Usage
  4. Adding new gestures
  5. Repository structure

Introduction

This project relies on two main parts: the DJI Tello drone and MediaPipe's fast hand keypoint recognition.

DJI Tello is a perfect drone for any kind of programming experiment. It has a rich Python API (Swift is also available) that gives almost full control of the drone, supports drone swarms, and exposes the camera for computer vision.
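As a taste of that API, here is a minimal djitellopy sketch (not taken from this repository) that connects to the drone, takes off, rotates, and lands:

from djitellopy import Tello

# Minimal Tello flight with djitellopy: connect over WiFi, take off,
# rotate in place, and land again.
tello = Tello()
tello.connect()
print("Battery:", tello.get_battery())

tello.takeoff()
tello.rotate_clockwise(90)
tello.land()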

MediaPipe is an amazing ML platform with many robust solutions such as Face Mesh, hand keypoint detection and Objectron. Moreover, its models can run on mobile platforms with on-device acceleration.

Here is a starter-pack that you need:

starter_pack

Setup

1. Installing pip packages

First, we need to install the Python dependencies. Make sure that you are using Python 3.7.

List of packages

ConfigArgParse == 1.2.3
djitellopy == 1.5
numpy == 1.19.3
opencv_python == 4.5.1.48
tensorflow == 2.4.1
mediapipe == 0.8.2

Install

pip3 install -r requirements.txt

2. Connect Tello

Turn on the drone and connect your computer to its WiFi.

wifi_connection

Next, run the following script to verify connectivity:

python3 tests/connection_test.py

On a successful connection you will see:

1. Connection test:
Send command: command
Response: b'ok'


2. Video stream test:
Send command: streamon
Response: b'ok'

If you get output like this instead, you may need to check your connection with the drone:

1. Connection test:
Send command: command
Timeout exceed on command command
Command command was unsuccessful. Message: False


2. Video stream test:
Send command: streamon
Timeout exceed on command streamon
Command streamon was unsuccessful. Message: False
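For reference, the two checks above boil down to djitellopy calls roughly like the following (a sketch, not the exact contents of tests/connection_test.py):

from djitellopy import Tello

tello = Tello()

# 1. Connection test: sends the SDK "command" instruction.
tello.connect()
print("Battery:", tello.get_battery())

# 2. Video stream test: sends "streamon" and grabs one frame.
tello.streamon()
frame = tello.get_frame_read().frame
print("Frame received:", frame is not None)
tello.streamoff()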

Usage

The most interesting part is the demo. There are two types of control: keyboard and gesture. You can switch between control types during flight. Below is a complete description of both.

Run the following command to start the Tello control:

python3 main.py

This script will open a Python window with a visualization like this:

window

Keyboard control

To control the drone with your keyboard at any time, press the k key.

The following is a list of keys and their actions (a sketch of the corresponding djitellopy calls follows the list):

  • k -> Toggle Keyboard controls
  • g -> Toggle Gesture controls
  • Space -> Take off the drone (if landed) OR land the drone (if in flight)
  • w -> Move forward
  • s -> Move back
  • a -> Move left
  • d -> Move right
  • e -> Rotate clockwise
  • q -> Rotate counter-clockwise
  • r -> Move up
  • f -> Move down
  • Esc -> End program and land the drone
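
For illustration, such a key-to-command mapping boils down to djitellopy calls roughly like this (the helper name, step size, and turn angle are assumptions, not the exact code in main.py):

from djitellopy import Tello

STEP_CM = 30    # assumed distance per key press, in centimetres
TURN_DEG = 30   # assumed rotation per key press, in degrees

def handle_key(tello: Tello, key: str, flying: bool) -> bool:
    """Map one pressed key to a djitellopy command; return the new flying state."""
    if key == ' ':
        tello.land() if flying else tello.takeoff()
        return not flying
    if key == 'w':
        tello.move_forward(STEP_CM)
    elif key == 's':
        tello.move_back(STEP_CM)
    elif key == 'a':
        tello.move_left(STEP_CM)
    elif key == 'd':
        tello.move_right(STEP_CM)
    elif key == 'e':
        tello.rotate_clockwise(TURN_DEG)
    elif key == 'q':
        tello.rotate_counter_clockwise(TURN_DEG)
    elif key == 'r':
        tello.move_up(STEP_CM)
    elif key == 'f':
        tello.move_down(STEP_CM)
    return flying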

Gesture control

Pressing g activates gesture control mode. Here is the full list of gestures currently available.

gestures_list

Adding new gestures

The hand recognition detector lets you add and change training data to retrain the model on your own gestures. Before doing that, it helps to understand the technical details of the detector: how it works and how it can be improved.

Technical details of gesture detector

MediaPipe hand keypoint recognition returns the 3D coordinates of 21 hand landmarks. For our model, we use only the 2D coordinates.

gestures_list
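
For reference, extracting those landmark coordinates as pixel positions from a MediaPipe Hands result looks roughly like this (variable names are illustrative):

import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def extract_landmarks(frame):
    """Return a list of 21 (x, y) pixel coordinates, or None if no hand is found."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    h, w = frame.shape[:2]
    hand = results.multi_hand_landmarks[0]
    return [(int(lm.x * w), int(lm.y * h)) for lm in hand.landmark]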

Then, these points are preprocessed for training the model in the following way.

preprocessing
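
A minimal sketch of one common preprocessing scheme for such keypoint classifiers (coordinates made relative to the wrist, flattened, then normalized); the exact steps in this repository may differ in detail:

import itertools

def preprocess_landmarks(landmark_points):
    """Turn absolute (x, y) pixel coordinates into a normalized feature vector."""
    base_x, base_y = landmark_points[0]                    # wrist as origin
    relative = [(x - base_x, y - base_y) for x, y in landmark_points]
    flat = list(itertools.chain.from_iterable(relative))   # [x0, y0, x1, y1, ...]
    max_value = max(map(abs, flat)) or 1                   # avoid division by zero
    return [v / max_value for v in flat]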

After that, we can use the data to train our model. The keypoint classifier is a simple neural network with the following structure:

model_structure

check here to understand how the architecture was selected
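
For illustration, a compact feed-forward classifier of this kind can be defined as below (the layer sizes and dropout rates are assumptions, not necessarily the exact values used in the repository):

import tensorflow as tf

NUM_CLASSES = 7          # number of gesture classes
NUM_FEATURES = 21 * 2    # 21 landmarks, (x, y) each

# Illustrative keypoint classifier: a small fully connected network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(20, activation='relu'),
    tf.keras.layers.Dropout(0.4),
    tf.keras.layers.Dense(10, activation='relu'),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])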

Creating dataset with new gestures

First, pull the datasets from Git LFS. Here are the instructions for installing LFS. Then run the following commands to pull the default CSV files:

git lfs install
git lfs pull

After that, run main.py and press "n" to enter the key point logging mode (displayed as MODE:Logging Key Point).

writing_mode

If you press "0" to "9", the key points will be added to model/keypoint_classifier/keypoint.csv as shown below.
1st column: Pressed number (class ID), 2nd and subsequent columns: Keypoint coordinates

keypoints_table

In the initial state, 7 types of training data are included, as shown here. If necessary, add new class IDs after the existing ones, or delete existing rows from the CSV to prepare your own training data.
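
Loading such a CSV for training then takes one line per array, for example (a sketch; the notebook may handle the columns slightly differently):

import numpy as np

DATASET_PATH = 'model/keypoint_classifier/keypoint.csv'

# Column 0 holds the class ID; columns 1..42 hold the flattened keypoints.
X = np.loadtxt(DATASET_PATH, delimiter=',', dtype='float32',
               usecols=list(range(1, 21 * 2 + 1)))
y = np.loadtxt(DATASET_PATH, delimiter=',', dtype='int32', usecols=0)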

Notebook for retraining model

Open In Colab

Open Keypoint_model_training.ipynb in Jupyter Notebook or Google Colab. Change the number of training classes (the value of NUM_CLASSES = 3) and the path to the dataset. Then execute all cells and download the .tflite model.

notebook_gif
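
The final step of the notebook, converting the trained Keras model to a .tflite file, generally looks like this (a sketch; the output path matches the repository tree below):

import tensorflow as tf

TFLITE_PATH = 'model/keypoint_classifier/keypoint_classifier.tflite'

# Convert the trained Keras model (see the sketch above) to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open(TFLITE_PATH, 'wb') as f:
    f.write(tflite_model)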

Do not forget to modify or add labels in "model/keypoint_classifier/keypoint_classifier_label.csv"

Grid Search

❗️ Important ❗️ The last part of the notebook is experimental; its main purpose is to test hyperparameters of the model structure. In a nutshell: a grid search with TensorBoard visualization. Feel free to use it for your experiments.

grid_search

Repository structure

│  main.py
│  Keypoint_model_training.ipynb
│  config.txt
│  requirements.txt
│  
├─model
│  └─keypoint_classifier
│      │  keypoint.csv
│      │  keypoint_classifier.hdf5
│      │  keypoint_classifier.py
│      │  keypoint_classifier.tflite
│      └─ keypoint_classifier_label.csv
│ 
├─gestures
│   │  gesture_recognition.py
│   │  tello_gesture_controller.py
│   └─ tello_keyboard_controller.py
│          
├─tests
│   └─connection_test.py
│ 
└─utils
    └─cvfpscalc.py

main.py

The main app, which implements drone control and gesture recognition.
It also includes a mode for collecting training data to add new gestures.

Keypoint_model_training.ipynb

This is a model training script for hand sign recognition.

model/keypoint_classifier

This directory stores files related to gesture recognition.

  • Training data (keypoint.csv)
  • Trained model (keypoint_classifier.tflite)
  • Label data (keypoint_classifier_label.csv)
  • Inference module (keypoint_classifier.py)

gestures/

This directory stores files related to drone controllers and gesture modules.

  • Keyboard controller (tello_keyboard_controller.py)
  • Gesture controller (tello_gesture_controller.py)
  • Gesture recognition module (gesture_recognition.py)

utils/cvfpscalc.py

Module for FPS measurement.
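
As an illustration, an FPS counter of this kind can be built on OpenCV tick counts (a sketch; the class in this repository may differ in details):

from collections import deque
import cv2

class CvFpsCalc:
    """Rolling-average FPS counter based on OpenCV tick counts (illustrative)."""

    def __init__(self, buffer_len=10):
        self._prev_tick = cv2.getTickCount()
        self._ms_per_tick = 1000.0 / cv2.getTickFrequency()
        self._frame_times = deque(maxlen=buffer_len)

    def get(self):
        tick = cv2.getTickCount()
        self._frame_times.append((tick - self._prev_tick) * self._ms_per_tick)
        self._prev_tick = tick
        avg_ms = sum(self._frame_times) / len(self._frame_times)
        return round(1000.0 / avg_ms, 2) if avg_ms else 0.0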

TODO

  • Motion gesture support (LSTM)
  • Web UI for mobile on-device gesture control
  • Add Holistic model support

Reference

Author

Nikita Kiselov (https://github.com/kinivi)

License

tello-gesture-control is licensed under the Apache-2.0 License.
