In a nutshell. Gesture recognition today typically relies on complex software and a constrained physical setup. Computer vision in particular dominates this front, and is therefore limited by computing power and physical space: the camera's view and image quality bound what can be recognized. This project's goal is a deft gesture control device that provides simple but direct functions, such as controlling a slideshow presentation or other applications on IoT networks, without the limitations of heavy computation or camera view/quality.
Gesture Control System
Uses various sensors (accelerometers, IMUs, gyroscopes) to recognize gestures and subsequently control an application.
Gesture recognition is implemented by running ML algorithms to create suitable models for prediction.
Depending on sophistication, gesture control can drive other IoT networks, robotics, or simply the screen cursor.
The system is divided into three main components.
Sensor Hub
A wearable glove with five LIS3DH accelerometers and one BNO055 9-DOF IMU sends data through a TCA9548A I2C multiplexer to a Raspberry Pi, which also acts as the MQTT broker.
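A minimal Python sketch of that data path is below, assuming smbus2 for I2C and paho-mqtt for publishing. The mux address (0x70), sensor address (0x18), one-finger-per-channel layout, and the glove/raw topic are illustrative guesses rather than the project's actual configuration, and the BNO055 read is omitted for brevity.

```python
# Sketch: poll five LIS3DH accelerometers behind a TCA9548A I2C mux
# and publish raw samples over MQTT from the Raspberry Pi.
import json
import time

import paho.mqtt.client as mqtt
from smbus2 import SMBus

MUX_ADDR = 0x70      # TCA9548A default address (assumed)
LIS3DH_ADDR = 0x18   # LIS3DH default address with SDO low (assumed)
CHANNELS = range(5)  # one finger sensor per mux channel (assumed)

def read_accel(bus, ch):
    bus.write_byte(MUX_ADDR, 1 << ch)  # select the mux channel
    # OUT_X_L = 0x28; setting the sub-address MSB enables auto-increment
    raw = bus.read_i2c_block_data(LIS3DH_ADDR, 0x28 | 0x80, 6)
    return [int.from_bytes(bytes(raw[i:i + 2]), "little", signed=True)
            for i in (0, 2, 4)]        # x, y, z as signed 16-bit values

client = mqtt.Client()                 # paho-mqtt 1.x style client
client.connect("localhost", 1883)      # the Pi itself is the broker
client.loop_start()

with SMBus(1) as bus:
    for ch in CHANNELS:                # CTRL_REG1: 100 Hz, all axes on
        bus.write_byte(MUX_ADDR, 1 << ch)
        bus.write_byte_data(LIS3DH_ADDR, 0x20, 0x57)
    while True:
        sample = {f"finger{ch}": read_accel(bus, ch) for ch in CHANNELS}
        client.publish("glove/raw", json.dumps(sample))
        time.sleep(0.01)               # ~100 Hz sample loop
```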
Server
An AWS S3 bucket holds the training data for AWS SageMaker to access. SageMaker's built-in XGBoost algorithm is then used for multiclass classification of gestures.
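As a rough illustration of that training step, the following sketch uses the SageMaker Python SDK (v2) to point the built-in XGBoost container at CSV data in S3 with a multiclass objective. The bucket name, gesture count, instance type, and container version are placeholders, not the project's actual settings.

```python
# Sketch: launch a SageMaker training job with the built-in XGBoost
# container for multiclass gesture classification.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # valid inside a SageMaker notebook
region = session.boto_region_name

image = sagemaker.image_uris.retrieve("xgboost", region=region, version="1.5-1")

xgb = Estimator(
    image_uri=image,
    role=role,
    instance_count=1,
    instance_type="ml.m5.large",
    output_path="s3://gesture-training-data/models/",  # placeholder bucket
    sagemaker_session=session,
)
xgb.set_hyperparameters(
    objective="multi:softmax",  # multiclass: one label per gesture
    num_class=5,                # assumed number of gestures
    num_round=100,
)
xgb.fit({"train": TrainingInput("s3://gesture-training-data/train/",
                                content_type="text/csv")})
```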
Application
An Intel Edison hosts a Django webpage (chosen for convenience) and performs different actions (sounding a buzzer, reading temperatures, etc.) based on predictions received over MQTT. In addition, a laptop translates the same predictions into keyboard actions.
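Here is a minimal sketch of the laptop side, assuming predictions arrive as plain-text labels on an MQTT topic and pyautogui injects the keystrokes. The broker hostname, topic name, and gesture-to-key mapping are invented for illustration.

```python
# Sketch: subscribe to the prediction topic and map each predicted
# gesture to a keystroke on the laptop.
import paho.mqtt.client as mqtt
import pyautogui

KEYMAP = {                   # assumed gesture labels
    "swipe_right": "right",  # next slide
    "swipe_left": "left",    # previous slide
    "fist": "b",             # blank screen in most slideshow apps
}

def on_message(client, userdata, msg):
    gesture = msg.payload.decode().strip()
    key = KEYMAP.get(gesture)
    if key:
        pyautogui.press(key)  # emulate the keystroke

client = mqtt.Client()        # paho-mqtt 1.x style client
client.on_message = on_message
client.connect("raspberrypi.local", 1883)  # placeholder broker hostname
client.subscribe("glove/prediction")
client.loop_forever()
```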