Hand gesture recognition based on the Raspberry Pi camera and TensorFlow. All the steps are described, from dataset creation to the final deployment.
The idea behind this project is to create a device that can drive an actuator based on the finger gestures of a hand.
The project specializes in recognizing streaming images of the hand taken by the Raspberry Pi camera.
The dataset of images used to train the model was created ad hoc, with images taken from the Raspberry Pi camera only (no other devices) against a neutral background.
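For illustration, a minimal capture script for building such a dataset might look like the sketch below; the `picamera` module, the resolution, the image count, and the `dataset/gesture_a` output path are assumptions for the sketch, not the project's actual code.

```python
# Hedged sketch: capturing dataset images with the Raspberry Pi camera.
# The output path, resolution, and image count are hypothetical.
import time
from picamera import PiCamera

camera = PiCamera(resolution=(640, 480))
camera.start_preview()
time.sleep(2)  # give the sensor time to adjust exposure

# Capture a numbered sequence of images for one gesture class,
# with the hand held against a neutral background.
for i in range(100):
    camera.capture('dataset/gesture_a/img_{:03d}.jpg'.format(i))
    time.sleep(0.5)

camera.stop_preview()
camera.close()
```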
The model is based on transfer learning from the Inception v3 model, customized to the project's requirements: the last layer was removed from Inception v3, and a few new layers were added to be trained on the new dataset and to produce an output for just four classes.
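A sketch of that transfer-learning setup, assuming the Keras build of Inception v3; the source does not specify the exact added layers, so the head below (pooling, a dense layer, dropout) is illustrative.

```python
# Sketch of the transfer-learning setup: Inception v3 without its final
# classification layer, plus a small custom head with four outputs.
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception v3 pre-trained on ImageNet, dropping the top layer.
base = InceptionV3(weights='imagenet', include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # freeze the pre-trained convolutional base

# New head: the layer sizes here are assumptions, one output per gesture class.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(4, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```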
The model was trained on a desktop machine (32 GB RAM plus a GPU) with the images collected and pre-classified earlier. Once the model was trained and tested, it was exported to the Raspberry Pi.
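Continuing with the `model` object from the previous sketch, training and export might look roughly like the following; the directory layout, hyperparameters, and the TensorFlow Lite format are assumptions, since the source does not state the exact pipeline.

```python
# Hedged sketch: training on the desktop, then converting for the Pi.
# Directory names and epoch count are hypothetical; TensorFlow Lite is
# one common deployment format, not necessarily the one used here.
import tensorflow as tf

# Load the pre-classified images (one sub-folder per gesture class).
train_ds = tf.keras.utils.image_dataset_from_directory(
    'dataset/', image_size=(299, 299), batch_size=32,
    label_mode='categorical')

model.fit(train_ds, epochs=10)  # "model" from the transfer-learning sketch

# Convert the trained model for lightweight inference on the Raspberry Pi.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink for the Pi
tflite_model = converter.convert()

with open('gesture_model.tflite', 'wb') as f:
    f.write(tflite_model)
```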