Pierre-Etienne Martin, PhD thesis (abstract)

The recognition of actions from videos is one of the major challenges in computer vision. Despite intensive research, differentiating and recognising similar actions remains difficult. This thesis focuses on the classification of sports actions from videos, with table tennis as the application domain. We propose a deep learning method to automatically segment and classify the different table tennis strokes. Our objective is to design an intelligent system to analyse the performance of table tennis students, and to give the coach the possibility of adapting training sessions to improve their performance. To this end, we have developed the "TTStroke-21" database, made up of video clips of table tennis exercises recorded by students of the University of Bordeaux Sports Faculty (STAPS). This database was then annotated by professionals in the field using a crowdsourcing platform. The annotations consist of a description of the strokes performed (start, end and type of stroke). In total, 20 different table tennis strokes are considered, plus a rejection class. The recognition of similar actions differs from classical action recognition. Indeed, in classical databases, the background context often provides discriminating information that methods can exploit to classify the action rather than focusing on the action itself. In our case, the inter-class similarity is high, so discriminating visual features are more difficult to extract, and motion plays a key role in characterising the action.
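To make the annotation format concrete, the sketch below shows one way such temporal stroke annotations could be represented in code. It is a minimal Python illustration only; the field names, example values, and label strings are assumptions, not the actual TTStroke-21 schema.

```python
from dataclasses import dataclass

@dataclass
class StrokeAnnotation:
    """One annotated stroke: a temporal segment of a video clip.

    Hypothetical record for illustration; the real TTStroke-21
    annotation schema may use different fields and label names.
    """
    video_id: str     # clip the annotation belongs to
    start_frame: int  # first frame of the stroke
    end_frame: int    # last frame of the stroke
    label: str        # one of the 20 stroke classes, or "rejection"

# Example record (all values invented for illustration).
ann = StrokeAnnotation(video_id="clip_0001", start_frame=120,
                       end_frame=214, label="serve_forehand_topspin")
assert ann.start_frame < ann.end_frame
```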
In this thesis, we introduce a spatio-temporal convolutional neural network with a Twin architecture. This deep learning network takes as input a sequence of RGB images and its estimated optical flow. The RGB data allow our model to capture appearance features, while the optical flow captures motion features. The two streams are processed in parallel using 3D convolutions and are merged at the last stage of the network. The spatio-temporal features extracted by the network allow efficient classification of TTStroke-21 video clips. Our method achieves a classification accuracy of 93.2% on the entire test set. Applied to the joint detection and classification task, it achieves an accuracy of 82.6%. We study performance as a function of the types of input data and of the way they are fused. Different optical flow estimators and their normalisation are tested to improve accuracy. The features extracted by each branch of our architecture are also analysed in order to understand the decision path of our model. Finally, we introduce an attention mechanism to help the model focus on discriminant features and to speed up the training process. We compare our model with other methods on TTStroke-21 and test it on other datasets. We find that models that perform well on conventional action databases do not always perform as well on our database of similar actions. The work presented in this thesis has been validated by an international journal publication, five international conference papers, two international workshop papers, and a recurring task at the MediaEval benchmark workshop, in which participants apply their action recognition methods to our TTStroke-21 database. Two more international workshop papers are in preparation, as well as a book chapter.
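For illustration, the sketch below outlines what such a Twin spatio-temporal network can look like in PyTorch: two parallel stacks of 3D convolutions, one over the RGB clip and one over the optical flow, merged at the end. The layer counts, channel widths, input resolution, and fusion by feature concatenation are assumptions made to keep the example small and runnable; they are not the exact configuration used in the thesis.

```python
import torch
import torch.nn as nn

class Branch3D(nn.Module):
    """One stream of the Twin network: a small stack of 3D convolutions."""
    def __init__(self, in_channels):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
        )

    def forward(self, x):
        return self.features(x)

class TwinSpatioTemporalCNN(nn.Module):
    """Two parallel 3D-conv streams (RGB and optical flow), fused late."""
    def __init__(self, num_classes=21):  # 20 strokes + 1 rejection class
        super().__init__()
        self.rgb_branch = Branch3D(in_channels=3)   # RGB frames
        self.flow_branch = Branch3D(in_channels=2)  # (dx, dy) flow fields
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Fusion by concatenation is an assumption for this sketch.
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, rgb, flow):
        # rgb: (batch, 3, T, H, W); flow: (batch, 2, T, H, W)
        f_rgb = self.pool(self.rgb_branch(rgb)).flatten(1)
        f_flow = self.pool(self.flow_branch(flow)).flatten(1)
        return self.classifier(torch.cat([f_rgb, f_flow], dim=1))

# Toy forward pass on random tensors shaped like a short clip.
model = TwinSpatioTemporalCNN()
rgb = torch.randn(1, 3, 16, 112, 112)
flow = torch.randn(1, 2, 16, 112, 112)
logits = model(rgb, flow)
print(logits.shape)  # torch.Size([1, 21])
```

Merging only at the last stage, as described above, lets each stream specialise in appearance or motion features before the classifier combines them.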
