Full-Body Interaction Lab
Context:
Laboratory at Universitat Pompeu Fabra that studies full-body interaction, where I developed my master's thesis.
Technologies:
Mixed-reality, Interaction, Computer Vision, Deep Learning, Python, C#, Unity
Description:
The Full-Body Interaction Lab has a mixed-reality interactive installation consisting of two projectors that display the environment and four cameras that detect and track the players in the room. One of the purposes of this installation is to play different games developed in Unity, which aim to study the interaction of children with autism spectrum disorder cooperating with children without it. The tracking system had worked for years with traditional computer vision methods, but new games exposed performance limitations, so it needed to be improved.
My master's thesis consisted of redesigning this tracking system using Deep Learning methods. This presented several challenges, such as the need for real-time detection, identity confusion between players, and the connection to the Unity system. The proposed pipeline included preprocessing of the camera images, prediction of the players' positions with the You Only Look Once (YOLO) neural network model, and a tracking module based on a linear Kalman filter. The final implementation was done in Python, using computer vision and Deep Learning libraries such as OpenCV and TensorFlow.
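The tracking step can be illustrated with a minimal linear Kalman filter that smooths the (x, y) positions YOLO produces each frame. This is only a sketch under a constant-velocity assumption; the class name, matrix values, and noise parameters are illustrative choices, not the thesis implementation.

```python
import numpy as np

class LinearKalmanFilter:
    """Constant-velocity Kalman filter for smoothing 2-D player positions.

    State vector: [x, y, vx, vy]. Illustrative sketch only; the noise
    covariances below are assumed values, not tuned parameters.
    """

    def __init__(self, dt=1.0, process_noise=1e-2, measurement_noise=1.0):
        # State transition: position advances by velocity * dt.
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        # Only (x, y) is observed (e.g. a detection's box center).
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * process_noise      # process noise covariance
        self.R = np.eye(2) * measurement_noise  # measurement noise covariance
        self.x = np.zeros(4)                    # state estimate
        self.P = np.eye(4) * 500.0              # large initial uncertainty

    def predict(self):
        # Project the state and covariance one frame ahead.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]

    def update(self, z):
        # Correct the prediction with the measured position z = (x, y).
        z = np.asarray(z, dtype=float)
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

# Track a player moving diagonally at one unit per frame.
kf = LinearKalmanFilter(dt=1.0)
for t in range(1, 21):
    kf.predict()
    estimate = kf.update((float(t), float(t)))
```

In a real loop, `predict()` also bridges frames where a player is momentarily occluded and the detector returns nothing, which is one reason a Kalman filter helps against confusion between players.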
Lab's website: upf.edu/web/fubintlab