Cross-modal self-adaptative perception in cognitive robotics (CMES)
Context:
Ongoing project of the Research Group on Media Technologies at La Salle - Universitat Ramon Llull, funded by the Spanish Ministry of Science and Innovation. I worked here as a researcher, contributing to the project's development, discussing results, and writing reports and publications.
Technologies:
Robotics, Perception, Haptics, Computer Vision, Python, Matlab
Description:
The aim of the project was to study cross-modal perception in robots' cognitive systems. During the project, we developed a haptic capture system that used low-level features to recognize 3D objects with Machine Learning models, in both digital and real scenarios. This system was also used in conjunction with a camera to study how cross-modality can improve a robot's cognitive system.
My role in this project consisted of programming digital physics simulations to gather data and applying both signal and image processing techniques to process it. In parallel, I contributed to the implementation and programming of the real robot, which included 3D modeling and printing. The project also allowed me to work in an interdisciplinary environment with a team of researchers and students.
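As a rough illustration of the kind of pipeline described above (simulated grasps producing low-level haptic features that feed an object-recognition model), here is a minimal Python sketch. All names, features, and the nearest-centroid classifier are assumptions for illustration, not the project's actual methods or models.

```python
import numpy as np

# Illustrative sketch only: the feature set and classifier below are
# hypothetical stand-ins for the project's actual pipeline.
rng = np.random.default_rng(0)

def simulate_grasp(object_id, n=20):
    """Stand-in for the physics simulation: returns n low-level haptic
    feature vectors (e.g. mean pressure, contact area, stiffness proxy)
    for simulated grasps of the given object."""
    centers = {"cube": [0.8, 0.3, 0.5], "sphere": [0.4, 0.7, 0.2]}
    return rng.normal(centers[object_id], 0.05, size=(n, 3))

def fit_centroids(samples):
    """Nearest-centroid model: one prototype feature vector per object."""
    return {obj: feats.mean(axis=0) for obj, feats in samples.items()}

def predict(model, x):
    """Classify a haptic feature vector by its closest prototype."""
    return min(model, key=lambda obj: np.linalg.norm(x - model[obj]))

train = {obj: simulate_grasp(obj) for obj in ("cube", "sphere")}
model = fit_centroids(train)
print(predict(model, simulate_grasp("cube", 1)[0]))  # prints "cube"
```

In the real system, the synthetic grasp generator would be replaced by the digital physics simulation or the robot's haptic sensors, and the toy classifier by the trained Machine Learning models.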
Project Website: salleurl.edu/en/node/24941
Publications: Ruiz, C., de Jesús, Ò., Serrano, C., González, A., Nonell, P., Metaute, A., & Miralles, D. (Accepted/In press). Bridging realities: training visuo-haptic object recognition models for robots using 3D virtual simulations. Visual Computer. https://doi.org/10.1007/s00371-024-03455-7
Video of the prototype of the robot's haptic capture system: