Mauro Comi

General Profile:

I have a BSc in Mechanical Engineering and an MSc in Data Science with Distinction. My master’s thesis was on Deep Reinforcement Learning for physically-based rendering and computer graphics. Before joining the CDT in Interactive AI, I worked as a Machine Learning research engineer in the autonomous driving domain, where the main challenge I tackled was embedding social awareness into self-driving tasks via Inverse RL and Bayesian approaches.
I am interested in many aspects of Artificial Intelligence, such as Reinforcement Learning, neuroscience, techniques for AI creativity, and AI Ethics.
In my spare time, I enjoy listening to podcasts, cooking, and trying to keep my house plants alive.

Research Project Summary:

Humans use their senses of vision and touch to build a 3D understanding of their physical surroundings and to interact with them. This understanding, which can be achieved by exploring an object's shape and building its 3D representation, is critical in a variety of robotic subfields, such as manipulation.

The current state of 3D shape reconstruction research is primarily concerned with the sense of vision. However, training Computer Vision algorithms for object manipulation is very difficult due to the extremely high-dimensional observation space, which is affected by occlusion and external lighting conditions, among other factors. Data-driven methodologies based on optical tactile sensors have recently been proposed as complementary solutions for object exploration and manipulation. These devices use light to capture information from physical touch and output a tactile image describing the contact surface. This type of sensor has several advantages over vision sensors: it can detect objects even when they are occluded, it captures more detailed contact information, and its smaller observation space simplifies the transfer from simulation to the real world. We therefore argue that combining tactile and vision sensing, rather than relying on vision alone, could result in safer human-robot interactions.
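
To make the idea of a tactile image concrete, here is a minimal, hypothetical sketch of how a contact region might be extracted from such an image by differencing it against a no-contact reference frame. This is an illustration only, not any particular sensor's processing pipeline; all names, shapes, and thresholds are assumptions.

```python
# Hypothetical sketch: extracting the contact region from an optical
# tactile image by differencing against a no-contact reference frame.
import numpy as np

def contact_mask(tactile_img: np.ndarray,
                 reference_img: np.ndarray,
                 threshold: float = 25.0) -> np.ndarray:
    # Pixels that changed noticeably relative to the reference frame
    # are treated as part of the contact surface.
    diff = np.abs(tactile_img.astype(np.float32)
                  - reference_img.astype(np.float32))
    return diff.mean(axis=-1) > threshold  # boolean (H, W) contact mask

# Usage with synthetic frames standing in for real sensor output:
ref = np.random.randint(0, 100, (240, 320, 3), dtype=np.uint8)
img = ref.copy()
img[100:140, 150:200] += 40  # simulate a pressed region
mask = contact_mask(img, ref)  # True where contact is detected
```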

My PhD research will focus on the investigation and development of Machine Learning techniques for 3D shape reconstruction using the rich information provided by open-source optical tactile sensors. Current approaches use a type of deep neural network called a Graph Convolutional Network (GCN) to predict the shape of the target object from sensory inputs. However, recent studies in visual perception have demonstrated that implicit representations, such as Neural Radiance Fields (NeRF) or Signed Distance Functions (SDF), achieve higher reconstruction quality. One of the primary goals of my research is to leverage these novel methods to reconstruct objects using optical tactile sensors.
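
As a hedged illustration of the implicit-representation idea (not this project's actual model), the sketch below shows a coordinate-based MLP representing a shape as a Signed Distance Function: the network maps a 3D point to its signed distance from the surface, and the reconstructed surface is the zero level set of the learned function. All architecture choices and names are hypothetical.

```python
# Minimal sketch of a Signed Distance Function (SDF) network: an MLP
# that maps a 3D point to its signed distance from the object surface.
# Hypothetical illustration; not this project's actual architecture.
import torch
import torch.nn as nn

class SDFNetwork(nn.Module):
    def __init__(self, hidden_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # < 0 inside, > 0 outside, 0 on surface
        )

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (N, 3) coordinates -> (N, 1) signed distances
        return self.net(points)

# Training would regress distances at sampled 3D points, for instance
# points on the contact surface measured by a tactile sensor (distance 0):
model = SDFNetwork()
surface_points = torch.rand(128, 3)  # hypothetical contact points
loss = nn.functional.l1_loss(model(surface_points), torch.zeros(128, 1))
loss.backward()
```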

Moreover, my approach will be fully integrated into the open-source robot learning suite developed by the Tactile Robotics Group (Bristol Robotics Lab). This software focuses on methodologies for closing the sim-to-real gap, the inherent discrepancy between simulated and real environments that can lead to dangerous behaviour when policies are transferred. This integration will allow us to safely and efficiently apply our methods to real-world scenarios.

Finally, the 3D object representation will be used to extract useful physical properties that can improve algorithms for robotic control, particularly those involving human-robot interaction; one such property is sketched below. We believe that shape understanding will contribute to a larger effort to develop safe and robust robot learning algorithms for interacting with the physical world through tactile sensing.
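
As one concrete (again hypothetical) example of such a property: with an SDF representation like the sketch above, surface normals come almost for free as the normalized gradient of the distance field, which is directly useful for contact-rich control.

```python
# Sketch: surface normals as the normalized gradient of the SDF.
# Continues the hypothetical SDFNetwork sketch above.
import torch

points = torch.rand(128, 3, requires_grad=True)  # hypothetical query points
distances = model(points)                        # model: SDFNetwork instance
grads, = torch.autograd.grad(distances.sum(), points)
normals = grads / grads.norm(dim=-1, keepdim=True)  # unit surface normals
```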

Supervisors:

Website:
