I am a Research Scientist at Google DeepMind. Before that, I was a postdoc at Meta AI (FAIR) working on real-time 3D scene understanding for robotics.
I completed my PhD at Imperial College London in 2023, advised by Prof. Andrew Davison.
Before that, I completed my undergraduate and Master's degrees in Physics at the University of Oxford.
My research interests lie in building efficient, real-time scene understanding and planning systems for robotics. Towards this goal, my research focuses on two directions: 1) graphical representations and distributed inference algorithms on graphs, and 2) training neural scene representations via continual learning for real-time robotics.
A Touch, Vision, and Language Dataset for Multimodal Alignment
Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation
Perceiving Extrinsic Contacts from Touch Improves Learning Insertion Policies
Decentralization and Acceleration Enables Large-Scale Bundle Adjustment
Gaussian Belief Propagation for Real-Time Decentralised Inference
Theseus: A Library for Differentiable Nonlinear Optimization
iSDF: Real-Time Neural Signed Distance Fields for Robot Perception
A Robot Web for Distributed Many-Device Localisation
Incremental Abstraction in Distributed Probabilistic SLAM Graphs
iMAP: Implicit Mapping and Positioning in Real-Time
A visual introduction to Gaussian Belief Propagation
Self-published / arXiv, 2021
Bundle Adjustment on a Graph Processor
FutureMapping 2: Gaussian Belief Propagation for Spatial AI