VIP Group Seminar
Visual Mapping for Rigid and Non-Rigid Scenes
JM Martinez Montiel. Universidad de Zaragoza.
http://webdiis.unizar.es/~jose
Monday 1st December, 12:00pm
Huxley Building Room 218
Abstract:
The speaker presents diverse results recently developed in the Robotics, Perception and Real Time group at Universidad de Zaragoza. The focus of the research has been to push the limits of visual mapping techniques along several avenues:
1.- ORB-SLAM, a keyframe-based SLAM system in which map points correspond to image points described by their ORB signatures. The ORB features are combined with a bag-of-words recognition algorithm, yielding an efficient relocalisation and loop-closure detection method. It is worth noting that the very same features are used both for mapping and for recognition, resulting in a system with unprecedented performance.
2.- A variational formulation for real-time dense 3D mapping from an RGB monocular sequence that incorporates Manhattan and piecewise-planar constraints in indoor and outdoor man-made scenes. It is shown that adding a third energy term modelling Manhattan and piecewise-planar structures greatly improves the accuracy of the dense visual maps, particularly in low-textured man-made environments where the data term can be ambiguous.
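A three-term variational energy of this kind can be written schematically (the notation here is a generic illustration, not the paper's exact formulation) as

```latex
E(D) = \int_{\Omega}
  \underbrace{\lambda_d\, C\bigl(u, D(u)\bigr)}_{\text{photometric data term}}
+ \underbrace{\lambda_s\, \lVert \nabla D(u) \rVert}_{\text{smoothness regulariser}}
+ \underbrace{\lambda_m\, E_M\bigl(D(u)\bigr)}_{\text{Manhattan / planar prior}}
\,\mathrm{d}u
```

where $D$ is the inverse-depth map over the image domain $\Omega$; the third term $E_M$ is what disambiguates the solution where the photometric cost $C$ is flat.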
3.- A monocular visual SLAM algorithm tailored to medical image sequences, providing an up-to-scale 3D map of the observed cavity and the endoscope trajectory at frame rate. The algorithm is validated on human in-vivo sequences corresponding to fifteen laparoscopic hernioplasties where accurate ground-truth distances are available, showing the feasibility of SLAM in medical endoscopic sequences.
4.- Visual SLAM cross-fertilised with Navier's equations to model elastic solid deformations: the scene is coded as a Finite Element Method (FEM) elastic thin-plate solid, and the resulting sequential method has proven to recover the scene and camera geometry accurately.
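The FEM machinery referred to here reduces, once elements are assembled, to solving a linear system K u = f for the deformation. A minimal generic sketch (a 1D elastic bar, not the thin-plate model of the talk; all values are illustrative) shows the assemble-and-solve pattern:

```python
import numpy as np

# 1D bar discretised into identical two-node elements of stiffness k.
n_elem, k = 4, 100.0
ke = k * np.array([[1.0, -1.0],
                   [-1.0, 1.0]])        # element stiffness matrix

# Assemble the global stiffness matrix by scattering element contributions.
K = np.zeros((n_elem + 1, n_elem + 1))
for e in range(n_elem):
    K[e:e + 2, e:e + 2] += ke

# Unit load at the free end; clamp node 0 as the boundary condition.
f = np.zeros(n_elem + 1)
f[-1] = 1.0
u = np.zeros(n_elem + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print(u[-1])                            # tip displacement = n_elem / k = 0.04
```

In the deformable-SLAM setting the unknowns play the role of scene deformation, with image measurements supplying the forcing terms instead of a prescribed load.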
