Pangolin is a very useful, portable and lightweight library for rapidly prototyping computer vision applications, and is now used by all members of our group. It centres on providing easy access to OpenGL visualisation of images and 3D data. It also supports video input from various devices, offers a versatile set of input widgets and a live plotter, and includes examples of how to interface with CUDA. Pangolin is by Steven Lovegrove and Richard Newcombe. For full details and source code download visit this page.
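To give a flavour of the API, here is a minimal 3D viewer in the spirit of Pangolin's own HelloPangolin example; treat it as a sketch, since exact calls can differ between versions:

```cpp
#include <pangolin/pangolin.h>

int main() {
    // Create a named OpenGL window and bind its context.
    pangolin::CreateWindowAndBind("Main", 640, 480);
    glEnable(GL_DEPTH_TEST);

    // Camera projection and initial viewpoint for the 3D view.
    pangolin::OpenGlRenderState s_cam(
        pangolin::ProjectionMatrix(640, 480, 420, 420, 320, 240, 0.2, 100),
        pangolin::ModelViewLookAt(-2, 2, -2, 0, 0, 0, pangolin::AxisY));

    // Interactive view with mouse/keyboard navigation via Handler3D.
    pangolin::Handler3D handler(s_cam);
    pangolin::View& d_cam = pangolin::CreateDisplay()
        .SetBounds(0.0, 1.0, 0.0, 1.0, -640.0f / 480.0f)
        .SetHandler(&handler);

    while (!pangolin::ShouldQuit()) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        d_cam.Activate(s_cam);
        pangolin::glDrawColouredCube();  // any OpenGL drawing goes here
        pangolin::FinishFrame();         // swap buffers and process events
    }
    return 0;
}
```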
Scalable Visual SLAM (ScaViSLAM) using double window optimisation is available on GitHub. This project, led by Hauke Strasdat, is a full SLAM system for stereo or RGB-D sensors which implements the approach described in this paper from ICCV 2011.
Robot Vision Library: in 2010 we released a new open source (LGPL) software package for vision-based SLAM. The RobotVision library is available from openslam.org and offers essential elements of a visual SLAM back-end, including bundle adjustment, feature initialisation, pose-graph optimisation and 2D/3D visualisation. This release features software used in our paper presented at RSS 2010, and is largely thanks to the hard work of Hauke Strasdat, supported by Steven Lovegrove and the rest of the group.
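RobotVision's own interfaces are richer than this, but as a generic illustration of the quantity a bundle-adjustment back-end minimises, here is a pinhole reprojection residual written with Eigen (the function name and signature are hypothetical, not RobotVision's actual API):

```cpp
#include <Eigen/Dense>
#include <iostream>

// Bundle adjustment jointly refines camera poses (R, t) and 3D points X
// so as to minimise the sum of squared residuals like this one.
Eigen::Vector2d ReprojectionError(const Eigen::Matrix3d& R, const Eigen::Vector3d& t,
                                  const Eigen::Vector3d& X,  // 3D point in world frame
                                  const Eigen::Vector2d& z,  // observed pixel position
                                  double fx, double fy, double cx, double cy) {
    const Eigen::Vector3d Xc = R * X + t;                  // point in camera frame
    const Eigen::Vector2d proj(fx * Xc.x() / Xc.z() + cx,  // pinhole projection
                               fy * Xc.y() / Xc.z() + cy);
    return z - proj;  // residual the optimiser drives towards zero
}

int main() {
    const Eigen::Matrix3d R = Eigen::Matrix3d::Identity();
    const Eigen::Vector3d t(0, 0, 0), X(0.1, -0.2, 2.0);
    const Eigen::Vector2d z(341.0, 198.0);
    std::cout << ReprojectionError(R, t, X, z, 420, 420, 320, 240).transpose() << "\n";
}
```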
SceneLib/MonoSLAM: The MonoSLAM monocular SLAM system based on the Extended Kalman Filter, developed by Andrew Davison and colleagues at the University of Oxford and beyond in the early 2000s, is available as an open source project here. That code, however, is no longer maintained and is tricky to compile on today's machines. If you are interested in giving MonoSLAM a spin, we can recommend SceneLib2, written by Hanme Kim in 2012. This is a great re-implementation, with the same functionality as the original software but updated to use modern libraries and tools such as Eigen, Pangolin and CMake, and it can be run either on my original image sequences or with a live USB camera.
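For readers new to the approach, the sketch below shows the generic EKF measurement update at the heart of MonoSLAM. It is illustrative only: SceneLib2's real state vector (camera pose, velocities and feature positions) and its measurement Jacobians are considerably more involved.

```cpp
#include <Eigen/Dense>
using Eigen::MatrixXd;
using Eigen::VectorXd;

// Standard EKF measurement update: fuse one measurement into the
// state mean x and covariance P.
void EkfUpdate(VectorXd& x, MatrixXd& P,
               const VectorXd& z,     // actual measurement (e.g. feature pixel location)
               const VectorXd& h,     // predicted measurement h(x)
               const MatrixXd& H,     // measurement Jacobian dh/dx
               const MatrixXd& R) {   // measurement noise covariance
    const MatrixXd S = H * P * H.transpose() + R;        // innovation covariance
    const MatrixXd K = P * H.transpose() * S.inverse();  // Kalman gain
    x += K * (z - h);                                    // correct the state
    P = (MatrixXd::Identity(x.size(), x.size()) - K * H) * P;  // shrink covariance
}

int main() {
    // Toy 2D state observed through its first component.
    VectorXd x = VectorXd::Zero(2);
    MatrixXd P = MatrixXd::Identity(2, 2);
    VectorXd z(1); z << 1.0;
    VectorXd h(1); h << 0.0;
    MatrixXd H(1, 2); H << 1, 0;
    MatrixXd R = MatrixXd::Identity(1, 1) * 0.1;
    EkfUpdate(x, P, z, h, H, R);
}
```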
SLAMBench: SLAMBench is a software framework providing a foundation for quantitative, comparable and validatable experimental research into the trade-offs between performance, accuracy and energy consumption for an application that produces a dense 3D model of an arbitrary scene using an RGB-D camera. It supports research into hardware accelerators and software tools by comparing speed, energy consumption and the accuracy of the generated 3D model against a known ground truth.
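As an illustration of the accuracy side of such a comparison (not SLAMBench's actual code), an absolute-trajectory-error style metric against ground-truth positions might look like this, assuming the two trajectories have already been aligned to a common frame and associated frame by frame:

```cpp
#include <Eigen/Dense>
#include <cmath>
#include <cstdio>
#include <vector>

// Root-mean-square of per-frame position error between an estimated
// trajectory and ground truth (both already aligned and associated).
double AteRmse(const std::vector<Eigen::Vector3d>& estimated,
               const std::vector<Eigen::Vector3d>& ground_truth) {
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < estimated.size(); ++i)
        sum_sq += (estimated[i] - ground_truth[i]).squaredNorm();
    return std::sqrt(sum_sq / estimated.size());
}

int main() {
    std::vector<Eigen::Vector3d> est = {{0, 0, 0}, {1.0, 0, 0.1}};
    std::vector<Eigen::Vector3d> gt  = {{0, 0, 0}, {1.1, 0, 0.0}};
    std::printf("ATE RMSE: %f m\n", AteRmse(est, gt));  // prints 0.1
}
```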
VaFRIC (Variable Frame-Rate Imperial College) Dataset: The VaFRIC dataset is designed for analysing the performance of direct, iterative whole-image alignment for camera tracking at varying frame-rates and image resolutions under stringent processing budgets.
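For context, "whole image alignment" tracking minimises a photometric cost between a reference image and the current image, with pixels warped under a candidate camera pose. The hypothetical sketch below (not part of the dataset's tooling) shows that cost, with the warp passed in as a stand-in for the real depth-and-pose-based warping:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Sum of squared intensity differences over the whole image; iterative
// tracking adjusts the pose behind warp() to drive this cost down.
double PhotometricCost(const std::vector<float>& ref, const std::vector<float>& cur,
                       int width, int height,
                       const std::function<std::pair<int, int>(int, int)>& warp) {
    double cost = 0.0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            const auto [u, v] = warp(x, y);  // reference pixel mapped into current image
            if (u < 0 || u >= width || v < 0 || v >= height) continue;  // out of view
            const double r = ref[y * width + x] - cur[v * width + u];   // intensity residual
            cost += r * r;
        }
    return cost;
}
```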
ICL-NUIM Dataset: The ICL-NUIM dataset aims at benchmarking RGB-D, visual odometry and SLAM algorithms. Two different scenes (the living room and the office room) are provided with ground truth. The living room scene has 3D surface ground truth together with depth maps and camera poses, and is therefore perfectly suited not just to benchmarking camera trajectories but also reconstruction. The office room scene comes with trajectory data only and has no explicit 3D model.
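If you want to compare your own trajectory against the ground truth, a minimal loader might look like the sketch below. It assumes the TUM-RGB-D-compatible line format "timestamp tx ty tz qx qy qz qw"; check the layout of the particular files you download, as it can differ between versions of the dataset.

```cpp
#include <Eigen/Dense>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Pose {
    double timestamp;
    Eigen::Vector3d position;
    Eigen::Quaterniond orientation;
};

// Parse one pose per line, skipping blank lines and '#' comments.
std::vector<Pose> LoadTrajectory(const std::string& path) {
    std::vector<Pose> poses;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#') continue;
        std::istringstream ss(line);
        Pose p;
        double qx, qy, qz, qw;
        ss >> p.timestamp >> p.position.x() >> p.position.y() >> p.position.z()
           >> qx >> qy >> qz >> qw;
        p.orientation = Eigen::Quaterniond(qw, qx, qy, qz);  // Eigen takes w first
        poses.push_back(p);
    }
    return poses;
}
```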