October 2012: We had a successful week at ECCV 2012, where Ankur Handa presented his paper on evaluating high frame-rate camera tracking to much interest. The aim of this work, in collaboration with Richard Newcombe, Adrien Angeli and Andrew Davison, was to answer the question of why we normally run advanced visual tracking algorithms in the 20–60Hz range when cameras are now available which can capture much faster than that, taking into account that as frame-rate increases, frame-to-frame tracking becomes easier in any tracker that uses prediction (as every tracker should!). Ankur conducted systematic experiments on how the performance of a whole image alignment tracker varies, in terms of both accuracy and computational cost, using a photo-realistic video dataset he generated. This video was produced by ray-tracing a detailed room model and then applying realistic noise and blur effects, with parameters determined from experiments with a real camera. Samples of our photo-realistic video are shown below. You can download the full multi-frame-rate dataset, and soon all of the open source code needed to render your own similar sequences, from the project page.
Real-Time Camera Tracking: When is High Frame-Rate Best? (PDF format),
Ankur Handa, Richard A. Newcombe, Adrien Angeli and Andrew J. Davison, ECCV 2012.
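For readers curious about the degradation step mentioned above, here is a minimal Python sketch of applying linear motion blur and additive Gaussian sensor noise to a rendered frame. It is only illustrative: the kernel length, blur angle and noise level below are placeholder values, not the parameters estimated from the real camera experiments in the paper, and the function name is our own.

```python
import numpy as np
import cv2


def degrade_frame(rendered, blur_kernel_len=9, blur_angle_deg=0.0,
                  noise_sigma=2.0, seed=None):
    """Apply simple linear motion blur and additive Gaussian noise to a
    ray-traced frame. All parameters are illustrative placeholders, not
    the values measured from a real camera in the paper."""
    rng = np.random.default_rng(seed)

    # Build a normalised linear motion-blur kernel along the given angle.
    k = np.zeros((blur_kernel_len, blur_kernel_len), dtype=np.float32)
    k[blur_kernel_len // 2, :] = 1.0
    rot = cv2.getRotationMatrix2D(
        (blur_kernel_len / 2 - 0.5, blur_kernel_len / 2 - 0.5),
        blur_angle_deg, 1.0)
    k = cv2.warpAffine(k, rot, (blur_kernel_len, blur_kernel_len))
    k /= k.sum()

    # Motion blur, then additive Gaussian sensor noise.
    blurred = cv2.filter2D(rendered.astype(np.float32), -1, k)
    noisy = blurred + rng.normal(0.0, noise_sigma, rendered.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```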
We also presented KAZE Features, a new feature detection and description algorithm based on non-linear scale space decomposition, which shows much improved performance over SURF and SIFT in difficult wide-baseline matching problems. KAZE Features were developed by Pablo Fernandez Alcantarilla from the University of Auvergne, in collaboration with Adrien Bartoli. Pablo’s source code implementing KAZE Features is available here.
KAZE Features (PDF format),
Pablo Fernandez Alcantarilla, Adrien Bartoli and Andrew J. Davison, ECCV 2012.
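As a quick illustration of how such features are used in practice, here is a short sketch of KAZE detection and matching via OpenCV (which ships its own KAZE implementation in version 3.0 and later). This is an assumed setup for demonstration only, not Pablo’s original release; the image paths are placeholders.

```python
import cv2

# Placeholder input images; any pair with overlapping views will do.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# OpenCV's KAZE implementation (illustrative, not the original release).
kaze = cv2.KAZE_create()
kp1, des1 = kaze.detectAndCompute(img1, None)
kp2, des2 = kaze.detectAndCompute(img2, None)

# Brute-force L2 matching with a ratio test to reject ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
print(f"{len(good)} putative KAZE matches")
```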