Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich

Rolling-Shutter Visual-Inertial Odometry Dataset

Contact: David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko.

We present a novel dataset that contains time-synchronized global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences.


Conference and Workshop Papers
Rolling-Shutter Modelling for Visual-Inertial Odometry (D. Schubert, N. Demmel, L. von Stumberg, V. Usenko and D. Cremers), in International Conference on Intelligent Robots and Systems (IROS), 2019. [arxiv] [bibtex] [pdf]



Figure: approximate sensor orientations in xyz-rgb convention.

Note: this figure has been updated compared to the schematic illustration in the paper, which might have been confusing. Also, in the calibrated dataset, the offset between the IMU and marker reference frames has already been accounted for: the ground-truth poses are post-processed to track the IMU frame.

For the calibrated sequences provided in the table, the ground-truth poses are given in the IMU coordinate frame and are time-synchronized with the image and IMU data. The geometric camera-IMU calibration can be found here: calibration.yaml.
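Because the ground-truth poses are time-synchronized with the image and IMU data, a typical first step is to look up (or interpolate) the ground-truth pose at a given image timestamp. The following is a minimal sketch, assuming poses are available as sorted (timestamp, position) samples; the function name is hypothetical, and orientation interpolation (e.g. SLERP on quaternions) is omitted for brevity:

```python
import bisect

def interpolate_position(times, positions, t_query):
    """Linearly interpolate a 3D position at t_query.

    times: sorted list of pose timestamps (seconds)
    positions: list of [x, y, z] samples, same length as times
    Queries outside the covered interval are clamped to the endpoints.
    """
    i = bisect.bisect_left(times, t_query)
    if i == 0:
        return positions[0]
    if i == len(times):
        return positions[-1]
    t0, t1 = times[i - 1], times[i]
    a = (t_query - t0) / (t1 - t0)  # interpolation weight in [0, 1]
    p0, p1 = positions[i - 1], positions[i]
    return [p0[k] + a * (p1[k] - p0[k]) for k in range(3)]

# Toy example with three ground-truth samples:
times = [0.0, 1.0, 2.0]
positions = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]
print(interpolate_position(times, positions, 0.5))  # → [0.5, 0.0, 0.0]
```

Linear interpolation of positions is usually adequate here because the motion-capture ground truth is sampled at a much higher rate than the cameras.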

Calibration was done using the following sequences:

Camera calibration: dataset-calib-cam1.bag, dataset-calib-cam1.tar
IMU calibration: dataset-calib-imu1.bag, dataset-calib-imu1.tar

Note that for the calibration sequences, both cameras were operated in global-shutter mode. For the rolling-shutter images, the timestamps refer to the first row; in general, timestamps denote the middle of the exposure interval.

For more information about calibration, we refer to our visual-inertial dataset.

According to the camera manufacturer, the time difference between two consecutive rows due to the rolling shutter cannot be read out directly, but it is very well approximated by the step size of the exposure time. In this way, we obtain an approximate row time difference of 29.4737 microseconds.
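With this row time difference, the capture time of an individual image row can be sketched as follows. This assumes the convention stated above for this dataset, namely that a rolling-shutter image timestamp refers to the first row; the function name is hypothetical:

```python
# Approximate time between the readouts of two consecutive rows (seconds),
# as stated above for this dataset's rolling-shutter cameras.
ROW_DT = 29.4737e-6

def row_timestamp(t_image: float, row: int) -> float:
    """Capture time of image row `row` (row 0 = first row),
    given the image timestamp t_image of the first row."""
    return t_image + row * ROW_DT

# Example: for an image with 512 rows stamped at t = 100.0 s,
# the last row is captured roughly 15 ms after the first.
t0 = 100.0
print(row_timestamp(t0, 0))    # first row: 100.0
print(row_timestamp(t0, 511))  # last row, ~15 ms later
```

A rolling-shutter-aware visual-inertial method would evaluate the (interpolated) sensor pose at each row's timestamp rather than once per image.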


Informatik IX
Computer Vision Group

Boltzmannstrasse 3
85748 Garching
info@vision.in.tum.de



