<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://cvg.cit.tum.de/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Computer Vision Group data:datasets</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/"/>
    <id>https://cvg.cit.tum.de/</id>
    <updated>2026-04-20T21:17:37+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://cvg.cit.tum.de/feed.php" />
    <entry>
        <title>Multiview Datasets</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/3dreconstruction?rev=1327057260&amp;do=diff"/>
        <published>2012-01-20T12:01:00+00:00</published>
        <updated>2012-01-20T12:01:00+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/3dreconstruction?rev=1327057260&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Multiview Datasets

We provide multiple datasets capturing objects from various vantage points. Each entry contains an image sequence, corresponding silhouettes, and full calibration parameters.
We are happy to share our data with other researchers. Please refer to the respective publication when using this data.
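
As an illustration of how full calibration is typically consumed, a minimal pinhole-projection sketch (the parameter names K, R, t are generic assumptions, not this dataset's file format):

    import numpy as np

    def project(X, K, R, t):
        """Project a 3D world point X into a calibrated view.
        K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation."""
        x_cam = R @ X + t            # world -> camera coordinates
        x_hom = K @ x_cam            # camera -> homogeneous pixel coordinates
        return x_hom[:2] / x_hom[2]  # perspective division</content>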
        <summary>Multiview Datasets

We provide multiple datasets capturing objects from various vantage points. Each entry contains an image sequence, corresponding silhouettes, and full calibration parameters.
We are happy to share our data with other researchers. Please refer to the respective publication when using this data.</summary>
    </entry>
    <entry>
        <title>4Seasons Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/4seasons-dataset?rev=1738846078&amp;do=diff"/>
        <published>2025-02-06T13:47:58+00:00</published>
        <updated>2025-02-06T13:47:58+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/4seasons-dataset?rev=1738846078&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>4Seasons Dataset

4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving

We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving. Among other tasks, it enables research on visual odometry, global place recognition, and map-based re-localization tracking. The data was collected in different scenarios and under a wide variety of weather conditions and illuminations, including day and night. This resulted in more tha…</content>
        <summary>4Seasons Dataset

4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving

We present a novel dataset covering seasonal and challenging perceptual conditions for autonomous driving. Among other tasks, it enables research on visual odometry, global place recognition, and map-based re-localization tracking. The data was collected in different scenarios and under a wide variety of weather conditions and illuminations, including day and night. This resulted in more tha…</summary>
    </entry>
    <entry>
        <title>3D Object in Clutter Recognition and Segmentation</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/clutter?rev=1405339138&amp;do=diff"/>
        <published>2014-07-14T13:58:58+00:00</published>
        <updated>2014-07-14T13:58:58+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/clutter?rev=1405339138&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>3D Object in Clutter Recognition and Segmentation

This dataset focuses on the recognition of known objects in cluttered and incomplete 3D scans. It is composed of 150 synthetic scenes, captured with a (perspective) virtual camera, and</content>
        <summary>3D Object in Clutter Recognition and Segmentation

This dataset focuses on the recognition of known objects in cluttered and incomplete 3D scans. It is composed of 150 synthetic scenes, captured with a (perspective) virtual camera, and</summary>
    </entry>
    <entry>
        <title>DDFF 12-Scene Benchmark</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/ddff12scene?rev=1773130578&amp;do=diff"/>
        <published>2026-03-10T09:16:18+00:00</published>
        <updated>2026-03-10T09:16:18+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/ddff12scene?rev=1773130578&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>DDFF 12-Scene

4.5D Lightfield-Depth Benchmark

Depth From Focus Competition on the DDFF 12-Scene Dataset

The DDFF 12-Scene dataset consists of 720 lightfield images and coregistered depth maps.

	*  Lightfield: 4D lightfield images, each of which has 9 × 9 × 383 × 552 undistorted subapertures. Images are saved as numpy arrays and can be loaded as follows:
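
A minimal loading sketch, assuming standard .npy files (the file name is a placeholder):

    import numpy as np

    # One 4D lightfield; expected shape: (9, 9, 383, 552)
    lf = np.load("lightfield_0001.npy")
    assert lf.shape == (9, 9, 383, 552)

    # Pick the central subaperture view (u = v = 4)
    center_view = lf[4, 4]</content>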
        <summary>DDFF 12-Scene

4.5D Lightfield-Depth Benchmark

Depth From Focus Competition on the DDFF 12-Scene Dataset

The DDFF 12-Scene dataset consists of 720 lightfield images and coregistered depth maps.

	*  Lightfield: 4D lightfield images, each of which has 9 × 9 × 383 × 552 undistorted subapertures. Images are saved as numpy arrays and can be loaded as follows:</summary>
    </entry>
    <entry>
        <title>Deformable Shape Tracking Datasets</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/deformable_shape_tracking_datasets?rev=1340027990&amp;do=diff"/>
        <published>2012-06-18T15:59:50+00:00</published>
        <updated>2012-06-18T15:59:50+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/deformable_shape_tracking_datasets?rev=1340027990&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>We are happy to share our data with other researchers. Please refer to the respective publication when using this data.

Deformable Shape Tracking Datasets

Shape Priors in Variational Image Segmentation: Convexity, Lipschitz Continuity and Globally Optimal Solutions</content>
        <summary>We are happy to share our data with other researchers. Please refer to the respective publication when using this data.

Deformable Shape Tracking Datasets

Shape Priors in Variational Image Segmentation: Convexity, Lipschitz Continuity and Globally Optimal Solutions</summary>
    </entry>
    <entry>
        <title>Intrinsic3D</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/intrinsic3d?rev=1602505769&amp;do=diff"/>
        <published>2020-10-12T14:29:29+00:00</published>
        <updated>2020-10-12T14:29:29+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/intrinsic3d?rev=1602505769&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Intrinsic3D Dataset

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Robert Maier (1,2), Kihwan Kim (1), Daniel Cremers (2), Jan Kautz (1), Matthias Nießner (2,3)

(1) NVIDIA, (2) Technical University of Munich, (3) Stanford University

IEEE International Conference on Computer Vision (ICCV) 2017</content>
        <summary>Intrinsic3D Dataset

Intrinsic3D: High-Quality 3D Reconstruction by Joint Appearance and Geometry Optimization with Spatially-Varying Lighting

Robert Maier (1,2), Kihwan Kim (1), Daniel Cremers (2), Jan Kautz (1), Matthias Nießner (2,3)

(1) NVIDIA, (2) Technical University of Munich, (3) Stanford University

IEEE International Conference on Computer Vision (ICCV) 2017</summary>
    </entry>
    <entry>
        <title>Deformable 3D Shape Matching</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/kids?rev=1460122798&amp;do=diff"/>
        <published>2016-04-08T15:39:58+00:00</published>
        <updated>2016-04-08T15:39:58+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/kids?rev=1460122798&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Deformable 3D Shape Matching

This dataset consists of a collection of 3D shapes undergoing nearly-isometric and within-class deformations. In particular, we provide two shape classes (&quot;kid&quot; and &quot;fat kid&quot;) under different poses, where the same poses are applied to both classes. Filenames within each class are ordered by deviation from isometry, i.e. the last shape has the largest deformation with respect to the null pose. All shapes in the dataset are given in OFF format, have arou…</content>
        <summary>Deformable 3D Shape Matching

This dataset consists of a collection of 3D shapes undergoing nearly-isometric and within-class deformations. In particular, we provide two shape classes (&quot;kid&quot; and &quot;fat kid&quot;) under different poses, where the same poses are applied to both classes. Filenames within each class are ordered by deviation from isometry, i.e. the last shape has the largest deformation with respect to the null pose. All shapes in the dataset are given in OFF format, have arou…</summary>
    </entry>
    <entry>
        <title>Mobile Depth From Focus Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/mdff?rev=1539814935&amp;do=diff"/>
        <published>2018-10-18T00:22:15+00:00</published>
        <updated>2018-10-18T00:22:15+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/mdff?rev=1539814935&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <summary>no summary</summary>
    </entry>
    <entry>
        <title>Monocular Visual Odometry Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/mono-dataset?rev=1625646638&amp;do=diff"/>
        <published>2021-07-07T10:30:38+00:00</published>
        <updated>2021-07-07T10:30:38+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/mono-dataset?rev=1625646638&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Monocular Visual Odometry Dataset

We present a dataset for evaluating the tracking accuracy of monocular Visual Odometry (VO) and SLAM methods. It contains 50 real-world sequences comprising over 100 minutes of video, recorded across different environments</content>
        <summary>Monocular Visual Odometry Dataset

We present a dataset for evaluating the tracking accuracy of monocular Visual Odometry (VO) and SLAM methods. It contains 50 real-world sequences comprising over 100 minutes of video, recorded across different environments</summary>
    </entry>
    <entry>
        <title>SLAM for Omnidirectional Cameras</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/omni-lsdslam?rev=1615800598&amp;do=diff"/>
        <published>2021-03-15T10:29:58+00:00</published>
        <updated>2021-03-15T10:29:58+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/omni-lsdslam?rev=1615800598&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Large-Scale Direct SLAM for Omnidirectional Cameras

We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view well above 150°.
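
As an illustration, a minimal sketch of the unified model's projection with generic parameter names (fx, fy, cx, cy are pinhole intrinsics and xi the unified-model offset; this is not the dataset's calibration format):

    import numpy as np

    def project_unified(X, fx, fy, cx, cy, xi):
        """Project a 3D point X (camera coordinates) with the
        unified omnidirectional camera model."""
        d = np.linalg.norm(X)
        # Perspective division from a center offset by xi along the optical axis
        denom = X[2] + xi * d
        return np.array([fx * X[0] / denom + cx,
                         fy * X[1] / denom + cy])

Setting xi = 0 recovers the standard pinhole model.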

This is in stark contrast to existing direct mono-SLAM approaches like DTAM or LSD-SLAM, which operate on re…</content>
        <summary>Large-Scale Direct SLAM for Omnidirectional Cameras

We propose a real-time, direct monocular SLAM method for omnidirectional or wide field-of-view fisheye cameras. Both tracking (direct image alignment) and mapping (pixel-wise distance filtering) are directly formulated for the unified omnidirectional model, which can model central imaging devices with a field of view well above 150°. This is in stark contrast to existing direct mono-SLAM approaches like DTAM or LSD-SLAM, which operate on re…</summary>
    </entry>
    <entry>
        <title>3D Deformable Partial Shape Matching</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/partial?rev=1461011311&amp;do=diff"/>
        <published>2016-04-18T22:28:31+00:00</published>
        <updated>2016-04-18T22:28:31+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/partial?rev=1461011311&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>3D Deformable Partial Shape Matching

Description

The datasets we provide here can be used for deformable 3D shape matching and retrieval under partiality transformations. This is a more challenging setting compared to the classical tasks dealing with full shapes.</content>
        <summary>3D Deformable Partial Shape Matching

Description

The datasets we provide here can be used for deformable 3D shape matching and retrieval under partiality transformations. This is a more challenging setting compared to the classical tasks dealing with full shapes.</summary>
    </entry>
    <entry>
        <title>Photometric Depth Super-Resolution Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/photometricdepthsr?rev=1602155433&amp;do=diff"/>
        <published>2020-10-08T13:10:33+00:00</published>
        <updated>2020-10-08T13:10:33+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/photometricdepthsr?rev=1602155433&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Photometric Depth Super-Resolution Dataset

Photometric Depth Super-Resolution

Bjoern Haefner (1), Songyou Peng (2), Alok Verma (1), Yvain Quéau (3), Daniel Cremers (1)

(1) Technical University of Munich, (2) University of Illinois at Urbana-Champaign, (3) GREYC, UMR CNRS 6072

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) Special Issue on RGB-D Vision: Methods and Applications</content>
        <summary>Photometric Depth Super-Resolution Dataset

Photometric Depth Super-Resolution

Bjoern Haefner (1), Songyou Peng (2), Alok Verma (1), Yvain Quéau (3), Daniel Cremers (1)

(1) Technical University of Munich, (2) University of Illinois at Urbana-Champaign, (3) GREYC, UMR CNRS 6072

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) Special Issue on RGB-D Vision: Methods and Applications</summary>
    </entry>
    <entry>
        <title>RGB-D SLAM Dataset and Benchmark</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/rgbd-dataset?rev=1498832898&amp;do=diff"/>
        <published>2017-06-30T16:28:18+00:00</published>
        <updated>2017-06-30T16:28:18+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/rgbd-dataset?rev=1498832898&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>RGB-D SLAM Dataset and Benchmark

Contact: Jürgen Sturm

We provide a large dataset containing RGB-D data and ground-truth data, with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems.
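
For illustration, a simplified sketch of an absolute-trajectory-error comparison between an estimated and a ground-truth trajectory (full evaluations typically also solve for the best rigid alignment, e.g. with Horn's method; this sketch only aligns centroids):

    import numpy as np

    def ate_rmse(est, gt):
        """RMSE between corresponding (N, 3) position arrays
        after a simple centroid alignment."""
        est = est - est.mean(axis=0)
        gt = gt - gt.mean(axis=0)
        return np.sqrt(np.mean(np.sum((est - gt) ** 2, axis=1)))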

Our dataset contains the color and depth images of a Microsoft Kinect sensor along with the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640x480). The ground-truth trajectory was obtained from a high-a…</content>
        <summary>RGB-D SLAM Dataset and Benchmark

Contact: Jürgen Sturm

We provide a large dataset containing RGB-D data and ground-truth data, with the goal of establishing a novel benchmark for the evaluation of visual odometry and visual SLAM systems. Our dataset contains the color and depth images of a Microsoft Kinect sensor along with the ground-truth trajectory of the sensor. The data was recorded at full frame rate (30 Hz) and sensor resolution (640x480). The ground-truth trajectory was obtained from a high-a…</summary>
    </entry>
    <entry>
        <title>Rolling-Shutter Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/rolling-shutter-dataset?rev=1641900024&amp;do=diff"/>
        <published>2022-01-11T12:20:24+00:00</published>
        <updated>2022-01-11T12:20:24+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/rolling-shutter-dataset?rev=1641900024&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Rolling-Shutter Visual-Inertial Odometry Dataset

Contact: David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko.

We present a novel dataset that contains time-synchronized global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences.

Dataset

The full dataset can be found at:</content>
        <summary>Rolling-Shutter Visual-Inertial Odometry Dataset

Contact: David Schubert, Nikolaus Demmel, Lukas von Stumberg, Vladyslav Usenko.

We present a novel dataset that contains time-synchronized global-shutter and rolling-shutter images, IMU data and ground-truth poses for ten different sequences.

Dataset

The full dataset can be found at:</summary>
    </entry>
    <entry>
        <title>Deformable 3D Shape Matching with Topological Noise</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/topkids?rev=1461068149&amp;do=diff"/>
        <published>2016-04-19T14:15:49+00:00</published>
        <updated>2016-04-19T14:15:49+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/topkids?rev=1461068149&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Deformable 3D Shape Matching with Topological Noise

This dataset consists of a collection of 3D shapes undergoing within-class deformations that include topological changes. The changes simulate coalescence of spatially close surface regions – a scenario that frequently occurs when dealing with real data under suboptimal acquisition conditions. The dataset is based on the fat kid from the</content>
        <summary>Deformable 3D Shape Matching with Topological Noise

This dataset consists of a collection of 3D shapes undergoing within-class deformations that include topological changes. The changes simulate coalescence of spatially close surface regions – a scenario that frequently occurs when dealing with real data under suboptimal acquisition conditions. The dataset is based on the fat kid from the</summary>
    </entry>
    <entry>
        <title>TUM-LSI</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/tum-lsi?rev=1773130843&amp;do=diff"/>
        <published>2026-03-10T09:20:43+00:00</published>
        <updated>2026-03-10T09:20:43+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/tum-lsi?rev=1773130843&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>TUM-LSI

A Large-Scale Indoor Dataset
TUM LSI Dataset [Train/Test Splits]
Note

The images in this dataset are all in portrait format; images taken in landscape orientation were not rotated. Following PoseNet (ICCV 2015), we first rescaled the images to a width of 256 pixels, then took random crops of size 224x224.
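
A minimal sketch of this preprocessing, assuming 8-bit RGB inputs (Pillow; the file name is a placeholder):

    import random
    from PIL import Image

    img = Image.open("frame_0001.jpg")

    # Rescale to a width of 256 pixels, keeping the aspect ratio
    w, h = img.size
    img = img.resize((256, h * 256 // w), Image.BILINEAR)

    # Random 224x224 crop
    w, h = img.size
    left = random.randint(0, w - 224)
    top = random.randint(0, h - 224)
    crop = img.crop((left, top, left + 224, top + 224))</content>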
        <summary>TUM-LSI

A Large-Scale Indoor Dataset
TUM LSI Dataset [Train/Test Splits]
Note

The images in this dataset are all in portrait format; images taken in landscape orientation were not rotated. Following PoseNet (ICCV 2015), we first rescaled the images to a width of 256 pixels, then took random crops of size 224x224.</summary>
    </entry>
    <entry>
        <title>Visual-Inertial Odometry Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/vi-dataset?rev=1523958174&amp;do=diff"/>
        <published>2018-04-17T11:42:54+00:00</published>
        <updated>2018-04-17T11:42:54+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/vi-dataset?rev=1523958174&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Visual-Inertial Dataset

Contact: David Schubert, Nikolaus Demmel, Vladyslav Usenko.

Visual odometry and SLAM methods have a large variety of applications in domains such as augmented reality or robotics. Complementing vision sensors with inertial measurements tremendously improves tracking accuracy and robustness, and thus has spawned large interest in the development of visual-inertial (VI) odometry approaches.
In this paper, we propose the TUM VI benchmark, a novel dataset with a divers…</content>
        <summary>Visual-Inertial Dataset

Contact: David Schubert, Nikolaus Demmel, Vladyslav Usenko.

Visual odometry and SLAM methods have a large variety of applications in domains such as augmented reality or robotics. Complementing vision sensors with inertial measurements tremendously improves tracking accuracy and robustness, and thus has spawned large interest in the development of visual-inertial (VI) odometry approaches.
In this paper, we propose the TUM VI benchmark, a novel dataset with a divers…</summary>
    </entry>
    <entry>
        <title>Visual-Inertial Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/visual-inertial-dataset?rev=1625647248&amp;do=diff"/>
        <published>2021-07-07T10:40:48+00:00</published>
        <updated>2021-07-07T10:40:48+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/visual-inertial-dataset?rev=1625647248&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Visual-Inertial Dataset

Contact: David Schubert, Nikolaus Demmel, Vladyslav Usenko.

The TUM VI Benchmark for Evaluating Visual-Inertial Odometry

Visual odometry and SLAM methods have a large variety of applications in domains such as augmented reality or robotics. Complementing vision sensors with inertial measurements tremendously improves tracking accuracy and robustness, and thus has spawned large interest in the development of visual-inertial (VI) odometry approaches.
In this paper…</content>
        <summary>Visual-Inertial Dataset

Contact: David Schubert, Nikolaus Demmel, Vladyslav Usenko.

The TUM VI Benchmark for Evaluating Visual-Inertial Odometry

Visual odometry and SLAM methods have a large variety of applications in domains such as augmented reality or robotics. Complementing vision sensors with inertial measurements tremendously improves tracking accuracy and robustness, and thus has spawned large interest in the development of visual-inertial (VI) odometry approaches.
In this paper…</summary>
    </entry>
    <entry>
        <title>Visual-Inertial Event Dataset</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/data/datasets/visual-inertial-event-dataset?rev=1721117392&amp;do=diff"/>
        <published>2024-07-16T10:09:52+00:00</published>
        <updated>2024-07-16T10:09:52+00:00</updated>
        <id>https://cvg.cit.tum.de/data/datasets/visual-inertial-event-dataset?rev=1721117392&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="data:datasets" />
        <content>Visual-Inertial Event Dataset

Contact: Simon Klenk, Jason Chui.

TUM-VIE: The TUM Stereo Visual-Inertial Event Data Set

TUM-VIE is an event camera dataset for developing 3D perception and navigation algorithms. It contains handheld and head-mounted sequences in indoor and outdoor environments, with rapid motion during sports and high dynamic range. TUM-VIE includes challenging sequences where state-of-the-art VIO fails or results in large drift. Hence, it can…</content>
        <summary>Visual-Inertial Event Dataset

Contact: Simon Klenk, Jason Chui.

TUM-VIE: The TUM Stereo Visual-Inertial Event Data Set

TUM-VIE is an event camera dataset for developing 3D perception and navigation algorithms. It contains handheld and head-mounted sequences in indoor and outdoor environments, with rapid motion during sports and high dynamic range. TUM-VIE includes challenging sequences where state-of-the-art VIO fails or results in large drift. Hence, it can…</summary>
    </entry>
</feed>
