<?xml version="1.0" encoding="utf-8"?>
<!-- generator="FeedCreator 1.8" -->
<?xml-stylesheet href="https://cvg.cit.tum.de/lib/exe/css.php?s=feed" type="text/css"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>Computer Vision Group research</title>
    <subtitle></subtitle>
    <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/"/>
    <id>https://cvg.cit.tum.de/</id>
    <updated>2026-04-20T05:39:19+00:00</updated>
    <generator>FeedCreator 1.8 (info@mypapit.net)</generator>
    <link rel="self" type="application/atom+xml" href="https://cvg.cit.tum.de/feed.php" />
    <entry>
        <title>Biomedicine</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/biomed?rev=1604918165&amp;do=diff"/>
        <published>2020-11-09T11:36:05+00:00</published>
        <updated>2020-11-09T11:36:05+00:00</updated>
        <id>https://cvg.cit.tum.de/research/biomed?rev=1604918165&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Biomedicine

Below, you can find a list of publications in which we focus partially or entirely on biomedicine. Many of our other methods can also be applied to biomedical data.

Contact

Related publications</content>
        <summary>Biomedicine

Below, you can find a list of publications in which we focus partially or entirely on biomedicine. Many of our other methods can also be applied to biomedical data.

Contact

Related publications</summary>
    </entry>
    <entry>
        <title>Convex Relaxation Methods</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/convex_relaxation_methods?rev=1443188797&amp;do=diff"/>
        <published>2015-09-25T15:46:37+00:00</published>
        <updated>2015-09-25T15:46:37+00:00</updated>
        <id>https://cvg.cit.tum.de/research/convex_relaxation_methods?rev=1443188797&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Convex Relaxation Methods

Contact: Thomas Möllenhoff, Evgeny Strekalovskiy

A popular and well-established paradigm for modeling computer vision problems is energy minimization.
In practice, almost all functionals providing a realistic model are non-convex, and minimizing them is even NP-hard.
They are thus hard to solve, and direct minimization usually leads to poor local minima.</content>
        <summary>Convex Relaxation Methods

Contact: Thomas Möllenhoff, Evgeny Strekalovskiy

A popular and well-established paradigm for modeling computer vision problems is energy minimization.
In practice, almost all functionals providing a realistic model are non-convex, and minimizing them is even NP-hard.
They are thus hard to solve, and direct minimization usually leads to poor local minima.</summary>
    </entry>
    <entry>
        <title>Deep Learning</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/deeplearning?rev=1606817773&amp;do=diff"/>
        <published>2020-12-01T11:16:13+00:00</published>
        <updated>2020-12-01T11:16:13+00:00</updated>
        <id>https://cvg.cit.tum.de/research/deeplearning?rev=1606817773&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Deep Learning

Deep learning is a powerful machine learning framework that has shown outstanding performance in many fields. The main power of deep learning comes from learning data representations directly from data in a hierarchical layer-based structure.</content>
        <summary>Deep Learning

Deep learning is a powerful machine learning framework that has shown outstanding performance in many fields. The main power of deep learning comes from learning data representations directly from data in a hierarchical layer-based structure.</summary>
    </entry>
    <entry>
        <title>E-NeRF: Neural Radiance Fields from a Moving Event Camera</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/enerf?rev=1678955389&amp;do=diff"/>
        <published>2023-03-16T09:29:49+00:00</published>
        <updated>2023-03-16T09:29:49+00:00</updated>
        <id>https://cvg.cit.tum.de/research/enerf?rev=1678955389&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>E-NeRF: Neural Radiance Fields from a Moving Event Camera

Code

Code can be found at https://github.com/knelk/enerf.

Abstract

Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images contain motion blur and the scene may not have suitable illumination. This can cause significant pro…</content>
        <summary>E-NeRF: Neural Radiance Fields from a Moving Event Camera

Code

Code can be found at https://github.com/knelk/enerf.

Abstract

Estimating neural radiance fields (NeRFs) from ideal images has been extensively studied in the computer vision community. Most approaches assume optimal illumination and slow camera motion. These assumptions are often violated in robotic applications, where images contain motion blur and the scene may not have suitable illumination. This can cause significant pro…</summary>
    </entry>
    <entry>
        <title>Geometry Processing</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/geometry?rev=1610704569&amp;do=diff"/>
        <published>2021-01-15T10:56:09+00:00</published>
        <updated>2021-01-15T10:56:09+00:00</updated>
        <id>https://cvg.cit.tum.de/research/geometry?rev=1610704569&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Geometry Processing

Geometry processing is concerned with the acquisition, analysis, and manipulation of geometric data. The field is very broad, and its algorithms aim at improving 3D reconstructions, finding correspondences between objects, interpolating between shapes, or analysing physical properties of scanned data. Many applications in geometry processing have counterparts in image processing, e.g. object detection, but geometric data poses special challenges in terms of how to formulate mathematical co…</content>
        <summary>Geometry Processing

Geometry processing is concerned with the acquisition, analysis, and manipulation of geometric data. The field is very broad, and its algorithms aim at improving 3D reconstructions, finding correspondences between objects, interpolating between shapes, or analysing physical properties of scanned data. Many applications in geometry processing have counterparts in image processing, e.g. object detection, but geometric data poses special challenges in terms of how to formulate mathematical co…</summary>
    </entry>
    <entry>
        <title>Image-based 3D Reconstruction</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/image-based_3d_reconstruction?rev=1495451219&amp;do=diff"/>
        <published>2017-05-22T13:06:59+00:00</published>
        <updated>2017-05-22T13:06:59+00:00</updated>
        <id>https://cvg.cit.tum.de/research/image-based_3d_reconstruction?rev=1495451219&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Image-based 3D Reconstruction

Contact: Prof. Dr. Daniel Cremers

For a human, it is usually an easy task to get an idea of the 3D structure shown in an image. Due to the loss of one dimension in the projection process, however, estimating the true 3D geometry is difficult: it is a so-called ill-posed problem, because usually infinitely many different 3D surfaces can produce the same set of images.</content>
        <summary>Image-based 3D Reconstruction

Contact: Prof. Dr. Daniel Cremers

For a human, it is usually an easy task to get an idea of the 3D structure shown in an image. Due to the loss of one dimension in the projection process, however, estimating the true 3D geometry is difficult: it is a so-called ill-posed problem, because usually infinitely many different 3D surfaces can produce the same set of images.</summary>
    </entry>
    <entry>
        <title>Image Segmentation</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/image_segmentation?rev=1327057042&amp;do=diff"/>
        <published>2012-01-20T11:57:22+00:00</published>
        <updated>2012-01-20T11:57:22+00:00</updated>
        <id>https://cvg.cit.tum.de/research/image_segmentation?rev=1327057042&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Image Segmentation

Contact: Claudia Nieuwenhuis, Maria Klodt

Image segmentation aims at partitioning an image into n disjoint regions. Since this problem is highly ambiguous, additional information is indispensable. This information can be given as user input, e.g. scribbles on the image, as additional constraints such as the center of gravity and the major axes of the object, or it can be learned from a given database.
We mostly formulate convex energy functionals to solve this problem.</content>
        <summary>Image Segmentation

Contact: Claudia Nieuwenhuis, Maria Klodt

Image segmentation aims at partitioning an image into n disjoint regions. Since this problem is highly ambiguous, additional information is indispensable. This information can be given as user input, e.g. scribbles on the image, as additional constraints such as the center of gravity and the major axes of the object, or it can be learned from a given database.
We mostly formulate convex energy functionals to solve this problem.</summary>
    </entry>
    <entry>
        <title>research:lsdslam</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/lsdslam?rev=1438029875&amp;do=diff"/>
        <published>2015-07-27T22:44:35+00:00</published>
        <updated>2015-07-27T22:44:35+00:00</updated>
        <id>https://cvg.cit.tum.de/research/lsdslam?rev=1438029875&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
     <summary>no summary</summary>
    </entry>
    <entry>
        <title>Marker-less Motion Capture</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/markerless_motion_capture?rev=1327057136&amp;do=diff"/>
        <published>2012-01-20T11:58:56+00:00</published>
        <updated>2012-01-20T11:58:56+00:00</updated>
        <id>https://cvg.cit.tum.de/research/markerless_motion_capture?rev=1327057136&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Marker-less Motion Capture

In this project, we develop statistical and energy minimization methods for tracking articulated 3D objects from multiple camera views. Such techniques are of central importance, in particular for marker-less motion capture. The human motion sequences extracted from multiple videos can subsequently be used to animate virtual characters, as is commonly done in action movies.</content>
        <summary>Marker-less Motion Capture

In this project, we develop statistical and energy minimization methods for tracking articulated 3D objects from multiple camera views. Such techniques are of central importance, in particular for marker-less motion capture. The human motion sequences extracted from multiple videos can subsequently be used to animate virtual characters, as is commonly done in action movies.</summary>
    </entry>
    <entry>
        <title>MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/monorec?rev=1623742696&amp;do=diff"/>
        <published>2021-06-15T09:38:16+00:00</published>
        <updated>2021-06-15T09:38:16+00:00</updated>
        <id>https://cvg.cit.tum.de/research/monorec?rev=1623742696&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera

Contact: Nan Yang, Lukas von Stumberg, Niclas Zeller

Code: &lt;https://github.com/Brummi/MonoRec&gt;

Abstract

In this paper, we propose MonoRec, a semi-supervised monocular dense reconstruction architecture that predicts depth maps from a single moving camera in dynamic environments. MonoRec is based on a multi-view stereo setting which encodes the information of multiple consecutive images in…</content>
        <summary>MonoRec: Semi-Supervised Dense Reconstruction in Dynamic Environments from a Single Moving Camera

Contact: Nan Yang, Lukas von Stumberg, Niclas Zeller

Code: &lt;https://github.com/Brummi/MonoRec&gt;

Abstract

In this paper, we propose MonoRec, a semi-supervised monocular dense reconstruction architecture that predicts depth maps from a single moving camera in dynamic environments. MonoRec is based on a multi-view stereo setting which encodes the information of multiple consecutive images in…</summary>
    </entry>
    <entry>
        <title>Nanocopter</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/nanocopter?rev=1432070560&amp;do=diff"/>
        <published>2015-05-19T23:22:40+00:00</published>
        <updated>2015-05-19T23:22:40+00:00</updated>
        <id>https://cvg.cit.tum.de/research/nanocopter?rev=1432070560&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Nanocopter

Contact: Jakob Engel, Jörg Stückler

In this research project, we will explore visual navigation methods for nanocopters: extremely small quadrocopters with a flight weight of less than 50 g.
Possible future applications include, e.g., the exploration of a collapsed building after a natural disaster, surveillance and inspection of difficult-to-reach areas, or simply a personal flying camera that takes pictures from an entirely new perspective.
In contrast to regular quadrocopters with a flight weight …</content>
        <summary>Nanocopter

Contact: Jakob Engel, Jörg Stückler

In this research project, we will explore visual navigation methods for nanocopters: extremely small quadrocopters with a flight weight of less than 50 g.
Possible future applications include, e.g., the exploration of a collapsed building after a natural disaster, surveillance and inspection of difficult-to-reach areas, or simply a personal flying camera that takes pictures from an entirely new perspective.
In contrast to regular quadrocopters with a flight weight …</summary>
    </entry>
    <entry>
        <title>research:omni-lsdslam</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/omni-lsdslam?rev=1442866163&amp;do=diff"/>
        <published>2015-09-21T22:09:23+00:00</published>
        <updated>2015-09-21T22:09:23+00:00</updated>
        <id>https://cvg.cit.tum.de/research/omni-lsdslam?rev=1442866163&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
     <summary>no summary</summary>
    </entry>
    <entry>
        <title>Optical Flow Estimation</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/optical_flow_estimation?rev=1603800240&amp;do=diff"/>
        <published>2020-10-27T13:04:00+00:00</published>
        <updated>2020-10-27T13:04:00+00:00</updated>
        <id>https://cvg.cit.tum.de/research/optical_flow_estimation?rev=1603800240&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Optical Flow Estimation

Estimating the motion of every pixel in a sequence of images is a problem with many applications in computer vision, such as image segmentation, object classification, visual odometry, and driver assistance.

In general, optical flow describes a sparse or dense vector field in which a displacement vector is assigned to certain pixel positions, pointing to where each pixel can be found in another image.
In the context of scene flow estimation, which is performed on images …</content>
        <summary>Optical Flow Estimation

Estimating the motion of every pixel in a sequence of images is a problem with many applications in computer vision, such as image segmentation, object classification, visual odometry, and driver assistance.

In general, optical flow describes a sparse or dense vector field in which a displacement vector is assigned to certain pixel positions, pointing to where each pixel can be found in another image.
In the context of scene flow estimation, which is performed on images …</summary>
    </entry>
    <entry>
        <title>research:overview</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/overview?rev=1602764883&amp;do=diff"/>
        <published>2020-10-15T14:28:03+00:00</published>
        <updated>2020-10-15T14:28:03+00:00</updated>
        <id>https://cvg.cit.tum.de/research/overview?rev=1602764883&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
     <summary>no summary</summary>
    </entry>
    <entry>
        <title>Photometry-Based Reconstruction</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/ps_reconstruction?rev=1603800900&amp;do=diff"/>
        <published>2020-10-27T13:15:00+00:00</published>
        <updated>2020-10-27T13:15:00+00:00</updated>
        <id>https://cvg.cit.tum.de/research/ps_reconstruction?rev=1603800900&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Photometry-Based Reconstruction

We are concerned with the reconstruction of the 3D world based on the interaction between shape, illumination and material. RGB images provide observations from which we can infer the above, by solving an ill-posed inverse rendering problem. This enables reconstructions with high-frequency geometric information and meaningful albedo estimates, allowing for plausible rendering under novel lighting conditions. We are especially interested in Shape-from-Shading an…</content>
        <summary>Photometry-Based Reconstruction

We are concerned with the reconstruction of the 3D world based on the interaction between shape, illumination and material. RGB images provide observations from which we can infer the above, by solving an ill-posed inverse rendering problem. This enables reconstructions with high-frequency geometric information and meaningful albedo estimates, allowing for plausible rendering under novel lighting conditions. We are especially interested in Shape-from-Shading an…</summary>
    </entry>
    <entry>
        <title>Micro Aerial Vehicles (MAVs)</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/quadcopter?rev=1426336001&amp;do=diff"/>
        <published>2015-03-14T13:26:41+00:00</published>
        <updated>2015-03-14T13:26:41+00:00</updated>
        <id>https://cvg.cit.tum.de/research/quadcopter?rev=1426336001&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Micro Aerial Vehicles (MAVs)

Contact: Jörg Stückler, Jakob Engel, Christian Kerl, Vladyslav Usenko


In recent years, flying robots such as quadrocopters have gained increased interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors. Potential applications of such systems include use as a flying camera, for example to record sports movies or to inspect bridges from the air, as well a…</content>
        <summary>Micro Aerial Vehicles (MAVs)

Contact: Jörg Stückler, Jakob Engel, Christian Kerl, Vladyslav Usenko


In recent years, flying robots such as quadrocopters have gained increased interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors. Potential applications of such systems include use as a flying camera, for example to record sports movies or to inspect bridges from the air, as well a…</summary>
    </entry>
    <entry>
        <title>RGB-D Vision</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/rgb-d_sensors_kinect?rev=1495446956&amp;do=diff"/>
        <published>2017-05-22T11:55:56+00:00</published>
        <updated>2017-05-22T11:55:56+00:00</updated>
        <id>https://cvg.cit.tum.de/research/rgb-d_sensors_kinect?rev=1495446956&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>RGB-D Vision

Contact: Mariano Jaimez and Robert Maier

In the past years, novel camera systems such as the Microsoft Kinect or the Asus Xtion, which provide both color and dense depth images, have become readily available. There are great expectations that such systems will lead to a boost of new 3D perception-based applications in the fields of robotics and virtual and augmented reality.</content>
        <summary>RGB-D Vision

Contact: Mariano Jaimez and Robert Maier

In the past years, novel camera systems such as the Microsoft Kinect or the Asus Xtion, which provide both color and dense depth images, have become readily available. There are great expectations that such systems will lead to a boost of new 3D perception-based applications in the fields of robotics and virtual and augmented reality.</summary>
    </entry>
    <entry>
        <title>Robot Vision</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/robotvision?rev=1525173247&amp;do=diff"/>
        <published>2018-05-01T13:14:07+00:00</published>
        <updated>2018-05-01T13:14:07+00:00</updated>
        <id>https://cvg.cit.tum.de/research/robotvision?rev=1525173247&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Robot Vision

Contact: Jörg Stückler, Prof. Dr. Daniel Cremers

We develop computer vision methods for robot systems such as micro aerial vehicles and wheeled robots. In recent years, flying robots such as quadrocopters have gained increased interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors.</content>
        <summary>Robot Vision

Contact: Jörg Stückler, Prof. Dr. Daniel Cremers

We develop computer vision methods for robot systems such as micro aerial vehicles and wheeled robots. In recent years, flying robots such as quadrocopters have gained increased interest in robotics and computer vision research. To navigate safely, these robots need the ability to localize themselves autonomously using their onboard sensors.</summary>
    </entry>
    <entry>
        <title>Scene Flow Estimation</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/sceneflow?rev=1512567296&amp;do=diff"/>
        <published>2017-12-06T14:34:56+00:00</published>
        <updated>2017-12-06T14:34:56+00:00</updated>
        <id>https://cvg.cit.tum.de/research/sceneflow?rev=1512567296&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Scene Flow Estimation

Scene flow is the dense or semi-dense 3D motion field of a scene that moves completely or partially with respect to a camera. The potential applications of scene flow are numerous. In robotics, it can be used for autonomous navigation and/or manipulation in dynamic environments where the motion of the surrounding objects needs to be predicted. In addition, it could complement and improve state-of-the-art Visual Odometry and SLAM algorithms, which typically assume to work in rig…</content>
        <summary>Scene Flow Estimation

Scene flow is the dense or semi-dense 3D motion field of a scene that moves completely or partially with respect to a camera. The potential applications of scene flow are numerous. In robotics, it can be used for autonomous navigation and/or manipulation in dynamic environments where the motion of the surrounding objects needs to be predicted. In addition, it could complement and improve state-of-the-art Visual Odometry and SLAM algorithms, which typically assume to work in rig…</summary>
    </entry>
    <entry>
        <title>Semi-Dense Monocular Visual Odometry</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/semidense?rev=1404283862&amp;do=diff"/>
        <published>2014-07-02T08:51:02+00:00</published>
        <updated>2014-07-02T08:51:02+00:00</updated>
        <id>https://cvg.cit.tum.de/research/semidense?rev=1404283862&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
     <summary>no summary</summary>
    </entry>
    <entry>
        <title>Shape Analysis</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/shape_analysis?rev=1601971687&amp;do=diff"/>
        <published>2020-10-06T10:08:07+00:00</published>
        <updated>2020-10-06T10:08:07+00:00</updated>
        <id>https://cvg.cit.tum.de/research/shape_analysis?rev=1601971687&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Shape Analysis

Over the last years, the availability of devices for acquiring three-dimensional data, such as laser scanners, RGB-D sensors, or medical imaging devices, has increased dramatically. This brings about the need for efficient algorithms to analyze three-dimensional shapes.</content>
        <summary>Shape Analysis

Over the last years, the availability of devices for acquiring three-dimensional data, such as laser scanners, RGB-D sensors, or medical imaging devices, has increased dramatically. This brings about the need for efficient algorithms to analyze three-dimensional shapes.</summary>
    </entry>
    <entry>
        <title>Shape Priors</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/shape_priors?rev=1327057115&amp;do=diff"/>
        <published>2012-01-20T11:58:35+00:00</published>
        <updated>2012-01-20T11:58:35+00:00</updated>
        <id>https://cvg.cit.tum.de/research/shape_priors?rev=1327057115&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Shape Priors

In this project, we introduce into classical image segmentation methods prior knowledge about which shapes are likely to appear in a given image. In particular, we develop metrics on spaces of shapes, statistical models of shape variation, and dynamical models that allow us to impose a statistical model of the temporal evolution of shape. The respective segmentation processes can cope with large amounts of background clutter, noise, and partial or complete occlusions.</content>
        <summary>Shape Priors

In this project, we introduce into classical image segmentation methods prior knowledge about which shapes are likely to appear in a given image. In particular, we develop metrics on spaces of shapes, statistical models of shape variation, and dynamical models that allow us to impose a statistical model of the temporal evolution of shape. The respective segmentation processes can cope with large amounts of background clutter, noise, and partial or complete occlusions.</summary>
    </entry>
    <entry>
        <title>3D Reconstruction from a Single Image</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/single_view_3d_reconstruction?rev=1393159648&amp;do=diff"/>
        <published>2014-02-23T13:47:28+00:00</published>
        <updated>2014-02-23T13:47:28+00:00</updated>
        <id>https://cvg.cit.tum.de/research/single_view_3d_reconstruction?rev=1393159648&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>3D Reconstruction from a Single View

Contact: Martin Oswald, Eno Toeppe

The estimation of 3D geometry from a single image is a special case of image-based 3D reconstruction from several images,
but is considerably more difficult since depth cannot be estimated from pixel correspondences. Thus, further prior knowledge or user input is needed in order to recover or infer any depth information.</content>
        <summary>3D Reconstruction from a Single View

Contact: Martin Oswald, Eno Toeppe

The estimation of 3D geometry from a single image is a special case of image-based 3D reconstruction from several images,
but is considerably more difficult since depth cannot be estimated from pixel correspondences. Thus, further prior knowledge or user input is needed in order to recover or infer any depth information.</summary>
    </entry>
    <entry>
        <title>Stereo LSD-SLAM</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/stereo-lsdslam?rev=1438030558&amp;do=diff"/>
        <published>2015-07-27T22:55:58+00:00</published>
        <updated>2015-07-27T22:55:58+00:00</updated>
        <id>https://cvg.cit.tum.de/research/stereo-lsdslam?rev=1438030558&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
     <summary>no summary</summary>
    </entry>
    <entry>
        <title>Visual SLAM</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/vslam?rev=1602767272&amp;do=diff"/>
        <published>2020-10-15T15:07:52+00:00</published>
        <updated>2020-10-15T15:07:52+00:00</updated>
        <id>https://cvg.cit.tum.de/research/vslam?rev=1602767272&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Visual SLAM

In Simultaneous Localization and Mapping (SLAM), we track the pose of the sensor while creating a map of the environment. In particular, our group has a strong focus on direct methods, where, contrary to the classical pipeline of feature extraction and matching, we directly optimize intensity errors.</content>
        <summary>Visual SLAM

In Simultaneous Localization and Mapping (SLAM), we track the pose of the sensor while creating a map of the environment. In particular, our group has a strong focus on direct methods, where, contrary to the classical pipeline of feature extraction and matching, we directly optimize intensity errors.</summary>
    </entry>
    <entry>
        <title>Projects in Visual SLAM</title>
        <link rel="alternate" type="text/html" href="https://cvg.cit.tum.de/research/vslam_overview?rev=1603232585&amp;do=diff"/>
        <published>2020-10-21T00:23:05+00:00</published>
        <updated>2020-10-21T00:23:05+00:00</updated>
        <id>https://cvg.cit.tum.de/research/vslam_overview?rev=1603232585&amp;do=diff</id>
        <author>
            <name>Anonymous</name>
            <email>anonymous@undisclosed.example.com</email>
        </author>
        <category  term="research" />
        <content>Projects in Visual SLAM</content>
        <summary>Projects in Visual SLAM</summary>
    </entry>
</feed>
