Fusion of Distributed Perception in Collaborative Visual-SLAM Approaches on Mini-UAVs

Darius Burschka
Machine Vision and Perception Group
Department of Computer Science
Technische Universität München
in collaboration with DLR Oberpfaffenhofen

Lakeside Research Days, July 8, 2013
http://mvp.visual-navigation.com
Aspects of Collaborative Swarm Perception
(Figure: collaborative 3D reconstruction from two independently moving cameras carrying omnidirectional systems with a large field of view.)
•  What is the appropriate level of detail for model representation?
•  How to boost measurement sensitivity at large distances?
•  Global vs. local perception?
•  Increase the exploration speed by sharing the exploration of free space
•  Completeness of the instantaneous scene perception: true 3D instead of 2.5D
Level of Detail in Modelling
(Diagram: geometry for action planning; image-based representation, rich in detail; planning module.)
•  Joint geometry- and image-based environment modelling
Constraints on the Level of Detail
(work with E. Steinbach, TUM)
Hybrid model for world representation:
•  geometry for collision avoidance
•  appearance for view prediction
Reconstruction of a hybrid (appearance/geometry) model
W. Maier, E. Mair, D. Burschka, and E. Steinbach. ICRA 2009.
Visual Localization for Visual Environment Modeling
(Diagram: surprise trigger, localization result.)
W. Maier, E. Mair, D. Burschka, and E. Steinbach. ICRA 2009.
Modeling from Images (collaboration with M. Jagersand, UAlberta)
Cameras:
+  Inexpensive, easy to use
+  Textures automatically registered with SFM geometry
+  Easy to get multiple viewpoints
+  Redundant data
-  Global metric accuracy of SFM geometry not always good
-  Needs image features (texture) to compute 3D geometry
Processing (SFM or SFS): geometry from images
Combining the geometry- and image-based approaches
•  Can add realism from image-based rendering
•  Can fill in with SFM geometry where the laser is occluded or where finer detail is needed
•  Cumbersome registration of models: an SFM model may re-project well, but still not have high Euclidean accuracy
Local Feature Tracking Algorithms
•  Image-gradient based → Extended KLT (ExtKLT)
   •  patch-based implementation
   •  feature propagation
   •  corner binding
   +  sub-pixel accuracy
   -  algorithm scales poorly with the number of features
•  Tracking-by-matching → AGAST tracker (see the sketch after this list)
   •  AGAST corner detector
   •  efficient descriptor
   •  high frame rates (hundreds of features in a few milliseconds)
   +  algorithm scales well with the number of features
   -  pixel accuracy only
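To make the tracking-by-matching idea concrete, here is a minimal sketch built on OpenCV's AGAST detector. The ORB descriptor and the 0.8 ratio-test threshold are illustrative stand-ins, not the descriptor of the original AGAST tracker.

```python
# Tracking-by-matching sketch: AGAST corners + ORB descriptors + brute-force
# Hamming matching (descriptor choice and threshold are assumptions).
import cv2

detector = cv2.AgastFeatureDetector_create(threshold=30)
describer = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def track(prev_img, next_img):
    """Return matched (prev, next) pixel pairs between consecutive frames."""
    kp1 = detector.detect(prev_img)
    kp2 = detector.detect(next_img)
    kp1, des1 = describer.compute(prev_img, kp1)
    kp2, des2 = describer.compute(next_img, kp2)
    pairs = []
    for m, n in matcher.knnMatch(des1, des2, k=2):
        if m.distance < 0.8 * n.distance:   # Lowe-style ratio test
            pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
    return pairs
```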
Adaptive and Generic Accelerated Segment Test (AGAST)
•  Improvements compared to FAST (E. Rosten):
   •  full exploration of the configuration space by backward induction (no learning)
   •  binary decision tree (not ternary)
   •  computation of the actual probability and processing costs (no greedy algorithm)
   •  automatic scene adaptation by tree switching (at no cost)
   •  various corner pattern sizes (not just one)
No drawbacks!
Mair, Hager, Burschka, Suppa, Hirzinger. ECCV, Springer, 2010.
AGAST – Open Source Software
• AGAST, AGASTPP @ Sourceforge
•  > 5000 downloads
• AGAST in OpenCV
• AGAST used by BRISK
Feature Propagation
•  Two motion prediction concepts:
   •  2D feature propagation by motion derivatives
   •  IMU-based feature prediction
•  Combination of both (see the sketch below):
   •  translation propagation by feature velocity (2D)
   •  rotation propagation by gyroscopes
(Video: no feature propagation.)
Strobl, Mair, Bodenmüller, Kielhofer, Sepp, Suppa, Burschka, Hirzinger. IROS, IEEE/RSJ, 2009, Best Paper Finalist.
Mair, Strobl, Bodenmüller, Suppa, Burschka. KI, Springer Journal, 2010.
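A minimal sketch of the combined propagation, under assumed conventions (pinhole intrinsics K, gyro rates omega expressed in the camera frame, first-order rotation over dt): translation is extrapolated from the measured 2D feature velocity, and rotation is compensated by applying the gyro increment to the viewing ray.

```python
# Combined feature propagation (conventions above are assumptions,
# not the exact formulation of the cited papers).
import numpy as np

def skew(w):
    """Cross-product matrix [w]x."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def propagate(pt, vel, omega, dt, K):
    """Predict the pixel position of a feature after dt seconds."""
    # translation: linear extrapolation with the 2D feature velocity
    u = pt[0] + vel[0] * dt
    v = pt[1] + vel[1] * dt
    # rotation: rotate the viewing ray by the first-order gyro increment;
    # the sign assumes omega describes the camera rotation in its own frame
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray = (np.eye(3) - skew(omega) * dt) @ ray
    p = K @ ray
    return p[:2] / p[2]
```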
(Video: linear feature propagation; same slide content as above.)
(Video: linear + gyroscope-based feature propagation; same slide content as above.)
Z∞ – Algorithm at Work
Simple sensors, low processing power; obstacle avoidance.
Mair, Burschka. Mobile Robots Navigation, book chapter, In-Tech, 2010.
"Simple" Image Acquisition
60 images taken with a standard low-cost digital camera
Navigation results (Burschka, Mair. RobotVision 2008)
Estimation of the 6 Degrees of Freedom
(Plots: estimation of the 3 rotational angles; estimation of the translation vector.)
Swarms to boost resolution
Possible measures:
•  Increase the distance between the cameras (B)
•  Increase the focal length (f) → field of view
•  Decrease the pixel size ($p_x$) → resolution of the camera

Disparity as a function of depth:

$$d = \frac{f \cdot B}{p_x \cdot z} \;\; [\text{pixel}]$$

(Example baseline on the slide: 75 cm.)
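To make the trade-off concrete, a small numeric sketch evaluates the disparity and the depth error that one pixel of disparity noise induces, using $\Delta z \approx z^2 p_x / (f B)$. The baseline of 75 cm is taken from the slide; focal length and pixel size are assumed values.

```python
# Stereo depth resolution: d = f*B/(p_x*z) pixels of disparity, so one pixel
# of disparity noise maps to dz ~ z^2 * p_x / (f * B).
f_m  = 0.006     # focal length [m] (assumed)
px_m = 6e-6      # pixel size  [m] (assumed)
B_m  = 0.75      # baseline    [m] (from the slide)

for z in (5.0, 20.0, 100.0):                 # depths [m]
    d  = f_m * B_m / (px_m * z)              # disparity [pixel]
    dz = z**2 * px_m / (f_m * B_m)           # depth error per pixel [m]
    print(f"z = {z:6.1f} m   d = {d:7.1f} px   dz per pixel = {dz:7.2f} m")
```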
Collaborative Reconstruction with Self-Localization
(CVPR Workshop on Vision in Action: Efficient Strategies for Cognitive Agents in Complex Environments, Marseille, France, 2008)

(Fig. 2: Collaborative 3D reconstruction from two independently moving cameras. Fig. 3: Single-viewpoint property.)

We decided to use omnidirectional systems instead of fish-eye cameras, because their single-viewpoint property [2] is essential for our combined localization and reconstruction approach (Fig. 3). This property allows an easy recovery of the viewing angle of the virtual camera with focal point F directly from the image coordinates $(u_i, \nu_i)$. A standard perspective camera can be mapped onto our generic model of an omnidirectional sensor; its only limitation is the limited viewing angle. For a standard perspective camera with focal point F, we can estimate the direction vector $n_i$ of the incoming ray from the uni-focal image coordinates (focal length of the camera f = 1):

$$n_i = \frac{(u_i, \nu_i, 1)^T}{\left\|(u_i, \nu_i, 1)^T\right\|} \qquad (2)$$

We rely on the fact that each camera can see its partner and the area it wants to reconstruct at the same time. In our system, Camera 1 observes the position of the focal point F of Camera 2 along the vector T, and the point P to be reconstructed along the vector $V_1$, simultaneously (Fig. 2). The second camera (Camera 2) uses its own coordinate frame to reconstruct the same point P along the vector $V_2$. The point P observed by this camera has the modified coordinates [10]:

$$V_2 = R\,(V_1 + T) \qquad (3)$$

Since we cannot rely on any extrinsic calibration, we perform the calibration of the extrinsic parameters directly from the current observation: we need to find the transformation parameters (R, T) in (3) defining the transformation between the coordinate frames of the two cameras. Each camera defines its own coordinate frame.

2.1 3D Reconstruction from Motion Stereo

In our system, the cameras undergo an arbitrary motion (R, T) which results in two independent observations $(n_1, n_2)$ of a point P. Equation (3) can be written using (2) as

$$\lambda_2 n_2 = R\,(\lambda_1 n_1 + T). \qquad (4)$$

We need to find the radial distances $(\lambda_1, \lambda_2)$ along the incoming rays to estimate the 3D coordinates of the point. We can find them by rewriting (4) as

$$(-R\,n_1,\; n_2)\begin{pmatrix}\lambda_1\\ \lambda_2\end{pmatrix} = R\,T
\;\;\Longrightarrow\;\;
\begin{pmatrix}\lambda_1\\ \lambda_2\end{pmatrix} = (-R\,n_1,\; n_2)^{-*}\, R\,T = D^{-*}\, R\,T \qquad (5)$$

We use in (5) the pseudo-inverse matrix $D^{-*}$ to solve for the two unknown radial distances $(\lambda_1, \lambda_2)$. A pseudo-inverse of D can be calculated according to

$$D^{-*} = (D^T D)^{-1}\, D^T.$$

The pseudo-inverse operation finds a least-squares approximation satisfying the overdetermined set of three equations with two unknowns $(\lambda_1, \lambda_2)$ in (5). Due to calibration and detection errors, the two lines $V_1$ and $V_2$ in Fig. 2 do not necessarily intersect; equation (5) calculates the position of the point along each line closest to the other line.

The original approach in [3] relied on an essential-matrix method to initialize the 3D structure in the world; our system gives a more robust initialization method minimizing the image error directly. The limited space of this paper does not allow a detailed description of this part of the system. The recursive approach from [3] is used to maintain the radial distance $\lambda_x$.

3 Results

Our flying systems use omnidirectional mirrors like the one depicted in Fig. 6 (flying agent equipped with an omnidirectional sensor pointing upwards). We tested the system on several indoor and outdoor sequences with two cameras observing the world through different-sized planar mirrors (Fig. 4), using a Linux laptop with a 1.2 GHz Pentium Centrino processor and 1 GB RAM, operating two FireWire cameras at standard PAL resolution (768x576).

3.1 Accuracy of the Estimation of Extrinsic Parameters

We used the system to estimate the extrinsic motion parameters and achieved results comparable with the extrinsic camera calibration results. We verified the parameters by applying them to the 3D reconstruction process in (5) and achieved measurement accuracy below the resolution of our test system. This reconstruction was in the close range of the system, which explains the high accuracy.
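The motion-stereo triangulation of equation (5) reduces to a 3x2 least-squares problem. A minimal NumPy sketch (function name assumed):

```python
# Motion-stereo triangulation following Eq. (5): stack D = (-R n1, n2) and
# solve D (l1, l2)^T = R T in the least-squares (pseudo-inverse) sense.
import numpy as np

def triangulate(n1, n2, R, T):
    """Radial distances (l1, l2) and the point expressed in camera 2."""
    D = np.column_stack((-R @ n1, n2))            # 3x2 system matrix
    lam, _, _, _ = np.linalg.lstsq(D, R @ T, rcond=None)
    l1, l2 = lam
    return l1, l2, l2 * n2                        # P in the camera-2 frame
```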
Condition number for an angular configuration

The relative error in the solution caused by perturbations of the parameters can be estimated from the condition number of the pseudo-inverse matrix $D^{-*}$: the ratio between the largest and the smallest singular value of this matrix [4]. The condition number estimates the sensitivity of the solution of a linear algebraic system to variations of the parameters in the matrix $D^{-*}$ and in the measurement vector T.

Consider the equation system with perturbations in the matrix $D^{-*}$ and the vector T:

$$(D^{-*} + \delta D^{-*})\, T_b = \lambda + \delta\lambda \qquad (8)$$

The relative error in the solution caused by perturbations of the parameters can be estimated by the following inequality using the condition number $\kappa$ calculated for $D^{-*}$ (see [7]):

$$\frac{\|T - T_b\|}{\|T\|} \;\le\; \kappa \left( \frac{\|\delta D^{-*}\|}{\|D^{-*}\|} + \frac{\|\delta\lambda\|}{\|\lambda\|} \right) + O(\epsilon^2) \qquad (9)$$

Therefore, the relative error in the solution $\lambda$ can be as large as the condition number times the relative error in $D^{-*}$ and T. The condition number together with the singular values of the matrix $D^{-*}$ describes the sensitivity of the system to changes in the input parameters: small condition numbers describe systems with small sensitivity to errors, while large condition numbers suggest near-parallel directions of the measured direction vectors $(n_1, n_2)$. A quantitative evaluation of this result can be found in Section 3.2. We use the condition number of the matrix $D^{-*}$ as a metric to evaluate the quality of the resulting configuration of the two cameras: for a given region of interest, we can move the cameras to a configuration with a small condition number, ensuring high accuracy of the reconstructed coordinates. If we know the extrinsic motion parameters (R, T), e.g. from a calibration process, then equation (5) allows a direct estimate of the 3D position of an observed point.

The collaborative exploration approach allows us to move the cameras to a desired position to establish the best sensitivity and the highest robustness to errors in the parameter estimation. As mentioned in Section 2.1, the sensitivity of the reconstruction result (5) to parameter errors depends on the relative orientation of the reconstructing cameras (Fig. 8).

(Fig. 8, left: the condition number of the 3D reconstruction depends on the relative orientation of the corresponding direction vectors $V_1$, $V_2$ in Fig. 2 observing the point; right: the minimal condition number corresponds to a difference in the viewing angles of 90°. Axes: orientation Cam 1 [deg], orientation Cam 2 [deg], condition number.)

Best observation at a 90° viewing angle.
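A small numeric sketch of this configuration metric, with an assumed planar test geometry: the condition number of $D = (-R\,n_1, n_2)$ drops toward 1 as the viewing directions approach a 90° separation.

```python
# Configuration quality via the condition number of D = (-R n1, n2):
# kappa near 1 for near-orthogonal viewing rays, kappa blowing up for
# near-parallel rays. The planar geometry below is an assumption.
import numpy as np

def ray(theta):
    """Unit viewing ray in the x-z plane at angle theta."""
    return np.array([np.sin(theta), 0.0, np.cos(theta)])

R = np.eye(3)           # identical camera orientations, for the test only
for deg in (5, 30, 60, 90):
    D = np.column_stack((-R @ ray(0.0), ray(np.radians(deg))))
    print(f"viewing-angle difference {deg:3d} deg: cond(D) = {np.linalg.cond(D):8.2f}")
```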
Asynchronous stereo for dynamic scenes

(Figure, with an NTP time-synchronization label: $C_0$ and $C_1$ are the camera centers of the stereo pair; $P_0, P_1, P_2$ are the 3D poses of the point at times $t_0, t_1, t_2$, which correspond to the frame-acquisition timestamps of camera $C_0$. $P^*$ is the 3D pose of the point at time $t^*$, which corresponds to the frame-acquisition timestamp of camera $C_1$. The vectors $v_0, v_1, v_2$ are unit vectors pointing from the camera center $C_0$ to the corresponding 3D points; $v^*$ is the unit vector pointing from the camera center $C_1$ to the pose $P^*$.)

Since the angle between the velocity direction vector $v_3$ and $v_0$ is $\psi$, between $v_3$ and $v_2$ is $\eta$, and between $v_3$ and $\hat{n}$ is $\frac{\pi}{2}$, we can compute $v_3$ by solving the following equation:

$$\begin{pmatrix} v_{0x} & v_{0y} & v_{0z} \\ v_{2x} & v_{2y} & v_{2z} \\ n_x & n_y & n_z \end{pmatrix} \begin{pmatrix} v_{3x} \\ v_{3y} \\ v_{3z} \end{pmatrix} = \begin{pmatrix} \cos\psi \\ \cos\eta \\ 0 \end{pmatrix}$$

3.2 Path Reconstruction

In the second stage of the proposed method we compute the 3D pose $P_0$. For reasons of simplicity we represent the poses ($P_i$, $i = 0..2$, and $P^*$) depicted in the figure as

$$P_i = \begin{pmatrix} a_i z_i \\ b_i z_i \\ z_i \end{pmatrix}, \qquad P^* = \begin{pmatrix} m z_0^* + t_x \\ s z_0^* + t_y \\ q z_0^* + t_z \end{pmatrix}$$

where $t_x$, $t_y$ and $t_z$ are the x, y and z components of the translation.

Submitted to VISAPP 2014.
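A minimal sketch of the angular constraint above (function name assumed): the three dot-product conditions form a 3x3 linear system for the velocity direction $v_3$.

```python
# Velocity-direction constraint of the asynchronous-stereo method:
# v3 must satisfy v0.v3 = cos(psi), v2.v3 = cos(eta), n.v3 = 0.
import numpy as np

def velocity_direction(v0, v2, n, psi, eta):
    A = np.vstack((v0, v2, n))                   # 3x3 constraint matrix
    b = np.array([np.cos(psi), np.cos(eta), 0.0])
    v3 = np.linalg.solve(A, b)
    return v3 / np.linalg.norm(v3)               # re-normalize to a unit vector
```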
Enhancement of Stereo Data
Multimodal sensor fusion (IROS 2013)
Towards Autonomous MAV Exploration in Cluttered Indoor and Outdoor Environments (Korbinian Schmid, DLR, 28/06/2013)
INS filter design [Schmid et al. IROS 2012]
•  Synchronization of real-time and non-real-time modules by sensor hardware trigger
•  Direct system state: high-rate calculation by a Strap-Down Algorithm (SDA)
•  Indirect system state: estimation by an indirect Extended Kalman Filter (EKF)
•  Measurements: "pseudo gravity", key-frame-based stereo odometry
•  Measurement delay compensation by filter state augmentation (see the sketch below)
Key-frame-based stereo odometry
•  Delta measurements referencing key frames (one-function sketch below)
•  Locally drift-free system state estimation
Exploration with an MAV
•  70 m trajectory
•  Ground truth by tachymeter
•  5 s forced vision dropout with translational motion
•  1 s forced vision dropout with rotational motion
•  Estimation error < 1.2 m
•  Odometry error < 25.9 m
•  Results comparable to runs without vision dropouts
Mixed indoor/outdoor exploration (DLR, K. Schmid)
•  Autonomous indoor/outdoor flight of 60 m
•  Mapping resolution: 0.1 m
•  Leaving through a window
•  Returning through a door
" Multicopters for SAR and disaster management scenarios
"  Swarms for better resolution
" Modelling complexity needs to be adapted to the task
" Synchronisation necessary
Towards Autonomous MAV Exploration in Cluttered Indoor and Outdoor Environments > Korbinian Schmid > 28/06/2013
Slide
Conclusion
31
… last RSS 2013 workshop on MAVs
Research of the MVP Group (http://mvp.visual-navigation.com)
The Machine Vision and Perception Group @TUM works on aspects of visual perception and control in medical, mobile, and HCI applications:
•  Visual navigation
•  Biologically motivated perception
•  Perception for manipulation
•  Visual action analysis
•  Photogrammetric monocular reconstruction
•  Rigid and deformable registration
•  Exploration of physical object properties
•  Sensor substitution
•  Multimodal sensor fusion
•  Development of new optical sensors

Contenu connexe

Tendances

6 superpixels using morphology for rock image
6 superpixels using morphology for rock image6 superpixels using morphology for rock image
6 superpixels using morphology for rock imageAlok Padole
 
Global illumination for PIXAR movie production
Global illumination for PIXAR movie productionGlobal illumination for PIXAR movie production
Global illumination for PIXAR movie productionJaehyun Jang
 
Image Registration Based on CCRE for Remote Sensing Images
Image Registration Based on CCRE for Remote Sensing ImagesImage Registration Based on CCRE for Remote Sensing Images
Image Registration Based on CCRE for Remote Sensing ImagesSujoy Sarathi
 
Final thesis presentation
Final thesis presentationFinal thesis presentation
Final thesis presentationPawan Singh
 
Satellite image processing
Satellite image processingSatellite image processing
Satellite image processingalok ray
 
Geoscience satellite image processing
Geoscience satellite image processingGeoscience satellite image processing
Geoscience satellite image processinggaurav jain
 
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...Ping Hsu
 
How to achieve Energy Efficiency in GPS
How to achieve Energy Efficiency in GPSHow to achieve Energy Efficiency in GPS
How to achieve Energy Efficiency in GPSAditi Gadgul
 
IRJET- Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...
IRJET-  	  Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...IRJET-  	  Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...
IRJET- Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...IRJET Journal
 
IGARSS presentation WKLEE.pptx
IGARSS presentation WKLEE.pptxIGARSS presentation WKLEE.pptx
IGARSS presentation WKLEE.pptxgrssieee
 
mihara_iccp16_presentation
mihara_iccp16_presentationmihara_iccp16_presentation
mihara_iccp16_presentationHajime Mihara
 
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation IJECEIAES
 
Processing of satellite_image_using_digi
Processing of satellite_image_using_digiProcessing of satellite_image_using_digi
Processing of satellite_image_using_digiShanmuga Sundaram
 
satellite image processing
satellite image processingsatellite image processing
satellite image processingavhadlaxmikant
 

Tendances (18)

6 superpixels using morphology for rock image
6 superpixels using morphology for rock image6 superpixels using morphology for rock image
6 superpixels using morphology for rock image
 
Global illumination for PIXAR movie production
Global illumination for PIXAR movie productionGlobal illumination for PIXAR movie production
Global illumination for PIXAR movie production
 
Image Registration Based on CCRE for Remote Sensing Images
Image Registration Based on CCRE for Remote Sensing ImagesImage Registration Based on CCRE for Remote Sensing Images
Image Registration Based on CCRE for Remote Sensing Images
 
Final thesis presentation
Final thesis presentationFinal thesis presentation
Final thesis presentation
 
Geota G
Geota GGeota G
Geota G
 
Geotagb
GeotagbGeotagb
Geotagb
 
Satellite image processing
Satellite image processingSatellite image processing
Satellite image processing
 
Geoscience satellite image processing
Geoscience satellite image processingGeoscience satellite image processing
Geoscience satellite image processing
 
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...
Fast and High-Precision 3D Tracking and Position Measurement with MEMS Microm...
 
How to achieve Energy Efficiency in GPS
How to achieve Energy Efficiency in GPSHow to achieve Energy Efficiency in GPS
How to achieve Energy Efficiency in GPS
 
20210226 esa-science-coffee-v2.0
20210226 esa-science-coffee-v2.020210226 esa-science-coffee-v2.0
20210226 esa-science-coffee-v2.0
 
IRJET- Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...
IRJET-  	  Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...IRJET-  	  Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...
IRJET- Implementation of IoT based Dual Axis Photo-Voltaic Solar Tracker ...
 
IGARSS presentation WKLEE.pptx
IGARSS presentation WKLEE.pptxIGARSS presentation WKLEE.pptx
IGARSS presentation WKLEE.pptx
 
mihara_iccp16_presentation
mihara_iccp16_presentationmihara_iccp16_presentation
mihara_iccp16_presentation
 
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
Inertial Navigation for Quadrotor Using Kalman Filter with Drift Compensation
 
Processing of satellite_image_using_digi
Processing of satellite_image_using_digiProcessing of satellite_image_using_digi
Processing of satellite_image_using_digi
 
DSM Extraction from Pleiades Images using Micmac
DSM Extraction from Pleiades Images using MicmacDSM Extraction from Pleiades Images using Micmac
DSM Extraction from Pleiades Images using Micmac
 
satellite image processing
satellite image processingsatellite image processing
satellite image processing
 

En vedette

Mesh v Second Life - úvodní přednáška
Mesh v Second Life - úvodní přednáškaMesh v Second Life - úvodní přednáška
Mesh v Second Life - úvodní přednáškaZuzka
 
Sample Ppt for Moodle Training
Sample Ppt for Moodle TrainingSample Ppt for Moodle Training
Sample Ppt for Moodle Trainingmandadnama
 
European Robotics Forum 2014 Talk on Closing the Perception-Action Gap
European Robotics Forum 2014 Talk on Closing the Perception-Action GapEuropean Robotics Forum 2014 Talk on Closing the Perception-Action Gap
European Robotics Forum 2014 Talk on Closing the Perception-Action GapDariolakis
 
KARADERE SEVDALILARINA
KARADERE SEVDALILARINAKARADERE SEVDALILARINA
KARADERE SEVDALILARINAguestc5bda299
 
Infiltration on students
Infiltration on studentsInfiltration on students
Infiltration on studentsDennis Cana
 
External Cards and Slots
External Cards and SlotsExternal Cards and Slots
External Cards and SlotsArif Samoon
 
Infiltrating the youth sector
Infiltrating the youth sectorInfiltrating the youth sector
Infiltrating the youth sectorDennis Cana
 
The youth and the students
The youth and the studentsThe youth and the students
The youth and the studentsDennis Cana
 
Bosch Expert Days
Bosch Expert DaysBosch Expert Days
Bosch Expert DaysDariolakis
 
Mechanism of resistance to target therapy
Mechanism of resistance to target therapyMechanism of resistance to target therapy
Mechanism of resistance to target therapyVito Lorusso
 
SucessfulInsiderThreat
SucessfulInsiderThreatSucessfulInsiderThreat
SucessfulInsiderThreatHammerNJ
 
Leadership theories
Leadership theoriesLeadership theories
Leadership theoriesDennis Cana
 
Toplu Ulasimda Fiyatlandirma ve Etkileri
Toplu Ulasimda Fiyatlandirma ve EtkileriToplu Ulasimda Fiyatlandirma ve Etkileri
Toplu Ulasimda Fiyatlandirma ve Etkilerieroncu
 
Woman emprende noia
Woman emprende noiaWoman emprende noia
Woman emprende noiachefalorenzo
 

En vedette (19)

Mesh v Second Life - úvodní přednáška
Mesh v Second Life - úvodní přednáškaMesh v Second Life - úvodní přednáška
Mesh v Second Life - úvodní přednáška
 
Sample Ppt for Moodle Training
Sample Ppt for Moodle TrainingSample Ppt for Moodle Training
Sample Ppt for Moodle Training
 
European Robotics Forum 2014 Talk on Closing the Perception-Action Gap
European Robotics Forum 2014 Talk on Closing the Perception-Action GapEuropean Robotics Forum 2014 Talk on Closing the Perception-Action Gap
European Robotics Forum 2014 Talk on Closing the Perception-Action Gap
 
3rd Part
3rd Part3rd Part
3rd Part
 
KARADERE SEVDALILARINA
KARADERE SEVDALILARINAKARADERE SEVDALILARINA
KARADERE SEVDALILARINA
 
2nd Part
2nd Part2nd Part
2nd Part
 
Mvp
MvpMvp
Mvp
 
Infiltration on students
Infiltration on studentsInfiltration on students
Infiltration on students
 
Ephebiphobia
EphebiphobiaEphebiphobia
Ephebiphobia
 
India Ppw
India PpwIndia Ppw
India Ppw
 
External Cards and Slots
External Cards and SlotsExternal Cards and Slots
External Cards and Slots
 
Infiltrating the youth sector
Infiltrating the youth sectorInfiltrating the youth sector
Infiltrating the youth sector
 
The youth and the students
The youth and the studentsThe youth and the students
The youth and the students
 
Bosch Expert Days
Bosch Expert DaysBosch Expert Days
Bosch Expert Days
 
Mechanism of resistance to target therapy
Mechanism of resistance to target therapyMechanism of resistance to target therapy
Mechanism of resistance to target therapy
 
SucessfulInsiderThreat
SucessfulInsiderThreatSucessfulInsiderThreat
SucessfulInsiderThreat
 
Leadership theories
Leadership theoriesLeadership theories
Leadership theories
 
Toplu Ulasimda Fiyatlandirma ve Etkileri
Toplu Ulasimda Fiyatlandirma ve EtkileriToplu Ulasimda Fiyatlandirma ve Etkileri
Toplu Ulasimda Fiyatlandirma ve Etkileri
 
Woman emprende noia
Woman emprende noiaWoman emprende noia
Woman emprende noia
 

Similaire à Fusion of Multi-MAV Data

Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013
Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013
Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013Dariolakis
 
IRJET- Exploring Image Super Resolution Techniques
IRJET- Exploring Image Super Resolution TechniquesIRJET- Exploring Image Super Resolution Techniques
IRJET- Exploring Image Super Resolution TechniquesIRJET Journal
 
A Study of Motion Detection Method for Smart Home System
A Study of Motion Detection Method for Smart Home SystemA Study of Motion Detection Method for Smart Home System
A Study of Motion Detection Method for Smart Home SystemAM Publications
 
Applying edge density based region growing with frame difference for detectin...
Applying edge density based region growing with frame difference for detectin...Applying edge density based region growing with frame difference for detectin...
Applying edge density based region growing with frame difference for detectin...eSAT Publishing House
 
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration TechniqueEnhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration TechniqueIRJET Journal
 
Moving object detection using background subtraction algorithm using simulink
Moving object detection using background subtraction algorithm using simulinkMoving object detection using background subtraction algorithm using simulink
Moving object detection using background subtraction algorithm using simulinkeSAT Publishing House
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domaineSAT Journals
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domaineSAT Publishing House
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domaineSAT Publishing House
 
GPGPU-Assisted Subpixel Tracking Method for Fiducial Markers
GPGPU-Assisted Subpixel Tracking Method for Fiducial MarkersGPGPU-Assisted Subpixel Tracking Method for Fiducial Markers
GPGPU-Assisted Subpixel Tracking Method for Fiducial MarkersNaoki Shibata
 
Location Tracking and Smooth Path Providing System
Location Tracking and Smooth Path Providing SystemLocation Tracking and Smooth Path Providing System
Location Tracking and Smooth Path Providing SystemIRJET Journal
 
Ear Biometrics shritosh kumar
Ear Biometrics shritosh kumarEar Biometrics shritosh kumar
Ear Biometrics shritosh kumarshritosh kumar
 
Object based Classification of Satellite Images by Combining the HDP, IBP and...
Object based Classification of Satellite Images by Combining the HDP, IBP and...Object based Classification of Satellite Images by Combining the HDP, IBP and...
Object based Classification of Satellite Images by Combining the HDP, IBP and...IRJET Journal
 
IRJET- Criminal Recognization in CCTV Surveillance Video
IRJET-  	  Criminal Recognization in CCTV Surveillance VideoIRJET-  	  Criminal Recognization in CCTV Surveillance Video
IRJET- Criminal Recognization in CCTV Surveillance VideoIRJET Journal
 
When to Use a Measuring Microscope: And How to Further Enhance its Capabilities
When to Use a Measuring Microscope: And How to Further Enhance its CapabilitiesWhen to Use a Measuring Microscope: And How to Further Enhance its Capabilities
When to Use a Measuring Microscope: And How to Further Enhance its CapabilitiesOlympus IMS
 
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...IRJET Journal
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IVYu Huang
 
A new approach of edge detection in sar images using region based active cont...
A new approach of edge detection in sar images using region based active cont...A new approach of edge detection in sar images using region based active cont...
A new approach of edge detection in sar images using region based active cont...eSAT Journals
 

Similaire à Fusion of Multi-MAV Data (20)

Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013
Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013
Semantic Perception for Telemanipulation at SPME Workshop at ICRA 2013
 
IRJET- Exploring Image Super Resolution Techniques
IRJET- Exploring Image Super Resolution TechniquesIRJET- Exploring Image Super Resolution Techniques
IRJET- Exploring Image Super Resolution Techniques
 
A Study of Motion Detection Method for Smart Home System
A Study of Motion Detection Method for Smart Home SystemA Study of Motion Detection Method for Smart Home System
A Study of Motion Detection Method for Smart Home System
 
Applying edge density based region growing with frame difference for detectin...
Applying edge density based region growing with frame difference for detectin...Applying edge density based region growing with frame difference for detectin...
Applying edge density based region growing with frame difference for detectin...
 
X36141145
X36141145X36141145
X36141145
 
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration TechniqueEnhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
Enhanced Tracking Aerial Image by Applying Fusion & Image Registration Technique
 
Moving object detection using background subtraction algorithm using simulink
Moving object detection using background subtraction algorithm using simulinkMoving object detection using background subtraction algorithm using simulink
Moving object detection using background subtraction algorithm using simulink
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain
 
3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain3 d mrf based video tracking in the compressed domain
3 d mrf based video tracking in the compressed domain
 
GPGPU-Assisted Subpixel Tracking Method for Fiducial Markers
GPGPU-Assisted Subpixel Tracking Method for Fiducial MarkersGPGPU-Assisted Subpixel Tracking Method for Fiducial Markers
GPGPU-Assisted Subpixel Tracking Method for Fiducial Markers
 
Location Tracking and Smooth Path Providing System
Location Tracking and Smooth Path Providing SystemLocation Tracking and Smooth Path Providing System
Location Tracking and Smooth Path Providing System
 
Ear Biometrics shritosh kumar
Ear Biometrics shritosh kumarEar Biometrics shritosh kumar
Ear Biometrics shritosh kumar
 
Object based Classification of Satellite Images by Combining the HDP, IBP and...
Object based Classification of Satellite Images by Combining the HDP, IBP and...Object based Classification of Satellite Images by Combining the HDP, IBP and...
Object based Classification of Satellite Images by Combining the HDP, IBP and...
 
project report final
project report finalproject report final
project report final
 
IRJET- Criminal Recognization in CCTV Surveillance Video
IRJET-  	  Criminal Recognization in CCTV Surveillance VideoIRJET-  	  Criminal Recognization in CCTV Surveillance Video
IRJET- Criminal Recognization in CCTV Surveillance Video
 
When to Use a Measuring Microscope: And How to Further Enhance its Capabilities
When to Use a Measuring Microscope: And How to Further Enhance its CapabilitiesWhen to Use a Measuring Microscope: And How to Further Enhance its Capabilities
When to Use a Measuring Microscope: And How to Further Enhance its Capabilities
 
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...
Performance of Weighted Least Square Filter Based Pan Sharpening using Fuzzy ...
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IV
 
A new approach of edge detection in sar images using region based active cont...
A new approach of edge detection in sar images using region based active cont...A new approach of edge detection in sar images using region based active cont...
A new approach of edge detection in sar images using region based active cont...
 

Dernier

My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationRidwan Fadjar
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embeddingZilliz
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brandgvaughan
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationSafe Software
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Patryk Bandurski
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationSlibray Presentation
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 3652toLead Limited
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesZilliz
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...Fwdays
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 

Dernier (20)

My Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 PresentationMy Hashitalk Indonesia April 2024 Presentation
My Hashitalk Indonesia April 2024 Presentation
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Training state-of-the-art general text embedding
Training state-of-the-art general text embeddingTraining state-of-the-art general text embedding
Training state-of-the-art general text embedding
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
WordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your BrandWordPress Websites for Engineers: Elevate Your Brand
WordPress Websites for Engineers: Elevate Your Brand
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry InnovationBeyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
Beyond Boundaries: Leveraging No-Code Solutions for Industry Innovation
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
Integration and Automation in Practice: CI/CD in Mule Integration and Automat...
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Connect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck PresentationConnect Wave/ connectwave Pitch Deck Presentation
Connect Wave/ connectwave Pitch Deck Presentation
 
Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365Ensuring Technical Readiness For Copilot in Microsoft 365
Ensuring Technical Readiness For Copilot in Microsoft 365
 
Vector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector DatabasesVector Databases 101 - An introduction to the world of Vector Databases
Vector Databases 101 - An introduction to the world of Vector Databases
 
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks..."LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
"LLMs for Python Engineers: Advanced Data Analysis and Semantic Kernel",Oleks...
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptxE-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
E-Vehicle_Hacking_by_Parul Sharma_null_owasp.pptx
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 

Fusion of Multi-MAV Data

  • 1. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Darius Burschka Machine Vision and Perception Group Department of Computer Science Technische Universität München in collaboration with DLR Oberpfaffenhofen Fusion of Distributed Perception in Collaborative Visual-SLAM Approaches on Mini-UAVs
  • 2. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Aspects of Collaborative Swarm Perception 4 Darius Burschka Fig. 2. Collaborative 3D reconstruction from 2 independently moving cameras. directional system with a large field of view. We decided to use omnidirectional systems in- stead of fish-lens cameras, because their single view- point property [2] is essential for our combined localization and reconstruction approach (Fig. 3). ep2008 •  What is the appropriate level of detail for model representation? •  How to boost measurement sensitivity at large distances? •  Global vs. local perception? •  Increase the exploration speed by sharing the exploration of free space •  Completeness of the instantaneous scene perception – true 3D instead of 2.5D
  • 3. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Level of Detail in Modelling Geometry for action planning Image-based rich on details Planning module §  Joint geometry and image based environment modelling
  • 4. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Constraints on the Level of Detail (work with E.Steinbach, TUM) Hybrid Model for world representation •  geometry for collision avoidance •  appearance for view prediction
  • 5. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Reconstruction of a hybrid (appearance/geometry) model W. Maier, E. Mair, D. Burschka, and E. Steinbach. ICRA 2009.
  • 6. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 20136 Visual Localization for Visual Environment Modeling surprise trigger localization result W. Maier, E. Mair, D. Burschka, and E. Steinbach. ICRA 2009.
  • 7. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Modeling from Images (collaboration with M. Jagersand UAlberta) Cameras +Inexpensive, easy to use +Textures automatically registered with SFM geometry +Easy to get multiple viewpoints +Redundant data -Global metric accuracy of SFM geometry not always good -Needs image features (texture) to compute 3D geometry Processing (SFM or SFS) Geometry from images
  • 8. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Combining the geometry and image-based approaches Ø  Can add realism from image-based rendering Ø  Can fill-in with SFM geometry where laser is occluded, or more fine detail is needed. Ø  Cumbersome registration of models: SFM model may re-project well, but still not have a high Euclidean accuracy
  • 9. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Local Feature Tracking Algorithms • Image-gradient based à Extended KLT (ExtKLT) •  patch-based implementation •  feature propagation •  corner-binding +  sub-pixel accuracy •  algorithm scales bad with number of features • Tracking-By-Matching à AGAST tracker •  AGAST corner detector •  efficient descriptor •  high frame-rates (hundrets of features in a few milliseconds) +  algorithm scales well with number of features •  pixel-accuracy 8
  • 10. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Adaptive and Generic Accelerated Segment Test (AGAST) 9 • Improvements compared to FAST: • full exploration of the configuration space by backward-induction (no learning) • binary decision tree (not ternary) • computation of the actual probability and processing costs (no greedy algorithm) • automatic scene adaption by tree switching (at no cost) • various corner pattern sizes (not just one) No drawbacks! Mair, Hager, Burschka, Suppa, Hirzinger ECCV, Springer, 2010 E. Rosten
  • 11. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 AGAST – Open Source Software • AGAST, AGASTPP @ Sourceforge •  > 5000 downloads • AGAST in OpenCV • AGAST used by BRISK
  • 12. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Feature Propagation Ø Two motion prediction concepts •  2D feature propagation by motion derivatives •  IMU-based feature prediction Ø Combination of both: •  translation propagation by feature velocity (2D) •  rotation propagation by gyroscopes no feature propagation Strobl, Mair, Bodenmüller, Kielhofer, Sepp, Suppa, Burschka, Hirzinger IROS, IEEE/RSJ, 2009, Best Paper Finalist Mair, Strobl, Bodenmüller, Suppa, Burschka KI, Springer Journal, 2010
  • 13. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Feature Propagation Ø Two motion prediction concepts •  2D feature propagation by motion derivatives •  IMU-based feature prediction Ø Combination of both: •  translation propagation by feature velocity (2D) •  rotation propagation by gyroscopes linear feature propagation Strobl, Mair, Bodenmüller, Kielhofer, Sepp, Suppa, Burschka, Hirzinger IROS, IEEE/RSJ, 2009, Best Paper Finalist Mair, Strobl, Bodenmüller, Suppa, Burschka KI, Springer Journal, 2010
  • 14. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Feature Propagation Ø Two motion prediction concepts •  2D feature propagation by motion derivatives •  IMU-based feature prediction Ø Combination of both: •  translation propagation by feature velocity (2D) •  rotation propagation by gyroscopes Strobl, Mair, Bodenmüller, Kielhofer, Sepp, Suppa, Burschka, Hirzinger IROS, IEEE/RSJ, 2009, Best Paper Finalist Mair, Strobl, Bodenmüller, Suppa, Burschka KI, Springer Journal, 2010linear + gyros based prop.
  • 15. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013 Z∞ – Algorithm at Work 14 Mair, Burschka Mobile Robots Navigation, book chapter, In-Tech, 2010 Simple sensors, low processing power Obstacle avoidance
  • 16. DariusBurschka–MVPGroupatTUM http://mvp.visual-navigation.com Lakeside Research Days, July 8, 2013http://www6.in.tum.de/burschka/ „Simple“ Image Acquisition 60 images taken with a standard low cost digital camera
• 18. Navigation results
Burschka, Mair, RobotVision 2008
• 19. Estimation of the 6 Degrees of Freedom
Estimation of 3 rotational angles; estimation of a translation vector
• 20. Swarms to boost resolution
Possible measures:
•  Increase of the distance between cameras (B)
•  Increase of the focal length (f) → field of view
•  Decrease of the pixel size (pₓ) → resolution of the camera
Disparity: $$d = \frac{B \cdot f}{z \cdot p_x}\;[\text{pixel}]$$
(Figure annotation: 75 cm)
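The sensitivity argument behind these measures can be made explicit (this step is our addition, derived from the disparity equation on the slide): inverting and differentiating $d = Bf/(z\,p_x)$ with respect to $z$ gives the depth error per disparity error,

$$\Delta z \approx \left|\frac{\partial z}{\partial d}\right| \Delta d = \frac{z^{2}\, p_x}{B\, f}\,\Delta d,$$

so the error grows quadratically with distance and shrinks with a larger baseline B, a longer focal length f, or a smaller pixel size pₓ — exactly the three measures listed, and the motivation for using a swarm as a wide-baseline stereo rig.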
• 21. Collaborative Reconstruction with Self-Localization
(CVPR Workshop on Vision in Action: Efficient strategies for cognitive agents in complex environments, Marseille, France, 2008)
Recoverable content of the embedded paper pages:
[Fig. 2: Collaborative 3D reconstruction from 2 independently moving cameras.]
Omnidirectional systems are used instead of fish-eye cameras because their single-viewpoint property [2] is essential for the combined localization and reconstruction approach (Fig. 3): the viewing angle of the virtual camera with focal point F is recovered directly from the image coordinates $(u_i, \nu_i)$. For unit focal length, the incoming ray is
$$n_i = \frac{(u_i, \nu_i, 1)^T}{\|(u_i, \nu_i, 1)^T\|}.$$
Each camera sees the partner and the area to be reconstructed at the same time: Camera 1 observes the focal point of Camera 2 along the vector $T$ and the point $P$ along the vector $V_1$ simultaneously (Fig. 2); Camera 2 reconstructs the same point along $V_2$ in its own coordinate frame, so the point observed by this camera has the modified coordinates [10]
$$V_2 = R\,(V_1 + T).$$
3D reconstruction from motion stereo: since no extrinsic calibration can be relied on, the transformation parameters $(R, T)$ between the two camera frames are estimated directly from the current observation. The cameras undergo an arbitrary motion $(R, T)$, resulting in two independent observations $(n_1, n_2)$ of a point $P$ that satisfy
$$\lambda_2 n_2 = R\,(\lambda_1 n_1 + T),$$
and the radial distances $(\lambda_1, \lambda_2)$ along the incoming rays follow by rewriting this as
$$(-R\,n_1,\; n_2)\begin{pmatrix}\lambda_1\\ \lambda_2\end{pmatrix} = R\,T \;\;\Rightarrow\;\; \begin{pmatrix}\lambda_1\\ \lambda_2\end{pmatrix} = D^{-*}\, R\, T, \qquad D^{-*} = (D^T D)^{-1} D^T. \quad (5)$$
The pseudo-inverse yields the least-squares solution of the overdetermined system of three equations in the two unknowns $(\lambda_1, \lambda_2)$; since, due to calibration and detection errors, the two lines $V_1$ and $V_2$ do not necessarily intersect, (5) calculates the position of the point on each line closest to the other line. The original approach [3] relied on an essential-matrix method to initialize the 3D structure; this system uses a more robust initialization minimizing the image error directly, and the recursive approach from [3] is used to maintain the radial distance $\lambda_x$.
Results: the flying systems use omnidirectional mirrors [Fig. 6: flying agent equipped with an omnidirectional sensor pointing upwards]. The system was tested on several indoor and outdoor sequences with two cameras observing the world through different-sized planar mirrors (Fig. 4), on a Linux laptop with a 1.2 GHz Pentium Centrino processor, 1 GB RAM, and two FireWire cameras at standard PAL resolution (768×576). The estimated extrinsic motion parameters were comparable to extrinsic camera-calibration results; applied to the 3D reconstruction in (5), they gave measurement accuracy below the resolution of the test system (at close range, which explains the high accuracy).
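Equation (5) transcribes directly into code; a minimal sketch under our own naming, returning the midpoint between the two closest points on the (generally skew) rays as the reconstruction:

```python
# Sketch of the motion-stereo triangulation in Eq. (5): given unit rays
# n1, n2 from the two cameras and the relative pose (R, T), solve the
# overdetermined 3x2 system for the radial distances (lambda1, lambda2).
import numpy as np

def triangulate_motion_stereo(n1, n2, R, T):
    """n1, n2: unit direction vectors (3,); R: 3x3 rotation; T: 3-vector.
    Returns (lambda1, lambda2) and the midpoint estimate of P (cam-2 frame)."""
    D = np.column_stack([-R @ n1, n2])           # D = (-R n1 | n2)
    rhs = R @ T
    # lstsq applies the pseudo-inverse D^-* = (D^T D)^-1 D^T internally.
    lambdas, *_ = np.linalg.lstsq(D, rhs, rcond=None)
    lam1, lam2 = lambdas
    p_from_ray1 = R @ (lam1 * n1 + T)            # closest point on ray 1
    p_from_ray2 = lam2 * n2                      # closest point on ray 2
    return lam1, lam2, 0.5 * (p_from_ray1 + p_from_ray2)
```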
• 22. Condition number for an angular configuration
The relative error in the solution caused by perturbations of the parameters can be estimated from the condition number of the pseudo-inverse matrix $D^{-*}$: the ratio between the largest and the smallest singular value of this matrix [4]. Consider the equation system with perturbations in the matrix $D^{-*}$ and the vector $T$:
$$(D^{-*} + \delta D^{-*})\, T_b = \lambda + \delta\lambda \quad (8)$$
The relative error in the solution can then be bounded by the following inequality, using the condition number $\kappa$ calculated for $D^{-*}$ (see [7]):
$$\frac{\|T - T_b\|}{\|T\|} \le \kappa \left( \frac{\|\delta D^{-*}\|}{\|D^{-*}\|} + \frac{\|\delta\lambda\|}{\|\lambda\|} \right) + O(\epsilon^2) \quad (9)$$
Therefore, the relative error in the solution $\lambda$ can be as large as the condition number times the relative error in $D^{-*}$ and $T$. The condition number together with the singular values of $D^{-*}$ describes the sensitivity of the system to changes in the input parameters: small condition numbers indicate systems with small sensitivity to errors, while large condition numbers suggest nearly parallel measured direction vectors $(n_1, n_2)$ (a quantitative evaluation is in Section 3.2). The condition number of $D^{-*}$ thus serves as a metric for the quality of the two-camera configuration: for a given region of interest, the cameras can be moved into a configuration with a small condition number, ensuring high accuracy of the reconstructed coordinates. If the extrinsic motion parameters $(R, T)$ are known, e.g. from a calibration process, equation (5) directly estimates the 3D position of an observed point. The collaborative exploration approach allows the cameras to be moved to the desired positions to establish the best sensitivity and the highest robustness to errors in the parameter estimation.
[Fig. 8: (left) the condition number of the 3D reconstruction depends on the relative orientation of the corresponding direction vectors $V_1, V_2$ observing the point; (right) the minimal condition number corresponds to a difference in the viewing angles of 90°.]
Best observation at a 90° viewing angle.
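The 90° result is easy to check numerically. An illustrative verification of our own construction (R = I and coplanar rays, purely for simplicity), using the condition number of $D = (-R\,n_1,\; n_2)$:

```python
# Condition number of D = (-R n1 | n2) as the angle between the two
# viewing rays varies; the minimum (cond = 1) occurs at 90 degrees.
import numpy as np

for deg in (10, 45, 90, 135, 170):
    a = np.radians(deg)
    n1 = np.array([0.0, 0.0, 1.0])              # ray of camera 1
    n2 = np.array([np.sin(a), 0.0, np.cos(a)])  # ray rotated by `deg`
    D = np.column_stack([-n1, n2])              # R = I for simplicity
    print(deg, np.linalg.cond(D))
```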
• 23. Asynchronous stereo for dynamic scenes (submitted to VISAPP 2014)
Cameras synchronized via NTP.
[Figure 2: $C_0$ and $C_1$ are the camera centers of the stereo pair; $P_0, P_1, P_2$ are the 3D poses of the moving point at times $t_0, t_1, t_2$, which correspond to the frame-acquisition timestamps of camera $C_0$; $P^*$ is the 3D pose of the point at time $t^*$, which corresponds to the frame-acquisition timestamp of camera $C_1$; $v_0, v_1, v_2$ are unit vectors from $C_0$ to the corresponding 3D points, and $v^*$ is the unit vector from $C_1$ to $P^*$.]
Since the angle between the velocity direction vector $v_3$ and $v_0$ is $\psi$, between $v_3$ and $v_2$ is $\eta$, and between $v_3$ and $\hat{n}$ is $\pi/2$, $v_3$ follows by solving
$$\begin{pmatrix} v_{0x} & v_{0y} & v_{0z} \\ v_{2x} & v_{2y} & v_{2z} \\ n_x & n_y & n_z \end{pmatrix} \begin{pmatrix} v_{3x} \\ v_{3y} \\ v_{3z} \end{pmatrix} = \begin{pmatrix} \cos\psi \\ \cos\eta \\ 0 \end{pmatrix}.$$
3.2 Path Reconstruction: in the second stage of the proposed method the 3D pose $P_0$ is computed. For reasons of simplicity the poses are represented as
$$P_i = \begin{pmatrix} a_i z_i \\ b_i z_i \\ z_i \end{pmatrix}\;(i = 0..2), \qquad P^* = \begin{pmatrix} m\,z_0^* + t_x \\ s\,z_0^* + t_y \\ q\,z_0^* + t_z \end{pmatrix},$$
where $t_x, t_y, t_z$ are the x, y, and z components of the translation (remaining text truncated in the source).
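The angular constraints above stack directly into a 3×3 linear solve; a minimal transcription (function and variable names are ours):

```python
# Minimal transcription of the v3 constraint system from the slide.
import numpy as np

def velocity_direction(v0, v2, n_hat, psi, eta):
    """Solve [v0; v2; n_hat] v3 = (cos psi, cos eta, 0)^T for the unit
    velocity direction v3. Inputs: unit 3-vectors and angles in radians."""
    A = np.vstack([v0, v2, n_hat])
    b = np.array([np.cos(psi), np.cos(eta), 0.0])
    v3 = np.linalg.solve(A, b)
    return v3 / np.linalg.norm(v3)
```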
• 24. Enhancement of Stereo Data – Multimodal Sensor Fusion (IROS 2013)
• 25. INS filter design (Towards Autonomous MAV Exploration in Cluttered Indoor and Outdoor Environments, K. Schmid, DLR, 28/06/2013) [Schmid et al. IROS 2012]
•  Synchronization of real-time and non-real-time modules by a sensor hardware trigger
•  Direct system state: high-rate calculation by a Strap-Down Algorithm (SDA)
•  Indirect system state: estimation by an indirect Extended Kalman Filter (EKF)
•  Measurements: „pseudo gravity", key-frame-based stereo odometry
•  Measurement delay compensation by filter state augmentation (sketch below)
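The last bullet is worth unpacking: when the camera is triggered, the filter state is augmented with a clone of the current pose, and the stereo-odometry result, which arrives late after image processing, is fused against that clone rather than the current state. A minimal sketch of this stochastic-cloning idea (our own construction; the DLR filter's actual state layout and interfaces are not shown in the slides):

```python
# Hedged sketch of measurement-delay compensation by state augmentation.
import numpy as np

class DelayCompensatedEKF:
    def __init__(self, x0, P0):
        self.x, self.P = x0, P0          # filter state and covariance
        self.clones = {}                 # trigger_id -> index of cloned pose

    def on_camera_trigger(self, trigger_id):
        # Clone the current pose block (first 6 entries here, illustrative).
        n = len(self.x)
        J = np.vstack([np.eye(n),
                       np.hstack([np.eye(6), np.zeros((6, n - 6))])])
        self.x = J @ self.x
        self.P = J @ self.P @ J.T
        self.clones[trigger_id] = n      # clone starts at index n

    def on_delayed_measurement(self, trigger_id, z, H_clone, Rm):
        # Standard EKF update, but H acts on the cloned (past) pose.
        i = self.clones[trigger_id]
        H = np.zeros((len(z), len(self.x)))
        H[:, i:i + 6] = H_clone
        S = H @ self.P @ H.T + Rm
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P
```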
• 26. Key-frame-based stereo odometry
•  Delta measurements referencing key frames
•  Locally drift-free system state estimation (see the sketch below)
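The point of the delta measurements: instead of chaining frame-to-frame increments, each pose is measured against the last key frame, so estimation error does not accumulate while the key frame remains in view. An illustrative sketch with a hypothetical distance-based switching rule (real systems typically switch on remaining feature overlap):

```python
# Illustrative key-frame-referenced odometry: poses are composed from the
# key-frame pose plus a key-frame-relative measurement, not frame-to-frame.
import numpy as np

class KeyframeOdometry:
    def __init__(self, switch_threshold=0.5):
        self.T_key = np.eye(4)           # world pose of the current key frame
        self.switch_threshold = switch_threshold  # [m], hypothetical rule

    def update(self, T_key_to_current):
        """T_key_to_current: 4x4 pose of the current frame in the key-frame
        frame, e.g. from stereo matching against the key frame's 3D points."""
        T_world = self.T_key @ T_key_to_current
        # Promote the current frame to key frame once we have moved far
        # enough; drift now only enters at key-frame switches.
        if np.linalg.norm(T_key_to_current[:3, 3]) > self.switch_threshold:
            self.T_key = T_world
        return T_world
```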
• 27. Exploration with an MAV
•  70 m trajectory; ground truth by tachymeter
•  5 s forced vision drop-out with translational motion
•  1 s forced vision drop-out with rotational motion
•  Estimation error < 1.2 m; odometry error < 25.9 m
•  Results comparable to runs without vision drop-outs
• 28. Mixed indoor/outdoor exploration (DLR, K. Schmid)
•  Autonomous indoor/outdoor flight of 60 m
•  Mapping resolution: 0.1 m
•  Leaving through a window, returning through a door
• 29. INS filter design [Schmid et al. IROS 2012]
•  Synchronization of real-time and non-real-time modules by a sensor hardware trigger
•  Direct system state: high-rate calculation by a Strap-Down Algorithm (SDA)
•  Indirect system state: estimation by an indirect Extended Kalman Filter (EKF)
•  Measurements: „pseudo gravity", key-frame-based stereo odometry
•  Measurement delay compensation by filter state augmentation
• 30. Mixed indoor/outdoor exploration (DLR)
•  Autonomous indoor/outdoor flight of 60 m
•  Mapping resolution: 0.1 m
•  Leaving through a window, returning through a door
• 31. Conclusion (Towards Autonomous MAV Exploration in Cluttered Indoor and Outdoor Environments, K. Schmid, 28/06/2013)
•  Multicopters for SAR and disaster-management scenarios
•  Swarms for better resolution
•  Modelling complexity needs to be adapted to the task
•  Synchronisation necessary
• 32. … last: RSS 2013 workshop on MAVs
• 33. Research of the MVP Group (Machine Vision and Perception Group @ TUM, MVP)
http://mvp.visual-navigation.com
The Machine Vision and Perception Group @ TUM works on aspects of visual perception and control in medical, mobile, and HCI applications:
•  Visual navigation
•  Biologically motivated perception
•  Perception for manipulation
•  Visual action analysis
•  Photogrammetric monocular reconstruction
•  Rigid and deformable registration
• 34. Research of the MVP Group (continued)
http://mvp.visual-navigation.com
•  Exploration of physical object properties
•  Sensor substitution
•  Multimodal sensor fusion
•  Development of new optical sensors