Localization and Mapping for Autonomous Navigation in Outdoor Terrains: A Stereo Vision Approach

Motilal Agrawal, Kurt Konolige and Robert C. Bolles
SRI International
333 Ravenswood Ave.
Menlo Park, CA 94025
{agrawal, konolige, bolles}@ai.sri.com
Abstract

We consider the problem of autonomous navigation in unstructured outdoor terrains using vision sensors. The goal is for a robot to come into a new environment, map it and move to a given goal at modest speeds (1 m/sec). The biggest challenges are in building good maps and keeping the robot well localized as it advances towards the goal. In this paper, we concentrate on showing how it is possible to build a consistent, globally correct map in real time, using efficient, precise stereo algorithms for map making and visual odometry for localization. While we have made advances in both localization and mapping using stereo vision, it is the integration of the techniques that is the biggest contribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot.
1 Introduction

Recent advances in computing hardware, coupled with the availability of different types of sensors, have brought the dream of autonomous robots closer to reality. Not surprisingly, good map making and localization are critical to the autonomous operation and navigation of these robots. In this paper, we concentrate on showing how it is possible to build a consistent, globally correct map for unstructured outdoor environments in real time using stereo vision as the primary sensor.
The goal is for a small outdoor robot to come into a new area, learn about and map its environment, and move to a given goal at modest speeds (1 m/sec). This problem is especially difficult in outdoor, off-road environments, where tall grass, shadows, deadfall, and other obstacles predominate. Although work in outdoor navigation has preferentially used laser rangefinders [11, 3, 5], we use stereo vision as the main sensor. Stereo vision is a good choice for several reasons: it is low power, low cost, and can register dense range information from close objects. More importantly, vision sensors allow us to use more distant objects as landmarks for navigation, and to learn and use color and texture models of the environment, looking further ahead than is possible with range sensors alone. Our robot uses a combination of the following vision-based techniques to
make globally consistent maps in real time:

- Efficient, precise stereo algorithms. Over the past 8 years, we have refined a set of stereo algorithms to take advantage of parallel-data instructions on PC hardware. We can perform stereo analysis on 512x384 images in less than 40 ms, enabling a faster system cycle time for real-time obstacle detection and avoidance.

- Visual odometry (VO) for fine registration of robot motion and corresponding obstacle maps. We have developed techniques that run at 15 Hz on standard PC hardware, and that provide 4% error over runs of 100 m. Our method can be integrated with information from inertial (IMU) and GPS devices, for robustness in difficult lighting or motion situations, and for overall global consistency. Visual odometry is compared to the combination of GPS and IMU, and shown to be superior in producing useable obstacle maps.

- A fast RANSAC method for finding the ground plane. The ground plane provides a solid base for obstacle detection algorithms in challenging outdoor terrain, and produces high-quality obstacle maps for a planning system.

- Sight-line analysis for longer-range inference. Stereo information on our robot is unreliable past 8 m, but it is possible to infer free space by finding "sight lines," directions in which there is likely to be freespace.
While we have made advances in many of the areas above, it is the integration of the techniques that is the biggest contribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot. In the most recent tests, we have finished first out of a group of eight teams, in some cases by a large margin.

Financial support from DARPA under the Learning Applied to Ground Robots (LAGR) program is gratefully acknowledged.
1.1 System overview

Figure 1(a) shows our robot equipped with two stereo devices, each stereo device encompassing a 120 degree field of view, with a baseline of 12 cm. In the current setup, the robot is near-sighted: depth information for each stereo device degrades rapidly after 6 m. There is also an inertial measurement unit (IMU) with angular drift of several degrees per minute, and a Garmin GPS. There are 4 Pentium-M 2 GHz computers, one for each stereo device, one for planning and map-making, and one for control of the robot and integration of GPS and IMU readings. In our setup, each stereo computer performs local map making and visual odometry, and sends registered local maps to the planner, where they are integrated into a global map. The planner is responsible for global planning and reactive control, sending commands to the controller.
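To make this data flow concrete, a minimal sketch of one stereo computer's cycle is given below. The class and function names (LocalMap, grab_stereo_pair, and so on) are illustrative assumptions, not the actual software interfaces used on the robot.

```python
# Hypothetical sketch of one stereo computer's processing cycle (Section 1.1);
# all names and interfaces here are illustrative assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class LocalMap:
    pose: np.ndarray        # 4x4 robot pose from visual odometry (global frame)
    obstacles: np.ndarray   # N x 2 grid cells labeled as obstacles
    freespace: np.ndarray   # M x 2 grid cells labeled as freespace

def stereo_cycle(camera, vo, analyze_local_map, planner_link):
    """One cycle: stereo disparity -> VO pose + local map -> send to the planner."""
    left, right = camera.grab_stereo_pair()
    disparity = camera.compute_disparity(left, right)          # ~40 ms at 512x384
    pose = vo.update(left, disparity)                           # ~15 Hz visual odometry
    obstacles, freespace = analyze_local_map(disparity, pose)   # Section 2
    planner_link.send(LocalMap(pose, obstacles, freespace))
```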
1.2 Related work

There has been an explosion of work in mapping and localization (SLAM), most of it concentrating on indoor environments [6, 9]. The sensor of choice for indoor environments is a laser rangefinder, augmented with monocular or stereo vision. In much of this work, high-accuracy GPS is used to register sensor scans; exceptions are [5, 11]. In contrast, we forego laser rangefinders, and explicitly use image-based registration to build accurate maps. Other approaches to mapping with vision are [15, 16], although they are not oriented towards realtime implementations. Obstacle detection using stereo has also received some attention [15]. There have been a number of recent approaches to visual odometry [13, 14]. Our system is distinguished by realtime implementation and high accuracy using a small baseline in realistic terrain. Finally, [16] applies visual matching techniques to global consistency for maps, in a technique similar to the maximum likelihood method proposed here.

In the following sections, we first discuss local map creation from visual input. Then we examine visual odometry and registration in detail, and show how consistent global maps are created. Finally, we present performance results for several tests in Spring 2006.
2 Local map construction

The object of the local map algorithms is to determine, from the visual information, which areas are freespace and which are obstacles for the robot: the local map. From the stereo disparity image, we compute a nominal ground plane, which yields free space near the robot. We also analyze height differences from the ground to find obstacles. Finally, via the technique of sight lines, we can infer freespace to more distant points.
2.1 Stereo analysis and ground plane extraction

We use a fast stereo algorithm [8] to compute a disparity image at 512x384 resolution (Figure 1(b)). In typical outdoor scenes, it is possible to achieve very dense stereo results. The high resolution gives very detailed 3D information for finding the ground plane and obstacles. Each disparity image point [u, v, d] corresponds to a 3D point in the robot's frame. We compute the 3D points using a 4x4 homography [2]: [X, Y, Z, w]^T = H [u, v, d, 1]^T, with the Euclidean point recovered as (X/w, Y/w, Z/w).
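As a concrete illustration of this reprojection, the sketch below applies a 4x4 homography H to a disparity pixel and dehomogenizes the result. The particular form of H shown is the standard reprojection matrix for a rectified stereo pair with focal length f, principal point (cx, cy) and baseline B; this form is an assumption, and the actual H on the robot comes from its calibration.

```python
# Minimal sketch (not the authors' code) of turning a disparity pixel [u, v, d]
# into a 3D point with a 4x4 homography H, as in Section 2.1.
import numpy as np

def reprojection_matrix(f, cx, cy, B):
    """Standard reprojection matrix for a rectified stereo pair (assumed form;
    the robot's H comes from calibration and may use different conventions)."""
    return np.array([[1, 0, 0,    -cx],
                     [0, 1, 0,    -cy],
                     [0, 0, 0,      f],
                     [0, 0, 1 / B,  0]], dtype=float)

def disparity_to_3d(u, v, d, H):
    """Apply [X, Y, Z, w]^T = H [u, v, d, 1]^T and divide by w."""
    X, Y, Z, w = H @ np.array([u, v, d, 1.0])
    return np.array([X, Y, Z]) / w
```

With this form of H, the depth comes out as Z = f·B/d, the usual rectified-stereo relation.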
The most important geometric analysis is finding the ground plane. Although it is possible to detect obstacles using local variation in height, using a ground plane simplifies processing and yields more stable results. To extract a ground plane, we use a RANSAC technique [4]. Any three noncollinear 3D points define a plane hypothesis. We choose randomly over the set of points, biasing the choices towards points that are distributed over the first 5 meters near the robot, and rejecting hypotheses that are too slanted. Hypothesized planes are ranked by the number of points that are close to the plane.
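The shape of such a search is sketched below; the iteration count, inlier distance, and slant limit are illustrative assumptions, not the values used on the robot.

```python
# A minimal RANSAC ground-plane sketch in the spirit of Section 2.1; the
# thresholds below are illustrative assumptions.
import numpy as np

def fit_ground_plane(points, n_iters=200, inlier_dist=0.05, max_slant_deg=30.0):
    """points: N x 3 array in the robot frame (z up). Returns (n, d), n.p + d = 0."""
    up = np.array([0.0, 0.0, 1.0])
    # Bias sampling towards points within the first 5 meters of the robot.
    near = points[np.linalg.norm(points[:, :2], axis=1) < 5.0]
    best_plane, best_count = None, 0
    for _ in range(n_iters):
        p1, p2, p3 = near[np.random.choice(len(near), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        if np.linalg.norm(n) < 1e-6:
            continue                                   # nearly collinear sample
        n /= np.linalg.norm(n)
        if abs(n @ up) < np.cos(np.radians(max_slant_deg)):
            continue                                   # reject overly slanted planes
        d = -n @ p1
        count = int(np.sum(np.abs(points @ n + d) < inlier_dist))
        if count > best_count:
            best_plane, best_count = (n, d), count     # rank by number of inliers
    return best_plane
```

Given the winning plane, points whose height above it exceeds a small threshold but stays below the robot's height can then be labeled as obstacles, as described next.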
This method is very robust, returning reasonable ground planes even in difficult situations, with clumps of grass or other distractors. Figure 1(d) shows an example of the extracted ground plane, with a green overlay indicating the inliers. Points that lie too high above the ground plane, but lower than the robot's height, are labeled as obstacles. This method is extremely simple, but has proven to work well in practice, even when the ground has modest dips and rises; one reason is that it only looks out to 6 m around the robot. As the robot approaches a rise in the ground, for example, the ground plane will gradually assume the angle of the rise. A more sophisticated analysis would break the ground plane into several segments or model more complex shapes.
Figure 1. (a): Stereo sensors on our robot. (b): Disparity image from the left view of the robot; closer pixels are lighter. (c): Extracted ground plane, in green overlay. The limit of the ground plane is shown by a green bar; a sight line has a red bar. (d): Ground plane overlaid on the original image, in green. Obstacles are indicated in purple.

2.2 Sight lines

Because of the wide angle and short baseline of the stereo devices, depth information becomes very uncertain after 6 to 8 m. Although we cannot precisely locate obstacles past this limit, we can determine if there is freespace, using the following observation. Consider the interpreted image of Figure 1(c). There is a path that goes around the bushes and extends out a good distance. The ground plane extends over most of this area, and then ends in a distant line of trees. The trees are too far to place with any precision, but we can say that there is no obstacle along the line of sight to the trees. Given a conservative estimate for the distance of the trees, we can add freespace up to this estimate.
The computation of sight lines is most efficiently accomplished in the disparity space. We divide the disparity image into narrow columns, and for each column attempt to find a contiguous ground plane up to an obstacle. If the ground plane exists, and the obstacle is distant enough, we can add a sight line hypothesis to the local map. In Figure 1(c), the limits of the contiguous ground plane in a column are marked by green bars. Where the ground plane abuts a distant object, there is a vertical red line indicating a sight line. Note in the example that the sight line follows the obvious path out of the bushes.
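A column-wise scan of this kind might look like the sketch below; the column width, range threshold, and ground-membership mask are assumptions for illustration, with range recovered from disparity via the rectified-stereo relation range = f·B/d.

```python
# Illustrative column scan for sight lines (Section 2.2); thresholds and the
# ground-plane mask are assumptions, not the authors' exact implementation.
import numpy as np

def sight_lines(disparity, ground_mask, f_times_B, col_width=16, min_range_m=8.0):
    """disparity: HxW array (pixels); ground_mask: HxW bool array of ground-plane
    inliers; f_times_B: focal length (pixels) times baseline (meters).
    Returns (column center, range) hypotheses where freespace can be extended."""
    h, w = disparity.shape
    hypotheses = []
    for c0 in range(0, w, col_width):
        col_ground = ground_mask[:, c0:c0 + col_width].any(axis=1)
        col_disp = disparity[:, c0:c0 + col_width].max(axis=1)
        row = h - 1
        while row > 0 and col_ground[row]:
            row -= 1                              # walk up the contiguous ground plane
        d = col_disp[row]                         # disparity where the ground run ends
        obstacle_range = f_times_B / d if d > 0 else np.inf
        if obstacle_range >= min_range_m:         # distant enough: add a sight line
            hypotheses.append((c0 + col_width // 2, obstacle_range))
    return hypotheses
```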
3 Constructing consistent global maps 
In this section we provide solutions to two problems: 
representing and fusing the information provided by visual 
analysis, and registering local maps into a consistent global 
map using visual odometry. 
3.1 Map representation

For indoor work, a standard map representation is a 2D occupancy grid [12], which gives the probability of each cell in the map being occupied by an obstacle. Alternatives for outdoor environments include 2.5D elevation maps and full 3D voxel maps [7]. These representations can be used to determine allowable kinematic and dynamic paths for an outdoor robot in rough terrain. We choose to keep the simpler 2D occupancy grid, foregoing any complex calculation of the robot's interaction with the terrain. Instead, we abstract the geometrical characteristics of terrain into a set of categories, and fuse information from these categories to create a cost of movement.
We use a grid of 20 cm x 20 cm cells to represent the global map. Each cell has a probability of belonging to each of the three categories derived from visual analysis (Section 2): obstacle, ground plane freespace, and sight line freespace. Note that these categories are not mutually exclusive, since, for example, a cell under an overhanging branch could have both freespace and obstacle properties. We are interested in converting these probabilities into a cost of traversing the cell. If the probabilities were mutually exclusive, we would simply form the cost function c = Σ_i p_i c_i, where c_i is the cost associated with category i. With non-exclusive categories, we choose a simple prioritization schedule to determine the cost. Obstacles have the highest priority, followed by ground plane, sight lines, and paths.
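A sketch of such a prioritized cost assignment is given below; the numeric costs and the probability threshold are placeholders chosen for illustration, and the "path" category comes from the learned path model mentioned in Section 4.

```python
# Illustrative prioritized cost for one 20 cm cell (Section 3.1); the cost
# values and threshold are assumptions, not the values used on the robot.
def cell_cost(p_obstacle, p_ground, p_sight, p_path,
              c_obstacle=255, c_ground=10, c_sight=20, c_path=5,
              c_unknown=50, threshold=0.5):
    """The highest-priority category whose probability clears the threshold wins."""
    if p_obstacle > threshold:      # obstacles have the highest priority
        return c_obstacle
    if p_ground > threshold:        # then ground-plane freespace
        return c_ground
    if p_sight > threshold:         # then sight-line freespace
        return c_sight
    if p_path > threshold:          # then recognized paths
        return c_path
    return c_unknown                # no confident category: a nominal default
```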
3.2 Registration and visual odometry

Our robot is equipped with a GPS that is accurate to within 3 to 10 meters in good situations. GPS information is filtered by the IMU and wheel encoders to produce a more stable position estimate. However, because GPS drifts and jumps over time, it is impossible using these devices to differentiate GPS errors from other errors such as wheel slippage, and the result is that local maps cannot be reconstructed accurately. Consider the situation of Figure 2. Here the robot goes through two loops of 10 m diameter. There is a long linear feature (a low wall) that is seen as an obstacle at the beginning and end of the loops. Using the filtered GPS pose, the position of the wall shifts almost 2 m during the run, as is evident from Figure 2(c). Our solution to the registration problem is to use visual odometry (VO) to ensure local consistency in map registration. Over larger regions, filtering VO with GPS information provides the necessary corrections to keep errors from growing without bounds. We describe these techniques in the next two sections.

Figure 2. (a), (b), (c): Three stages during a run using the GPS-filtered pose. Obstacle points are shown in white, freespace in black, and the yellow line is the robot's path. The linear feature is marked by hand in red in all three maps, in its initial pose. (d): Map registration using VO in the same sequence. The GPS-filtered path is shown in yellow, the VO-filtered path is in green.
3.3 Visual odometry

Our robot presents a challenging situation for visual odometry: a wide FOV and short baseline make distance errors large, and a small offset from the ground plane makes it difficult to track points over longer distances. We have developed a robust visual odometry solution that functions well under these conditions; we describe it in some detail here. For a more detailed description of the visual odometry system, please refer to [1].
Our visual odometry system uses feature tracks to estimate the relative incremental motion between two frames that are close in time. Corner feature points are detected in the left image of each stereo pair and tracked across consecutive frames. Figure 3(a) shows the tracked feature points over two consecutive frames. These feature points are then triangulated at each frame based on stereo correspondences. Three of these points are used to estimate the motion using absolute orientation. This motion is then scored using the pixel reprojection errors in both cameras. We use the disparity space homography [2] to evaluate the inliers for the motion. In the end, the hypothesis with the best score (maximum number of inliers) is used as the starting point for a nonlinear minimization problem that minimizes the pixel reprojection errors in both cameras simultaneously, resulting in a relative motion estimate between the two frames.
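Structurally, one motion update therefore looks like the hypothesize-and-test sketch below. The three-point absolute-orientation step is spelled out using the standard SVD solution; the reprojection scoring and nonlinear refinement are passed in as hypothetical callables, since their exact form on the robot (including the disparity-space homography test) is not reproduced here.

```python
# Structural sketch of one visual-odometry update in the spirit of Section 3.3;
# `reproject_error` and `refine` are hypothetical caller-supplied routines.
import numpy as np

def absolute_orientation(P, Q):
    """Rigid motion (R, t) aligning point set P to Q (rows are 3D points)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    return R, cQ - R @ cP

def vo_relative_motion(pts_prev, pts_cur, reproject_error, refine,
                       n_iters=500, inlier_px=1.0):
    """pts_prev/pts_cur: N x 3 triangulated positions of tracked features."""
    best, best_inliers = None, np.array([], dtype=int)
    for _ in range(n_iters):
        idx = np.random.choice(len(pts_prev), 3, replace=False)
        R, t = absolute_orientation(pts_prev[idx], pts_cur[idx])
        errs = reproject_error(R, t)              # pixel errors in both cameras
        inliers = np.flatnonzero(errs < inlier_px)
        if len(inliers) > len(best_inliers):      # score = number of inliers
            best, best_inliers = (R, t), inliers
    return refine(best, best_inliers)             # nonlinear reprojection refinement
```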
We have found that the approach outlined above is very efficient (∼15 Hz) and works remarkably well, even for stereo rigs with a small baseline. The fact that we are triangulating the feature points for each frame builds a firewall against error propagation. However, this also means that there will be a drift when the rig is stationary. In order to avoid this drift, we update the reference frame (the frame with reference to which the motion of the next frame is computed) only when the robot has moved some minimum distance (taken to be 5 cm in our implementation). Since we are re-triangulating for every frame, it is important to calibrate the stereo cameras well. A standard plane-based calibration step works well for all our experiments. The fundamental reason that our approach gives reliable motion estimates, even in small-baseline situations, is that we stick to image-based quantities and use both the left and right images symmetrically. The absolute orientation step uses the left and right cameras symmetrically to generate the motion hypothesis. The hypothesis is evaluated and scored based on reprojection errors in both views, resulting in an accurate estimate of the motion. This estimate is then refined in the nonlinear minimization step, which also uses the two cameras uniformly.
The IMU and the wheel encoders are also used to fill in the relative poses when visual odometry fails. This happens due to sudden lighting changes, fast turns of the robot, or a lack of good features in the scene (e.g., a blank wall); thus these sensors complement the visual pose system. The relative motions between consecutive frames are chained together to obtain the absolute pose at each frame. Obviously, this is bound to result in accumulation of errors and drift. We use GPS to correct the pose of the vehicle through a very simple filter, described next.
3.4 Global consistency

Relative motions between consecutive frames are chained together to obtain the absolute pose at each frame. Obviously, this is bound to result in accumulation of errors and drift. We use GPS to correct the pose of the vehicle through a simple linear filter. Pose information is used when the GPS receiver has at least a 3D position fix, and heading information is used only when the vehicle is travelling 0.5 m/s or faster, to limit the effect of velocity noise from GPS on the heading estimate. In addition, GPS measurements are used only if the robot has travelled a minimum distance from the last GPS measurement. The filter nudges the VO pose towards global consistency, while maintaining local consistency. Over larger loops, of course, the 3 m deviation of the GPS unit means that the map may not be consistent. In this case, other techniques such as wide-baseline image matching [10] would have to be employed.
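The sketch below shows one way such a nudge filter could be written; the gain and gating thresholds are assumptions chosen to match the conditions stated above (a 3D fix, heading only above 0.5 m/s, a minimum travel distance), not the authors' actual filter parameters.

```python
# Illustrative "nudge" filter (Section 3.4); the gain and thresholds are
# assumptions, not the values used on the robot.
import numpy as np

def nudge_pose(vo_xy, vo_heading, gps_xy, gps_heading, speed,
               dist_since_last_gps, has_3d_fix,
               gain=0.05, min_speed=0.5, min_travel=1.0):
    """Blend the chained VO pose a small step towards the GPS estimate."""
    x, y = vo_xy
    heading = vo_heading
    if has_3d_fix and dist_since_last_gps >= min_travel:
        x += gain * (gps_xy[0] - x)
        y += gain * (gps_xy[1] - y)
        if speed >= min_speed and gps_heading is not None:
            # wrap the heading difference to (-pi, pi] before blending
            dh = (gps_heading - heading + np.pi) % (2 * np.pi) - np.pi
            heading += gain * dh
    return (x, y), heading
```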
The quality of the registration from filtered VO, shown in Figure 2(d), can be compared to the filtered GPS of Figure 2(c). The low wall, which moved almost 2 m over the short loops when using GPS, is much more consistent when VO is employed. And in cases where GPS is blocked or degraded, such as under heavy tree cover in Figure 3(b), VO still produces maps that are locally consistent. It also allows us to determine wheel slips and stalls with almost no false positives; note the end of the run in Figure 3(b), where the robot was hung up and the wheels were slipping, and wheel odometry produced a large error.
3.5 Evaluation of visual odometry

We have implemented and tested our integrated pose system on several outdoor terrains. Since GPS is accurate to only about 3-4 meters, in order to validate our results, the robot was moved in a closed loop over 50-100 m in a typical outdoor environment, and we used the error between the start and end poses.

Table 1 compares this error for vehicle odometry (IMU + wheel odometry), visual odometry, and GPS-integrated visual odometry for four loops. Except for the first loop, visual odometry outperforms the vehicle odometry, even without GPS filtering, and is comparable to the standard deviation of GPS (3 m). VO substantially outperformed odometry in 3 of the 4 loops, mainly because of turns and wheel slippage during those runs. This is especially evident in loop 4, where the robot was slipping in mud for a substantial amount of time.
4 Results

The combined visual processing results in local maps that represent traversability with a high degree of fidelity. Figure 3(b) shows the results of an autonomous run of about 130 m, over a span of 150 seconds. The first part of the run was along a mulch path under heavy tree cover, with mixed sunlight and deep shadows. For this run, we performed offline learning of mulch paths on a test site, then used the learned models on the autonomous run to recognize the mulch color. Cells categorized as path are shown in yellow; black is freespace. Obstacles are indicated by purple (for absolute certainty), and white-to-gray for decreasing certainty. We did not use sight lines for this run.

  Run number                   1      2      3      4
  Distance (meters)           82.4  141.6   55.3   51.0
  Vehicle odometry (%)         1.3   11.4   11.0   31.0
  Raw visual odometry (%)      2.2    4.8    5.0    3.9
  Visual odometry + GPS (%)    2.0    0.3    1.7    0.9

Table 1. Loop closure error in percent.

           Test 12         Test 13
           BL      R       BL      R
  Run 1    5:25    1:46    5:21    2:28
  Run 2    5:34    1:50    5:04    2:12
  Run 3    5:18    1:52    4:45    2:12

Table 2. Run times for the baseline system (BL) and our robot (R).
The path did not lead directly to the goal, and there were many opportunities for the robot to head cross-country. About two-thirds of the way through the run, no more paths were available, and the robot went through heavy grass and brush to the goal. The robot's pose, as estimated from filtered visual odometry, is in green; the filtered GPS path is in yellow. Because of the tree cover, GPS suffered from high variance at times.
A benefit of using VO is that wheel slips and stalls are easily detected, with no false positives. For example, at the end of the run, the robot was hung up on a tree branch, and spun its wheels for a bit. The filtered GPS, depending on wheel odometry, moved far off the global pose, while the filtered VO stayed put.
4.1 Performance

One of the unique aspects of the project is independent experimental evaluation. An independent testing group ran monthly blind demos of the perception and control software developed by eight teams and compared their performance to a baseline system. Experiments were conducted every month, with project performers sending in their code on a flash disk to the evaluation team. The disks were inserted into a robot, which then was run over a test course. The last three demos (11, 12, and 13) were considered tests of performance from the first 18 months of the program.
Our team was first in the last two tests (Tests 12 and 13), after being last in Test 11. Most of the problems in Test 11 were caused by using the baseline planner and controller (we wrote our own planner and controller for Tests 12 and 13). Table 2 shows the times for the runs in these tests. We achieved the short run times in Tests 12 and 13 through a combination of precise map building and high-speed path planning. Our average speed was over 1.1 m/s, while the robot's top speed was limited to 1.3 m/s. Map building relied on VO to provide good localization, ground-plane analysis to help detect obstacles, and sight lines to identify distant regions that are likely to be navigable.

Figure 3. (a): Example of visual odometry showing the motion of successfully tracked features. (b): Reconstruction of a 130 m autonomous run. Yellow is recognized path (learned offline), black is freespace, and purple, gray and white are obstacles. The green line shows the robot's trajectory using VO and the yellow line shows the trajectory using IMU and GPS.
5 Conclusion

We have demonstrated a complete system for off-road navigation in unstructured environments, using stereo vision as the main sensor. The system is very robust: we can typically give it a goal position several hundred meters away, and expect it to get there. But there are hazards that are not dealt with by the methods discussed in this paper: water and ditches are two robot-killers. Finally, we would like to use visual landmarks to augment GPS for global consistency, because it would give finer adjustment in the robot's position, which is critical for following routes that have already been found.
References

[1] M. Agrawal and K. Konolige. Real-time localization in outdoor environments using stereo vision and inexpensive GPS. In Proceedings of the International Conference on Pattern Recognition (ICPR), 2006.
[2] M. Agrawal, K. Konolige, and L. Iocchi. Real-time detection of independent motion using stereo. In IEEE Workshop on Motion, 2005.
[3] P. Bellutta, R. Manduchi, L. Matthies, K. Owens, and A. Rankin. Terrain perception for DEMO III. In Proc. of the IEEE Intelligent Vehicles Symp., 2000.
[4] M. Fischler and R. Bolles. Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Commun. ACM, 24:381–395, 1981.
[5] J. Guivant, E. Nebot, and S. Baiker. High accuracy navigation using laser range sensors in outdoor applications. In ICRA, 2000.
[6] J. S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In CIRA, 1999.
[7] K. Iagnemma, F. Genot, and S. Dubowsky. Rapid physics-based rough-terrain rover planning with sensor and control uncertainty. In ICRA, 1999.
[8] K. Konolige. Small vision systems: hardware and implementation. In Intl. Symp. on Robotics Research, pages 111–116, 1997.
[9] J. J. Leonard and P. Newman. Consistent, convergent, and constant-time SLAM. In IJCAI, 2003.
[10] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Intl. J. of Computer Vision, 60(2):91–110, 2004.
[11] M. Montemerlo and S. Thrun. Large-scale robotic 3-D mapping of urban structures. In ISER, 2004.
[12] H. Moravec and A. Elfes. High resolution maps from wide angle sonar. In ICRA, 1985.
[13] D. Nister, O. Naroditsky, and J. Bergen. Visual odometry. In CVPR, 2004.
[14] C. F. Olson, L. H. Matthies, M. Schoppers, and M. W. Maimone. Robust stereo ego-motion for long distance navigation. In CVPR, 2000.
[15] A. Rankin, A. Huertas, and L. Matthies. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation. In AUVSI Symp. on Unmanned Systems, 2005.
[16] D. J. Spero and R. A. Jarvis. 3D vision for large-scale outdoor environments. In Proc. of the Australasian Conf. on Robotics and Automation (ACRA), 2002.
6

Contenu connexe

Tendances

BEV Semantic Segmentation
BEV Semantic SegmentationBEV Semantic Segmentation
BEV Semantic SegmentationYu Huang
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...ijma
 
Pedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving VPedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving VYu Huang
 
Fisheye Omnidirectional View in Autonomous Driving
Fisheye Omnidirectional View in Autonomous DrivingFisheye Omnidirectional View in Autonomous Driving
Fisheye Omnidirectional View in Autonomous DrivingYu Huang
 
camera-based Lane detection by deep learning
camera-based Lane detection by deep learningcamera-based Lane detection by deep learning
camera-based Lane detection by deep learningYu Huang
 
fusion of Camera and lidar for autonomous driving I
fusion of Camera and lidar for autonomous driving Ifusion of Camera and lidar for autonomous driving I
fusion of Camera and lidar for autonomous driving IYu Huang
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIYu Huang
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IVYu Huang
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IVYu Huang
 
Depth Fusion from RGB and Depth Sensors by Deep Learning
Depth Fusion from RGB and Depth Sensors by Deep LearningDepth Fusion from RGB and Depth Sensors by Deep Learning
Depth Fusion from RGB and Depth Sensors by Deep LearningYu Huang
 
Pose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningPose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningYu Huang
 
Stereo Matching by Deep Learning
Stereo Matching by Deep LearningStereo Matching by Deep Learning
Stereo Matching by Deep LearningYu Huang
 
3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving IIYu Huang
 
Digital Elevation Models - WUR - Grontmij
Digital Elevation Models - WUR - GrontmijDigital Elevation Models - WUR - Grontmij
Digital Elevation Models - WUR - GrontmijXander Bakker
 
Deep vo and slam ii
Deep vo and slam iiDeep vo and slam ii
Deep vo and slam iiYu Huang
 
Passive stereo vision with deep learning
Passive stereo vision with deep learningPassive stereo vision with deep learning
Passive stereo vision with deep learningYu Huang
 
High resolution dem dtm
High resolution dem dtmHigh resolution dem dtm
High resolution dem dtmTTI Production
 

Tendances (20)

BEV Semantic Segmentation
BEV Semantic SegmentationBEV Semantic Segmentation
BEV Semantic Segmentation
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
 
Pedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving VPedestrian behavior/intention modeling for autonomous driving V
Pedestrian behavior/intention modeling for autonomous driving V
 
Fisheye Omnidirectional View in Autonomous Driving
Fisheye Omnidirectional View in Autonomous DrivingFisheye Omnidirectional View in Autonomous Driving
Fisheye Omnidirectional View in Autonomous Driving
 
camera-based Lane detection by deep learning
camera-based Lane detection by deep learningcamera-based Lane detection by deep learning
camera-based Lane detection by deep learning
 
Survey 1 (project overview)
Survey 1 (project overview)Survey 1 (project overview)
Survey 1 (project overview)
 
fusion of Camera and lidar for autonomous driving I
fusion of Camera and lidar for autonomous driving Ifusion of Camera and lidar for autonomous driving I
fusion of Camera and lidar for autonomous driving I
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data II
 
3D reconstruction
3D reconstruction3D reconstruction
3D reconstruction
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IV
 
Depth Fusion from RGB and Depth Sensors by Deep Learning
Depth Fusion from RGB and Depth Sensors by Deep LearningDepth Fusion from RGB and Depth Sensors by Deep Learning
Depth Fusion from RGB and Depth Sensors by Deep Learning
 
Introduction of slam
Introduction of slamIntroduction of slam
Introduction of slam
 
Pose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningPose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learning
 
Stereo Matching by Deep Learning
Stereo Matching by Deep LearningStereo Matching by Deep Learning
Stereo Matching by Deep Learning
 
3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II3-d interpretation from single 2-d image for autonomous driving II
3-d interpretation from single 2-d image for autonomous driving II
 
Digital Elevation Models - WUR - Grontmij
Digital Elevation Models - WUR - GrontmijDigital Elevation Models - WUR - Grontmij
Digital Elevation Models - WUR - Grontmij
 
Deep vo and slam ii
Deep vo and slam iiDeep vo and slam ii
Deep vo and slam ii
 
Passive stereo vision with deep learning
Passive stereo vision with deep learningPassive stereo vision with deep learning
Passive stereo vision with deep learning
 
High resolution dem dtm
High resolution dem dtmHigh resolution dem dtm
High resolution dem dtm
 

En vedette

Poster analysis
Poster analysisPoster analysis
Poster analysisWill Relfe
 
Opening Storyboard
Opening Storyboard Opening Storyboard
Opening Storyboard Will Relfe
 
Double pg speardrafts
Double pg speardrafts Double pg speardrafts
Double pg speardrafts Will Relfe
 
Shotlocations jhhjf
Shotlocations jhhjfShotlocations jhhjf
Shotlocations jhhjfWill Relfe
 
A complete guide to web design
A complete guide to web designA complete guide to web design
A complete guide to web designsarliptadash
 
Character profiles
Character profilesCharacter profiles
Character profilesWill Relfe
 
Ancillary task
Ancillary taskAncillary task
Ancillary taskWill Relfe
 
Questionaire results
Questionaire resultsQuestionaire results
Questionaire resultsWill Relfe
 
Documentary Opening - Storyboard
Documentary Opening - Storyboard Documentary Opening - Storyboard
Documentary Opening - Storyboard Will Relfe
 
Draft posters
Draft posters Draft posters
Draft posters Will Relfe
 
Charactr profiles (1)
Charactr profiles (1)Charactr profiles (1)
Charactr profiles (1)Will Relfe
 
Werner Herzog
Werner Herzog Werner Herzog
Werner Herzog Will Relfe
 

En vedette (16)

Poster analysis
Poster analysisPoster analysis
Poster analysis
 
Opening Storyboard
Opening Storyboard Opening Storyboard
Opening Storyboard
 
Double pg speardrafts
Double pg speardrafts Double pg speardrafts
Double pg speardrafts
 
Shotlocations jhhjf
Shotlocations jhhjfShotlocations jhhjf
Shotlocations jhhjf
 
Magazine
Magazine Magazine
Magazine
 
A complete guide to web design
A complete guide to web designA complete guide to web design
A complete guide to web design
 
Character profiles
Character profilesCharacter profiles
Character profiles
 
Ancillary task
Ancillary taskAncillary task
Ancillary task
 
Questionaire results
Questionaire resultsQuestionaire results
Questionaire results
 
Documentary Opening - Storyboard
Documentary Opening - Storyboard Documentary Opening - Storyboard
Documentary Opening - Storyboard
 
New tv gyg
New tv gygNew tv gyg
New tv gyg
 
Draft posters
Draft posters Draft posters
Draft posters
 
Charactr profiles (1)
Charactr profiles (1)Charactr profiles (1)
Charactr profiles (1)
 
documntarians
documntariansdocumntarians
documntarians
 
Eval q 3
Eval q 3Eval q 3
Eval q 3
 
Werner Herzog
Werner Herzog Werner Herzog
Werner Herzog
 

Similaire à LocalizationandMappingforAutonomousNavigationin OutdoorTerrains: A StereoVision Approach

碩一工研院研究成果
碩一工研院研究成果碩一工研院研究成果
碩一工研院研究成果Shaun Lin
 
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET-  	  Simultaneous Localization and Mapping for Automatic Chair Re-Arran...IRJET-  	  Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...IRJET Journal
 
International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER)International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER)ijceronline
 
Localization using filtered dgps
Localization using filtered dgpsLocalization using filtered dgps
Localization using filtered dgpseSAT Journals
 
Immersive 3 d visualization of remote sensing data
Immersive 3 d visualization of remote sensing dataImmersive 3 d visualization of remote sensing data
Immersive 3 d visualization of remote sensing datasipij
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camerajournalBEEI
 
Global positioning system (GPS)
Global positioning system (GPS)Global positioning system (GPS)
Global positioning system (GPS)Gautham Reddy
 
Total station and its application to civil engineering
Total station and its application to civil engineeringTotal station and its application to civil engineering
Total station and its application to civil engineeringTushar Dholakia
 
Vehicle positioning in urban environments using particle filtering-based glob...
Vehicle positioning in urban environments using particle filtering-based glob...Vehicle positioning in urban environments using particle filtering-based glob...
Vehicle positioning in urban environments using particle filtering-based glob...IJECEIAES
 
Real-Time Map Building using Ultrasound Scanning
Real-Time Map Building using Ultrasound ScanningReal-Time Map Building using Ultrasound Scanning
Real-Time Map Building using Ultrasound ScanningIRJET Journal
 
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robotIn tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robotSudhakar Spartan
 
IRJET Autonomous Simultaneous Localization and Mapping
IRJET  	  Autonomous Simultaneous Localization and MappingIRJET  	  Autonomous Simultaneous Localization and Mapping
IRJET Autonomous Simultaneous Localization and MappingIRJET Journal
 
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMA ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMcsandit
 
A cost-effective GPS-aided autonomous guided vehicle for global path planning
A cost-effective GPS-aided autonomous guided vehicle for global path planningA cost-effective GPS-aided autonomous guided vehicle for global path planning
A cost-effective GPS-aided autonomous guided vehicle for global path planningjournalBEEI
 
Distance Estimation based on Color-Block: A Simple Big-O Analysis
Distance Estimation based on Color-Block: A Simple Big-O Analysis Distance Estimation based on Color-Block: A Simple Big-O Analysis
Distance Estimation based on Color-Block: A Simple Big-O Analysis IJECEIAES
 
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGS
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGSDETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGS
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGScscpconf
 

Similaire à LocalizationandMappingforAutonomousNavigationin OutdoorTerrains: A StereoVision Approach (20)

碩一工研院研究成果
碩一工研院研究成果碩一工研院研究成果
碩一工研院研究成果
 
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET-  	  Simultaneous Localization and Mapping for Automatic Chair Re-Arran...IRJET-  	  Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
 
International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER)International Journal of Computational Engineering Research (IJCER)
International Journal of Computational Engineering Research (IJCER)
 
Ijecet 06 10_003
Ijecet 06 10_003Ijecet 06 10_003
Ijecet 06 10_003
 
Localization using filtered dgps
Localization using filtered dgpsLocalization using filtered dgps
Localization using filtered dgps
 
Immersive 3 d visualization of remote sensing data
Immersive 3 d visualization of remote sensing dataImmersive 3 d visualization of remote sensing data
Immersive 3 d visualization of remote sensing data
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camera
 
Global positioning system (GPS)
Global positioning system (GPS)Global positioning system (GPS)
Global positioning system (GPS)
 
Total station and its application to civil engineering
Total station and its application to civil engineeringTotal station and its application to civil engineering
Total station and its application to civil engineering
 
Vehicle positioning in urban environments using particle filtering-based glob...
Vehicle positioning in urban environments using particle filtering-based glob...Vehicle positioning in urban environments using particle filtering-based glob...
Vehicle positioning in urban environments using particle filtering-based glob...
 
Ijetcas14 308
Ijetcas14 308Ijetcas14 308
Ijetcas14 308
 
Real-Time Map Building using Ultrasound Scanning
Real-Time Map Building using Ultrasound ScanningReal-Time Map Building using Ultrasound Scanning
Real-Time Map Building using Ultrasound Scanning
 
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robotIn tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
In tech vision-based_obstacle_detection_module_for_a_wheeled_mobile_robot
 
IRJET Autonomous Simultaneous Localization and Mapping
IRJET  	  Autonomous Simultaneous Localization and MappingIRJET  	  Autonomous Simultaneous Localization and Mapping
IRJET Autonomous Simultaneous Localization and Mapping
 
Collaborative Sensing for Automated Vehicles
Collaborative Sensing for Automated VehiclesCollaborative Sensing for Automated Vehicles
Collaborative Sensing for Automated Vehicles
 
Collaborative Sensing for Automated Vehicles
Collaborative Sensing for Automated VehiclesCollaborative Sensing for Automated Vehicles
Collaborative Sensing for Automated Vehicles
 
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHMA ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
A ROS IMPLEMENTATION OF THE MONO-SLAM ALGORITHM
 
A cost-effective GPS-aided autonomous guided vehicle for global path planning
A cost-effective GPS-aided autonomous guided vehicle for global path planningA cost-effective GPS-aided autonomous guided vehicle for global path planning
A cost-effective GPS-aided autonomous guided vehicle for global path planning
 
Distance Estimation based on Color-Block: A Simple Big-O Analysis
Distance Estimation based on Color-Block: A Simple Big-O Analysis Distance Estimation based on Color-Block: A Simple Big-O Analysis
Distance Estimation based on Color-Block: A Simple Big-O Analysis
 
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGS
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGSDETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGS
DETECTION OF POWER-LINES IN COMPLEX NATURAL SURROUNDINGS
 

Dernier

Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfsanyamsingh5019
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...Marc Dusseiller Dusjagr
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 

Dernier (20)

Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 

LocalizationandMappingforAutonomousNavigationin OutdoorTerrains: A StereoVision Approach

  • 1. Localization and Mapping for Autonomous Navigation in Outdoor Terrains : A Stereo Vision Approach Motilal Agrawal, Kurt Konolige and Robert C. Bolles SRI International 333 Ravenswood Ave. Menlo Park, CA 94025 fagrawal, konolige, bollesg@ai.sri.com Abstract We consider the problem of autonomous navigation in unstructured outdoor terrains using vision sensors. The goal is for a robot to come into a new environment, map it and move to a given goal at modest speeds (1 m/sec). The biggest challenges are in building good maps and keeping the robot well localized as it advances towards the goal. In this paper, we concentrate on showing how it is possible to build a consistent, globally correct map in real time, us-ing efcient precise stereo algorithms for map making and visual odometry for localization. While we have made ad-vances in both localization and mapping using stereo vi-sion, it is the integration of the techniques that is the biggest contribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot. 1 Introduction Recent advances in computing hardware coupled with the availability of different types of sensors have brought the dream of autonomous robots closer to reality now. Not surprisingly, good map making and localization are critical to the autonomous operation and navigation of these robots. In this paper, we concentrate on showing how it is possible to build a consistent, globally correct map for outdoor un-structured environments in real time using stereo vision as the primary sensor. The goal is for a small outdoor robot to come into a new area, learn about and map its environment, and move to a given goal at modest speeds (1 m/sec). This problem is es-pecially difcult in outdoor, off-road environments, where Financial support from DARPA under the Learning Applied to Ground Robots (LAGR) program is gratefully acknowledged. tall grass, shadows, deadfall, and other obstacles predom-inate. Although work in outdoor navigation has preferen-tially used laser rangenders [11, 3, 5], we use stereo vi-sion as the main sensor. Stereo vision is a good choice for several reasons: it is low power, low cost, and can register dense range information from close objects. More impor-tantly, vision sensors allow us to use more distant objects as landmarks for navigation, and to learn and use color and texture models of the environment, in looking further ahead than is possible with range sensors alone. Our robot uses a combination of the following vision-based techniques to make globally consistent maps in real time. Efcient, precise stereo algorithms. Over the past 8 years, we have rened a set of stereo algorithms to take advantage of parallel-data instructions on PC hard-ware. We can perform stereo analysis on 512x384 im-ages in less than 40 ms, enabling a faster system cycle time for real-time obstacle detection and avoidance. Visual odometry (VO) for ne registration of robot motion and corresponding obstacle maps. We have de-veloped techniques that run at 15 Hz on standard PC hardware, and that provide 4% error over runs of 100 m. Our method can be integrated with information from inertial (IMU) and GPS devices, for robustness in difcult lighting or motion situations, and for over-all global consistency. Visual odometry is compared to the combination of GPS and IMU, and shown to be superior in producing useable obstacle maps. 
A fast RANSAC method for nding the ground plane. The ground plane provides a solid base for obstacle de-tection algorithms in challenging outdoor terrain, and produces high-quality obstacle maps for a planning system. Sight-line analysis for longer-range inference. Stereo information on our robot is unreliable past 8m, but it
  • 2. is possible to infer free space by nding “sight lines,” directions in which it is likely there is freespace. While we have made advances in many of the areas above, it is the integration of the techniques that is the biggest con-tribution of the research. The validity of our approach is tested in blind experiments, where we submit our code to an independent testing group that runs and validates it on an outdoor robot. In the most recent tests, we have nished rst out of a group of eight teams, in some cases by a large margin. 1.1 System overview Figure 1(a) shows our robot equipped with two stereo de-vices, each stereo device encompassing a 120 degree eld of view, with a baseline of 12 cm. In the current setup, the robot is near-sighted: depth information for each stero device degrades rapidly after 6m. There is also an inertial unit (IMU) with angular drift of several degrees per minute, and a Garmin GPS. There are 4 Pentium-M 2 GHz comput-ers, one for each stereo device, one for planning and map-making, and one for control of the robot and integration of GPS and IMU readings. In our setup, each stereo computer performs local map making and visual odometry, and sends registered local maps to the planner, where they are inte-grated into a global map. The planner is responsible for global planning and reactive control, sending commands to the controller. 1.2 Related work There has been an explosion of work in mapping and lo-calization (SLAM), most of it concentrating on indoor en-vironments [6, 9]. The sensor of choice for indoor environ-ments is a laser rangender, augmented with monocular or stereo vision. In much of this work, high-accuracy GPS is used to register sensor scans; exceptions are [5, 11]. In con-trast, we forego laser rangenders, and explicitly use image-based registration to build accurate maps. Other approaches to mapping with vision are [15, 16], although they are not oriented towards realtime implementations. Obstacle de-tection using stereo has also received some attention [15]. There have been a number of recent approaches to visual odometry [13, 14]. Our system is distinguished by realtime implementation and high accuracy using a small baseline in realistic terrain. Finally, [16] applies visual matching tech-niques to global consistency for maps, in a technique similar to the maximum likelihood method proposed here. In the following sections, we rst discuss local map cre-ation from visual input. Then we examine visual odometry and registration in detail, and show how consistent global maps are created. Finally, we present performance results for several tests in Spring 2006. 2 Local map construction The object of the local map algorithms is to determine, from the visual information, which areas are freespace and which are obstacles for the robot: the local map. From the stereo disparity image, we compute a nominal ground plane, which yields free space near the robot. We also analyze height differences from the ground to nd obstacles. Fi-nally, via the technique of sight lines we can infer freespace to more distant points. 2.1 Stereo analysis and ground plane ex- traction We use a fast stereo algorithm [8] to compute a dispar-ity image at 512x384 resolution (Figure 1(b)). In typical outdoor scenes, it is possible to achieve very dense stereo results. The high resolution gives very detailed 3D infor-mation for nding the ground plane and obstacles. Each disparity image point [u; v; d] corresponds to a 3D point in the robot's frame. 
The most important geometric analysis is finding the ground plane. Although it is possible to detect obstacles using local variation in height, using a ground plane simplifies processing and yields more stable results. To extract a ground plane, we use a RANSAC technique [4]. Any three noncollinear 3D points define a plane hypothesis. We choose randomly over the set of points, biasing the choices towards points that are distributed over the first 5 meters near the robot, and rejecting hypotheses that are too slanted. Hypothesized planes are ranked by the number of points that are close to the plane.

This method is very robust, returning reasonable ground planes even in difficult situations with clumps of grass or other distractors. Figure 1(d) shows an example of the extracted ground plane, with a green overlay indicating the inliers. Points that lie too high above the ground plane, but lower than the robot's height, are labeled as obstacles. This method is extremely simple, but has proven to work well in practice, even when the ground has modest dips and rises; one reason is that it only looks out to 6 m around the robot. As the robot approaches a rise in the ground, for example, the ground plane will gradually assume the angle of the rise. A more sophisticated analysis would break the ground plane into several segments or model more complex shapes.
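A minimal sketch of this RANSAC loop is given below, assuming the disparity pixels have already been converted to 3D points in the robot frame with z up. The bias toward nearby points, the slant test, and the inlier tolerance follow the description above, but all thresholds are placeholders rather than the values used on the robot.

    import numpy as np

    def fit_ground_plane(points, iters=200, inlier_tol=0.05,
                         near_range=5.0, max_slant_deg=20.0, rng=None):
        """RANSAC ground-plane fit over (N, 3) points in the robot frame (z up)."""
        rng = rng or np.random.default_rng(0)
        # Bias hypothesis sampling toward points within a few meters of the robot.
        near = points[np.linalg.norm(points[:, :2], axis=1) < near_range]
        if len(near) < 3:
            return None, 0
        best_plane, best_inliers = None, 0
        for _ in range(iters):
            p0, p1, p2 = near[rng.choice(len(near), 3, replace=False)]
            n = np.cross(p1 - p0, p2 - p0)
            norm = np.linalg.norm(n)
            if norm < 1e-9:                 # nearly collinear sample, no plane
                continue
            n = n / norm
            if abs(n[2]) < np.cos(np.radians(max_slant_deg)):
                continue                    # hypothesis too slanted to be ground
            d = -n.dot(p0)
            dist = np.abs(points @ n + d)   # point-to-plane distances
            inliers = int((dist < inlier_tol).sum())
            if inliers > best_inliers:      # rank hypotheses by inlier count
                best_plane, best_inliers = (n, d), inliers
        return best_plane, best_inliers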
[Figure 1. (a) Stereo sensors on our robot. (b) Disparity image from the left view of the robot; closer pixels are lighter. (c) Extracted ground plane, in green overlay; the limit of the ground plane in each column is shown by a green bar, and a sight line by a red bar. (d) Ground plane overlaid on the original image, in green; obstacles are indicated in purple.]

2.2 Sight lines

Because of the wide angle and short baseline of the stereo devices, depth information becomes very uncertain after 6 to 8 m. Although we cannot precisely locate obstacles past this limit, we can determine whether there is freespace, using the following observation. Consider the interpreted image of Figure 1(c). There is a path that goes around the bushes and extends out a good distance. The ground plane extends over most of this area, and then ends in a distant line of trees. The trees are too far away to place with any precision, but we can say that there is no obstacle along the line of sight to the trees. Given a conservative estimate for the distance of the trees, we can add freespace up to this estimate.

The computation of sight lines is most efficiently accomplished in disparity space. We divide the disparity image into narrow columns, and for each column attempt to find a contiguous ground plane up to an obstacle. If the ground plane exists, and the obstacle is distant enough, we add a sight-line hypothesis to the local map. In Figure 1(c), the limits of the contiguous ground plane in a column are marked by green bars. Where the ground plane abuts a distant object, there is a vertical red line indicating a sight line. Note in the example that the sight line follows the obvious path out of the bushes.
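The column scan can be sketched as follows, assuming the disparity image and, for each pixel, the disparity predicted by the extracted ground plane (passed in here as ground_disp). The column width, tolerance, and "distant object" disparity threshold are illustrative placeholders, and the real system works directly with the disparity-space plane from Section 2.1.

    import numpy as np

    def find_sight_lines(disp, ground_disp, col_width=16, tol=1.5, far_disp=8.0):
        """Per-column sight-line test in disparity space.

        disp:        (H, W) measured disparity image (larger disparity = closer).
        ground_disp: (H, W) disparity predicted by the extracted ground plane.
        Returns (column_center, last_ground_row) pairs for columns whose
        contiguous ground run ends at a distant object (disparity < far_disp).
        """
        rows, width = disp.shape
        sight_lines = []
        for c0 in range(0, width - col_width + 1, col_width):
            band = slice(c0, c0 + col_width)
            last_ground_row = None
            for r in range(rows - 1, -1, -1):    # bottom of the image is nearest
                row_disp = float(np.median(disp[r, band]))
                if abs(row_disp - float(np.median(ground_disp[r, band]))) < tol:
                    last_ground_row = r          # still consistent with the ground plane
                    continue
                # Ground run ended; a distant blocker means freespace up to it.
                if last_ground_row is not None and row_disp < far_disp:
                    sight_lines.append((c0 + col_width // 2, last_ground_row))
                break
        return sight_lines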
3 Constructing consistent global maps

In this section we provide solutions to two problems: representing and fusing the information provided by visual analysis, and registering local maps into a consistent global map using visual odometry.

3.1 Map representation

For indoor work, a standard map representation is a 2D occupancy grid [12], which gives the probability of each cell in the map being occupied by an obstacle. Alternatives for outdoor environments include 2.5D elevation maps and full 3D voxel maps [7]. These representations can be used to determine allowable kinematic and dynamic paths for an outdoor robot in rough terrain. We choose to keep the simpler 2D occupancy grid, foregoing any complex calculation of the robot's interaction with the terrain. Instead, we abstract the geometrical characteristics of the terrain into a set of categories, and fuse information from these categories to create a cost of movement.

We use a grid of 20 cm x 20 cm cells to represent the global map. Each cell has a probability of belonging to each of the three categories derived from visual analysis (Section 2): obstacle, ground-plane freespace, and sight-line freespace. Note that these categories are not mutually exclusive, since, for example, a cell under an overhanging branch could have both freespace and obstacle properties. We are interested in converting these probabilities into a cost of traversing the cell. If the probabilities were mutually exclusive, we would simply form the cost function c = Σ_i p_i c_i, where c_i is the cost associated with category i. With non-exclusive categories, we chose a simple prioritization schedule to determine the cost. Obstacles have the highest priority, followed by ground plane, sight lines, and paths.
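The sketch below illustrates both forms: the weighted sum c = Σ_i p_i c_i that would apply if the categories were exclusive, and the prioritized assignment actually described above. The per-category costs, probability threshold, and default for undecided cells are placeholders, not values from our implementation.

    # Illustrative per-category traversal costs and priorities (placeholders).
    COSTS = {"obstacle": 1000.0, "ground": 1.0, "sight_line": 2.0, "path": 0.5}
    PRIORITY = ["obstacle", "ground", "sight_line", "path"]   # highest priority first

    def cost_exclusive(p):
        """Weighted cost c = sum_i p_i * c_i, valid only for exclusive categories."""
        return sum(p.get(cat, 0.0) * c for cat, c in COSTS.items())

    def cost_prioritized(p, threshold=0.5):
        """Cost of the highest-priority category the cell is likely to belong to."""
        for cat in PRIORITY:
            if p.get(cat, 0.0) > threshold:
                return COSTS[cat]
        return 10.0        # undecided cell: moderate default cost (placeholder)

    cell = {"ground": 0.9, "sight_line": 0.7}   # freespace seen in two different ways
    print(cost_exclusive(cell), cost_prioritized(cell))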
3.2 Registration and visual odometry

Our robot is equipped with a GPS unit that is accurate to within 3 to 10 meters in good situations. GPS information is filtered by the IMU and wheel encoders to produce a more stable position estimate. However, because GPS drifts and jumps over time, it is impossible using these devices to differentiate GPS errors from other errors such as wheel slippage, and the result is that local maps cannot be reconstructed accurately. Consider the situation of Figure 2. Here the robot goes through two loops of 10 m diameter. There is a long linear feature (a low wall) that is seen as an obstacle at the beginning and end of the loops. Using the filtered GPS pose, the position of the wall shifts almost 2 m during the run, as is evident from Figure 2(c). Our solution to the registration problem is to use visual odometry (VO) to ensure local consistency in map registration. Over larger regions, filtering VO with GPS information provides the necessary corrections to keep errors from growing without bounds. We describe these techniques in the next two sections.

[Figure 2. (a), (b), (c): Three stages during a run using the GPS-filtered pose. Obstacle points are shown in white, freespace in black, and the yellow line is the robot's path. The linear feature is marked by hand in red in all three maps, in its initial pose. (d): Map registration using VO on the same sequence; the GPS-filtered path is shown in yellow, the VO-filtered path in green.]

3.3 Visual Odometry

Our robot presents a challenging situation for visual odometry: a wide field of view and short baseline make distance errors large, and a small offset from the ground plane makes it difficult to track points over longer distances. We have developed a robust visual odometry solution that functions well under these conditions; we describe it in some detail here. For a more complete description of the visual odometry system, please refer to our paper [1].

Our visual odometry system uses feature tracks to estimate the relative incremental motion between two frames that are close in time. Corner feature points are detected in the left image of each stereo pair and tracked across consecutive frames. Figure 3(a) shows the tracked feature points over two consecutive frames. These feature points are then triangulated at each frame based on stereo correspondences. Three of these points are used to estimate the motion using absolute orientation. This motion is then scored using the pixel reprojection errors in both cameras. We use the disparity-space homography [2] to evaluate the inliers for the motion. In the end, the hypothesis with the best score (maximum number of inliers) is used as the starting point for a nonlinear minimization that minimizes the pixel reprojection errors in both cameras simultaneously, resulting in a relative motion estimate between the two frames.

We have found that the approach outlined above is very efficient (roughly 15 Hz) and works remarkably well, even for stereo rigs with a small baseline. Triangulating the feature points anew at each frame builds a firewall against error propagation, but it also means that there will be drift when the rig is stationary. To avoid this drift, we update the reference frame (the frame with reference to which the motion of the next frame is computed) only when the robot has moved some minimum distance (taken to be 5 cm in our implementation). Since we are re-triangulating for every frame, it is important to calibrate the stereo cameras well; a standard plane-based calibration step works well for all our experiments. The fundamental reason that our approach gives reliable motion estimates, even in small-baseline situations, is that we stick to image-based quantities and use the left and right images symmetrically: the absolute orientation step uses both cameras to generate the motion hypothesis, the hypothesis is evaluated and scored based on reprojection errors in both views, and the resulting estimate is refined in a nonlinear minimization step that also uses the two cameras uniformly.

The IMU and the wheel encoders are used to fill in the relative poses when visual odometry fails. This happens due to sudden lighting changes, fast turns of the robot, or a lack of good features in the scene (e.g., a blank wall). Thus they complement the visual pose system.
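The hypothesize-and-score loop described above can be sketched as follows, under simplifying assumptions: inliers are counted from reprojection error in the current left image only (the real system scores both cameras and follows the winner with nonlinear refinement), and the function names and thresholds are ours, for illustration only.

    import numpy as np

    def rigid_from_3pts(A, B):
        """Absolute orientation (Kabsch/SVD) giving R, t with B ~= R A + t; A, B are (3, 3)."""
        ca, cb = A.mean(0), B.mean(0)
        U, _, Vt = np.linalg.svd((A - ca).T @ (B - cb))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        return R, cb - R @ ca

    def project(K, pts):
        """Pinhole projection of (N, 3) camera-frame points with intrinsics K."""
        uvw = (K @ pts.T).T
        return uvw[:, :2] / uvw[:, 2:3]

    def ransac_motion(P_prev, P_cur, obs_cur, K, iters=500, tol=1.5, rng=None):
        """RANSAC over triangulated correspondences between two frames.

        P_prev, P_cur: (N, 3) triangulated points in the previous / current frame.
        obs_cur: (N, 2) measured pixel positions in the current left image.
        Returns (R, t, inlier count), the starting point for nonlinear refinement.
        """
        rng = rng or np.random.default_rng(0)
        best = (np.eye(3), np.zeros(3), -1)
        for _ in range(iters):
            idx = rng.choice(len(P_prev), 3, replace=False)
            R, t = rigid_from_3pts(P_prev[idx], P_cur[idx])
            pred = project(K, (R @ P_prev.T).T + t)   # previous points moved into current frame
            err = np.linalg.norm(pred - obs_cur, axis=1)
            inl = int((err < tol).sum())
            if inl > best[2]:
                best = (R, t, inl)
        return best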
3.4 Global Consistency

Relative motions between consecutive frames are chained together to obtain the absolute pose at each frame. This is bound to result in an accumulation of error and drift, so we use GPS to correct the pose of the vehicle through a simple linear filter. Pose information is used only when the GPS receiver has at least a 3D position fix, and heading information is used only when the vehicle is travelling at 0.5 m/s or faster, to limit the effect of GPS velocity noise on the heading estimate. In addition, GPS measurements are used only if the robot has travelled a minimum distance since the last GPS measurement. The filter nudges the VO pose towards global consistency, while maintaining local consistency. Over larger loops, of course, the roughly 3 m deviation of the GPS unit means that the map may not be consistent. In this case, other techniques such as wide-baseline image matching [10] would have to be employed.
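The exact form of the filter is not spelled out here, so the following is one plausible reading rather than the implementation: a complementary filter that pulls the VO-propagated pose a small fraction of the way toward each accepted GPS reading, under the gating conditions above. The gains and gate values are placeholders.

    import math

    GAIN_POS, GAIN_HDG = 0.05, 0.05        # small "nudge" gains (placeholders)
    MIN_SPEED, MIN_DIST = 0.5, 2.0         # m/s gate for heading, m between GPS updates

    def nudge_pose(pose, gps, speed, dist_since_last_gps, have_3d_fix):
        """Blend a VO-propagated pose (x, y, heading) toward a GPS reading.

        VO keeps the pose locally consistent; GPS pulls it slowly toward the
        globally correct value whenever the gating conditions are met.
        """
        x, y, hdg = pose
        if not have_3d_fix or dist_since_last_gps < MIN_DIST:
            return pose                              # ignore this GPS reading entirely
        gx, gy, ghdg = gps
        x += GAIN_POS * (gx - x)
        y += GAIN_POS * (gy - y)
        if speed >= MIN_SPEED and ghdg is not None:  # GPS heading usable only at speed
            err = math.atan2(math.sin(ghdg - hdg), math.cos(ghdg - hdg))
            hdg += GAIN_HDG * err
        return (x, y, hdg)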
The quality of the registration from filtered VO, shown in Figure 2(d), can be compared to the filtered GPS of Figure 2(c). The low wall, which moved almost 2 m over the short loops when using GPS, is much more consistent when VO is employed. And in cases where GPS is blocked or degraded, such as under the heavy tree cover of Figure 3(b), VO still produces maps that are locally consistent. It also allows us to detect wheel slips and stalls with almost no false positives; note the end of the run in Figure 3(b), where the robot was hung up and the wheels were slipping, and wheel odometry produced a large error.

3.5 Evaluation of Visual Odometry

We have implemented and tested our integrated pose system on several outdoor terrains. Since GPS is accurate to only about 3 to 4 meters, to validate our results we moved the robot in a closed loop of 50 to 100 m over typical outdoor terrain and used the error between the start and end poses. Table 1 compares this error for vehicle odometry (IMU plus wheel odometry), raw visual odometry, and GPS-integrated visual odometry over four loops.

    Table 1. Loop-closure error (percent).

    Run number                   1       2       3       4
    Distance (m)              82.4   141.6    55.3    51.0
    Vehicle odometry (%)       1.3    11.4    11.0    31.0
    Raw visual odometry (%)    2.2     4.8     5.0     3.9
    Visual odometry + GPS (%)  2.0     0.3     1.7     0.9

Except for the first loop, visual odometry outperforms the vehicle odometry even without GPS filtering, and is comparable to the 3 m standard deviation of the GPS. VO substantially outperformed vehicle odometry in three of the four loops, mainly because of turns and wheel slippage during those runs. This is especially evident in loop 4, where the robot was slipping in mud for a substantial amount of time.
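As we read it, the percentage in Table 1 is the start-to-end pose gap over the distance travelled; the exact formula is not spelled out above, so the small check below is an assumption.

    import math

    def loop_closure_error_pct(start_xy, end_xy, distance_m):
        """Start-to-end position gap as a percentage of distance travelled
        (our reading of Table 1, not a formula given in the text)."""
        return 100.0 * math.dist(start_xy, end_xy) / distance_m

    # e.g. a 1.8 m gap after an 82.4 m loop is about 2.2%, matching the
    # "Raw visual odometry" entry for run 1.
    print(round(loop_closure_error_pct((0.0, 0.0), (1.8, 0.0), 82.4), 1))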
4 Results

The combined visual processing results in local maps that represent traversability with a high degree of fidelity. Figure 3(b) shows the results of an autonomous run of about 130 m, over a span of 150 seconds. The first part of the run was along a mulch path under heavy tree cover, with mixed sunlight and deep shadows. For this run, we learned a model of mulch paths offline at a test site, then used the learned model during the autonomous run to recognize the mulch color. Cells categorized as path are shown in yellow; black is freespace. Obstacles are indicated by purple (for absolute certainty), and white-to-gray for decreasing certainty. We did not use sight lines for this run.

The path did not lead directly to the goal, and there were many opportunities for the robot to head cross-country. About two-thirds of the way through the run, no more paths were available, and the robot went through heavy grass and brush to the goal. The robot's pose, as estimated from filtered visual odometry, is shown in green; the filtered GPS path is in yellow. Because of the tree cover, GPS suffered from high variance at times. A benefit of using VO is that wheel slips and stalls are easily detected, with no false positives. For example, at the end of the run, the robot was hung up on a tree branch and spun its wheels for a bit. The filtered GPS, depending on wheel odometry, moved far off the global pose, while the filtered VO stayed put.

[Figure 3. (a): Example of visual odometry, showing the motion of successfully tracked features. (b): Reconstruction of a 130 m autonomous run. Yellow is recognized path (learned offline), black is freespace, and purple, gray, and white are obstacles. The green line shows the robot's trajectory using VO; the yellow line shows the trajectory using IMU and GPS.]

4.1 Performance

One of the unique aspects of the project is independent experimental evaluation. An independent testing group ran monthly blind demos of the perception and control software developed by eight teams and compared their performance to a baseline system. Experiments were conducted every month, with project performers sending in their code on a flash disk to the evaluation team. The disks were inserted into a robot, which was then run over a test course. The last three demos (11, 12, and 13) were considered tests of performance for the first 18 months of the program. Our team was first in the last two tests (Tests 12 and 13), after being last in Test 11. Most of the problems in Test 11 were caused by using the baseline planner and controller (we wrote our own planner and controller for Tests 12 and 13). Table 2 shows the times for the runs in these tests.

    Table 2. Run times for the baseline robot (BL) and our robot (R).

                 Test 12            Test 13
               BL       R        BL       R
    Run 1    5:25    1:46      5:21    2:28
    Run 2    5:34    1:50      5:04    2:12
    Run 3    5:18    1:52      4:45    2:12

We achieved the short run times in Tests 12 and 13 through a combination of precise map building and high-speed path planning.
Our average speed was over 1.1 m/s, while the robot's top speed was limited to 1.3 m/s. Map building relied on VO to provide good localization, ground-plane analysis to help detect obstacles, and sight lines to identify distant regions that are likely to be navigable.

5 Conclusion

We have demonstrated a complete system for off-road navigation in unstructured environments, using stereo vision as the main sensor. The system is very robust: we can typically give it a goal position several hundred meters away and expect it to get there. There are, however, hazards that are not dealt with by the methods discussed in this paper; water and ditches are two robot-killers. Finally, we would like to use visual landmarks to augment GPS for global consistency, because this would give a finer adjustment of the robot's position, which is critical for following routes that have already been found.

References

[1] M. Agrawal and K. Konolige. Real-time localization in outdoor environments using stereo vision and inexpensive GPS. In Proceedings of the International Conference on Pattern Recognition (ICPR), 2006.
[2] M. Agrawal, K. Konolige, and L. Iocchi. Real-time detection of independent motion using stereo. In IEEE Workshop on Motion, 2005.
[3] P. Bellutta, R. Manduchi, L. Matthies, K. Owens, and A. Rankin. Terrain perception for DEMO III. In Proceedings of the IEEE Intelligent Vehicles Symposium, 2000.
[4] M. Fischler and R. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24:381-395, 1981.
[5] J. Guivant, E. Nebot, and S. Baiker. High accuracy navigation using laser range sensors in outdoor applications. In ICRA, 2000.
[6] J. S. Gutmann and K. Konolige. Incremental mapping of large cyclic environments. In CIRA, 1999.
[7] K. Iagnemma, F. Genot, and S. Dubowsky. Rapid physics-based rough-terrain rover planning with sensor and control uncertainty. In ICRA, 1999.
[8] K. Konolige. Small vision systems: Hardware and implementation. In International Symposium on Robotics Research, pages 111-116, 1997.
[9] J. J. Leonard and P. Newman. Consistent, convergent, and constant-time SLAM. In IJCAI, 2003.
[10] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[11] M. Montemerlo and S. Thrun. Large-scale robotic 3-D mapping of urban structures. In ISER, 2004.
[12] H. Moravec and A. Elfes. High resolution maps from wide angle sonar. In ICRA, 1985.
[13] D. Nister, O. Naroditsky, and J. Bergen. Visual odometry. In CVPR, 2004.
[14] C. F. Olson, L. H. Matthies, M. Schoppers, and M. W. Maimone. Robust stereo ego-motion for long distance navigation. In CVPR, 2000.
[15] A. Rankin, A. Huertas, and L. Matthies. Evaluation of stereo vision obstacle detection algorithms for off-road autonomous navigation. In AUVSI Symposium on Unmanned Systems, 2005.
[16] D. J. Spero and R. A. Jarvis. 3D vision for large-scale outdoor environments. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), 2002.