The International Journal of Multimedia & Its Applications (IJMA) Vol.6, No.5, October 2014 
LEADER FOLLOWER FORMATION CONTROL 
OF GROUND VEHICLES USING DYNAMIC 
PIXEL COUNT AND INVERSE PERSPECTIVE 
MAPPING 
S.M. Vaitheeswaran, Bharath M.K., Ashish Prasad and Gokul M.
Aerospace Electronics and Systems Division, CSIR-National Aerospace laboratories, 
HAL Airport Road, Kodihalli, Bengaluru, 560017 India 
ABSTRACT 
This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation
control strategy based on pixel counts using a commercial-grade electro-optic camera. Localization of the
leader for motions along the line of sight as well as along obliquely inclined directions is considered, based on
the pixel variation of the images with reference to two arbitrarily designated positions in the image frames.
From an established relationship between the displacement of the camera along the viewing direction and
the difference in pixel counts between reference points in the images, the range and the angle between the
follower camera and the leader are estimated. The Inverse Perspective Transform is used to account for the
non-linear relationship between the height of a vehicle in a forward-facing image and its distance from the
camera. The formulation is validated with experiments.
KEYWORDS 
Vision based range and angle measurement, dynamic pixel count controller, inverse mapping projection. 
1. INTRODUCTION 
Unmanned systems for autonomous navigation require some form of situational awareness of
the robot's environment. This awareness is commonly achieved through sensors such as
high-end electro-optic cameras, traditional stereo cameras, scanning laser range finders, or some
combination of these. However, these sensors provide an overwhelming amount of data
and come at the expense of additional computation and increased system cost. In this
paper, a low-cost, low-computation vision solution to situational awareness, in the framework of a
convoy formation control problem for autonomous ground vehicles, is proposed. The
vision system uses a traditional monocular camera but differs in its computational
method [1,2,3]. A traditional camera generates tens of thousands of data points, creating a very
fine map of the environment, but does not provide any insight into the recognition of objects
within the scene. The simplified vision system presented in this paper works by first
recognizing the leader as a high-contrast symbol within the camera images. The correlation
problem is reduced from tens of thousands of pixels to a set of feature points, and in some cases
just one feature point. A vector for the relative position of the robot to the scene feature is
calculated based on the dynamic pixel count in the images and is used to generate the motion 
commands for the follower robot. The process is repeated iteratively until the robot is correctly 
localized. A second advantage of the approach is that the underlying algorithms are heavily researched,
well documented, and available in several fast open-source implementations. A third contribution of
the paper is the distance measurement scheme based on dynamic pixel measurements. If the
leader under measurement is not perpendicular to the optical axis, a relationship between the
displacement of the camera and the difference in pixel counts is used to measure the
photographing distance and inclination angle, giving the leader position. Lastly, the angle of view
under which a scene is acquired and the distance of the objects from the camera, namely the
perspective effect, contribute different information content to each pixel of an image. To cope
with this problem, a geometrical transform, Inverse Perspective Mapping (IPM) [4], is used in this
paper. It removes the perspective effect from the acquired image, remapping it into a
new two-dimensional domain (the remapped domain).
2. PROBLEM STATEMENT 
Leader-follower formation of non-holonomic ground vehicles is assumed. The follower ground 
vehicle is equipped with a camera to obtain its bearing and position information with respect to 
the leader robot based on pixel measurement count in the images. Vehicle kinematics obeys the 
bicycle model. The controller’s goal is to keep a desired position and orientation with respect to 
the leader. The problem is to find the relative position and orientation with respect to the leader 
so that it can be provided as feedback to the control loop. Accurate distance measurements are 
crucial, as the follower robot will be planning its path based upon the data from the vision 
system. It is important for the hardware and software to have a fast update rate with a minimum 
of delay, as both can directly affect the dynamic performance of the control system for the 
camera. 
With these requirements in mind, this work presents a set of components selected for their individual
characteristics and complementary nature: 1) a color detector working in the RGB image space;
2) Inverse Perspective Mapping (IPM) to create a top-down view of the image; and 3) angle and
distance estimation of the leader relative to the follower based on pixel counts in the images.
3. METHODOLOGY 
This section describes the algorithm used to identify a leader object in the sequence of images 
from a color video camera mounted on the follower vehicle to estimate the range and bearing to 
the target. 
3.1. Color detection 
The color detector identifies areas that match the leader’s color characteristics using a number of 
steps shown below: 
1. The image is separated into its constituent Red, Green and Blue channels. The luminance
$f_1(x, y)$ of the pixel $(x, y)$ is described as [5]:

$$f_1(x, y) = 0.299\,R(x, y) + 0.587\,G(x, y) + 0.114\,B(x, y) \qquad (1)$$
2. Taking one of the RGB channels, the algorithm calculates the average pixel intensity from the 
area immediately in front of the vehicle. 
3. A binarization [5] is performed using the average value from step 2 as the threshold. Pixels
similar to the surface (path) are thresholded to white and the others to black, where 1
signifies a road pixel and 0 a non-road pixel.
4. Steps 2 and 3 are repeated for the other channels (Green and Blue). Morphological image
processing operations [5], such as dilation, erosion, opening and closing, are applied after
segmentation so that the underlying shapes can be identified and the image optimally
reconstructed from its noisy precursors. Connected components that share similar pixel
intensity values are identified and grouped: connected-component labelling scans the image
and groups pixels into one or more components according to pixel connectivity. Once all
groups are determined, each pixel is labelled with a grey level on the basis of its component.
5. Finally, the algorithm merges the 3 new binary images into a new RGB image, as illustrated in the sketch below.
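As a concrete illustration of steps 1-5, here is a minimal sketch using OpenCV's Python bindings (the implementation in Section 4 uses OpenCV from C); the region-of-interest bounds and the 5x5 structuring element are illustrative assumptions, not values from the paper:

```python
import cv2
import numpy as np

def detect_leader_mask(bgr):
    """Steps 1-5: split channels, threshold each against the average
    intensity of the area in front of the vehicle, clean up with
    morphology, label connected components, and merge the channels."""
    h, w = bgr.shape[:2]
    kernel = np.ones((5, 5), np.uint8)   # assumed structuring element
    masks = []
    for ch in cv2.split(bgr):            # step 1 (OpenCV order: B, G, R)
        # Step 2: average intensity of the area in front of the vehicle
        # (ROI bounds here are assumptions, not the paper's values).
        patch = ch[int(0.8 * h):, int(0.3 * w):int(0.7 * w)]
        # Step 3: binarize against that average; white = road-like pixel.
        _, mask = cv2.threshold(ch, float(patch.mean()), 255,
                                cv2.THRESH_BINARY)
        # Step 4: opening/closing to reconstruct shapes from noise.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        masks.append(mask)
    # Step 4 (cont.): connected-component labelling of the combined mask.
    combined = cv2.bitwise_and(cv2.bitwise_and(masks[0], masks[1]), masks[2])
    _, labels = cv2.connectedComponents(combined)
    return cv2.merge(masks), labels      # step 5: merged binary RGB image
```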
3.2. Inverse perspective mapping (IPM) 
IPM is a mathematical technique whereby a coordinate system may be transformed from one 
perspective to another. In this case we use IPM to remove the effects of perspective distortion of
the road surface in the forward-facing image, producing an undistorted top-down view as shown in Fig.
(1). As can be seen, lines that appear to converge near the horizon are made parallel. To get a
bird's-eye view image, a perspective transform relating the pixel coordinates (u, v) in
the original camera image is obtained through a transformation matrix, as shown in the figure.
Figure 1. IPM Transformation 
Transforming the image in this manner removes the non-linearity of distances and orientation.
We can then focus on only a sub-region of the input image, which reduces the
run time considerably. In order to create the top-down view, we first need to find the mapping of
a point on the road to its projection on the image surface. To get the IPM of the input image, we
assume a flat road, and use the camera intrinsic (focal length and optical center) and extrinsic (pitch
angle θ and height above ground) parameters to perform this transformation. We start by defining a world
frame F_w = {X, Y, Z} centered at the camera optical center, a camera frame F_c = {X_c, Y_c, Z_c},
and an image frame F_i = {u, v}, as shown in Fig. (1). We assume that the camera
frame X_c axis stays in the world frame XY plane. The height of the camera frame above the
ground plane is h. Starting from a point in the image plane, iP = {u, v, 1, 1}, it can be shown that
its projection on the road plane can be found by applying a homogeneous transformation.
This mapping is expressed as [6]:
$$iP = \{u, v, 1, 1\} = [K][T][R]\,(x, y, z, 1) \qquad (2)$$

where R is the rotation matrix:

$$R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (3)$$

T is the translation matrix:

$$T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -h/\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (4)$$

and K is the camera parameter matrix:

$$K = \begin{bmatrix} f k_u & s & u_0 & 0 \\ 0 & f k_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \qquad (5)$$
Here, f is the focal length of the camera, k_u and k_v are the pixel scale factors (their ratio gives the
pixel aspect ratio), and u_0 and v_0 are the camera centre coordinates. Fig. (2) shows a typical IPM
sample. The leader vehicle is extracted from the transformed bird's-eye view images of the follower
using an object detection process and a background subtraction procedure.
Figure 2. IPM Sample (a) Perspective (frontal) View; (b) Bird Eye View
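As an illustration, rather than composing [K][T][R] explicitly, the same bird's-eye warp can be obtained by mapping four road-plane points to a rectangle. A minimal OpenCV sketch follows; the source quadrilateral and output size are placeholders that would in practice be derived from the calibration parameters (θ, h, K) above:

```python
import cv2
import numpy as np

def birds_eye_view(frame):
    """Warp a forward-facing road image into a top-down (IPM) view.
    The source points mark a trapezoid on the road that becomes a
    rectangle after the warp; the coordinates below are illustrative."""
    src = np.float32([[300, 400], [340, 400],    # far edge of the trapezoid
                      [100, 480], [540, 480]])   # near edge of the trapezoid
    dst = np.float32([[100, 0], [540, 0],
                      [100, 480], [540, 480]])   # same points, top-down
    H = cv2.getPerspectiveTransform(src, dst)    # 3x3 homography
    return cv2.warpPerspective(frame, H, (640, 480))
```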
A pose estimation of the leader vehicle is performed in order for the follower vehicle to navigate.
Only the area directly visible in front of the host vehicle is analyzed. Cropping is performed on
the road-segmented image in order to get the region of interest, which provides the distance
and orientation angle of the leader in terms of pixels. The two red lines in Fig. 2(b) indicate the
distance of the leader vehicle, which is taken as two cropped images for computation as
shown in Fig. (2).
3.3. Target Localization based on pixel count [6-9] 
The idea of localization comes from the triangular relationship shown in Fig. 3. We first
capture an image of the leader incorporating a rectangle of known dimensions, and then find the
proportional relationship between the real dimension and the image dimension of the rectangle.
From this proportion, the distance d is easily calculated [6,7,8,9].
In this section we outline the methodology used for range and angle estimation based on the
variation of pixel counts in images, for the purpose of localizing the leader. For
this we first capture an image incorporating a trapezoid of known dimensions, and then find the
proportional relationship between its real dimension and its image dimension. Two cases
arise: (a) the targeted leader vehicle lies on a plane perpendicular to the optical axis of the
follower camera; (b) the targeted leader vehicle lies on a plane oblique to the optical axis of the
follower camera.
Case a: Leader perpendicular to the optical axis of the follower camera: 
Figure 3. Pixel Count Measurement Geometry using Bird Eye View Image for perpendicular Case 
As shown in Fig. (3), the target plane is assumed to be perpendicular to the optical axis, and the
relationship between distance and the variation of pixel counts at different viewing distances is
obtained from the geometry. The target plane formed by the leader vehicle is represented
by points P, O, Q. A simple relationship between the change in leader-follower distance and
the difference in pixel counts between the reference points in images at different viewing
distances is obtained, under the assumption that the distance between reference points P and Q
does not change. We have the following relationship between pixel counts and distances:
$$\frac{n(PQ)^{(1)}}{n_{\max}} = \frac{d(PQ)}{d(h_1)}, \qquad \frac{n(PQ)^{(2)}}{n_{\max}} = \frac{d(PQ)}{d(h_2)} \qquad (6)$$
where d(PQ) is the distance between points P and Q on the target plane, d(h) is the width of the
camera's field of view at viewing distance h, n_max is the number of pixels spanning a horizontal
scan line, and n(PQ)^(1) and n(PQ)^(2) are the pixel counts between points P and Q on the image
plane at viewing distances h_1 and h_2, respectively. Δh is the change in distance between the
leader and follower positions, and h_s is the distance between the optical origin (OP) and the
front end of the camera on the follower. From (6), we have:
$$\frac{n(PQ)^{(2)}}{n(PQ)^{(1)}} = \frac{d(h_1)}{d(h_2)} \qquad (7)$$
Because of the displacement, Δh = h_1 − h_2, we have:

$$\frac{n(PQ)^{(2)}}{n(PQ)^{(1)}} = \frac{d(h_1)}{d(h_2)} = \frac{h_1 + h_s}{h_2 + h_s} \qquad (8)$$
Substituting h_1 = h_2 + Δh into (8), the viewing distances h_1 and h_2 of the leader on
the perpendicular plane can be obtained as:

$$h_1 = \frac{n(PQ)^{(2)}\,\Delta h}{n(PQ)^{(2)} - n(PQ)^{(1)}} - h_s \quad \text{or} \quad h_2 = \frac{n(PQ)^{(1)}\,\Delta h}{n(PQ)^{(2)} - n(PQ)^{(1)}} - h_s \qquad (9)$$
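A direct transcription of (9) as a function; the names are mine, and Δh (the known displacement of the follower between the two frames) and h_s are assumed to be available from odometry and the camera mounting, respectively:

```python
def range_from_pixel_counts(n1, n2, delta_h, h_s):
    """Eq. (9): viewing distances h1, h2 to the perpendicular target
    plane from the pixel counts n1, n2 of segment PQ in two images
    taken delta_h apart; h_s offsets the optical origin from the
    front end of the follower."""
    if n2 <= n1:
        raise ValueError("expected n2 > n1 (camera moved closer by delta_h)")
    h1 = n2 * delta_h / (n2 - n1) - h_s   # distance at position 1
    h2 = n1 * delta_h / (n2 - n1) - h_s   # distance at position 2
    return h1, h2
```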
Also, the distance between two points A and B on the target plane can be obtained as:

$$AB = 2\,(h_1 + h_s)\tan\!\left(\tfrac{\theta_H}{2}\right)\frac{N(AB)^{(1)}}{N_{H\_max}} = 2\,(h_2 + h_s)\tan\!\left(\tfrac{\theta_H}{2}\right)\frac{N(AB)^{(2)}}{N_{H\_max}} \qquad (10)$$
where θ_H is the maximal horizontal view angle of the camera and N_H_max is the maximal number of
pixels in a horizontal scan line of an image frame, which is fixed and known a priori, independent of
the photographing distance. As a result of this relationship, the distance between an arbitrarily
selected point P and the origin O can also be represented as follows:
$$PO = 2\,(h_1 + h_s)\tan\!\left(\tfrac{\theta_H}{2}\right)\frac{n(PO)^{(1)}}{N_{H\_max}} = 2\,(h_2 + h_s)\tan\!\left(\tfrac{\theta_H}{2}\right)\frac{n(PO)^{(2)}}{N_{H\_max}} \qquad (11)$$
The above-mentioned relationship is only valid for leader objects or planes that are perfectly 
perpendicular to the optical axis. 
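For completeness, (10)-(11) amount to a single pixel-to-length conversion once a viewing distance is known; a short sketch, where the default view angle and scan-line width are illustrative camera constants, not the paper's calibration:

```python
import math

def pixels_to_length(n_pixels, h, h_s, theta_h_deg=60.0, n_h_max=640):
    """Eqs. (10)-(11): metric length spanned by n_pixels on a horizontal
    scan line at viewing distance h. theta_h_deg is the maximal
    horizontal view angle and n_h_max the pixels per scan line; the
    defaults are placeholders."""
    fov_width = 2.0 * (h + h_s) * math.tan(math.radians(theta_h_deg) / 2.0)
    return fov_width * n_pixels / n_h_max
```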
Case b: Leader oblique to the optical axis of the follower camera: 
Fig. (4) shows the configuration of the proposed image-based system for measuring the leader 
position on an oblique surface based on the variation of pixel counts, where an incline angle θ_m
exists between the oblique and perpendicular planes. Virtual planes are introduced to measure the
incline angle θ_m. The distances between the camera and the virtual planes at the two positions can
then be obtained as:
$$h_1(P) = \frac{n_2(P)\,\Delta h}{n_2(P) - n_1(P)} - h_s, \qquad h_2(P) = \frac{n_1(P)\,\Delta h}{n_2(P) - n_1(P)} - h_s$$
Figure 4. Pixel Count Measurement Geometry using Bird Eye View Image for Oblique Case
$$h_1(Q) = \frac{n_2(Q)\,\Delta h}{n_2(Q) - n_1(Q)} - h_s, \qquad h_2(Q) = \frac{n_1(Q)\,\Delta h}{n_2(Q) - n_1(Q)} - h_s \qquad (12)$$
where h_1(P) and h_1(Q) are the distances from the camera at position 1 to virtual planes VP_P and
VP_Q, respectively, and h_2(P) and h_2(Q) are the distances from the camera at position 2 to
virtual planes VP_P and VP_Q, respectively. n_i(P) is the pixel count in the image
plane for point P when the image is taken at position i. From (9), (10), (11), and (12), we have:
$$h_1(P) - h_1(Q) = h_2(P) - h_2(Q) = \Delta k$$

$$\tan\theta_m = \frac{\Delta k}{d_P + d_Q} \quad \text{or} \quad \theta_m = \tan^{-1}\!\left(\frac{\Delta k}{d_P + d_Q}\right) \qquad (13)$$
where Δk is the distance between the virtual planes VP_P and VP_Q, and θ_m is the incline
angle. d_P and d_Q represent the distances from point P and point Q to the optical axis,
respectively.
To this end, the distance from the optical origin at positions 1 and 2 to the point where the oblique
plane intersects the optical axis can be expressed as:

$$h_1(O) = h_1(P) - d_P\tan\theta_m = h_1(Q) + d_Q\tan\theta_m$$
$$h_2(O) = h_2(P) - d_P\tan\theta_m = h_2(Q) + d_Q\tan\theta_m \qquad (14)$$
and the distance between the two arbitrary points P and Q, D(PQ), is derived as:

$$D(PQ) = (d_P + d_Q)\sec\theta_m$$
3.4 Vehicle kinematics and control 
The goal of the vehicle controller is to use the leader's range and bearing from the vision system
to force the follower vehicle to follow the leader's path. The controller is designed to follow the
leader's path delayed by an arbitrary following time τ (typically 10 seconds or so).
If (x, y) is the global position of the follower and (x_L, y_L) is the position of the leader, the
controller attempts to make (x(t), y(t)) track (x_L(t − τ), y_L(t − τ)). For simplicity, the delayed
leader position is defined as (x_d(t), y_d(t)). The position of the leader vehicle, (x_L, y_L), is
determined using the camera vision data relative to the current follower position, as described in
Section 3.
The controller uses a bicycle model for the vehicle kinematics, as is common in vehicle-following
applications. Referring to Fig. (5), (x, y) is the follower position, v_c is the commanded speed, and
γ_c is the commanded steering angle. The control laws are given as:
$$v_c = v_0 + k_{p,1}\,e_1 \qquad (15)$$

$$\gamma_c = k_{p,\theta}\,e_\theta + k_{p,2}\,e_2 \qquad (16)$$
Figure 5. Leader Follower Geometry and Co-ordinate system 
$$e_1 = (x_d - x)\cos\theta + (y_d - y)\sin\theta$$
$$e_2 = -(x_d - x)\sin\theta + (y_d - y)\cos\theta$$
$$e_\theta = \theta_d - \theta \qquad (17)$$
4. EXPERIMENTAL SET UP AND IMPLEMENTATION 
In order to show the effectiveness of the algorithms, experiments were performed on a setup
consisting of two ground vehicles in a convoy formation, using autonomous control for the leader
and vision-based control for the follower. The robot vehicle frame and chassis is a model manufactured by
Robokit, INDIA. It has a differential 4-wheeled robot base with two wheels at the front and two
at the back. The electronics were developed in-house. The vehicle uses four geared DC
motors to drive the four wheels independently; the DC motors are controlled by an ATmega
microcontroller board. The software is implemented in the C language, together with the OpenCV
library.
The follower tracks the remotely controlled leader moving in an arbitrary trajectory, while
keeping a predefined inter-distance. The control system has two parts: platform inter-distance
control and control of the follower orientation with respect to the leader. The orientation
control provides the actions needed for the follower to face the leader, which is accomplished when the
leader is located in the center of the image captured by the follower. The distance between the leader
and the follower is obtained from the leader's size in the current frame: an increase in the platform
inter-distance causes a decrease in the platform size in the image, and vice versa. This
dependency has been experimentally obtained (see the sketch below).
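One simple way to use such an experimentally obtained dependency is as a monotone lookup table interpolated at run time; the calibration pairs below are placeholders standing in for the measured curve:

```python
import numpy as np

# Measured (leader pixel height -> inter-distance in metres) pairs;
# the numbers are placeholders for the experimentally obtained curve.
PIXEL_SIZES = np.array([120.0, 90.0, 60.0, 40.0, 25.0])
DISTANCES = np.array([0.5, 1.0, 1.5, 2.0, 3.0])

def distance_from_size(pixel_size):
    """Interpolate the size-vs-distance calibration. np.interp needs an
    increasing x-axis, so the (decreasing) size axis is reversed."""
    return float(np.interp(pixel_size, PIXEL_SIZES[::-1], DISTANCES[::-1]))
```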
The difference between current and desired platform inter-distance causes the follower 
forward/backward motion, while the leader’s displacement from the image center causes the 
follower rotation until the leader appears in the image center. 
Three different postures (angles of view) of the platform at three separation distances have been 
examined. For each posture and distance (9 cases) 90 frames have been processed (15 fps). 
Determination of the position and orientation of the leader is achieved by estimating its distance
and relative orientation with regard to the follower using the procedure outlined in Section 3.
Only the area directly visible in front of the host vehicle is analyzed. Cropping is performed on
the road-segmented image in order to get the region of interest, which provides the distance
and orientation angle of the leader in terms of pixels.
In the case of the front (straight) view shown in Fig. 6(b), and for the different distances considered,
the results show very good localization and following capabilities. For the left and right views shown
in Figs. 6(a) and 6(c), the results are also in good agreement, indicating the validity of the design and
implementation. Fig. (7) shows typical snapshots capturing the faithful change in straight-line
movement and orientation of the follower vehicle as demanded, confirming the validity of the
approach and method used in the paper.
Figure 6. Distance and orientation of the leader vehicle: (a) left orientation 35°, (b) straight movement
0°, (c) right orientation 50°, (d) cropped version used for pixel count
Figure 7. Leader Follower experimental geometry, angle measurement and Bird Eye View images showing 
pixel count at two points for straight and angle maneuvering
5. CONCLUSIONS 
The algorithms used for vision detection and localization provide a simple but effective real-time
processing approach for the leader-follower formation control considered in this
paper. The solution is based on the pixel variation of the images with reference to two arbitrarily
designated positions in the image frames. The tracking enables a follower to determine its relative
position and orientation with respect to the leader in its field of view. The performance of the
proposed method has been demonstrated on a laboratory setup composed of two mobile robot
platforms. Experimental results demonstrate that the overall accuracy of recognition is very good (in
some cases even 100%), providing an accurate calculation of the distance between the leader and the
follower. Future work will be directed toward improving the robustness of the proposed method with
respect to various environmental conditions (light intensity, shadows, etc.), as well as using multiple
vehicles for validation.
REFERENCES 
[1] C. Harris and M. Stephens, "A combined corner and edge detector", in Alvey Vision Conference,
volume 15, page 50, Manchester, UK, 1988.
[2] J. Shi and C. Tomasi, "Good features to track", in IEEE Computer Society Conference on Computer
Vision and Pattern Recognition, pages 593–600, 1994.
[3] D.G. Lowe, "Object recognition from local scale-invariant features", in International Conference on
Computer Vision, volume 2, pages 1150–1157, Greece, 1999.
[4] E.N. Malamas, E.G.M. Petrakis, M. Zervakis, L. Petit and J.-D. Legat (2003), "A survey on
industrial vision systems, applications and tools", Image and Vision Computing, 21(2), 171–188.
[5] Mohamed Aly (2008), "Real time detection of lane markers in urban streets", IEEE Intelligent
Vehicles Symposium, Eindhoven University of Technology, Eindhoven, The Netherlands, 7–12.
[6] Hsu, C.C.; Lu, M.C.; Wang, W.Y.; Lu, Y.Y. (2009), "Three-dimensional measurement of distant
objects based on laser-projected CCD images", IET Sci. Meas. Technol., 3, 197–207.
[7] Lu, M.C.; Wang, W.Y.; Chu, C.Y. (2006), "Image-based distance and area measuring systems", IEEE
Sens. J., 6, 495–503.
[8] Wang, W.Y.; Lu, M.C.; Kao, H.L.; Chu, C.Y. (2007), "Nighttime vehicle distance measuring
systems", IEEE Trans. Circ. Syst. II, 54, 81–85.
[9] Hsu, C.C.; Lu, M.C.; Wang, W.Y.; Lu, Y.Y. (2009), "Distance measurement based on pixel variation of
CCD images", ISA Trans., 40, 389–395.

More Related Content

What's hot

BEV Semantic Segmentation
BEV Semantic SegmentationBEV Semantic Segmentation
BEV Semantic SegmentationYu Huang
 
Structure and Motion - 3D Reconstruction of Cameras and Structure
Structure and Motion - 3D Reconstruction of Cameras and StructureStructure and Motion - 3D Reconstruction of Cameras and Structure
Structure and Motion - 3D Reconstruction of Cameras and StructureGiovanni Murru
 
Lane detection by use of canny edge
Lane detection by use of canny edgeLane detection by use of canny edge
Lane detection by use of canny edgebanz23
 
Environment Detection and Path Planning Using the E-puck Robot
Environment Detection and Path Planning Using the E-puck Robot Environment Detection and Path Planning Using the E-puck Robot
Environment Detection and Path Planning Using the E-puck Robot IRJET Journal
 
An Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationAn Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationCSCJournals
 
3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous drivingYu Huang
 
camera-based Lane detection by deep learning
camera-based Lane detection by deep learningcamera-based Lane detection by deep learning
camera-based Lane detection by deep learningYu Huang
 
Multiexposure Image Fusion
Multiexposure Image FusionMultiexposure Image Fusion
Multiexposure Image FusionIJMER
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IVYu Huang
 
Fisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving IIFisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving IIYu Huang
 
Pose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningPose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningYu Huang
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IVYu Huang
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...TELKOMNIKA JOURNAL
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIYu Huang
 
Deep VO and SLAM
Deep VO and SLAMDeep VO and SLAM
Deep VO and SLAMYu Huang
 
Driving behaviors for adas and autonomous driving XII
Driving behaviors for adas and autonomous driving XIIDriving behaviors for adas and autonomous driving XII
Driving behaviors for adas and autonomous driving XIIYu Huang
 
Deep vo and slam iii
Deep vo and slam iiiDeep vo and slam iii
Deep vo and slam iiiYu Huang
 

What's hot (20)

BEV Semantic Segmentation
BEV Semantic SegmentationBEV Semantic Segmentation
BEV Semantic Segmentation
 
Lane Detection
Lane DetectionLane Detection
Lane Detection
 
Structure and Motion - 3D Reconstruction of Cameras and Structure
Structure and Motion - 3D Reconstruction of Cameras and StructureStructure and Motion - 3D Reconstruction of Cameras and Structure
Structure and Motion - 3D Reconstruction of Cameras and Structure
 
Lane detection by use of canny edge
Lane detection by use of canny edgeLane detection by use of canny edge
Lane detection by use of canny edge
 
Environment Detection and Path Planning Using the E-puck Robot
Environment Detection and Path Planning Using the E-puck Robot Environment Detection and Path Planning Using the E-puck Robot
Environment Detection and Path Planning Using the E-puck Robot
 
13
1313
13
 
An Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth EstimationAn Assessment of Image Matching Algorithms in Depth Estimation
An Assessment of Image Matching Algorithms in Depth Estimation
 
3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving3-d interpretation from single 2-d image for autonomous driving
3-d interpretation from single 2-d image for autonomous driving
 
camera-based Lane detection by deep learning
camera-based Lane detection by deep learningcamera-based Lane detection by deep learning
camera-based Lane detection by deep learning
 
Multiexposure Image Fusion
Multiexposure Image FusionMultiexposure Image Fusion
Multiexposure Image Fusion
 
Deep VO and SLAM IV
Deep VO and SLAM IVDeep VO and SLAM IV
Deep VO and SLAM IV
 
Fisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving IIFisheye Omnidirectional View in Autonomous Driving II
Fisheye Omnidirectional View in Autonomous Driving II
 
Pose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learningPose estimation from RGB images by deep learning
Pose estimation from RGB images by deep learning
 
3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV3-d interpretation from single 2-d image IV
3-d interpretation from single 2-d image IV
 
06466595
0646659506466595
06466595
 
Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...Matching algorithm performance analysis for autocalibration method of stereo ...
Matching algorithm performance analysis for autocalibration method of stereo ...
 
Deep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data IIDeep Learning’s Application in Radar Signal Data II
Deep Learning’s Application in Radar Signal Data II
 
Deep VO and SLAM
Deep VO and SLAMDeep VO and SLAM
Deep VO and SLAM
 
Driving behaviors for adas and autonomous driving XII
Driving behaviors for adas and autonomous driving XIIDriving behaviors for adas and autonomous driving XII
Driving behaviors for adas and autonomous driving XII
 
Deep vo and slam iii
Deep vo and slam iiiDeep vo and slam iii
Deep vo and slam iii
 

Viewers also liked

Scott.Jessup-Resume (1)
Scott.Jessup-Resume (1) Scott.Jessup-Resume (1)
Scott.Jessup-Resume (1) Scott Jessup
 
Produtos AGX Tecnologia
Produtos AGX TecnologiaProdutos AGX Tecnologia
Produtos AGX Tecnologiajenjohnlee
 
Technology Trends in Insurance
Technology Trends in InsuranceTechnology Trends in Insurance
Technology Trends in Insuranceminorifh
 
Direct torque control of induction motor using space vector modulation
Direct torque control of induction motor using space vector modulationDirect torque control of induction motor using space vector modulation
Direct torque control of induction motor using space vector modulationIAEME Publication
 
Las redes sociales tellez
Las redes sociales tellezLas redes sociales tellez
Las redes sociales tellezctellezvirgen
 
Insurance Technology As A Catalyst For Change - Emerging Market Segments
Insurance Technology As A Catalyst For Change - Emerging Market SegmentsInsurance Technology As A Catalyst For Change - Emerging Market Segments
Insurance Technology As A Catalyst For Change - Emerging Market SegmentsAgile Financial Technologies
 
NASA Lunabotics Mining Competition 2012, Chondrobot-2, Bangladesh
NASA Lunabotics Mining Competition 2012, Chondrobot-2, BangladeshNASA Lunabotics Mining Competition 2012, Chondrobot-2, Bangladesh
NASA Lunabotics Mining Competition 2012, Chondrobot-2, BangladeshKazi Anik
 
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...Diamond Hond of Glendale
 
Portland, Maine Asia Modern Master Bedroom
Portland, Maine Asia Modern Master BedroomPortland, Maine Asia Modern Master Bedroom
Portland, Maine Asia Modern Master Bedroomsanctuarydesignportland
 
201205 The Intersection: Technology and Insurance
201205 The Intersection: Technology and Insurance201205 The Intersection: Technology and Insurance
201205 The Intersection: Technology and InsuranceSteven Callahan
 
Sample lemon beer name development
Sample lemon beer name developmentSample lemon beer name development
Sample lemon beer name developmentBrand Acumen
 

Viewers also liked (17)

Sena estacion 2
Sena estacion 2Sena estacion 2
Sena estacion 2
 
Scott.Jessup-Resume (1)
Scott.Jessup-Resume (1) Scott.Jessup-Resume (1)
Scott.Jessup-Resume (1)
 
Produtos AGX Tecnologia
Produtos AGX TecnologiaProdutos AGX Tecnologia
Produtos AGX Tecnologia
 
Technology Trends in Insurance
Technology Trends in InsuranceTechnology Trends in Insurance
Technology Trends in Insurance
 
Iss
IssIss
Iss
 
Slowakei
SlowakeiSlowakei
Slowakei
 
Folder tiriba
Folder tiribaFolder tiriba
Folder tiriba
 
Direct torque control of induction motor using space vector modulation
Direct torque control of induction motor using space vector modulationDirect torque control of induction motor using space vector modulation
Direct torque control of induction motor using space vector modulation
 
Technologyvoice
TechnologyvoiceTechnologyvoice
Technologyvoice
 
Las redes sociales tellez
Las redes sociales tellezLas redes sociales tellez
Las redes sociales tellez
 
Insurance Technology As A Catalyst For Change - Emerging Market Segments
Insurance Technology As A Catalyst For Change - Emerging Market SegmentsInsurance Technology As A Catalyst For Change - Emerging Market Segments
Insurance Technology As A Catalyst For Change - Emerging Market Segments
 
NASA Lunabotics Mining Competition 2012, Chondrobot-2, Bangladesh
NASA Lunabotics Mining Competition 2012, Chondrobot-2, BangladeshNASA Lunabotics Mining Competition 2012, Chondrobot-2, Bangladesh
NASA Lunabotics Mining Competition 2012, Chondrobot-2, Bangladesh
 
Mike Golden
Mike GoldenMike Golden
Mike Golden
 
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...
2010 Honda Accord Los Angeles Glendale CA Diamond Honda Glendale Your Los Ang...
 
Portland, Maine Asia Modern Master Bedroom
Portland, Maine Asia Modern Master BedroomPortland, Maine Asia Modern Master Bedroom
Portland, Maine Asia Modern Master Bedroom
 
201205 The Intersection: Technology and Insurance
201205 The Intersection: Technology and Insurance201205 The Intersection: Technology and Insurance
201205 The Intersection: Technology and Insurance
 
Sample lemon beer name development
Sample lemon beer name developmentSample lemon beer name development
Sample lemon beer name development
 

Similar to Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Count and Inverse Perspective Mapping

Visual Odometry using Stereo Vision
Visual Odometry using Stereo VisionVisual Odometry using Stereo Vision
Visual Odometry using Stereo VisionRSIS International
 
An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...Kunal Kishor Nirala
 
Dense Visual Odometry Using Genetic Algorithm
Dense Visual Odometry Using Genetic AlgorithmDense Visual Odometry Using Genetic Algorithm
Dense Visual Odometry Using Genetic AlgorithmSlimane Djema
 
Application of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationApplication of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationIRJET Journal
 
V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1KARTHIKEYAN V
 
Image fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillanceImage fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillanceeSAT Publishing House
 
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...IRJET Journal
 
Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...IJECEIAES
 
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptxImage Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptxahmadzahran219
 
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...CSCJournals
 
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...IRJET Journal
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camerajournalBEEI
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
The flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional cameraThe flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional cameraTELKOMNIKA JOURNAL
 
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...IOSR Journals
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...ijcsa
 

Similar to Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Count and Inverse Perspective Mapping (20)

Visual Odometry using Stereo Vision
Visual Odometry using Stereo VisionVisual Odometry using Stereo Vision
Visual Odometry using Stereo Vision
 
INCAST_2008-014__2_
INCAST_2008-014__2_INCAST_2008-014__2_
INCAST_2008-014__2_
 
An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...An automatic algorithm for object recognition and detection based on asift ke...
An automatic algorithm for object recognition and detection based on asift ke...
 
Dense Visual Odometry Using Genetic Algorithm
Dense Visual Odometry Using Genetic AlgorithmDense Visual Odometry Using Genetic Algorithm
Dense Visual Odometry Using Genetic Algorithm
 
Application of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position EstimationApplication of Vision based Techniques for Position Estimation
Application of Vision based Techniques for Position Estimation
 
D04432528
D04432528D04432528
D04432528
 
V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1V.KARTHIKEYAN PUBLISHED ARTICLE 1
V.KARTHIKEYAN PUBLISHED ARTICLE 1
 
Image fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillanceImage fusion using nsct denoising and target extraction for visual surveillance
Image fusion using nsct denoising and target extraction for visual surveillance
 
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
Automatic License Plate Detection in Foggy Condition using Enhanced OTSU Tech...
 
Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...
 
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptxImage Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
 
Ay33292297
Ay33292297Ay33292297
Ay33292297
 
Ay33292297
Ay33292297Ay33292297
Ay33292297
 
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...
Tracking Chessboard Corners Using Projective Transformation for Augmented Rea...
 
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
IRJET- Digital Image Forgery Detection using Local Binary Patterns (LBP) and ...
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camera
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
The flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional cameraThe flow of baseline estimation using a single omnidirectional camera
The flow of baseline estimation using a single omnidirectional camera
 
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...
Enhanced Algorithm for Obstacle Detection and Avoidance Using a Hybrid of Pla...
 
Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...Automatic rectification of perspective distortion from a single image using p...
Automatic rectification of perspective distortion from a single image using p...
 

Recently uploaded

Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesPrabhanshu Chaturvedi
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college projectTonystark477637
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...ranjana rawat
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Call Girls in Nagpur High Profile
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINESIVASHANKAR N
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCall Girls in Nagpur High Profile
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxfenichawla
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSISrknatarajan
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations120cr0395
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxupamatechverse
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performancesivaprakash250
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdfKamal Acharya
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Dr.Costas Sachpazis
 

Recently uploaded (20)

Glass Ceramics: Processing and Properties
Glass Ceramics: Processing and PropertiesGlass Ceramics: Processing and Properties
Glass Ceramics: Processing and Properties
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college project
 
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
The Most Attractive Pune Call Girls Manchar 8250192130 Will You Miss This Cha...
 
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...Top Rated  Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
Top Rated Pune Call Girls Budhwar Peth ⟟ 6297143586 ⟟ Call Me For Genuine Se...
 
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINEMANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
MANUFACTURING PROCESS-II UNIT-2 LATHE MACHINE
 
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service NashikCollege Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
College Call Girls Nashik Nehal 7001305949 Independent Escort Service Nashik
 
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptxBSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
BSides Seattle 2024 - Stopping Ethan Hunt From Taking Your Data.pptx
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
UNIT-III FMM. DIMENSIONAL ANALYSIS
UNIT-III FMM.        DIMENSIONAL ANALYSISUNIT-III FMM.        DIMENSIONAL ANALYSIS
UNIT-III FMM. DIMENSIONAL ANALYSIS
 
Extrusion Processes and Their Limitations
Extrusion Processes and Their LimitationsExtrusion Processes and Their Limitations
Extrusion Processes and Their Limitations
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
Introduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptxIntroduction to IEEE STANDARDS and its different types.pptx
Introduction to IEEE STANDARDS and its different types.pptx
 
Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024Water Industry Process Automation & Control Monthly - April 2024
Water Industry Process Automation & Control Monthly - April 2024
 
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINEDJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
DJARUM4D - SLOT GACOR ONLINE | SLOT DEMO ONLINE
 
UNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its PerformanceUNIT - IV - Air Compressors and its Performance
UNIT - IV - Air Compressors and its Performance
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
Online banking management system project.pdf
Online banking management system project.pdfOnline banking management system project.pdf
Online banking management system project.pdf
 
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
Structural Analysis and Design of Foundations: A Comprehensive Handbook for S...
 

Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Count and Inverse Perspective Mapping

  • 1. The International Journal of Multimedia & Its Applications (IJMA) Vol.6, No.5, October 2014 LEADER FOLLOWER FORMATION CONTROL OF GROUND VEHICLES USING DYNAMIC PIXEL COUNT AND INVERSE PERSPECTIVE MAPPING S.M.Vaitheeswarana, Bharath.M.K, Ashish Prasad and Gokul.M Aerospace Electronics and Systems Division, CSIR-National Aerospace laboratories, HAL Airport Road, Kodihalli, Bengaluru, 560017 India ABSTRACT This paper deals with leader-follower formations of non-holonomic mobile robots, introducing a formation control strategy based on pixel counts using a commercial grade electro optics camera. Localization of the leader for motions along line of sight as well as the obliquely inclined directions are considered based on pixel variation of the images by referencing to two arbitrarily designated positions in the image frames. Based on an established relationship between the displacement of the camera movement along the viewing direction and the difference in pixel counts between reference points in the images, the range and the angle estimate between the follower camera and the leader is calculated. The Inverse Perspective Transform is used to account for non linear relationship between the height of vehicle in a forward facing image and its distance from the camera. The formulation is validated with experiments. KEYWORDS Vision based range and angle measurement, dynamic pixel count controller, inverse mapping projection. 1. INTRODUCTION Unmanned Systems for autonomous navigation requires some form of situational awareness of the robot’s environment. This awareness is achieved commonly through different sensors such as high end electro-optic cameras, traditional stereo cameras, scanning laser range finders, or some combination of these sensors. However, these sensors provide an overwhelming amount of data and come at the expense of additional computational and increased costs to the system. In this paper, a low cost, low computation, vision solution to situational awareness in the framework of a formation control convoy formation problem for autonomous ground vehicles is proposed. The Vision system uses the traditional monocular cameras but differs in its computational method[1,2,3]. The traditional camera generates tens of thousands of data points creating a very fine map of the environment, but does not provide any insight into the recognition of objects within the scene. The simplified vision system presented here in the paper works by first recognizing the leader as a high contrast symbol within the camera images. The correlation problem is reduced from tens of thousands of pixels to a set of feature points and in some cases just one feature point. A vector for the relative position of the robot to the scene feature is DOI : 10.5121/ijma.2014.6501 1
  • 2. The International Journal of Multimedia & Its Applications (IJMA) Vol.6, No.5, October 2014 calculated based on the dynamic pixel count in the images and is used to generate the motion commands for the follower robot. The process is repeated iteratively until the robot is correctly localized. A second advantage of the approach is that the algorithm used is heavily researched, well documented and several fast open source algorithms already exist. A third contribution to the paper is in the distance measurement scheme based on dynamic pixel measurements. If the leader under measurement is not perpendicular to the optical axis, a relationship between the displacement of the camera movement and the difference in pixel counts is used to measure the photographing distance and inclination angle to get the leader position. lastly the angle of view under which a scene is acquired and the distance of the objects from the camera, namely the perspective effect, contributes different information content to each pixel of an image. To cope with this problem a geometrical transform Inverse Perspective Mapping, (IPM) [4], is used in this paper. This allows removing the perspective effect from the acquired image, remapping it into a new 2-dimensional domain (the remapped domain). 2 2. PROBLEM STATEMENT Leader-follower formation of non-holonomic ground vehicles is assumed. The follower ground vehicle is equipped with a camera to obtain its bearing and position information with respect to the leader robot based on pixel measurement count in the images. Vehicle kinematics obeys the bicycle model. The controller’s goal is to keep a desired position and orientation with respect to the leader. The problem is to find the relative position and orientation with respect to the leader so that it can be provided as feedback to the control loop. Accurate distance measurements are crucial, as the follower robot will be planning its path based upon the data from the vision system. It is important for the hardware and software to have a fast update rate with a minimum of delay, as both can directly affect the dynamic performance of the control system for the camera. With these requirements in mind, this work presents a system selected for their individual characteristics and complementary nature: 1) A color detector working in the RGB image space, and 2) Inverse Perspective Mapping (IPM) to create a top down view of the image 3) Angle and Distance Estimation of the leader and follower based on pixel counts in the images. 3. METHODOLOGY This section describes the algorithm used to identify a leader object in the sequence of images from a color video camera mounted on the follower vehicle to estimate the range and bearing to the target. 3.1. Color detection The color detector identifies areas that match the leader’s color characteristics using a number of steps shown below: 1. The image is separated into its constituent Red Green and Blue channels. The luminance ( , ) 1 f x y of the pixel (x, y) is described as [5]: f x y = R + + (1) ( , ) 0.299 1 x y ( , ) 0.114 ( , ) 0.578 ( , ) B x y G x y
2. Taking one of the RGB channels, the algorithm calculates the average pixel intensity over the area immediately in front of the vehicle.

3. A binarization [5] is performed using the average value from step 2 as the threshold. Pixels similar to the surface (path) are thresholded to white and all others to black, where 1 signifies a road pixel and 0 a non-road pixel.

4. Steps 2 and 3 are repeated for the other channels (green and blue). Morphological image processing operations [5], such as dilation, erosion, opening and closing, are applied after segmentation, so that the underlying shapes can be identified and the image optimally reconstructed from its noisy precursor. Connected components that share similar pixel intensity values are then identified and linked: connected-component labelling scans the image and groups pixels into one or more components according to pixel connectivity. Once all groups are determined, each pixel is labelled with a grey level on the basis of its component.

5. Finally, the algorithm merges the three new binary images into a new RGB image.
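A minimal Python/OpenCV sketch of steps 1 to 5 above; the sampling strip, kernel size, comparison direction and channel handling are assumptions for illustration, not values from the paper:

```python
import cv2
import numpy as np

def detect_road_mask(bgr):
    """Per-channel binarization (steps 2-3), merge (step 5), then
    morphological clean-up and connected-component labelling (step 4)."""
    lum = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # eq. (1): 0.299R + 0.587G + 0.114B
    h, w = bgr.shape[:2]
    strip = bgr[int(0.8 * h):, :]            # area just in front of the vehicle (assumed)
    masks = []
    for c in range(3):                       # B, G, R channels (step 1)
        thresh = strip[:, :, c].mean()       # step 2: average intensity as threshold
        _, m = cv2.threshold(bgr[:, :, c], thresh, 255, cv2.THRESH_BINARY)
        masks.append(m)                      # step 3: 255 = road-like, 0 = non-road
    merged = cv2.merge(masks)                # step 5: merge the three binary images
    kernel = np.ones((5, 5), np.uint8)       # kernel size is an assumption
    gray = cv2.cvtColor(merged, cv2.COLOR_BGR2GRAY)
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)
    n, labels = cv2.connectedComponents(gray)  # step 4: group connected pixels
    return merged, labels
```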
3.2. Inverse perspective mapping (IPM)

IPM is a mathematical technique whereby a coordinate system is transformed from one perspective to another. Here, IPM removes the perspective distortion of the road surface in the forward-facing image, producing an undistorted top-down view as shown in Fig. (1); lines that appear to converge near the horizon are made parallel. To obtain the bird's-eye-view image, a perspective transform relating the pixel coordinates (u, v) seen by the camera in the original image is obtained through a transformation matrix, as shown in the figure.

Figure 1. IPM Transformation

Transforming the image in this manner removes the non-linearity of distances and orientations. Attention can then be restricted to a sub-region of the input image, which reduces the run time considerably. To create the top-down view, we first need the mapping of a point on the road to its projection on the image surface. To obtain the IPM of the input image, we assume a flat road and use the camera intrinsic (focal length and optical center) and extrinsic (pitch $\theta$ and height above ground) parameters to perform the transformation. We start by defining a world frame $F_W = \{X_W, Y_W, Z_W\}$ centered at the camera optical center, a camera frame $F_C = \{X_C, Y_C, Z_C\}$, and an image frame $F_I = \{u, v\}$, as shown in Fig. (1). We assume that the camera frame $X_C$ axis stays in the world frame $X_W Y_W$ plane. The height of the camera frame above the ground plane is $h$. Starting from a point in the image plane $^{i}P = \{u, v, 1, 1\}^T$, its projection on the road plane is found by applying the homogeneous transformation [6]:

$$^{i}P = \{u, v, 1, 1\}^T = [K][T][R]\,\{x, y, z, 1\}^T \tag{2}$$

where $R$ is the rotation matrix

$$R = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3}$$

$T$ is the translation matrix

$$T = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & -h/\sin\theta \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{4}$$

and $K$ is the camera parameter matrix

$$K = \begin{bmatrix} f k_u & s & u_0 & 0 \\ 0 & f k_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \tag{5}$$

Here, $f$ is the focal length of the camera, $k_u$ and $k_v$ are the horizontal and vertical pixel scale factors (their ratio gives the pixel aspect ratio), $s$ is the skew, and $(u_0, v_0)$ are the coordinates of the camera centre. Fig. (2) shows a typical IPM sample. The leader vehicle is extracted from the transformed bird's-eye-view images of the follower using an object detection process and a background subtraction procedure.

Figure 2. IPM Sample (a) Perspective (frontal) View; (b) Bird's-Eye View
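Equations (2) to (5) can be composed numerically and, restricted to the flat-road plane, inverted to produce the bird's-eye view. A hedged sketch, with zero skew and all calibration numbers assumed:

```python
import numpy as np
import cv2

def ipm_homography(f, ku, kv, u0, v0, theta, h):
    """Compose eqs. (3)-(5) and restrict to the ground plane, giving a
    3x3 homography from road coordinates (x, y, 1) to image pixels."""
    R = np.array([[1, 0, 0, 0],
                  [0, np.cos(theta), -np.sin(theta), 0],
                  [0, np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 0, 1]])
    T = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, -h / np.sin(theta)],
                  [0, 0, 0, 1]])
    K = np.array([[f * ku, 0, u0, 0],   # skew s assumed zero
                  [0, f * kv, v0, 0],
                  [0, 0, 1, 0]])
    P = K @ T @ R                        # 3x4 projection, eq. (2)
    return P[:, [0, 1, 3]]               # drop the z column (flat road, z = 0)

# All calibration values below are placeholders, not the paper's numbers.
H = ipm_homography(f=700.0, ku=1.0, kv=1.0, u0=320.0, v0=240.0,
                   theta=np.radians(12.0), h=0.4)
# S scales road metres to output pixels; the warp maps image -> road grid:
S = np.array([[50.0, 0, 320.0], [0, -50.0, 480.0], [0, 0, 1]])
# birdseye = cv2.warpPerspective(frame, S @ np.linalg.inv(H), (640, 480))
```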
A pose estimate of the leader vehicle is required for the follower vehicle to navigate. Only the area directly visible in front of the host vehicle is analyzed. Cropping is performed on the road-segmented image in order to obtain the region of interest, which provides the distance and orientation angle of the leader in terms of pixels. The two red lines in Fig. 2(b) indicate the distance of the leader vehicle, which is taken as two cropped images for computation as shown in Fig. (2).

3.3. Target localization based on pixel count [6-9]

The idea of localization comes from the triangular relationship shown in Fig. 3: we first capture an image of the leader incorporating a rectangle of known dimensions, and then find the proportion relationship between the real dimension and the image dimension of the rectangle. From this proportion the distance d is easily calculated [6-9]. This section outlines the methodology used for range and angle estimation based on the variation of pixel counts in the images, for the purpose of localizing the leader. Here we capture an image incorporating a trapezoid of known dimensions and find the proportion relationship between its real dimension and its image dimension. Two cases arise: (a) the targeted leader vehicle lies on a plane perpendicular to the optical axis of the follower's camera; (b) the targeted leader vehicle lies on a plane oblique to the optical axis of the follower's camera.

Case a: Leader perpendicular to the optical axis of the follower camera

Figure 3. Pixel Count Measurement Geometry using Bird Eye View Image for Perpendicular Case

As shown in Fig. (3), the target plane is assumed perpendicular to the optical axis, and the relationship between distance and the variation of pixel counts at different viewing distances follows from the geometry. The target plane formed by the leader vehicle is represented by points P, O, Q. A simple relationship between the change in leader-follower distance and the difference in pixel counts between the reference points in images taken at different viewing distances is obtained, under the assumption that the distance between the reference points P and Q does not change.
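Before deriving the two-image relations, the single-image proportion behind Fig. 3 can be made concrete: a target of known width W spanning n pixels out of N_max, seen with horizontal view angle θ_H, fixes the range by similar triangles. A minimal Python sketch, with all numeric values purely illustrative:

```python
import math

def range_from_width(W_real, n_pix, N_max, theta_H, h_s=0.0):
    """Solve n/N_max = W_real / (2 (h + h_s) tan(theta_H / 2)) for h."""
    fov_width_per_metre = 2.0 * math.tan(theta_H / 2.0)
    return W_real * N_max / (n_pix * fov_width_per_metre) - h_s

# Illustrative only: a 0.3 m target spanning 120 of 640 pixels, 60 deg HFOV
print(range_from_width(0.3, 120, 640, math.radians(60)))   # about 1.39 m
```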
We have the following relationship between pixel counts and distances:

$$n_1(PQ) = n_{max}\,\frac{d(PQ)}{d(h_1)}, \qquad n_2(PQ) = n_{max}\,\frac{d(PQ)}{d(h_2)} \tag{6}$$

where $d(PQ)$ is the distance between points P and Q on the target plane, $d(h_1)$ and $d(h_2)$ are the horizontal spans covered by the image at the two viewing distances, and $n_1(PQ)$ and $n_2(PQ)$ are the pixel counts between points P and Q on the image plane at distances $h_1^{(0)}$ and $h_2^{(0)}$, respectively. $\Delta h$ is the change in distance between the leader and follower positions, and $h_s$ is the distance between the optical origin (OP) and the front end of the camera on the follower. Dividing the two relations in (6), we have:

$$\frac{n_2(PQ)}{n_1(PQ)} = \frac{d(h_1)}{d(h_2)} \tag{7}$$

Because of the displacement $\Delta h = h_2^{(0)} - h_1^{(0)}$, we have:

$$\frac{n_2(PQ)}{n_1(PQ)} = \frac{d(h_1)}{d(h_2)} = \frac{h_1^{(0)} + h_s}{h_2^{(0)} + h_s} \tag{8}$$

Substituting $h_2^{(0)} = h_1^{(0)} + \Delta h$ into (8), the viewing distances $h_1^{(0)}$ and $h_2^{(0)}$ to the perpendicular plane are obtained as:

$$h_1^{(0)} = \Delta h\,\frac{n_2(PQ)}{n_1(PQ) - n_2(PQ)} - h_s, \qquad h_2^{(0)} = \Delta h\,\frac{n_1(PQ)}{n_1(PQ) - n_2(PQ)} - h_s \tag{9}$$

Also, the distance between two points A and B can be obtained as:

$$H(AB) = 2\,(h_1^{(0)} + h_s)\tan\!\Big(\frac{\theta_H}{2}\Big)\frac{N_{AB}(1)}{N_{H\_max}} = 2\,(h_2^{(0)} + h_s)\tan\!\Big(\frac{\theta_H}{2}\Big)\frac{N_{AB}(2)}{N_{H\_max}} \tag{10}$$

where $\theta_H$ is the maximal horizontal view angle of the camera and $N_{H\_max}$ is the maximal number of pixels in a horizontal scan line of an image frame, which is fixed and known a priori, irrespective of photographing distance.
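The perpendicular-case estimator in (9) and (10) reduces to a few lines of code. The following sketch assumes the pixel counts have already been measured in two bird's-eye-view images a known displacement apart; the names and sample numbers are illustrative:

```python
import math

def viewing_distances(n1, n2, dh, h_s):
    """Eq. (9): ranges to a plane perpendicular to the optical axis,
    from pixel counts n1, n2 of the same segment in two images."""
    h1 = dh * n2 / (n1 - n2) - h_s
    h2 = dh * n1 / (n1 - n2) - h_s
    return h1, h2

def segment_length(h, h_s, n_ab, N_max, theta_H):
    """Eq. (10): metric length of a segment spanning n_ab pixels at range h."""
    return 2.0 * (h + h_s) * math.tan(theta_H / 2.0) * n_ab / N_max

# Illustrative: the segment shrinks from 200 to 160 px after backing up 0.5 m
h1, h2 = viewing_distances(200, 160, 0.5, 0.05)          # 1.95 m, 2.45 m
print(segment_length(h1, 0.05, 200, 640, math.radians(60)))
```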
As a result of this relationship, the distance between an arbitrarily selected point P and the origin O can also be represented as:

$$H(PO) = 2\,(h_1^{(0)} + h_s)\tan\!\Big(\frac{\theta_H}{2}\Big)\frac{n_1(PO)}{N_{H\_max}} = 2\,(h_2^{(0)} + h_s)\tan\!\Big(\frac{\theta_H}{2}\Big)\frac{n_2(PO)}{N_{H\_max}} \tag{11}$$

The above relationship is only valid for leader objects or planes that are perfectly perpendicular to the optical axis.

Case b: Leader oblique to the optical axis of the follower camera

Fig. (4) shows the configuration of the proposed image-based system for measuring the leader position on an oblique surface based on the variation of pixel counts, where an incline angle $\theta_O$ exists between the oblique and perpendicular planes. Virtual planes are introduced in order to measure the incline angle $\theta_O$.

Figure 4. Pixel Count Measurement Geometry using Bird Eye View Image for Oblique Case

Distances between the camera and the virtual planes at the two positions are then obtained as:

$$h_1(P) = \Delta h\,\frac{n_2(P)}{n_1(P) - n_2(P)} - h_s, \qquad h_2(P) = \Delta h\,\frac{n_1(P)}{n_1(P) - n_2(P)} - h_s$$

$$h_1(Q) = \Delta h\,\frac{n_2(Q)}{n_1(Q) - n_2(Q)} - h_s, \qquad h_2(Q) = \Delta h\,\frac{n_1(Q)}{n_1(Q) - n_2(Q)} - h_s \tag{12}$$
where $h_1(P)$ and $h_1(Q)$ are the distances from the camera at position 1 to virtual planes VP_P and VP_Q, respectively; $h_2(P)$ and $h_2(Q)$ are the corresponding distances from the camera at position 2; and $n_i(P)$ is the pixel count in the image plane for point P when the image is taken at position $i$. From (9) through (12), we have:

$$\Delta m = h_1(P) - h_1(Q) = h_2(P) - h_2(Q), \qquad \tan(\theta_O) = \frac{\Delta m}{d_P + d_Q} \tag{13}$$

where $\Delta m$ is the distance between the virtual planes VP_P and VP_Q, and $\theta_O$ is the incline angle. $d_P$ and $d_Q$ denote the distances from point P and point Q to the optical axis, respectively, and can be computed from the pixel counts in the same manner as (11). The distances from the optical origin at positions 1 and 2 to the point where the oblique plane intersects the optical axis can then be expressed as:

$$h_1(O) = h_1(P) - d_P\tan\theta_O = h_1(Q) + d_Q\tan\theta_O$$

$$h_2(O) = h_2(P) - d_P\tan\theta_O = h_2(Q) + d_Q\tan\theta_O \tag{14}$$

and the distance between the two arbitrary points P and Q is derived as:

$$D(PQ) = (d_P + d_Q)\sec\theta_O$$
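A sketch of the oblique case, following (12) through (14): given pixel counts for the two reference points in two images a known displacement apart, plus their lateral offsets d_P and d_Q (obtainable from relations of the form (11)), the incline angle and axial range follow directly. Names and inputs are illustrative assumptions:

```python
import math

def oblique_pose(nP1, nP2, nQ1, nQ2, dh, h_s, dP, dQ):
    """Eqs. (12)-(14): ranges to the virtual planes, incline angle,
    range along the optical axis, and length of segment PQ."""
    hP1 = dh * nP2 / (nP1 - nP2) - h_s        # eq. (12), virtual plane VP_P
    hQ1 = dh * nQ2 / (nQ1 - nQ2) - h_s        # eq. (12), virtual plane VP_Q
    delta_m = hP1 - hQ1                       # eq. (13), plane separation
    theta_O = math.atan(delta_m / (dP + dQ))  # eq. (13), incline angle
    hO1 = hP1 - dP * math.tan(theta_O)        # eq. (14), position 1
    D_PQ = (dP + dQ) / math.cos(theta_O)      # segment length on the oblique plane
    return theta_O, hO1, D_PQ
```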
3.4. Vehicle kinematics and control

The goal of the vehicle controller is to use the leader's range and bearing from the vision system to force the follower vehicle to follow the leader's path. The controller is designed to follow the path of the leader delayed by a fixed following time $\tau$ (typically of the order of 10 seconds). If $(x, y)$ is the global position of the follower and $(x_l, y_l)$ that of the leader, the controller attempts to make $(x(t), y(t))$ track $(x_l(t - \tau), y_l(t - \tau))$. For simplicity, the delayed leader position is denoted $(x_d(t), y_d(t))$. The position of the leader vehicle $(x_l, y_l)$ is determined from the camera vision data relative to the current follower position, as described in Section 3.3.

The controller uses a bicycle model as the kinematic model of the vehicle, which is common in vehicle-following applications. Referring to Fig. (5), $(x, y)$ is the follower position, $v_c$ is the commanded speed, and $\gamma_c$ is the commanded steering. The control laws are given as:

$$v_c = v_0 + k_{p,1}\,e_1 + k_{d,1}\,\dot{e}_1 \tag{15}$$

$$\gamma_c = k_{p,2}\,e_2 + k_{d,2}\,\dot{e}_2 + k_{p,\theta}\,e_\theta + k_{d,\theta}\,\dot{e}_\theta \tag{16}$$

Figure 5. Leader Follower Geometry and Co-ordinate System

with the tracking errors defined in the follower body frame as:

$$e_1 = (x_d - x)\cos\theta + (y_d - y)\sin\theta$$

$$e_2 = -(x_d - x)\sin\theta + (y_d - y)\cos\theta$$

$$e_\theta = \theta_d - \theta \tag{17}$$
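A compact sketch of the control loop in (15) to (17), keeping only the proportional terms; the gains, wheelbase and time step are illustrative assumptions rather than the paper's tuned values:

```python
import math

def control_step(x, y, theta, x_d, y_d, theta_d,
                 v0=0.5, kp1=1.0, kp2=1.0, kpth=0.8):
    """Proportional part of eqs. (15)-(17): body-frame errors mapped to
    commanded speed v_c and steering gamma_c (derivative terms omitted)."""
    e1 = (x_d - x) * math.cos(theta) + (y_d - y) * math.sin(theta)   # along-track
    e2 = -(x_d - x) * math.sin(theta) + (y_d - y) * math.cos(theta)  # cross-track
    eth = theta_d - theta                                            # heading error
    v_c = v0 + kp1 * e1                 # eq. (15), P term only
    gamma_c = kp2 * e2 + kpth * eth     # eq. (16), P terms only
    return v_c, gamma_c

def bicycle_update(x, y, theta, v, gamma, L=0.3, dt=0.05):
    """Kinematic bicycle model; wheelbase L and step dt are assumed."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += v / L * math.tan(gamma) * dt
    return x, y, theta
```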
4. EXPERIMENTAL SET UP AND IMPLEMENTATION

To show the effectiveness of the algorithms, experiments were performed on a setup consisting of two ground vehicles in a convoy formation, with autonomous control for the leader and vision control for the follower. The robot vehicle frame and chassis is a model manufactured by Robokit, INDIA. It has a differential four-wheeled robot base, with two wheels at the front and two at the back. The electronics were developed in-house. The vehicle uses four geared DC motors to drive the four wheels independently; the motors are controlled by an ATmega microcontroller board. The software is written in the C language using the OpenCV library. The follower tracks the remotely controllable leader moving in an arbitrary trajectory, while keeping a predefined inter-distance. The control system has two parts: platform inter-distance control, and control of the follower orientation with respect to the leader. The orientation control provides the actions needed for the follower to face the leader, which is accomplished when the leader is located in the center of the image captured by the follower. The distance between the leader and the follower is obtained from the leader's size in the current frame: an increase in the platform inter-distance causes a decrease in the platform size in the image, and vice versa. This dependency was obtained experimentally.

The difference between the current and desired platform inter-distance drives the follower's forward/backward motion, while the leader's displacement from the image center drives the follower's rotation until the leader appears in the image center. Three different postures (angles of view) of the platform at three separation distances were examined. For each posture and distance (9 cases), 90 frames were processed (15 fps). The position and orientation of the leader are determined by estimating its distance and relative orientation with regard to the follower using the procedure outlined in Section 3. Only the area directly visible in front of the host vehicle is analyzed; cropping is performed on the road-segmented image to obtain the region of interest, which provides the distance and orientation angle of the leader in terms of pixels. For the front (straight) view shown in Fig. 6(b), and for the different distances considered, the results show very good localization and following capabilities. For the left and right views shown in Figs. 6(a) and 6(c), the results are also in good agreement, indicating the validity of the design and implementation. Fig. (7) shows typical snapshots capturing the faithful change in straight-line movement and orientation of the follower vehicle as demanded, confirming the validity of the approach and method used in the paper.

Figure 6. Distance and orientation of the leader vehicle: (a) left orientation 35°, (b) straight movement 0°, (c) right orientation 50°, (d) cropped version used for pixel count

Figure 7. Leader Follower experimental geometry, angle measurement and Bird Eye View images showing pixel count at two points for straight and angle maneuvering
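Tying the stages together, one iteration of the follower's vision-control loop might look like the sketch below. Here detect_leader_mask is a hypothetical segmentation helper in the spirit of the Section 3.1 detector, range_from_width is the earlier sketch, and all thresholds and gains are assumptions:

```python
import math
import numpy as np

def follower_step(frame, W_leader=0.25, N_max=640, theta_H=math.radians(60)):
    """One iteration of the follower loop (illustrative glue code):
    segment the leader, read off its pixel width and image offset,
    convert to range and bearing, and derive motion commands."""
    mask = detect_leader_mask(frame)           # hypothetical segmentation step
    xs = np.where(mask.any(axis=0))[0]         # columns containing the leader
    if xs.size == 0:
        return 0.0, 0.0                        # leader lost: stop
    n_pix = xs[-1] - xs[0]                     # leader's width in pixels
    h = range_from_width(W_leader, n_pix, N_max, theta_H)  # Sec. 3.3 sketch
    center = 0.5 * (xs[0] + xs[-1])
    bearing = (center / N_max - 0.5) * theta_H # offset from image center
    v_c = 0.8 * (h - 1.0)                      # hold ~1 m inter-distance (assumed)
    gamma_c = -0.8 * bearing                   # rotate leader back to image center
    return v_c, gamma_c
```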
5. CONCLUSIONS

The algorithms used for vision detection and localization provide a simple but effective real-time processing approach for the leader-follower formation control considered in this paper. The solution is based on the pixel variation of the images, referenced to two arbitrarily designated positions in the image frames. The tracking enables a follower to determine its relative position and orientation with respect to the leader in its field of view. The performance of the proposed method has been demonstrated on a laboratory setup composed of two mobile robot platforms. Experimental results show that the overall recognition accuracy is very good (in some cases even 100%), providing accurate calculation of the distance between the leader and the follower. Future work will address the robustness of the proposed method with respect to varying environmental conditions (light intensity, shadows, etc.), as well as validation with multiple vehicles.

REFERENCES

[1] C. Harris and M. Stephens, (1988) "A combined corner and edge detector", In Alvey Vision Conference, Vol. 15, p. 50, Manchester, UK.
[2] J. Shi and C. Tomasi, (1994) "Good features to track", In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 593–600.
[3] D. G. Lowe, (1999) "Object recognition from local scale invariant features", In International Conference on Computer Vision, Vol. 2, pp. 1150–1157, Greece.
[4] Malamas, E. N.; Petrakis, E. G. M.; Zervakis, M.; Petit, L.; Legat, J.-D. (2003) "A survey on industrial vision systems, applications and tools", Image and Vision Computing, 21, 2, 171–188.
[5] Mohamed Aly, (2008) "Real time detection of lane markers in urban streets", IEEE Intelligent Vehicles Symposium, Eindhoven University of Technology, Eindhoven, The Netherlands, pp. 7–12.
[6] Hsu, C.C.; Lu, M.C.; Wang, W.Y.; Lu, Y.Y. (2009) "Three-dimensional measurement of distant objects based on laser-projected CCD images", IET Sci. Meas. Technol., 3, 197–207.
[7] Lu, M.C.; Wang, W.Y.; Chu, C.Y. (2006) "Image-based distance and area measuring systems", IEEE Sens. J., 6, 495–503.
[8] Wang, W.Y.; Lu, M.C.; Kao, H.L.; Chu, C.Y. (2007) "Nighttime vehicle distance measuring systems", IEEE Trans. Circ. Syst. II, 54, 81–85.
[9] Hsu, C.C.; Lu, M.C.; Wang, W.Y.; Lu, Y.Y. (2009) "Distance measurement based on pixel variation of CCD images", ISA Trans., 48, 389–395.