TELKOMNIKA Telecommunication, Computing, Electronics and Control
Vol. 18, No. 3, June 2020, pp. 1633~1642
ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018
DOI: 10.12928/TELKOMNIKA.v18i3.14797
Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA
The flow of baseline estimation using a single
omnidirectional camera
Sukma Meganova Effendi1, Dadet Pramadihanto2, Riyanto Sigit3
1Department of Mechatronics, Politeknik Mekatronika Sanata Dharma, Indonesia
2,3Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Indonesia
Article Info ABSTRACT
Article history:
Received Aug 17, 2019
Revised Jan 30, 2020
Accepted Feb 26, 2020
Baseline is the distance between two cameras, but this information cannot
be obtained from a single camera. Baseline is one of the important parameters
for finding the depth of objects in stereo image triangulation. The flow
of baseline is produced by moving the camera along the horizontal axis from
its original location. Using baseline estimation, we can determine the depth
of an object with only an omnidirectional camera. This research focuses on
determining the flow of baseline before calculating the disparity map.
To estimate the flow and track the object, we use three and four points
on the surface of an object from two different data sets (panoramic images)
that were already chosen. By moving the camera horizontally, we obtain
the tracks of these points. The obtained tracks are visually similar, and
each track represents the coordinates of one tracking point. Two of the four
tracks have a graphical representation similar to a second-order polynomial.
Keywords:
Baseline
Flow
Omnidirectional camera
This is an open access article under the CC BY-SA license.
Corresponding Author:
Sukma Meganova Effendi,
Department of Mechatronics,
Politeknik Mekatronika Sanata Dharma,
Paingan, Maguwoharjo, Depok, Sleman 55282, Yogyakarta, Indonesia.
Email: sukma.meganova@gmail.com
1. INTRODUCTION
Depth of objects can be estimated from a stereo image. In previous research, stereo images were
obtained by using two pocket cameras (with improper alignment, i.e., vertical error) [1] or two omnidirectional
sensors [2]. Other research uses two omnidirectional cameras (both built vertically) [3],
a combination of a convex mirror and a concave lens [4], two cameras with two convex mirrors [5, 6],
a double-lobed mirror [7], and a hyperbolic double-lobed mirror [8]. In this research, we only employ a single
omnidirectional camera with a hyperbolic catadioptric.
A catadioptric is one of the most important components of an omnidirectional sensor for producing
an omnidirectional image. By using the catadioptric, all of the objects around the omnidirectional sensor can
be seen in one omnidirectional image without any effort to rotate the sensor. This means the sensor has
a wider field of view: 360°. There are many types of catadioptric, and each type gives a different
omnidirectional image. According to [9-15], the best type of catadioptric is one with
a hyperbolic shape.
There are many advantages of using an omnidirectional sensor in computer vision. Some applications
are face detection [16], soccer robots (based on obstacle detection) [17], and matching and localization [18].
In conventional stereo image processing, baseline is one of the parameters used to find the depth of objects.
By using a pair of cameras/omnidirectional cameras/catadioptrics, the baseline can be determined from
the distance between the cameras. Lukierski, Leutenegger, and Davison [19] use an omnidirectional camera
on a mobile robot. Its baseline is determined by the length of the mobile robot’s track, and the length of that track
affects the depth estimation. In this research, our baseline estimation is not affected by the length
of the track. When we try to obtain the depth and the map, we should know the baseline. Using an
omnidirectional camera with a hyperbolic catadioptric, there is no information about the baseline. A depth map can
be developed using stereo triangulation. Our proposed method only focuses on determining
the flow of baseline.
The flow of baseline in our method is estimated using the panoramic image because the next process
is the stereo triangulation calculation. The panoramic image is obtained by transforming the omnidirectional
image into panoramic form. Using stereo triangulation on a panoramic image, the variable (x) should have
a constant value in one axis [20] so that the depth can be determined. This research produces images with
different values in two axes (x, y). It shows that in the panoramic image, each pixel (of the objects) has
a unique flow that represents a certain pattern.
From the unique flow of each pixel in the panoramic image, we can track each pixel using
the Lucas-Kanade optical flow algorithm [21-23]. To do so, we move the camera continuously
along the horizontal axis from its original location and use an object for detection to produce the desired tracks.
As the result, we obtain the coordinate shifts of the object, save them into a file, and transform them into graphs
to get the baseline estimation.
2. RESEARCH METHOD
An image captured using an omnidirectional sensor is called an omnidirectional image. It has
a black circle in the center of the image. Objects around the sensor are reflected by the catadioptric mirror,
and the CCD sensor of the camera captures them. The omnidirectional image is transformed into
a panoramic image before the calibration process. Afterward, we can use the panoramic images to obtain the flow
of baseline estimation.
The structure of the omnidirectional camera is shown in Figure 1. The mirror or catadioptric should be placed
and centered above the camera. An uncentered mirror will cause distortion in the transformation of
the omnidirectional image to the panoramic image. To center the mirror on the camera, we use nuts, bolts, and springs
placed at each corner of the mirror structure. In this research, we propose a method to estimate the flow of baseline
from omnidirectional images.
Figure 1. Structure of our webcam-based omnidirectional camera system
2.1. The algorithm of estimation process
Figure 2 illustrates all the methods used in the flow of baseline estimation process. There are
three processes. The first process is loading the live image, transforming the omnidirectional image into a panoramic
image, and calibrating. The second process is object detection and tracking of the object’s points. The third process
is plotting the points of the tracks into graphs, finding the equation of the baseline estimation’s flow, and determining
the coefficients of the equation.
2.2. Modelling of hyperboloid mirror
The projection of the hyperbolic catadioptric (hyperboloid mirror) that forms the omnidirectional
camera is shown in Figure 3. From Figures 3 and 4, the hyperboloid mirror model is given below.
(Z - h)^2 / a^2 - R^2 / b^2 = -1                (1)
where a = sqrt(c^2 - b^2). The map for the hyperboloid mirror can then be calculated as follows:
Z = R tan φ + c + h                (2)
Because all of the objects touch the field (Z = 0), thus:
tan φ = -(c + h) / R                (3)
From Figure 3, the projection equation is given as follows:
tan φ = ((b^2 + c^2) / (b^2 - c^2)) tan α + (2bc / (c^2 - b^2)) (1 / cos α)                (4)
tan α = f / r                (5)
To simplify, let:
a1 = (b^2 + c^2) / (b^2 - c^2);   a2 = 2bc / (c^2 - b^2);   a3 = c + h                (6)
The focal point of the hyperboloid mirror is assumed to lie on the principal point of the camera. So, (4) to (6) are
simplified as follows:
tan φ = a1 (f / r) + a2 (1 / cos α)                (7)
tan^2 φ - (2 a1 f / r) tan φ + a1^2 f^2 / r^2 = a2^2 (1 + f^2 / r^2)                (8)
By (3) and (8), the equation is given as follows:
a3^2 / R^2 + 2 a1 a3 f / (r R) + a1^2 f^2 / r^2 = a2^2 (1 + f^2 / r^2)                (9)
(f^2 (a1^2 - a2^2) - r^2 a2^2) R^2 + 2 a1 a3 f r R + r^2 a3^2 = 0                (10)
where:
f = focal length of the camera (pixels)
r = radius in the image (pixels)
R = radius in world coordinates
a1, a2, a3 = parameters of the hyperboloid mirror
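For a given pixel radius r, (10) is a quadratic in R, so the world radius can be recovered with the quadratic formula. A minimal Python sketch of this step (the mirror parameters b, c, h and the focal length f used below are illustrative values, not those of the camera in this paper):

```python
import math

def mirror_params(b, c, h):
    """Coefficients a1, a2, a3 from (6) for a hyperboloid mirror."""
    a1 = (b**2 + c**2) / (b**2 - c**2)
    a2 = 2 * b * c / (c**2 - b**2)
    a3 = c + h
    return a1, a2, a3

def world_radius(r, f, a1, a2, a3):
    """Solve the quadratic (10) for the world-coordinate radius R
    given an image radius r (pixels) and focal length f (pixels)."""
    A = f**2 * (a1**2 - a2**2) - r**2 * a2**2
    B = 2 * a1 * a3 * f * r
    C = r**2 * a3**2
    disc = B**2 - 4 * A * C
    if disc < 0:
        raise ValueError("no real solution for this pixel radius")
    roots = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1, -1)]
    positive = [R for R in roots if R > 0]
    if not positive:
        raise ValueError("no positive root")
    # return the smaller positive root (R is a physical distance)
    return min(positive)
```

Which of the two roots is the physical one depends on the mirror geometry; the sketch simply keeps the smaller positive root.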
[Flowchart: start → load the camera → (camera open?) → transform the omnidirectional image into a panoramic image and calibrate → detect object → (objects detected?) → track the object through camera movements → save each tracking point of the object in the panoramic image → represent each tracking point as a graph → find the estimation (equation) of each tracking point → determine the coefficients from the estimation (equation) → the flow of baseline estimation → end]
Figure 2. The flow of baseline estimation process
Figure 3. Geometry of the hyperboloid catadioptric/mirror and projection of a point from world coordinates onto the image plane [24]
Figure 4. Installation specification of the hyperboloid catadioptric/mirror with its axis laid on the field [24, 25]
2.3. Calibration method
When the omnidirectional sensor is employed, some distortion appears due to its lens
(improper position). The omnidirectional sensor parameters should be adjusted properly to minimize
the distortion. The adjustment of all these parameters is called calibration. The calibration process is one
of the important steps because the omnidirectional sensor is used to map objects inside a room. There are two
sets of parameters in a camera, extrinsic and intrinsic. Both describe the mathematical relationship between
the 3D coordinates of the real-world scene and the 2D coordinates of its projection onto the image plane. Specifically,
the intrinsic or internal parameters relate the pixel coordinates of an image point to
the corresponding coordinates in the camera reference frame. The extrinsic or external parameters
of the camera define the location and orientation of the real-world coordinate frame relative to the camera
coordinate frame. The focal length in pixels (fx, fy), principal point (cx, cy), pixel size, radial distortion (k1, k2, k3),
and tangential distortion (p1, p2) represent the intrinsic parameters. The camera matrix is represented
as a 3×3 matrix.
                |fx   0   cx|
camera matrix = | 0  fy   cy|                (11)
                | 0   0    1|
The radial distortion model is:
x_distorted = x (1 + k1 r^2 + k2 r^4 + k3 r^6)
y_distorted = y (1 + k1 r^2 + k2 r^4 + k3 r^6)                (12)
The tangential distortion model is:
x_distorted = x + [2 p1 x y + p2 (r^2 + 2 x^2)]
y_distorted = y + [p1 (r^2 + 2 y^2) + 2 p2 x y]                (13)
The coefficients that must be known are then:
distortion coefficients = (k1  k2  p1  p2  k3)                (14)
The rotation matrix (R) and translation vector (T) are represented below.

    |r11  r12  r13|
R = |r21  r22  r23|                (15)
    |r31  r32  r33|

    |t1|
T = |t2|                (16)
    |t3|
2.4. Panoramic transformation
The omnidirectional image is transformed into a panoramic image using polar mapping. The polar
coordinate system is mapped into the Cartesian coordinate system. The polar coordinate system is denoted by (r, θ)
and the Cartesian coordinate system by (x, y). The coordinate center of the polar sampling is calculated
using (17) and (18).
 −  
=  
 
1
tan
y
x
(17)
= +2 2
r x y (18)
The value of the Cartesian coordinates (x, y) can be calculated from the polar coordinates (r, θ) using trigonometry
as follows.
x^2 + y^2 = r^2                (19)
x = r cos θ
y = r sin θ                (20)
So, the correlation of the Cartesian and polar coordinate systems, with (xc, yc)
the center point of the omnidirectional image (polar coordinate system), is given
as follows.
x(r, θ) = r cos θ + xc
y(r, θ) = r sin θ + yc                (21)
2.5. Optical flow
Intensity is one of the parameters in optical flow used to represent the flow of every pixel
in the image. Based on its characteristics, the track of each pixel can be estimated. We employ
the Lucas-Kanade optical flow algorithm. This algorithm combines information from several nearby
pixels to resolve the ambiguity [21, 22].
The Lucas-Kanade optical flow algorithm works on a pyramidal structure, processing from
the highest layer down to the lowest layer [21, 23]. The algorithm assumes that the displacement of image content
between two images (frames) is small and approximately constant within a neighborhood
of the coordinate p. Accordingly, Lucas-Kanade considers all pixels within a local window
centered on p. The equations for its velocity are given as follows.
Ix(q1) Vx + Iy(q1) Vy = -It(q1)
Ix(q2) Vx + Iy(q2) Vy = -It(q2)
...
Ix(qn) Vx + Iy(qn) Vy = -It(qn)                (22)
In matrix form A v = b, (22) is represented in (23).

    |Ix(q1)  Iy(q1)|         |Vx|        |-It(q1)|
A = |Ix(q2)  Iy(q2)| ,   v = |Vy| ,  b = |-It(q2)|                (23)
    |  ...     ... |                     |  ...  |
    |Ix(qn)  Iy(qn)|                     |-It(qn)|
where:
q1, q2, ..., qn = pixels in the (local) window
Ix(qi), Iy(qi), It(qi) = partial derivatives of image I with respect to position (x, y) and time t,
evaluated at pixel qi.
The Lucas-Kanade solution can be obtained using the least-squares method, which gives the 2×2 system
A^T A v = A^T b   or   v = (A^T A)^-1 A^T b                (24)
where A^T is the transpose of matrix A, as represented in (25).

|Vx|   |Σ Ix(qi)^2      Σ Ix(qi) Iy(qi)|^-1  |-Σ Ix(qi) It(qi)|
|Vy| = |Σ Ix(qi) Iy(qi)  Σ Iy(qi)^2    |     |-Σ Iy(qi) It(qi)|                (25)

with all sums taken from i = 1 to n.
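For a single window, (25) is a two-by-two linear solve. A small numpy sketch of that step (the central-difference gradients and frame-difference approximation for Ix, Iy, It are our own simple choices; they are not the pyramidal implementation used in the paper):

```python
import numpy as np

def lk_flow(I0, I1):
    """Single-window Lucas-Kanade step: solve (25) for (Vx, Vy)
    from two same-size grayscale patches I0 (frame t) and I1 (frame t+1)."""
    # spatial gradients of the first frame and the temporal derivative
    Ix = np.gradient(I0.astype(float), axis=1)
    Iy = np.gradient(I0.astype(float), axis=0)
    It = I1.astype(float) - I0.astype(float)
    # build A^T A and A^T b from the sums in (25)
    ATA = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                    [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    ATb = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(ATA, ATb)
```

In practice OpenCV's cv2.calcOpticalFlowPyrLK performs this solve per tracking point inside a coarse-to-fine pyramid, which is the form the paper relies on.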
3. RESULTS AND ANALYSIS
A sample image captured using our omnidirectional camera is shown in Figure 5.
The omnidirectional image must be transformed into a panoramic image in order to be processed using stereo
triangulation. Figure 6 shows the panoramic image produced using the polar mapping
algorithm. The baseline estimation was calculated using the coordinate shifts of an object in the panoramic
image. The coordinate shifts were obtained by moving the camera from one location to another.
Each coordinate is an image pixel represented by a point (green circle) in the panoramic image.
Each point was assigned on the surface of the object in the panoramic image. We assign three and four points
on the detected object for different track lengths (94 cm and 129 cm), both shown
in Figures 6 (a) and (b). The points were used as tracking points in the Lucas-Kanade optical flow algorithm.
Figure 7 shows that the detection points on the board were successfully detected in three different images.
Those images were captured at different timestamps.
Figure 5. Omnidirectional image
Figure 6. Panoramic images captured in a room with track lengths of (a) 94 cm and (b) 129 cm;
(c) panoramic image captured in the laboratory
Figure 7. (a) Four detected points on the 94 cm track and (b) three detected points on the 129 cm track
on the object, and their flow from the current frame to the next frame
The coordinates of the tracking points were saved into a file in *.txt format. The saved coordinates
were transformed into a graph for each point. After the graphs were obtained for each point,
the flow of baseline estimation was calculated. The data were tested in various conditions based on the track
length and the horizontal distance of the camera movement. Several data from the tracking points are shown
in Figures 8 and 9. Two of the tracks in both graphs obtain a similar flow, and the other tracks show an
improper flow. The improper flow of the tracks occurs because of differences in light intensity in
the panoramic image. This phenomenon occurs because the point detection process uses the Lucas-Kanade
optical flow algorithm.
Figure 8. The tracking flow of the detected points for the 94 cm camera movement
Figure 9. The tracking flow of the detected points for the 129 cm camera movement
After we compute the estimation graphs, the coefficients are given as follows:
a = 0.001
b = 0.349
The flow of baseline estimation is given as follows:
y = -0.001 x^2 + 0.349 x + c                (26)
After obtaining the baseline estimation, the equation was tested by tracking an object over a different track length
and in a different area, Figure 6 (c). The result shows that the produced track has a similar flow. The value of c
also differs because it depends on the initial positions of the tracking points, which are not always the same.
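The per-point fitting step can be reproduced with an ordinary least-squares polynomial fit. A numpy sketch, using a synthetic track generated from the reported flow y = -0.001x^2 + 0.349x + c (the sample spacing and the value of c below are illustrative, not measured data):

```python
import numpy as np

def fit_track(xs, ys):
    """Least-squares second-order fit y = a*x^2 + b*x + c
    to one tracking point's trajectory."""
    a, b, c = np.polyfit(xs, ys, deg=2)
    return a, b, c

# synthetic track following the reported flow, with an arbitrary offset c = 12
xs = np.arange(0, 130, 5, dtype=float)
ys = -0.001 * xs**2 + 0.349 * xs + 12.0
a, b, c = fit_track(xs, ys)
```

On noiseless data the fit recovers the generating coefficients exactly (up to floating-point precision); on real tracks the residual would reflect the tracking noise discussed above.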
4. CONCLUSION
The proposed method to estimate the flow of baseline from a single omnidirectional camera
has been achieved. From the detected object, we choose three and four points to track, and two of the tracking points
have a similar flow. The flow of baseline estimation obtained in this research follows a second-order
polynomial equation with a = 0.001 and b = 0.349. The constant value of c depends
on the initial positions of the tracking points. From the several experiments that were conducted, we conclude that
the equation of baseline estimation is not affected by the length of the track. This is indicated by the same
coefficient values of a and b. The obtained equation of baseline estimation will be used for
the matching process in the panoramic image.
REFERENCES
[1] J. Mrovlje and V. Damir, “Distance Measuring based on Stereoscopic Pictures,” Proceedings of the 9th International
PhD Workshop on Systems and Control: Young Generation Viewpoint, Izola, Slovenia, Ljubljana: Institut Jožef
Stefan, 2008.
[2] O. M. M. K. Fouad, “Position Estimation of Mobile Robots Using Omni-Directional Cameras,” M. S. thesis,
Mechanical Engineering, Carleton University, Ottawa, Ontario, Canada, 2014.
[3] Z. Zhu, “Omnidirectional Stereo Vision,” Proceeding of the Workshop on Omnidirectional Vision, the 10th IEEE
ICAR, Budapest, Hungary, 2001.
[4] S. Yi and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” in Proceeding
of International Conference on Pattern Recognition, pp. 861-865, Hong Kong, August 2006.
[5] J. Gluckman, S. K. Nayar, and K. J. Thoresz, “Real-Time Omnidirectional and Panoramic Stereo,” in Proceedings
of the 1998 DARPA Image Understanding Workshop, IUW’98, Monterey, California, 1998.
[6] T. Svoboda and T. Pajdla, “Epipolar Geometry of Central Catadioptric Camera,” International Journal
of Computer Vision, vol. 49, no. 1, pp. 23-37, August 2002.
[7] D. Southwell, et al., “Panoramic Stereo,” in Proceedings of 13th International Conference on Pattern Recognition,
Vienna, Austria, pp. 378-382, 1996.
[8] E. L. L. Cabral, J. C. de Souza Junior, and M. C. Hunold, “Omnidirectional Stereo Vision with a Hyperbolic
Double Lobed Mirror,” in Proceedings of the 17th International Conference on Pattern Recognition, pp. 1-9,
Cambridge, UK, August 26, 2004.
[9] T. Sato and N. Yokoya, “Efficient Hundreds-baseline Stereo by Counting Interest Points for Moving
Omni-directional Multi-camera System,” Journal of Visual Communication and Image Representation, vol. 21,
no. 5-6, pp. 416-426, July-August 2010.
[10] K. Yang Tu, “Analysis and Comparison of Various Shape Mirrors Used in Omnidirectional Cameras with
Parameter Variation,“ Journal of Information Science and Engineering, vol. 25, no. 3, pp 945-959, May 2009.
[11] M. Yachida, “Omnidirectional Sensing and Combined Multiple Sensing,” in Proceedings of IEEE and ATR
Workshop on Computer Vision for Virtual Reality Based Human Communications, pp. 20-27, Bombay, India,
January 1998.
[12] K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional Imaging with Hyperboloidal Projection,” in
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’93), pp. 1029-1034,
Yokohama, Japan, July 1993.
[13] J. Gaspar, et al., “Constant Resolution Omnidirectional Cameras,” in Proceeding of OMNIVIS'02 Workshop on
Omni-directional Vision, pp. 27-34, Copenhagen, Denmark, June, 2002.
[14] T. L. Conroy and J. B. Moore, “Resolution Invariant Surfaces for Panoramic Vision Systems,” in Proceedings of
the Seventh IEEE International Conference on Computer Vision, pp. 392-397, Kerkyra, Greece, September 20-27, 1999.
[15] S. K. Nayar, “Catadioptric Omnidirectional Camera,” Proceeding of IEEE Computer Society Conference on
Computer Vision and Pattern Recognition, pp. 482-488, San Juan, USA, June 1997.
[16] A. L. C. Barczak, J. Okamoto Jr., and V. Grassi Jr., “Face Tracking Using a Hyperbolic Catadioptric
Omnidirectional System”, Research Letters in the Information and Mathematical Sciences, vol. 13, pp. 55-67,
January 2009.
[17] J. Silva, et al., “Obstacle Detection, Identification and Sharing on a Robotic Soccer Team,” Proceeding
of Portuguese Conference on Artificial Intelligence (EPIA 2009), pp. 350-360, Aveiro, Portugal, 2009.
[18] D. Michel, A. A. Argyros, and M. Lourakis, “Horizon Matching for Localizing Unordered Panoramic Images”,
International Journal of Computer Vision and Image Understanding, vol. 114, no. 2, pp. 274-285, January 2010.
[19] R. Lukierski, S. Leutenegger, and A. J. Davison, “Rapid Free-Space Mapping from A Single Omnidirectional
Camera,” in Proceeding of the European Conference on Mobile Robots (ECMR), pp. 1-8, Lincoln, UK,
September 2015.
[20] OpenCV Dev Team, “Depth Map from Stereo Images,” OpenCV 3.0.0-dev documentation: Camera Calibration and
3D Reconstruction, Nov 10, 2014. [Online]. Available from: https://docs.opencv.org/3.0-beta/doc/py_tutorials/
py_calib3d/py_depthmap/py_depthmap.html. [Accessed November 9, 2015].
[21] R. Sigit, et al., “Improved Echocardiography Segmentation Using Active Shape Model and Optical Flow,”
TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 2,
pp. 809-818, April 2019.
[22] A. A. Pratiwi, et al., “Improved Ejection Fraction Measurement on Cardiac Image Using Optical Flow,” in
Proceeding of 2017 International Electronics Symposium on Knowledge Creation and Intelligent Computing
(IES-KCIC) IEEE, pp. 295-300, September, 2017.
[23] U. Umar, R. Soelistijorini, and H. A. Darwito, “Tracking Arah Gerakan Telunjuk Jari Berbasis Webcam
Menggunakan Metode Optical Flow,” in Proceeding of the 13th Industrial Electronics Seminar 2011 (IES 2011),
pp. 249-254, October, 2011.
[24] M. Jamzad, A. R. Hadjkhodabakhshi, V. S. Mirrokni, “Object Detection and Localization Using Omnidirectional
Vision in the RoboCup Environment,” International Journal of Scientia Iranica, vol. 14, no. 6, pp. 599-611,
November 2007.
[25] A. K. Mulya, F. Ardilla, and D. Pramadihanto, “Ball Tracking and Goal Detection for Middle Size Soccer Robot
Using Omnidirectional Camera,” Proceeding of International Electronics Symposium (IES), pp. 432-437,
September 2016.
BIOGRAPHIES OF AUTHORS
Sukma Meganova Effendi is currently a master’s student in the Electronics Engineering
Department, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya,
Indonesia. She also works as a lecturer in the Mechatronics Department, Politeknik Mekatronika
Sanata Dharma, Yogyakarta, Indonesia.
Dadet Pramadihanto received his Master degree in 1997 and Ph.D degree in 2003 from
Department of Systems and Human Sciences, Graduate School of Engineering Science,
Osaka University, Japan. His research interests include computer vision, robotics, real-time
systems, environmental monitoring, and cyber-physical-systems. He is currently working
as a lecturer at Computer Engineering, Department of Informatics and Computer Engineering,
and Graduate School of Informatics and Computer Engineering, Electronics Engineering
Polytechnic Institute of Surabaya (EEPIS), Surabaya, Indonesia. He serves as Head of the EEPIS
Robotics Centre (ER2C), a research centre that focuses on the development
of humanoid/human-like robots, real-time robotic operating systems, computer vision
in robotics, and engineering education based on robotics technology.
Riyanto Sigit received his Master degree from Department of Informatics, Institute
of Technology 10 November Surabaya (ITS), Surabaya, Indonesia in 1995 and Ph.D degree
from Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia in 2014. He is currently
working as a lecturer at Computer Engineering, Department of Informatics and Computer
Engineering, and Graduate School of Informatics and Computer Engineering, Electronics
Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, Indonesia. His research
interests include medical imaging, artificial intelligence, computer vision, and sensors.
 
Review_of_phase_measuring_deflectometry.pdf
Review_of_phase_measuring_deflectometry.pdfReview_of_phase_measuring_deflectometry.pdf
Review_of_phase_measuring_deflectometry.pdfSevgulDemir
 
Report bep thomas_blanken
Report bep thomas_blankenReport bep thomas_blanken
Report bep thomas_blankenxepost
 
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...csandit
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camerajournalBEEI
 
Intelligent indoor mobile robot navigation using stereo vision
Intelligent indoor mobile robot navigation using stereo visionIntelligent indoor mobile robot navigation using stereo vision
Intelligent indoor mobile robot navigation using stereo visionsipij
 
A Study on the Development of High Accuracy Solar Tracking Systems
A Study on the Development of High Accuracy Solar Tracking SystemsA Study on the Development of High Accuracy Solar Tracking Systems
A Study on the Development of High Accuracy Solar Tracking Systemsiskandaruz
 
dissertation master degree
dissertation master degreedissertation master degree
dissertation master degreeKubica Marek
 
Three-dimensional structure from motion recovery of a moving object with nois...
Three-dimensional structure from motion recovery of a moving object with nois...Three-dimensional structure from motion recovery of a moving object with nois...
Three-dimensional structure from motion recovery of a moving object with nois...IJECEIAES
 
10.1109@ecs.2015.7124874
10.1109@ecs.2015.712487410.1109@ecs.2015.7124874
10.1109@ecs.2015.7124874Ganesh Raja
 

Similaire à The flow of baseline estimation using a single omnidirectional camera (20)

Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...Fuzzy-proportional-integral-derivative-based controller for object tracking i...
Fuzzy-proportional-integral-derivative-based controller for object tracking i...
 
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
 
Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro...
Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro...Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro...
Visual Mapping and Collision Avoidance Dynamic Environments in Dynamic Enviro...
 
998-isvc16
998-isvc16998-isvc16
998-isvc16
 
IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...
IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...
IRJET- Study on the Feature of Cavitation Bubbles in Hydraulic Valve by using...
 
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptxImage Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
Image Processing Algorithms For Deep-Space Autonomous Optical Navigation 2.pptx
 
Satellite Imaging System
Satellite Imaging SystemSatellite Imaging System
Satellite Imaging System
 
D04432528
D04432528D04432528
D04432528
 
Object tracking using motion flow projection for pan-tilt configuration
Object tracking using motion flow projection for pan-tilt configurationObject tracking using motion flow projection for pan-tilt configuration
Object tracking using motion flow projection for pan-tilt configuration
 
Review_of_phase_measuring_deflectometry.pdf
Review_of_phase_measuring_deflectometry.pdfReview_of_phase_measuring_deflectometry.pdf
Review_of_phase_measuring_deflectometry.pdf
 
Report bep thomas_blanken
Report bep thomas_blankenReport bep thomas_blanken
Report bep thomas_blanken
 
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
AUTO LANDING PROCESS FOR AUTONOMOUS FLYING ROBOT BY USING IMAGE PROCESSING BA...
 
Goal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D cameraGoal location prediction based on deep learning using RGB-D camera
Goal location prediction based on deep learning using RGB-D camera
 
Intelligent indoor mobile robot navigation using stereo vision
Intelligent indoor mobile robot navigation using stereo visionIntelligent indoor mobile robot navigation using stereo vision
Intelligent indoor mobile robot navigation using stereo vision
 
A Study on the Development of High Accuracy Solar Tracking Systems
A Study on the Development of High Accuracy Solar Tracking SystemsA Study on the Development of High Accuracy Solar Tracking Systems
A Study on the Development of High Accuracy Solar Tracking Systems
 
dissertation master degree
dissertation master degreedissertation master degree
dissertation master degree
 
Three-dimensional structure from motion recovery of a moving object with nois...
Three-dimensional structure from motion recovery of a moving object with nois...Three-dimensional structure from motion recovery of a moving object with nois...
Three-dimensional structure from motion recovery of a moving object with nois...
 
10.1109@ecs.2015.7124874
10.1109@ecs.2015.712487410.1109@ecs.2015.7124874
10.1109@ecs.2015.7124874
 
Photogrammetry1
Photogrammetry1Photogrammetry1
Photogrammetry1
 
Motion Human Detection & Tracking Based On Background Subtraction
Motion Human Detection & Tracking Based On Background SubtractionMotion Human Detection & Tracking Based On Background Subtraction
Motion Human Detection & Tracking Based On Background Subtraction
 

Plus de TELKOMNIKA JOURNAL

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...TELKOMNIKA JOURNAL
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...TELKOMNIKA JOURNAL
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...TELKOMNIKA JOURNAL
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...TELKOMNIKA JOURNAL
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...TELKOMNIKA JOURNAL
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaTELKOMNIKA JOURNAL
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireTELKOMNIKA JOURNAL
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkTELKOMNIKA JOURNAL
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsTELKOMNIKA JOURNAL
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...TELKOMNIKA JOURNAL
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesTELKOMNIKA JOURNAL
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...TELKOMNIKA JOURNAL
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemTELKOMNIKA JOURNAL
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...TELKOMNIKA JOURNAL
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorTELKOMNIKA JOURNAL
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
 

Plus de TELKOMNIKA JOURNAL (20)

Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...Amazon products reviews classification based on machine learning, deep learni...
Amazon products reviews classification based on machine learning, deep learni...
 
Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...Design, simulation, and analysis of microstrip patch antenna for wireless app...
Design, simulation, and analysis of microstrip patch antenna for wireless app...
 
Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...Design and simulation an optimal enhanced PI controller for congestion avoida...
Design and simulation an optimal enhanced PI controller for congestion avoida...
 
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
Improving the detection of intrusion in vehicular ad-hoc networks with modifi...
 
Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...Conceptual model of internet banking adoption with perceived risk and trust f...
Conceptual model of internet banking adoption with perceived risk and trust f...
 
Efficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antennaEfficient combined fuzzy logic and LMS algorithm for smart antenna
Efficient combined fuzzy logic and LMS algorithm for smart antenna
 
Design and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fireDesign and implementation of a LoRa-based system for warning of forest fire
Design and implementation of a LoRa-based system for warning of forest fire
 
Wavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio networkWavelet-based sensing technique in cognitive radio network
Wavelet-based sensing technique in cognitive radio network
 
A novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bandsA novel compact dual-band bandstop filter with enhanced rejection bands
A novel compact dual-band bandstop filter with enhanced rejection bands
 
Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...Deep learning approach to DDoS attack with imbalanced data at the application...
Deep learning approach to DDoS attack with imbalanced data at the application...
 
Brief note on match and miss-match uncertainties
Brief note on match and miss-match uncertaintiesBrief note on match and miss-match uncertainties
Brief note on match and miss-match uncertainties
 
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
Implementation of FinFET technology based low power 4×4 Wallace tree multipli...
 
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G systemEvaluation of the weighted-overlap add model with massive MIMO in a 5G system
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system
 
Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...Reflector antenna design in different frequencies using frequency selective s...
Reflector antenna design in different frequencies using frequency selective s...
 
Reagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensorReagentless iron detection in water based on unclad fiber optical sensor
Reagentless iron detection in water based on unclad fiber optical sensor
 
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...Impact of CuS counter electrode calcination temperature on quantum dot sensit...
Impact of CuS counter electrode calcination temperature on quantum dot sensit...
 
A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...A progressive learning for structural tolerance online sequential extreme lea...
A progressive learning for structural tolerance online sequential extreme lea...
 
Electroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networksElectroencephalography-based brain-computer interface using neural networks
Electroencephalography-based brain-computer interface using neural networks
 
Adaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imagingAdaptive segmentation algorithm based on level set model in medical imaging
Adaptive segmentation algorithm based on level set model in medical imaging
 
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...
 

Dernier

Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueBhangaleSonal
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...Call Girls in Nagpur High Profile
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordAsst.prof M.Gokilavani
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfKamal Acharya
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdfKamal Acharya
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfRagavanV2
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)simmis5
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdfankushspencer015
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...ranjana rawat
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringmulugeta48
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlysanyuktamishra911
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01KreezheaRecto
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Christo Ananth
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapRishantSharmaFr
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...SUHANI PANDEY
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Bookingroncy bisnoi
 

Dernier (20)

Double Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torqueDouble Revolving field theory-how the rotor develops torque
Double Revolving field theory-how the rotor develops torque
 
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...Booking open Available Pune Call Girls Pargaon  6297143586 Call Hot Indian Gi...
Booking open Available Pune Call Girls Pargaon 6297143586 Call Hot Indian Gi...
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
(INDIRA) Call Girl Aurangabad Call Now 8617697112 Aurangabad Escorts 24x7
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete RecordCCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
CCS335 _ Neural Networks and Deep Learning Laboratory_Lab Complete Record
 
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdfONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
ONLINE FOOD ORDER SYSTEM PROJECT REPORT.pdf
 
University management System project report..pdf
University management System project report..pdfUniversity management System project report..pdf
University management System project report..pdf
 
Unit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdfUnit 1 - Soil Classification and Compaction.pdf
Unit 1 - Soil Classification and Compaction.pdf
 
Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)Java Programming :Event Handling(Types of Events)
Java Programming :Event Handling(Types of Events)
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
chapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineeringchapter 5.pptx: drainage and irrigation engineering
chapter 5.pptx: drainage and irrigation engineering
 
KubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghlyKubeKraft presentation @CloudNativeHooghly
KubeKraft presentation @CloudNativeHooghly
 
Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01Double rodded leveling 1 pdf activity 01
Double rodded leveling 1 pdf activity 01
 
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
(INDIRA) Call Girl Meerut Call Now 8617697112 Meerut Escorts 24x7
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
Unleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leapUnleashing the Power of the SORA AI lastest leap
Unleashing the Power of the SORA AI lastest leap
 
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
VIP Model Call Girls Kothrud ( Pune ) Call ON 8005736733 Starting From 5K to ...
 
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance BookingCall Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
Call Girls Walvekar Nagar Call Me 7737669865 Budget Friendly No Advance Booking
 

The flow of baseline estimation using a single omnidirectional camera

  • 1. TELKOMNIKA Telecommunication, Computing, Electronics and Control Vol. 18, No. 3, June 2020, pp. 1633~1642 ISSN: 1693-6930, accredited First Grade by Kemenristekdikti, Decree No: 21/E/KPT/2018 DOI: 10.12928/TELKOMNIKA.v18i3.14797  1633 Journal homepage: http://journal.uad.ac.id/index.php/TELKOMNIKA The flow of baseline estimation using a single omnidirectional camera Sukma Meganova Effendi1 , Dadet Pramadihanto2 , Riyanto Sigit3 1 Department of Mechatronics, Politeknik Mekatronika Sanata Dharma, Indonesia 2,3 Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Indonesia Article Info ABSTRACT Article history: Received Aug 17, 2019 Revised Jan 30, 2020 Accepted Feb 26, 2020 Baseline is a distance between two cameras, but we cannot get information from a single camera. Baseline is one of the important parameters to find the depth of objects in stereo image triangulation. The flow of baseline is produced by moving the camera in horizontal axis from its original location. Using baseline estimation, we can determined the depth of an object by using only an omnidirectional camera. This research focus on determining the flow of baseline before calculating the disparity map. To estimate the flow and to tracking the object, we use three and four points in the surface of an object from two different data (panoramic image) that were already chosen. By moving the camera horizontally, we get the tracks of them. The obtained tracks are visually similar. Each track represent the coordinate of each tracking point. Two of four tracks have a graphical representation similar to second order polynomial. Keywords: Baseline Flow Omnidirectional camera This is an open access article under the CC BY-SA license. Corresponding Author: Sukma Meganova Effendi, Department of Mechatronics, Politeknik Mekatronika Sanata Dharma, Paingan, Maguwoharjo, Depok, Sleman 55282, Yogyakarta, Indonesia. Email: sukma.meganova@gmail.com 1. 
INTRODUCTION
Depth of objects can be estimated from stereo images. In previous research, stereo images were obtained using two pocket cameras (with improper vertical alignment) [1] or two omnidirectional sensors [2]. Other research used two omnidirectional cameras (both built vertically) [3], a combination of a convex mirror and a concave lens [4], two cameras with two convex mirrors [5, 6], a double-lobed mirror [7], and a hyperbolic double-lobed mirror [8]. In this research, we employ only a single omnidirectional camera with a hyperbolic catadioptric mirror. The catadioptric mirror is one of the most important components of an omnidirectional sensor for producing an omnidirectional image. By using the catadioptric mirror, all of the objects around the omnidirectional sensor can be seen in one omnidirectional image without any effort to rotate the sensor. This means that the sensor has a wider field of view: 360°. There are many types of catadioptric mirror, and each type gives a different omnidirectional image. According to [9-15], the best type of catadioptric mirror is one with a hyperbolic shape. There are many advantages of using an omnidirectional sensor in computer vision; applications include face detection [16], soccer robots (based on obstacle detection) [17], and matching and localization [18]. In conventional stereo-image processing, the baseline is one of the parameters used to find the depth of objects. By using a pair of cameras/omnidirectional cameras/catadioptric mirrors, the baseline can be determined from
the distance between the cameras. Lukierski, Leutenegger, and Davison [19] used an omnidirectional camera on a mobile robot. Its baseline is determined by the length of the mobile robot's track, and the length of the track affects the depth estimation. In this research, our baseline estimation is not affected by the length of the track. When we try to obtain the depth and the map, we should know the baseline. Using an omnidirectional camera with a hyperbolic catadioptric mirror, there is no information about the baseline. A depth map can be developed using stereo-triangulation calculation. Our proposed method focuses only on determining the flow of the baseline. The flow of the baseline in our method is estimated using the panoramic image because the next process is the stereo-triangulation calculation. The panoramic image is obtained by transforming the omnidirectional image into a panoramic image. Using stereo triangulation and the panoramic image, the variable (x) should have a constant value in one axis [20], and the depth can be determined. This research produces images with different values in two axes (x, y). This shows that each pixel (of the objects) in the panoramic image has a unique flow that represents a certain pattern. From the unique flow of each pixel in the panoramic image, we can track each pixel using the Lucas-Kanade optical-flow algorithm [21-23]. To do so, we have to move the camera continuously along the horizontal axis from its original location and use an object for detection to produce the desired tracks. As the result, we obtain the coordinate shifts of the object, which are saved into a file and transformed into graphs to get the baseline estimation.

2. RESEARCH METHOD
An image captured using an omnidirectional sensor is called an omnidirectional image. It has a black circle in the center of the image.
Objects around it are reflected by the catadioptric mirror, and the CCD sensor of the camera captures the objects. The omnidirectional image is transformed into a panoramic image before the calibration process. Afterward, we can use the panoramic images to obtain the flow of the baseline estimation. The structure of the omnidirectional camera is shown in Figure 1. The mirror (catadioptric) should be mounted and centered above the camera. An uncentered mirror causes distortion in the transformation of the omnidirectional image into the panoramic image. To center the mirror on the camera, we use nuts, bolts, and springs placed at each corner of the mirror structure. In this research, we propose a method to estimate the flow of the baseline from omnidirectional images.

Figure 1. System structure of our omnidirectional camera using a webcam

2.1. The algorithm of the estimation process
Figure 2 illustrates all the methods used in the flow-of-baseline estimation process. There are three processes. The first process is loading the live image, transforming the omnidirectional image into a panoramic image, and calibration. The second process is object detection and tracking of the object's points. The third process is plotting the points of the tracks into graphs, finding the equation of the baseline estimation's flow, and determining the coefficients of the equation.

2.2. Modelling of the hyperboloid mirror
The projection from the hyperbolic catadioptric mirror (hyperboloid mirror) that produces an omnidirectional image is shown in Figure 3. From Figures 3 and 4, the equations of the hyperboloid mirror model are given below.
The hyperboloid mirror satisfies

\frac{R^2}{a^2} - \frac{(Z-h)^2}{b^2} = -1 \quad (1)

where \(a = \sqrt{c^2 - b^2}\). The map for the hyperboloid mirror can then be calculated as

Z = R\tan\varphi + c + h \quad (2)

Because all of the objects touch the floor (Z = 0), we obtain

\tan\varphi = -\frac{c+h}{R} \quad (3)

From Figure 3, the following relations are given:

\tan\varphi = \frac{b^2+c^2}{b^2-c^2}\tan\theta - \frac{2bc}{b^2-c^2}\cdot\frac{1}{\cos\theta} \quad (4)

\tan\theta = \frac{f}{r} \quad (5)

To simplify, let

a_1 = \frac{b^2+c^2}{b^2-c^2}, \qquad a_2 = \frac{2bc}{c^2-b^2}, \qquad a_3 = c+h \quad (6)

The focal point of the hyperboloid mirror is assumed to lie on the principal point of the camera, so (4) to (6) simplify to

\tan\varphi = a_1\frac{f}{r} + a_2\frac{1}{\cos\theta} \quad (7)

\tan^2\varphi - 2a_1\frac{f}{r}\tan\varphi + a_1^2\frac{f^2}{r^2} = a_2^2\left(1+\frac{f^2}{r^2}\right) \quad (8)

By (3) and (8), the equation becomes

\frac{a_3^2}{R^2} + 2a_1 a_3\frac{f}{rR} + a_1^2\frac{f^2}{r^2} = a_2^2\left(1+\frac{f^2}{r^2}\right) \quad (9)

\left((a_1^2 - a_2^2)f^2 - a_2^2 r^2\right)R^2 + 2a_1 a_3 f r R + a_3^2 r^2 = 0 \quad (10)

where:
f = focal length of the camera (in pixels)
r = radius in pixels
R = radius in world coordinates
a_1, a_2, a_3 = parameters of the hyperboloid mirror
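Equation (10) is a quadratic in R that can be solved directly once the mirror parameters are known. The sketch below shows the idea in Python; the function name, argument units, and sample values are illustrative and not taken from the paper's implementation:

```python
import math

def world_radius(r_px, f_px, b, c, h):
    """Solve the quadratic (10) for the world-coordinate radius R of a
    floor point imaged at pixel radius r_px. b and c are the hyperboloid
    mirror parameters, h is the mirror mounting height (same length unit
    as the returned R). Returns the larger positive root, or None if no
    real positive solution exists."""
    a1 = (b ** 2 + c ** 2) / (b ** 2 - c ** 2)
    a2 = 2 * b * c / (c ** 2 - b ** 2)
    a3 = c + h
    # ((a1^2 - a2^2) f^2 - a2^2 r^2) R^2 + 2 a1 a3 f r R + a3^2 r^2 = 0
    A = (a1 ** 2 - a2 ** 2) * f_px ** 2 - a2 ** 2 * r_px ** 2
    B = 2 * a1 * a3 * f_px * r_px
    C = a3 ** 2 * r_px ** 2
    disc = B ** 2 - 4 * A * C
    if A == 0 or disc < 0:
        return None
    roots = [(-B + s * math.sqrt(disc)) / (2 * A) for s in (1, -1)]
    positive = [root for root in roots if root > 0]
    return max(positive) if positive else None
```

Note that with the definitions in (6), a_1² − a_2² works out to exactly 1, which simplifies the leading coefficient; the code keeps the general form so it mirrors (10) term by term.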
Figure 2. The flow of the baseline estimation process (flowchart: start; load the camera; if the camera is open, transform the omnidirectional image into a panoramic image and calibrate; detect the object; if objects are detected, track the object through camera movements; save each tracking point of the object in the panoramic image; represent each tracking point as a graph; find the estimation (equation) of each tracking point; determine the coefficients from the estimation (equation); output the flow of the baseline estimation; end)

Figure 3. Geometry of the hyperboloid catadioptric mirror and the projection of a point from world coordinates onto the image plane [24]

Figure 4. Installation specification of the hyperboloid catadioptric mirror with its axis laid on the field [24, 25]
2.3. Calibration method
When the omnidirectional sensor is employed, some distortion appears because of its lens (improper position). The omnidirectional sensor parameters should be adjusted properly to minimize the distortion; all of this parameter adjustment is called calibration. The calibration process is an important step because the omnidirectional sensor is used to map objects inside a room. A camera has two sets of parameters, extrinsic and intrinsic. Together they describe the mathematical relationship between the 3D coordinates of the real-world scene and the 2D coordinates of its projection onto the image plane. Specifically, the intrinsic (internal) parameters relate the pixel coordinates of an image point to the corresponding coordinates in the camera reference frame. The extrinsic (external) parameters define the location and orientation of real-world coordinates relative to camera coordinates. The focal lengths in pixels (f_x, f_y), the principal point (c_x, c_y), the pixel size, the radial distortion (k_1, k_2, k_3), and the tangential distortion (p_1, p_2) are the intrinsic parameters. The camera matrix is a 3×3 matrix:

\text{camera matrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \quad (11)

The radial distortion is

x_{\text{distorted}} = x(1 + k_1 r^2 + k_2 r^4 + k_3 r^6), \qquad y_{\text{distorted}} = y(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) \quad (12)

and the tangential distortion is

x_{\text{distorted}} = x + \left[2p_1 xy + p_2(r^2 + 2x^2)\right], \qquad y_{\text{distorted}} = y + \left[p_1(r^2 + 2y^2) + 2p_2 xy\right] \quad (13)

Then the coefficients that must be known are

\text{distortion coefficients} = (k_1 \; k_2 \; p_1 \; p_2 \; k_3) \quad (14)

The rotation matrix (R) and the translation vector (T) are represented below.

R = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} \quad (15)

T = \begin{bmatrix} t_1 \\ t_2 \\ t_3 \end{bmatrix} \quad (16)

2.4.
Panoramic transformation
The omnidirectional image is transformed into a panoramic image using polar mapping. The polar coordinate system is mapped into the Cartesian coordinate system. A point is denoted by (r, θ) in the polar coordinate system and by (x, y) in the Cartesian coordinate system. The coordinates with respect to the center of the polar sampling are calculated using (17) and (18).

\theta = \tan^{-1}\left(\frac{y}{x}\right) \quad (17)

r = \sqrt{x^2 + y^2} \quad (18)
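The relations (17) and (18) can be sketched in a few lines of Python (the function name is my own; atan2 and hypot are used so that the full 0 to 2π angular range of the omnidirectional image is handled):

```python
import math

def to_polar(x, y):
    """Map Cartesian offsets (x, y), measured from the image center,
    to polar coordinates (theta, r) as in (17) and (18)."""
    theta = math.atan2(y, x) % (2 * math.pi)  # angle in [0, 2*pi)
    r = math.hypot(x, y)                      # radius sqrt(x^2 + y^2)
    return theta, r
```

Applying x = r cos θ and y = r sin θ to the result recovers the original offsets, which is exactly the relation the panoramic transformation inverts when sampling the omnidirectional image.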
The Cartesian coordinates (x, y) can be calculated from the polar coordinates (r, θ) using trigonometry as follows.

x^2 + y^2 = r^2 \quad (19)

x = r\cos\theta, \qquad y = r\sin\theta \quad (20)

So the correlation of the Cartesian coordinate system and the polar coordinate system, with (x_c, y_c) the center point of the omnidirectional image (polar coordinate system) and r_0 the inner sampling radius, is given as follows.

x = (r_0 + r)\cos\theta + x_c, \qquad y = (r_0 + r)\sin\theta + y_c \quad (21)

2.5. Optical flow
Intensity is one of the parameters in optical flow used to represent the flow of every pixel in the image. Based on this characteristic, the track of each pixel can be estimated. We employ the Lucas-Kanade optical-flow algorithm. This algorithm combines information from several nearby pixels to resolve the ambiguity [21, 22]. The Lucas-Kanade optical-flow algorithm works on a pyramidal structure, processing from the highest layer down to the lowest layer [21, 23]. The algorithm assumes that the displacement between two images (frames) is small and almost constant within a neighborhood of the coordinate p. Accordingly, Lucas-Kanade considers all pixels within a local window centered on the coordinate p. The equations for its velocity are given as follows.

I_x(q_1)V_x + I_y(q_1)V_y = -I_t(q_1)
I_x(q_2)V_x + I_y(q_2)V_y = -I_t(q_2)
\vdots
I_x(q_n)V_x + I_y(q_n)V_y = -I_t(q_n) \quad (22)

In matrix form Av = b, (22) is represented in (23).

A = \begin{bmatrix} I_x(q_1) & I_y(q_1) \\ I_x(q_2) & I_y(q_2) \\ \vdots & \vdots \\ I_x(q_n) & I_y(q_n) \end{bmatrix}, \qquad v = \begin{bmatrix} V_x \\ V_y \end{bmatrix}, \qquad b = \begin{bmatrix} -I_t(q_1) \\ -I_t(q_2) \\ \vdots \\ -I_t(q_n) \end{bmatrix} \quad (23)

where
q_1, q_2, \ldots, q_n = pixels in the window (or local window)
I_x(q_i), I_y(q_i), I_t(q_i) = partial derivatives of image I with respect to the position (x, y) and the time t, evaluated at pixel q_i at the current time.

The Lucas-Kanade solution can be obtained using the least-squares method; it solves the 2×2 system A^T A v = A^T b, or

v = (A^T A)^{-1} A^T b \quad (24)

where A^T is the transpose of matrix A, as represented in (25).

\begin{bmatrix} V_x \\ V_y \end{bmatrix} = \begin{bmatrix} \sum_i I_x(q_i)^2 & \sum_i I_x(q_i)I_y(q_i) \\ \sum_i I_x(q_i)I_y(q_i) & \sum_i I_y(q_i)^2 \end{bmatrix}^{-1} \begin{bmatrix} -\sum_i I_x(q_i)I_t(q_i) \\ -\sum_i I_y(q_i)I_t(q_i) \end{bmatrix} \quad (25)

with the sums running from i = 1 to n.
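The least-squares solution (25) reduces to inverting a 2×2 matrix of gradient sums. A minimal sketch, assuming the window's derivatives are already available as plain lists (the function name and interface are my own, not the paper's code):

```python
def lucas_kanade_velocity(Ix, Iy, It):
    """Solve the 2x2 system of (25) for the flow vector (Vx, Vy) of one
    local window, given the spatial derivatives Ix, Iy and the temporal
    derivative It sampled at the window pixels q_1..q_n."""
    sxx = sum(ix * ix for ix in Ix)
    sxy = sum(ix * iy for ix, iy in zip(Ix, Iy))
    syy = sum(iy * iy for iy in Iy)
    sxt = sum(ix * it for ix, it in zip(Ix, It))
    syt = sum(iy * it for iy, it in zip(Iy, It))
    det = sxx * syy - sxy * sxy
    if det == 0:  # aperture problem: the gradient matrix is singular
        return None
    # Cramer's rule on [sxx sxy; sxy syy] [Vx Vy]^T = [-sxt -syt]^T
    vx = (-syy * sxt + sxy * syt) / det
    vy = (sxy * sxt - sxx * syt) / det
    return vx, vy
```

For a pattern translating with velocity (V_x, V_y), brightness constancy gives I_t = −(I_x V_x + I_y V_y) at every pixel, and the function recovers the velocity exactly; when all gradients point the same way, det is 0 and the window is rejected.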
3. RESULTS AND ANALYSIS
A sample image captured using our omnidirectional camera is shown in Figure 5. The omnidirectional image must be transformed into a panoramic image in order to be processed using stereo triangulation. Figure 6 shows the panoramic image produced using the polar-mapping algorithm. The baseline estimation was calculated using the coordinate shifts of an object in the panoramic image. The coordinate shifts were obtained by moving the camera from one location to another. Each coordinate is an image pixel represented by a point (green circle) in the panoramic image. Each point was assigned on the surface of the object in the panoramic image. We assigned three and four points on the detected object for different track lengths (94 cm and 129 cm); both are shown in Figures 6 (a) and (b). The points were used as tracking points in the Lucas-Kanade optical-flow algorithm. Figure 7 shows that the detection points on the board were successfully detected in three different images, captured at different timestamps.

Figure 5. Omnidirectional image

Figure 6. Panoramic image captured in a room with a track length of (a) 94 cm and (b) 129 cm; (c) panoramic image captured in the laboratory

Figure 7. Four detected points (a) on the 94 cm track and three detected points (b) on the 129 cm track on the object, and their flow from the current frame to the next frame
The coordinates of the tracking points were saved into a file in *.txt format. The saved coordinates were then transformed into a graph for each point. After the graphs were obtained for each point, the flow of the baseline estimation was calculated. The data were tested under various conditions based on track length and the horizontal distance of the camera's movement. Several data from the tracking points are shown in Figures 8 and 9. Two of the tracks in both graphs show a similar flow, and the other tracks show an improper flow. The improper flow of the tracks occurs because of differences in light intensity in the panoramic image; this phenomenon arises because the point-detection process uses the Lucas-Kanade optical-flow algorithm.

Figure 8. The tracking flow of detected points from the 94 cm camera movement

Figure 9. The tracking flow of detected points from the 129 cm camera movement

After computing the estimation graphs, the coefficients are a = 0.001 and b = 0.349. The flow of the baseline estimation is given as follows:

y = -0.001x^2 + 0.349x + c \quad (26)

After obtaining the baseline estimation, the equation was tested by tracking an object over a different track length and in a different area, Figure 6 (c). The result shows that the track produced has a similar flow. The value of c is also different because it depends on the initial position of the tracking points, which is not always the same.
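The second-order fit behind the reported coefficients can be reproduced with an ordinary least-squares parabola fit. A self-contained sketch using the 3×3 normal equations (the function name and the plain-Python elimination are my own choices; in practice a routine such as numpy.polyfit would do the same job):

```python
def fit_parabola(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c to a tracked point's
    coordinates, via the 3x3 normal equations solved with Gaussian
    elimination and partial pivoting. Returns [a, b, c]."""
    S = [sum(x ** k for x in xs) for k in range(5)]            # sums of x^0..x^4
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Normal equations: rows for the unknowns a, b, c (augmented matrix)
    M = [[S[4], S[3], S[2], T[2]],
         [S[3], S[2], S[1], T[1]],
         [S[2], S[1], S[0], T[0]]]
    for i in range(3):                                         # forward elimination
        piv = max(range(i, 3), key=lambda row: abs(M[row][i]))
        M[i], M[piv] = M[piv], M[i]
        for row in range(i + 1, 3):
            factor = M[row][i] / M[i][i]
            for col in range(i, 4):
                M[row][col] -= factor * M[i][col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                        # back substitution
        coef[i] = (M[i][3] - sum(M[i][j] * coef[j] for j in range(i + 1, 3))) / M[i][i]
    return coef
```

Fitting a track generated from y = -0.001x² + 0.349x + c recovers the leading coefficient -0.001 and the linear coefficient 0.349, matching the magnitudes of a and b reported above (the paper quotes a as a positive magnitude while the fitted quadratic term is negative).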
4. CONCLUSION
The proposed method to estimate the flow of the baseline from a single omnidirectional camera was achieved. From the detected object, we chose three and four points to track, and two of the tracking points have a similar flow. The flow-of-baseline estimation method obtained in this research yields a second-order polynomial equation with a = 0.001 and b = 0.349. The constant value of c depends on the initial position of the tracking points. From the several experiments that were conducted, we conclude that the equation of the baseline estimation is not affected by the length of the tracking points, as indicated by the same coefficient values of a and b. The equation of the baseline estimation that was obtained will be used for the matching process in the panoramic image.

REFERENCES
[1] J. Mrovlje and D. Vrančić, “Distance Measuring based on Stereoscopic Pictures,” in Proceedings of the 9th International PhD Workshop on Systems and Control: Young Generation Viewpoint, Izola, Slovenia. Ljubljana: Institut Jožef Stefan, 2008.
[2] O. M. M. K. Fouad, “Position Estimation of Mobile Robots Using Omni-Directional Cameras,” M.S. thesis, Mechanical Engineering, Carleton University, Ottawa, Ontario, Canada, 2014.
[3] Z. Zhu, “Omnidirectional Stereo Vision,” in Proceedings of the Workshop on Omnidirectional Vision, the 10th IEEE ICAR, Budapest, Hungary, 2001.
[4] S. Yi and N. Ahuja, “An Omnidirectional Stereo Vision System Using a Single Camera,” in Proceedings of the International Conference on Pattern Recognition, pp. 861-865, Hong Kong, August 2006.
[5] J. Gluckman, S. K. Nayar, and K. J. Thoresz, “Real-Time Omnidirectional and Panoramic Stereo,” in Proceedings of the 1998 DARPA Image Understanding Workshop, IUW’98, Monterey, California, 1998.
[6] T. Svoboda and T.
Pajdla, “Epipolar Geometry of Central Catadioptric Camera,” International Journal of Computer Vision, vol. 49, no. 1, pp. 23-37, August 2002.
[7] D. Southwell, et al., “Panoramic Stereo,” in Proceedings of the 13th International Conference on Pattern Recognition, Vienna, Austria, pp. 378-382, 1996.
[8] E. L. L. Cabral, J. C. de Souza Junior, and M. C. Hunold, “Omnidirectional Stereo Vision with a Hyperbolic Double Lobed Mirror,” in Proceedings of the 17th International Conference on Pattern Recognition, pp. 1-9, Cambridge, UK, August 26, 2004.
[9] T. Sato and N. Yokoya, “Efficient Hundreds-baseline Stereo by Counting Interest Points for Moving Omni-directional Multi-camera System,” Journal of Visual Communication and Image Representation, vol. 21, no. 5-6, pp. 416-426, July-August 2010.
[10] K.-Y. Tu, “Analysis and Comparison of Various Shape Mirrors Used in Omnidirectional Cameras with Parameter Variation,” Journal of Information Science and Engineering, vol. 25, no. 3, pp. 945-959, May 2009.
[11] M. Yachida, “Omnidirectional Sensing and Combined Multiple Sensing,” in Proceedings of the IEEE and ATR Workshop on Computer Vision for Virtual Reality Based Human Communications, pp. 20-27, Bombay, India, January 1998.
[12] K. Yamazawa, Y. Yagi, and M. Yachida, “Omnidirectional Imaging with Hyperboloidal Projection,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS ’93), pp. 1029-1034, Yokohama, Japan, July 1993.
[13] J. Gaspar, et al., “Constant Resolution Omnidirectional Cameras,” in Proceedings of the OMNIVIS'02 Workshop on Omni-directional Vision, pp. 27-34, Copenhagen, Denmark, June 2002.
[14] T. L. Conroy and J. B. Moore, “Resolution Invariant Surfaces for Panoramic Vision Systems,” in Proceedings of the Seventh IEEE International Conference on Computer Vision, pp. 392-397, Kerkyra, Greece, September 20-27, 1999.
[15] S. K.
Nayar, “Catadioptric Omnidirectional Camera,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 482-488, San Juan, USA, June 1997.
[16] A. L. C. Barczak, J. Okamoto Jr., and V. Grassi Jr., “Face Tracking Using a Hyperbolic Catadioptric Omnidirectional System,” Research Letters in the Information and Mathematical Sciences, vol. 13, pp. 55-67, January 2009.
[17] J. Silva, et al., “Obstacle Detection, Identification and Sharing on a Robotic Soccer Team,” in Proceedings of the Portuguese Conference on Artificial Intelligence (EPIA 2009), pp. 350-360, Aveiro, Portugal, 2009.
[18] D. Michel, A. A. Argyros, and M. Lourakis, “Horizon Matching for Localizing Unordered Panoramic Images,” Computer Vision and Image Understanding, vol. 114, no. 2, pp. 274-285, January 2010.
[19] R. Lukierski, S. Leutenegger, and A. J. Davison, “Rapid Free-Space Mapping from a Single Omnidirectional Camera,” in Proceedings of the European Conference on Mobile Robots (ECMR), pp. 1-8, Lincoln, UK, September 2015.
[20] OpenCV Dev Team, “Depth Map from Stereo Images,” OpenCV 3.0.0-dev documentation: Camera Calibration and 3D Reconstruction, Nov 10, 2014. [Online]. Available: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html. [Accessed November 9, 2015].
[21] R. Sigit, et al., “Improved Echocardiography Segmentation Using Active Shape Model and Optical Flow,” TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 17, no. 2, pp. 809-818, April 2019.
[22] A. A. Pratiwi, et al., “Improved Ejection Fraction Measurement on Cardiac Image Using Optical Flow,” in Proceedings of the 2017 International Electronics Symposium on Knowledge Creation and Intelligent Computing (IES-KCIC), IEEE, pp. 295-300, September 2017.
[23] U. Umar, R. Soelistijorini, and H. A. Darwito, “Tracking Arah Gerakan Telunjuk Jari Berbasis Webcam Menggunakan Metode Optical Flow [Webcam-based Tracking of Index-Finger Movement Direction Using the Optical Flow Method],” in Proceedings of the 13th Industrial Electronics Seminar 2011 (IES 2011), pp. 249-254, October 2011.
[24] M. Jamzad, A. R. Hadjkhodabakhshi, and V. S. Mirrokni, “Object Detection and Localization Using Omnidirectional Vision in the RoboCup Environment,” Scientia Iranica, vol. 14, no. 6, pp. 599-611, November 2007.
[25] A. K. Mulya, F. Ardilla, and D. Pramadihanto, “Ball Tracking and Goal Detection for Middle Size Soccer Robot Using Omnidirectional Camera,” in Proceedings of the International Electronics Symposium (IES), pp. 432-437, September 2016.

BIOGRAPHIES OF AUTHORS

Sukma Meganova Effendi is currently a master's student in the Electronic Engineering Department, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, Indonesia. She also works as a lecturer in the Mechatronics Department, Politeknik Mekatronika Sanata Dharma, Yogyakarta, Indonesia.

Dadet Pramadihanto received his Master's degree in 1997 and his Ph.D. degree in 2003 from the Department of Systems and Human Sciences, Graduate School of Engineering Science, Osaka University, Japan. His research interests include computer vision, robotics, real-time systems, environmental monitoring, and cyber-physical systems. He is currently working as a lecturer at Computer Engineering, Department of Informatics and Computer Engineering, and the Graduate School of Informatics and Computer Engineering, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, Indonesia.
He serves as Head of the EEPIS Robotics Centre (ER2C), the research centre that focuses on the development of humanoid/human-like robots, real-time robotics operating systems, computer vision in robotics, and engineering education based on robotics technology.

Riyanto Sigit received his Master's degree from the Department of Informatics, Sepuluh Nopember Institute of Technology (ITS), Surabaya, Indonesia, in 1995 and his Ph.D. degree from Universiti Kebangsaan Malaysia, Bangi, Selangor, Malaysia, in 2014. He is currently working as a lecturer at Computer Engineering, Department of Informatics and Computer Engineering, and the Graduate School of Informatics and Computer Engineering, Electronics Engineering Polytechnic Institute of Surabaya (EEPIS), Surabaya, Indonesia. His research interests include medical imaging, artificial intelligence, computer vision, and sensors.