The document summarizes Luis Contreras' upcoming lecture on robot localization using particle filters. The key points covered are:
1. Robot localization is the process of determining a robot's pose (position and orientation) over time using motion and sensor measurements within a map.
2. Particle filters represent the robot's uncertain pose as a set of weighted particles, with each particle being a hypothesis of the robot's state.
3. As the robot moves and senses its environment, the particles are propagated and weighted according to the motion and sensor models to estimate the posterior probability distribution over poses.
Robot Localisation: An Introduction - Luis Contreras 2020.06.09 | RoboCup@Home Education
1. Robot Localisation: An Introduction
Speaker: Luis Contreras | Tamagawa University
Time: June 09, 2020 (Tue) 09:00~11:00 (GMT+8)
https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series
RoboCup@Home Education
ONLINE CLASSROOM
Invited Lecture Series
Highlights
● Probability in robot localisation
● Probabilistic models for robot motion and particle filters
Luis Contreras received his Ph.D. in Computer Science at the Visual
Information Laboratory, in the Department of Computer Vision, University of
Bristol, UK. Currently, he is a research fellow at the Advanced Intelligence &
Robotics Research Center, Tamagawa University, Japan. He has also been
an active member of the Bio-robotics Laboratory at the Faculty of
Engineering, National Autonomous University of Mexico, Mexico. He has
been working on service robots and has tested his latest results at the
RoboCup and similar robot competitions for the last ten years.
2. RoboCup@Home Education | www.RoboCupatHomeEDU.org
Robot Localisation: An Introduction
● Speaker: Luis Contreras | Tamagawa University
● Host: Jeffrey Tan | @HomeEDU
● Date and Time:
○ June 09, 2020 (Tue) 09:00~11:00 (GMT+8 China/Malaysia)
○ June 08, 2020 (Mon) 21:00~23:00 (EDT New York)
○ June 09, 2020 (Tue) 03:00~05:00 (CEST Italy/France)
○ Web: https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series
** Privacy reminder: Video will be recorded and published online **
3. RoboCup@Home Education | www.RoboCupatHomeEDU.org
RoboCup@Home Education is an educational initiative in RoboCup@Home that promotes educational
efforts to boost RoboCup@Home participation and artificial intelligence (AI)-focused service robot
development.
Under this initiative, there are currently four efforts in active operation:
1. RoboCup@Home Education Challenge events (national, regional, international)
2. Open Source Educational Robot Platforms for RoboCup@Home (service robotics)
3. OpenCourseWare for the learning of AI-focused service robot development
4. Outreach Programs (local workshops, international academic exchanges, etc.)
Web: https://www.robocupathomeedu.org/
FB: https://www.facebook.com/robocupathomeedu/
4. RoboCup@Home Education | www.RoboCupatHomeEDU.org
Special Online Challenge Tracks
● Open Platform Online Classroom [EN]
● Open Platform Online Classroom [CN]
● Standard Platform Pepper 2.9 Online Classroom [EN]
● Standard Platform Pepper 2.5 Online Classroom [CN]
More details:
https://www.robocupathomeedu.org/learn/online-classroom
Invited Lecture Series
● Robotics Development with MATLAB [EN]
● Robot Localisation: An Introduction [EN]
● World Representation Through Artificial Neural Networks: An Introduction [EN]
● ROS with AI [TH]
Regular Online Classroom Tracks
● Introduction to Service Robotics [EN]
○ 6 weeks
○ ROS, Python
○ Speech, Vision, Navigation, Arm
5. RoboCup@Home Education | www.RoboCupatHomeEDU.org
Luis Contreras | Tamagawa University
6. tamagawa.jp
Robot Localisation: An Introduction
Luis Angel Contreras-Toledo, PhD
Advanced Intelligence and Robotics Research Center
Tamagawa University
https://aibot.jp/
2020
18. tamagawa.jp
Localisation
Given a map $m$, with $u_i \sim N(\mu, \sigma)$ and $z_i \sim N(\mu, \sigma)$, at time $T$ we have
$$S_T = \{s_0, s_1, s_2, \dots, s_T\}$$
$$U_T = \{u_1, u_2, u_3, \dots, u_T\}$$
$$Z_T = \{z_1, z_2, z_3, \dots, z_T\}$$
The localisation problem is then defined as
$$p(S_T \mid U_T, Z_T, m)$$
26. tamagawa.jp
Error model
Pose (i.e. position and orientation) error:
$$u_{t+1} = (d + \varepsilon,\ \alpha + \varphi)$$
where $\varepsilon = 0 + \sigma_\varepsilon \cdot \mathrm{randn}(1,1)$ is a random Gaussian number with $\mu = 0$ and $\sigma = \sigma_\varepsilon$, and $\varphi = 0 + \sigma_\varphi \cdot \mathrm{randn}(1,1)$ is a random Gaussian number with $\mu = 0$ and $\sigma = \sigma_\varphi$.
$$s_{t+1} = \begin{pmatrix} x_{t+1} \\ y_{t+1} \\ \theta_{t+1} \end{pmatrix} = \begin{pmatrix} x_t + (d + \varepsilon)\cos\theta_{t+1} \\ y_t + (d + \varepsilon)\sin\theta_{t+1} \\ \theta_t + \alpha + \varphi \end{pmatrix}$$
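A minimal Python sketch of this motion model (the slide's randn(1,1) is MATLAB notation; numpy plays the same role here, and the function and argument names are illustrative, not from the lecture):

import numpy as np

def motion_update(x, y, theta, d, alpha, sigma_eps, sigma_phi):
    """Propagate a pose by the command u = (d, alpha) with Gaussian actuator noise."""
    eps = sigma_eps * np.random.randn()  # epsilon ~ N(0, sigma_eps)
    phi = sigma_phi * np.random.randn()  # phi ~ N(0, sigma_phi)
    theta_new = theta + alpha + phi            # turn first...
    x_new = x + (d + eps) * np.cos(theta_new)  # ...then advance along the new heading
    y_new = y + (d + eps) * np.sin(theta_new)
    return x_new, y_new, theta_new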
27. tamagawa.jp
Error model
Sensor error: error in the reported distance. It can be modelled as a probability function, e.g. a Gaussian distribution: given a reading $z = r$ and a distance to the obstacle $d$, then with $x = |r - d|$ we have
$$P(x) = \frac{1}{\sqrt{2\pi\sigma_z^2}}\, e^{-\frac{x^2}{2\sigma_z^2}}$$
[Figure: Gaussian error curve over the reading $z$, centred at the obstacle distance $d$, with spread $\sigma_z$.]
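This sensor model translates directly into a likelihood function; a sketch under the same Gaussian assumption (the names are illustrative):

import numpy as np

def sensor_likelihood(r, d, sigma_z):
    """P(x) for x = |r - d|: Gaussian error between the reading r and the true distance d."""
    x = abs(r - d)
    return np.exp(-x**2 / (2.0 * sigma_z**2)) / np.sqrt(2.0 * np.pi * sigma_z**2)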
29. tamagawa.jp
A probabilistic robot
Uniform distribution
After one measurement, uncertainty is centred around possible locations
Images from S. Thrun et al. “Probabilistic Robotics”. MIT Press, 2005.
30. tamagawa.jp
After moving to the right, the uncertainty is propagated too.
After a further measurement, the uncertainty reduces.
And so on...
33. tamagawa.jp
Key concepts
Probability $P(S = s_i) = P(s_i)$ that the random variable $S$ takes on value $s_i$.
Prior (probability distribution) $P(s_i)$ models uncertainty before new data is collected.
Likelihood $P(z \mid s_i)$ that the sensor measurement takes on value $z$ given that the robot is at pose $s_i$.
Posterior (probability distribution) $P(s_i \mid z)$ expresses uncertainty after the measurement.
36. tamagawa.jp
Key concepts
Example
$$P(z \mid \text{open}) = 0.6 \qquad P(z \mid \neg\text{open}) = 0.3$$
$$P(\text{open}) = P(\neg\text{open}) = 0.5$$
$$P(\text{open} \mid z) = \frac{P(z \mid \text{open})\,P(\text{open})}{P(z)} = \frac{P(z \mid \text{open})\,P(\text{open})}{P(z \mid \text{open})\,P(\text{open}) + P(z \mid \neg\text{open})\,P(\neg\text{open})}$$
37. tamagawa.jp
Key concepts
Example
$$P(z \mid \text{open}) = 0.6 \qquad P(z \mid \neg\text{open}) = 0.3$$
$$P(\text{open}) = P(\neg\text{open}) = 0.5$$
$$P(\text{open} \mid z) = \frac{0.6 \cdot 0.5}{0.6 \cdot 0.5 + 0.3 \cdot 0.5} = 0.67$$
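The same computation, written out in Python as a quick check:

p_z_open, p_z_not_open = 0.6, 0.3
p_open = p_not_open = 0.5

# Bayes' rule with the law of total probability in the denominator
p_open_given_z = (p_z_open * p_open) / (p_z_open * p_open + p_z_not_open * p_not_open)
print(p_open_given_z)  # 0.666... ≈ 0.67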
38. tamagawa.jp
The weighted particle representation
Each particle in the map $m$ is a pair $(s_t, w_t)$, where
$$s_t = \begin{pmatrix} x_t \\ y_t \\ \theta_t \end{pmatrix}$$
[Figure: a particle at pose $(x_t, y_t, \theta_t)$ casting a ray of length $r_t$ against the wall segment with endpoints $c_1$ and $c_2$.]
$$r_t = \frac{(c_{1,y} - c_{2,y})(c_{2,x} - x_t) - (c_{1,x} - c_{2,x})(c_{2,y} - y_t)}{(c_{1,y} - c_{2,y})\cos\theta - (c_{1,x} - c_{2,x})\sin\theta}$$
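The expression for $r_t$ intersects the particle's heading ray with the (infinite) line through $c_1$ and $c_2$; a direct transcription in Python (a sketch: a real map would also check that the hit lies within the segment and that $r_t$ is positive):

import numpy as np

def expected_range(x, y, theta, c1, c2):
    """Range r_t from pose (x, y, theta) along heading theta to the line through
    the wall endpoints c1 = (c1x, c1y) and c2 = (c2x, c2y)."""
    num = (c1[1] - c2[1]) * (c2[0] - x) - (c1[0] - c2[0]) * (c2[1] - y)
    den = (c1[1] - c2[1]) * np.cos(theta) - (c1[0] - c2[0]) * np.sin(theta)
    return num / den  # caller should guard against den ≈ 0 (ray parallel to the wall)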
39. tamagawa.jp
The weighted particle representation
Get the likelihood of $z$ given the ground truth $r_t$ at state $s_t$:
$$P(z \mid s_t) \propto \frac{1}{\sqrt{2\pi\sigma_z^2}}\, e^{-\frac{(z - r_t)^2}{2\sigma_z^2}}$$
The weight of a particle might be calculated as
$$w \propto P(z \mid r)$$
To avoid some particles disappearing too quickly, we can add a damping factor:
$$w \propto P(z \mid r) + k$$
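Putting the likelihood and the damping factor together for a whole particle set (a sketch; the array names and the default k are illustrative):

import numpy as np

def particle_weights(z, ranges, sigma_z, k=1e-3):
    """Weight each particle by the likelihood of the reading z given its virtual
    range r_t (one entry of `ranges` per particle), damped by k so that no
    particle's weight collapses to zero too quickly."""
    likelihood = np.exp(-(z - ranges)**2 / (2.0 * sigma_z**2)) / np.sqrt(2.0 * np.pi * sigma_z**2)
    w = likelihood + k
    return w / w.sum()  # normalise so the weights form a probability distribution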
40. tamagawa.jp
Particle filter localisation
Use the particle distribution to represent uncertainty in the robot's position and orientation (state).
Each particle is a hypothesis of the state of the robot.
A particle's weight indicates the credibility of that hypothesis.
Particle propagation after robot motion accounts for uncertainty in the actuators, while the particles' weights account for the sensor's uncertainty.
41. tamagawa.jp
Particle filter localisation
Also known as Monte Carlo filters, Condensation, or Factored Sampling, this method probabilistically estimates where the robot is.
It is a Bayesian estimator.
It can also be considered an evolutionary algorithm, since the fittest individuals (particles) survive.
54. tamagawa.jp
Particle filter localisation
0. Spread particles uniformly over the virtual map.
1. Motion prediction: move the real robot and each particle inside the map.
2. Particle update: take a measurement with the real robot and weight the particles according to the virtual readings from each particle inside the virtual world; particles with a better match between the real and virtual measurements get a higher weight.
3. Re-sampling: draw a new particle set with probability proportional to the weights.
4. Go to Step 1, unless the robot is lost; in that case go to Step 0.
(A code sketch of this loop is given below.)
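A compact Python sketch of Steps 1-3 (assuming an N×3 particle array and a map-dependent expected_range_fn such as the ray-cast above; Step 0 would draw the initial particles uniformly over the map's free space; all names are illustrative):

import numpy as np

rng = np.random.default_rng()

def mcl_step(particles, u, z, expected_range_fn, sigma_eps, sigma_phi, sigma_z, k=1e-3):
    """One particle-filter iteration. particles has shape (N, 3) = (x, y, theta);
    u = (d, alpha) is the motion command and z the real robot's range reading."""
    n = len(particles)
    d, alpha = u

    # 1. Motion prediction: move every particle with its own actuator noise.
    theta = particles[:, 2] + alpha + sigma_phi * rng.standard_normal(n)
    dist = d + sigma_eps * rng.standard_normal(n)
    particles = np.column_stack((particles[:, 0] + dist * np.cos(theta),
                                 particles[:, 1] + dist * np.sin(theta),
                                 theta))

    # 2. Particle update: weight by the likelihood of z given each virtual reading.
    r = np.array([expected_range_fn(p) for p in particles])
    w = np.exp(-(z - r)**2 / (2.0 * sigma_z**2)) + k  # damped, unnormalised
    w /= w.sum()

    # 3. Re-sampling: draw a new generation in proportion to the weights.
    idx = rng.choice(n, size=n, p=w)
    return particles[idx]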
57. tamagawa.jp
An introduction to Robot Vision
We consider robot vision a crucial skill for a service robot to meet its expectations; therefore, in this paper we present a tutorial on computer vision for robotic applications, so that new students have a clear idea of where and how to start.
We first present the basic concepts of image publishers and subscribers in ROS, then apply some basic commands to introduce the students to digital image processing theory; finally, we present some RGBD and point cloud notions and applications.
58. tamagawa.jp
Install
You should go to https://gitlab.com/trcp/introvision
and follow the instructions there. Basically, create a ROS workspace:
$ cd ~
$ mkdir -p erasers_ws/src
$ cd erasers_ws
$ catkin_make
and clone the repository:
$ cd ~/erasers_ws/src
$ git clone https://gitlab.com/trcp/introvision.git
$ cd ..
$ catkin_make
66. tamagawa.jp
Image Publishers and Subscribers in ROS
We present a series of steps so that learners can start programming in the ROS environment while they learn the ROS concepts. The templates provided here can serve as a basic platform for the more complex lessons or projects they develop after finishing all the lessons.
#include <ros/ros.h>
#include <image_transport/image_transport.h>

// Publisher: advertise an image topic with a queue size of 1.
ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::Publisher pub = it.advertise("camera/image", 1);

// Subscriber: the callback runs for every image received on the topic,
// so it must be declared before subscribing.
void callback_image(const sensor_msgs::ImageConstPtr& msg){
…
}

ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::Subscriber sub = it.subscribe("camera/image", 1, callback_image);
67. tamagawa.jp
RGB Image Processing with OpenCV and ROS
We understand an image as a 2D array, or matrix, where each element (also known as a pixel) has a color value. We use three color channels per element: Red, Green, and Blue. The origin of this image matrix is at the top-left corner; column values increase from left to right, while row values increase from top to bottom.
68. tamagawa.jp
RGB Image Processing with OpenCV and ROS
We introduce the students to the basic elements in an image and how to apply some built-in OpenCV functions. Finally, we show them how to perform their own operations by accessing the pixel elements of their image.
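The lessons' own code is C++/ROS, but the image-matrix and pixel-access ideas read naturally in a few lines of Python with OpenCV (a sketch; image.png is a placeholder file name):

import cv2

img = cv2.imread("image.png")  # H x W x 3 array; note that OpenCV orders channels B, G, R
rows, cols, channels = img.shape

b, g, r = img[0, 0]            # pixel at the top-left origin (row 0, column 0)
img[10, 20] = (255, 0, 0)      # set row 10, column 20 to pure blue (B, G, R)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # one of the built-in OpenCV operations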
69. tamagawa.jp
Point Cloud processing with ROS
We present an introduction to point cloud data in ROS and propose a simple task in which the students track a person moving in front of an RGBD camera mounted on a mobile robot. We start by introducing what a depth image is and how to interpret it.
70. tamagawa.jp
Point Cloud processing with ROS
Then, we introduce some concepts on point clouds of 3D points and how to use them to perform the target task: we divide the 3D space into a series of 2D planes, so the student can interpret and select the appropriate information for the task at hand.
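One way to read "dividing the 3D space into a series of 2D planes" in code (a sketch, assuming an N×3 numpy array of (x, y, z) points; the lesson's own ROS/PCL pipeline will differ):

import numpy as np

def slice_by_height(points, z_min, z_max):
    """Keep the 3D points whose height falls inside one horizontal band,
    and project that band onto the x-y plane as a 2D slice."""
    mask = (points[:, 2] >= z_min) & (points[:, 2] < z_max)
    return points[mask, :2]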
71. tamagawa.jp
Summary
In this work we have provided newcomers to computer vision and robotics with a short guide, including a number of examples and exercises that they can use to solve the proposed task and extend to their own applications. Moreover, by providing a series of rosbags, they do not need a real robot to start thinking about robot vision. We hope this work motivates them to continue in this field.
72. tamagawa.jp
Robot Localisation: An Introduction
Luis Angel Contreras-Toledo, PhD
Advanced Intelligence and Robotics Research Center
Tamagawa University
https://aibot.jp/
2020
74. RoboCup@Home Education
ONLINE CLASSROOM
Invited Lecture Series
World Representation Through Artificial Neural Networks
Speaker: Luis Contreras | Tamagawa University
Time: June 16, 2020 (Tue) 09:00~11:00 (GMT+8)
https://www.robocupathomeedu.org/learn/online-classroom/invited-lecture-series
Highlights
● Artificial Neural Networks and their application to Object Recognition
● Convolutional Neural Networks