This Bachelor’s Thesis is carried out as a part of the education at the
University of Agder and is therefore approved as a part of this
education. However, this does not imply that the University answers
for the methods that are used or the conclusions that are drawn.
University of Agder, 2016
Faculty of Engineering and Science
Department of Engineering Sciences
Implementing SLAM on a Differential Drive
AGV
Ørjan Langøy Olsen,
Øystein Øihusom
Supervisor
Morten Ottestad
Ørjan Langøy Olsen Øystein Øihusom
Abstract
Autonomous robots have potential in many different areas, such as care of the elderly, warehouse operation and cleaning
of buildings. The University of Agder wants to automate mapping of areas for the use of robots
using simultaneous localization and mapping (SLAM). SLAM is a technique for building a map of an unknown
environment while simultaneously determining the robot's own position. A differential drive robot was built for this project to
execute the SLAM technique. Its design was modelled in SolidWorks with rapid prototyping and modularity in
mind. The plastic parts were printed in ABS plastic using a CubePro 3D printer. A National Instruments myRIO
was used as the controller and the program code was written in LabVIEW G code. The robot uses sensor
fusion to determine its position by combining odometry from the motor encoders with landmark positions sensed
by a laser scanner. Multiple navigation behaviors were implemented for safe and reliable obstacle avoidance as
well as maneuvering to a goal by employing multiple waypoints. After the robot was built and programmed,
tests were performed in both software and hardware to increase accuracy and robustness during operation.
In the full system test, the robot performed as expected. It mapped the assigned area, but when
it returned to a previously visited location, the registered position was not the same. This is caused by the
loop-closure problem, which is not handled in this project.
List of Figures
1 Global coordinate system and reference frame for the robot. . . . . . . . . . . . . . . . . . . . . . . . 9
2 Estimating position from previous position, motion model, encoder position and observation data. . 10
3 Illustration of quadrature encoder channels. (Robot Builder’s Bonanza, 4th Edition - Application
Notes & Bonus Projects Copyright © 2011, Gordon McComb) . . . . . . . . . . . . . . . . . . . . . 12
4 Block diagram illustration of the complementary filter. . . . . . . . . . . . . . . . . . . . . . . . . . . 13
5 Block diagram illustration of the SLAM process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
6 Sensor frame {S} offset relative to robot frame {R}. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
7 A rotating mirror is used to take distance measurements in the plane. . . . . . . . . . . . . . . . . . 23
8 The effect of outliers on the least squares method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
9 Line between two randomly selected points. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
10 Amount of inliers are counted for the line. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
11 The correct line will eventually be found. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
12 A line is drawn between the two most distant points. . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
13 The most distant point from the line exceeds the given threshold, so a split is performed. . . . . . . 25
14 The contour after no more splits can be made. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
15 Two of the line segments are collinear enough to merge them. . . . . . . . . . . . . . . . . . . . . . . 26
16 A previously visited location is not registered at the same position, illustrating the loop closure problem. 26
17 The number on the nodes tells in which order they were visited. . . . . . . . . . . . . . . . . . . . . . 27
18 A graph being explored by Dijkstras. The second letter is the node it came from. . . . . . . . . . . . 28
19 First robot design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
20 Second robot design. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
21 Computer generated imagery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
22 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
23 Inputs and outputs of the motor control VI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
24 Code for controlling a motor. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
25 LabView code for LIDAR data acquisition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
26 Lines extracted from sample data with the RANSAC algorithm. . . . . . . . . . . . . . . . . . . . . 40
27 Lines extracted from sample data with the split and merge algorithm. . . . . . . . . . . . . . . . . . 41
28 How landmarks are selected. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
29 Block diagram illustration of the SLAM process. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
30 Graphical representation of the state vector used in EKF SLAM . . . . . . . . . . . . . . . . . . . . 44
31 Dimensions of the state vector and covariance matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . 44
32 Prediction step applied to state vector and covariance matrix. . . . . . . . . . . . . . . . . . . . . . . 46
33 EKF SLAM prediction step, MathScript node. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
34 Update step applied to state vector and covariance matrix. . . . . . . . . . . . . . . . . . . . . . . . 48
35 New landmark expanding the state vector and covariance matrix. . . . . . . . . . . . . . . . . . . . . 49
36 An occupancy grid map represented visually. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
37 PID tuning two motors on a differential drive. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
38 Comparison between RANSAC and Split and Merge. . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
39 Results from testing the robot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
40 Converting an array of cartesian coordinates to polar coordinates using the in-place element. . . . . 58
List of Tables
1 Clockwise rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Counterclockwise rotations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3 Motor specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4 Overview of input and output voltage of the voltage regulators. . . . . . . . . . . . . . . . . . . . . . 35
5 Results from the straight line test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6 Results from rotation test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
7 Results from the square drive test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
8 Results from the LIDAR test. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
9 RANSAC parameters used during testing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
10 Split and Merge parameters used during testing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Nomenclature
A* A popular pathfinding algorithm using graph traversal.
AGV Automated Guided Vehicle
Algorithm A step-by-step set of operations to be performed to arrive at a solution or complete a task.
Clustering The act of separating data into clusters.
Differential drive Drive system where two separately driven wheels placed on either side of the robot control the
movement.
Graph Mathematical structures used to model pairwise relations between objects.
Heuristic Method for solving problems utilizing a set of rules.
Inlier Points with a residual less than a given threshold.
Landmark (LMK) Features in the map. For example wall, corner or pillar.
LIDAR Portmanteau of light and radar; a radial distance sensor.
Linestrip A list of vertices where the first issued vertex begins the linestrip and every consecutive vertex
marks a joint in the line.
LiPo Lithium polymer battery.
Occupancy grid A binary map that tells whether a given space is occupied or not.
Odometry The use of motion sensors to estimate position.
Outliers Points with a residual greater than a given threshold.
PID A controller that tries to minimize the difference between a desired setpoint and a measured
process variable using proportional, integral and derivative terms.
Process variable The variable being regulated in a control system.
RANSAC Random sample consensus. An algorithm for approximating a line from a set of points.
Residuals The difference between an observed value and the fitted value provided by a model.
Sensor fusion Combination of sensory data from separate sources such that the resulting information has less
uncertainty than the individual sources.
Setpoint The desired value for a process variable.
SLAM Simultaneous Localization and Mapping
SubVI Function block in LabVIEW. It can take multiple inputs and return multiple outputs.
Vertex A point in a geometric figure.
Contents
1 Introduction 8
2 Theory 9
2.1 Locomotion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2 Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.1 Kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Probabilistic kinematics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2.3 Motion models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3 Complementary filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.4 SLAM with complementary filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2.5 Extended Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.6 SLAM with extended Kalman filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.7 Laser scanner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.8 Line extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8.1 RANSAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.8.2 Split and merge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.9 Loop closure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.10 A* search algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.11 Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.11.1 Unicycle model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
2.11.2 Behaviours . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3 Design 31
3.1 Drive system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.2 Chassis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.3 Remote control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4 Components 34
4.1 DC drive motor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2 DC motor controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3 NI myRIO . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.4 LIDAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.5 Infrared sensor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.6 LiPo battery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.7 Voltage regulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
5 Program 36
5.1 Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.2 Motor control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5.3 Sensor data acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.4 Line extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
5.5 Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.6 Landmark extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5.7 Data association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.8 EKF SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
5.9 Occupancy grid map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
6 Testing 51
6.1 White-box testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 Motor accuracy and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.3 Straight line test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.4 Rotation test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.5 Square drive test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.5.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.6 LIDAR test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.6.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.7 Line extraction algorithm test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.7.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.7.2 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6.8 System test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
6.8.1 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
7 Wrap-up 57
7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
7.2 Further work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
A Diagrams 60
B 3D models 61
C Datasheets 63
1 Introduction
The University of Agder wants to automate mapping of areas for the use of robots, and has decided to start this
project, namely UiA SLAM. SLAM is an acronym for simultaneous localization and mapping and is a technique
for building a map of an unknown environment while simultaneously determining the robot's own position. This could be used
for self-driving cars, autonomous cleaning robots and electronic service dogs, among others.
Imagine a person waking up in an unknown location. They will start looking around for familiar signs. Once the
person observes a landmark, they can figure out where they are in relation to it. The more the person observes of
the environment, the better the person’s mental image of the place will be. The same applies for a SLAM robot.
It tries to map an unknown environment while figuring out its position. When it starts, it is just like waking up
at an unknown location without any prior knowledge about the place. The complexity comes from doing both
localization and mapping at once.
It is essentially a chicken-and-egg problem. The robot needs to know its position before answering the question of
what the environment looks like. The robot also has to figure out its position without the benefit of already having
a map.
Two required components for SLAM are a motion sensor and a range sensor. The robot needs to have good
odometry performance. Odometry is the use of motion sensors to estimate the change in position over time. There
is normally a small margin of error in odometry readings. The odometry readings are not absolute, and the error
makes the estimated position drift over time. A range sensor also needs to be present to sense the surroundings.
The most common range sensor in a SLAM context is a laser scanner such as a LIDAR. A robot determines its
position through a sensor fusion technique such as the complementary filter or the Kalman filter. It uses the odometry
in combination with landmarks extracted from the range measurement device. Landmarks are distinct and static
features in the environment. A robot also needs to be able to identify and associate an observed landmark with a
landmark observed earlier.
SLAM is the mapping of an environment using the continual interplay between the mapping device, the robot
and its position. As the robot interacts with the environment, it maps the area and determines its own position
simultaneously. SLAM is a tool for exploring the environment around the robot and is constantly undergoing
improvement. [1]
2 Theory
Simultaneous Localization and Mapping is a field in robotics concerned with the navigation of a robot in an unknown
environment. The problem is divided into the following four parts: landmark extraction, data association, state
estimation, and state update.
2.1 Locomotion
Ever since its invention, the actively powered wheel has been the go-to locomotion mechanism for unbounded
movement on flat surfaces, and it is also how the SLAM platform in this paper is propelled.
Our specific implementation is a 3-wheeled differential drive system as depicted in figure 1, where two independent,
actively powered wheels are dedicated to locomotion while a third passive castor wheel tags along. This gives
three points of contact with the ground, resulting in great stability.
The drive wheels are of the standard type, offering two degrees of freedom: rotation about the wheel's axle
and rotation about the wheel's contact point with the underlying surface. The castor wheel also offers two degrees
of freedom: rotation about its free-running axle and rotation around an offset steering joint.
2.2 Kinematics
2.2.1 Kinematics
Figure 1: Global coordinate system and reference frame for the robot.
Kinematics describe the effect of control actions on the configuration of a robot. The configuration of a rigid mobile
robot is commonly described by six variables, its three-dimensional Cartesian coordinates and its three Euler angles
(roll, pitch, yaw) relative to an external coordinate frame. [2]
In this paper we are merely interested in kinematics for a differential drive robot operating in a planar environment,
hence its kinematic state can be described by three variables, referred to as the robot's pose.
The pose of the robot thus consists of two-dimensional coordinates (x, y) and its angular rotation (θ), all with respect
to an external coordinate frame. The following vector is often used to describe a pose.

$\begin{bmatrix} x \\ y \\ \theta \end{bmatrix}$ (2.1)
The orientation (θ) is often referred to as bearing or heading, while the (x, y) coordinates are referred to as the
robot's location.
2.2.2 Probabilistic kinematics
$p(x_t \mid u_t, x_{t-1})$ (2.2)
Where $x_t$ and $x_{t-1}$ are robot poses: $x_t$ is the new pose, while $x_{t-1}$ is the pose at the time the motion
command $u_t$ is given. This is called the probabilistic kinematic model, and it describes the posterior distribution
over the kinematic states that the robot assumes.
Figure 2: Estimating position from previous position, motion model, encoder position and observation data.
2.2.3 Motion models
The equations that mathematically describe the motion of the robot are usually referred to as the motion model.
The literature further divides this into the odometry motion model and the velocity motion model, which are the
predominant motion models for wheel-based mobile robots.
Odometry
To predict position and calculate wheel velocities from odometry we will make use of the rotary encoders integrated
in the electric brushed motors.
The encoders used are incremental quadrature encoders measuring the angular displacement of the motor shaft.
Incremental means that the information we obtain is relative, describing the movement of the shaft. This is as opposed to the
absolute encoders popular in robotic arms, which offer the precise position of the shaft by use of, for instance, Gray code,
but come at an increased cost. Quadrature means that the encoder has four separate phases, where
the change from one phase to another results in a tick.
Phase Channel A Channel B
1 0 0
2 0 1
3 1 1
4 1 0
Table 1: Clockwise rotations
Phase Channel A Channel B
1 1 0
2 1 1
3 0 1
4 0 0
Table 2: Counterclockwise rotations
The incremental quadrature encoder in our motors produces pulses on two separate channels that are 90° out of
phase. By detecting the order of change we can tell whether the motor shaft is rotating clockwise or counterclockwise.
Figure 3: Illustration of quadrature encoder channels. (Robot Builder’s Bonanza, 4th Edition - Application Notes
& Bonus Projects Copyright © 2011, Gordon McComb)
Furthermore, we can integrate these encoder ticks to keep track of the total angular displacement of the shaft. We may
also count the number of ticks over a known period of time to obtain angular velocity, as well as its rate of change
to get acceleration. This information can be used in conjunction with the physical properties of the robot to calculate
its position, orientation and velocity.
Position update
We will implement the odometry motion model to predict the robot's pose, and hence it is the only one presented.
The equations are as follows.
$x' = \begin{bmatrix} x' \\ y' \\ \theta' \end{bmatrix} = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \frac{\Delta s_r + \Delta s_l}{2} \cos\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r + \Delta s_l}{2} \sin\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r - \Delta s_l}{b} \end{bmatrix}$ (2.3)
$x'$ is the predicted pose, given by the vector $\begin{bmatrix} x' & y' & \theta' \end{bmatrix}^T$.

$x$ is the current pose, given by the vector $\begin{bmatrix} x & y & \theta \end{bmatrix}^T$.
$\Delta s_r$ is the distance traveled by the right wheel.
$\Delta s_l$ is the distance traveled by the left wheel.
$b$ is the robot's wheelbase.
When the robot is initialized, x, y and θ are set to 0. Hence $x = \begin{bmatrix} 0 & 0 & 0 \end{bmatrix}^T$.
b is the wheelbase, a physical characteristic of the robot; we find it by measuring the distance between the
drive wheels' contact points with the underlying surface.
∆sr and ∆sl can be found with the following equations.
∆sr = 2πR
∆tickr
N
(2.4)
∆sl = 2πR
∆tickl
N
(2.5)
R is the radius of the wheel.
N is the resolution of the encoder; in our case N = 2797, since the encoder mounted on the motor gives 2797
ticks per revolution of the gearbox output shaft.
$\Delta \mathrm{tick}_r$ is the number of ticks from the right motor encoder since our last measurement.
$\Delta \mathrm{tick}_l$ is the number of ticks from the left motor encoder since our last measurement.
Now that we have a motion model and have taken care of all the unknowns with respect to position prediction, we are
able to predict the robot's new position given the control input from each motor's encoder.
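As a concrete illustration, the position update can be written in a few lines. The following Python sketch is not the LabVIEW implementation used in the project; the wheel radius and wheelbase are placeholder values, while N = 2797 matches the encoder resolution given above.

import math

def update_pose(x, y, theta, dtick_r, dtick_l, R=0.05, b=0.30, N=2797):
    # Wheel travel since the last measurement (equations 2.4 and 2.5).
    ds_r = 2 * math.pi * R * dtick_r / N
    ds_l = 2 * math.pi * R * dtick_l / N
    ds = (ds_r + ds_l) / 2        # distance travelled by the robot centre
    dtheta = (ds_r - ds_l) / b    # change in heading
    # Odometry motion model (equation 2.3).
    x += ds * math.cos(theta + dtheta / 2)
    y += ds * math.sin(theta + dtheta / 2)
    theta += dtheta
    return x, y, theta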
2.3 Complementary filter
The complementary filter is in fact two filters, where one complements the other; that is, their gains add up to one,
creating an all-pass filter. It is often illustrated as in the figure below.
Figure 4: Block diagram illustration of the complementary filter.
$\hat{x} = a \cdot S_1 + (1 - a) \cdot S_2, \quad 0 < a < 1$ (2.6)

$S_1$ and $S_2$ are noisy measurements of the variable x. Most often the complementary filter is configured such
that we get a low-pass filter on one measurement and a high-pass filter on the second measurement, resulting in an
all-pass filter giving the estimate $\hat{x}$. When the low-pass and high-pass filters are mathematical complements of each
other, we get a complete reconstruction of the variable being sensed, minus the noise related to each measurement.
[3]
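As a minimal sketch (the filter constant a and the sample values are arbitrary, chosen only to illustrate equation 2.6):

def complementary_filter(s1, s2, a=0.8):
    # Weighted sum of two noisy measurements of the same variable;
    # the weights a and (1 - a) sum to one.
    return a * s1 + (1 - a) * s2

# Example: with a = 0.8 most of the confidence is placed in s1.
# complementary_filter(10.0, 12.0) returns 10.4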
2.4 SLAM with complementary filter
Because there are multiple ways to implement SLAM, this is but one simple example employing wheel encoders
and a LIDAR. The basic steps that have to be performed are the prediction step and the update step. During the
prediction step the AGV calculates its position and orientation solely from encoder data. When landmarks have
been detected, the update step is performed to further refine the accuracy of the robot position as well as update
the information regarding landmarks. Our example algorithm is described in pseudo code in figure 5.
Figure 5: Block diagram illustration of the SLAM process.
Prediction step:
• Measure wheel displacement given by encoder data; this will be our control input (u).
• Calculate pose estimate by use of control input and the motion model for our system. In this case we use the
encoder based motion model for a differential drive robot.
Update step:
• If there are no landmarks detected this iteration, meaning the measurement vector (z) is empty, then this
step is omitted.
• If however data association gives us one or more landmarks this iteration, we perform the update step.
• For each NEW landmark, we add it to the state vector.
• For each ASSOCIATED landmark, we perform the following steps:
1. Calculate the predicted position of the detected landmark, given the LMK's position in the map (state
vector) from the previous iteration and the robot's predicted pose. The result (h) should be range and
bearing to the landmark within the robot's reference frame.
2. Calculate the innovation, which is the difference between the measured and the predicted position of the
landmark (z − h).
3. Apply some initial gain to the innovation, as we do not want to correct for the entirety of the error each
iteration due to noise in the measurement.
4. Utilize a complementary filter to divide the error: most of the resulting innovation from the previous
step will be used to correct the robot position, while a small percentage will be used to correct the landmark's
position within the map. For example 98% robot pose and 2% LMK position.
5. Apply a small correction to the current LMK in the map (state vector).
6. Sum the error in robot position with respect to the detected LMK with that of the previous LMKs this iteration,
in order to take an average after iterating through all detected LMKs before correcting the robot's pose.
• Calculate the average of the accumulated errors from previous “for-loop”.
• Utilize a complementary filter to correct the error in pose, putting a large amount of confidence in the average
of the accumulated error from the update step and a smaller amount of confidence in the predicted state
from the previous prediction step. In our case this complementary filter governs the influence of the encoder
measurement versus the LIDAR measurement with respect to the robot pose, for example 20% encoder
measurement and 80% LIDAR measurement.
The algorithm loops continuously.
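A sketch of the update step described above is given below in Python pseudocode. This is not the LabVIEW implementation; the gain values correspond to the example percentages mentioned above (2% landmark correction, 80% LIDAR confidence), and the innovation is handled as a Cartesian position error for simplicity, ignoring heading correction.

import math

def update_step(pose, landmarks, observations, lmk_share=0.02, lidar_weight=0.8):
    # pose: [x, y, theta] predicted from encoder data.
    # landmarks: dict id -> [x, y], the map part of the state vector.
    # observations: dict id -> (range, bearing) from data association (robot frame).
    x, y, theta = pose
    ex_sum = ey_sum = 0.0
    for lmk_id, (rng, brg) in observations.items():
        mx, my = landmarks[lmk_id]
        # Where the measurement places the landmark in the world frame.
        ox = x + rng * math.cos(theta + brg)
        oy = y + rng * math.sin(theta + brg)
        # Innovation: observed minus mapped landmark position.
        ex, ey = ox - mx, oy - my
        # A small share of the innovation corrects the landmark in the map ...
        landmarks[lmk_id] = [mx + lmk_share * ex, my + lmk_share * ey]
        # ... the rest is accumulated as robot position error.
        ex_sum += (1 - lmk_share) * ex
        ey_sum += (1 - lmk_share) * ey
    if observations:
        n = len(observations)
        # Complementary filter: 80 % confidence in the LIDAR-derived correction,
        # 20 % in the encoder-based prediction.
        x -= lidar_weight * ex_sum / n
        y -= lidar_weight * ey_sum / n
    return [x, y, theta], landmarks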
2.5 Extended Kalman filter
The extended Kalman filter is a specific example of a more general technique known as probabilistic estimation;
see Thrun et al., Probabilistic Robotics, for more on the subject. Simplifying, we can say
that the Kalman filter provides a recursive method of estimating the state of a dynamical system in the presence
of noise. [4]
In the extended Kalman filter, the state transition and observation models don’t need to be linear
functions of the state but may instead be differentiable functions.
$x_k = f(x_{k-1}, u_k) + w_k$ (2.7)

$z_k = h(x_k) + v_k$ (2.8)
Where $w_k$ and $v_k$ are the process and observation noises, both assumed to be zero-mean
multivariate Gaussian with covariances $Q_k$ and $R_k$ respectively. $u_k$ is the control vector.
The function f can be used to compute the predicted state from the previous estimate, and similarly the
function h can be used to compute the predicted measurement from the predicted state. However, f and
h cannot be applied to the covariance directly. Instead, a matrix of partial derivatives (the Jacobian) is
computed.
At each time step, the Jacobian is evaluated with the current predicted states. These matrices can be used
in the Kalman filter equations. This process essentially linearizes the non-linear function around the
current estimate. [5]
2.6 SLAM with extended Kalman filter
The Kalman filter simultaneously maintains an estimate of the state vector (ˆx) and the estimate covariance matrix
(P). This results in the output of the Kalman filter being a Gaussian probability density function with mean (ˆx)
and covariance P. For our purpose of localizing an AGV, the output will be a distribution of likely robot positions.
The equations of the basic EKF are as follows.

$\bar{x}_{k|k-1} = f_{k-1}(x_{k-1|k-1}, u_k)$ (2.9)

$\bar{P}_{k|k-1} = F_{k-1} P_{k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T$ (2.10)

$S_k = R_k + H_k \bar{P}_{k|k-1} H_k^T$ (2.11)

$K_k = \bar{P}_{k|k-1} H_k^T S_k^{-1}$ (2.12)

$\bar{x}_{k|k} = \hat{x}_{k|k-1} + K_k \left( z_k - h_k(\bar{x}_{k|k-1}) \right)$ (2.13)

$\bar{P}_{k|k} = \bar{P}_{k|k-1} - K_k S_k K_k^T$ (2.14)
The state vector ($\hat{x}$) is defined to be the location and orientation of the AGV along with the location of each
landmark.

$\bar{x} = \begin{bmatrix} x_r & y_r & \theta_r & x_{l_1} & y_{l_1} & x_{l_2} & y_{l_2} & \cdots & x_{l_{n_l}} & y_{l_{n_l}} \end{bmatrix}^T$ (2.15)

Where $(x_r, y_r, \theta_r)$ denotes the AGV's position and orientation in the world frame {W}, and $x_{l_1} y_{l_1} x_{l_2} y_{l_2} \cdots x_{l_{n_l}} y_{l_{n_l}}$
denotes the positions of the individual landmarks, where $n_l$ is the total number of landmarks.
$x_{k-1}$ is the previous state, $\bar{x}_k$ is the new state.
$P_{k-1}$ is the previous covariance matrix, $\bar{P}_k$ is the new covariance matrix.
$f_{k-1}$ is the motion model, used to predict the robot pose.
$h_k$ is the measurement model, used to predict landmark positions.
$z_k$ is the measurement from the range and bearing sensor (LIDAR), used in combination with h to calculate the
innovation.
$Q_{k-1}$ is the control input error covariance matrix, i.e. the noise in the encoder data.
$R_k$ is the measurement error covariance matrix, i.e. the noise in the LIDAR data.
$G_{k-1}$ is the Jacobian of the motion model f with respect to the control input.
$F_{k-1}$ is the Jacobian of f with respect to the pose x, y and θ; it is a sparse matrix employed to update only
the upper 3x3 block of $P_k$, while the rest of $F_{k-1}$ is the identity matrix.
$H_k$ is the Jacobian of the measurement model h.
$S_k$ is the innovation covariance matrix.
$K_k$ is the computed Kalman gain.
The input error covariance matrix $Q_{k-1}$ is a 2x2 matrix.

$Q_{k-1} = \mathrm{covar}(\Delta s_r, \Delta s_l) = \begin{bmatrix} k_r |\Delta s_r| & 0 \\ 0 & k_l |\Delta s_l| \end{bmatrix}$ (2.16)

Where $k_r$ and $k_l$ are constants, and $\Delta s_r$ and $\Delta s_l$ are the motion increments. We observe that the error is proportional
to the absolute value of the input and that the errors for the two wheels are independent. The constants $k_r$ and $k_l$
are parameters that decide how much of the motion increment should be added as error. These can be determined
experimentally through testing of the wheel encoders' accuracy.
The covariance matrix $P_k$ is given by the previous covariance matrix $P_{k-1}$, initially given the value 0.

$P_{\mathrm{initial}} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ (2.17)

$\bar{P}_k = F_{k-1} P_{k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T = \nabla_p f \cdot P_{k-1} \cdot \nabla_p f^T + \nabla_{\Delta_{rl}} f \cdot Q_{k-1} \cdot \nabla_{\Delta_{rl}} f^T$ (2.18)
To update the covariance matrix at the end of the prediction step in SLAM, we need the two Jacobian matrices
$F_{k-1}$ and $G_{k-1}$, where $F_{k-1}$ is the partial derivative of f with respect to the robot pose x, y and θ, and $G_{k-1}$ is the
partial derivative of f with respect to the control input $\Delta s_r$ and $\Delta s_l$.
$J = \frac{df}{dx} = \begin{bmatrix} \frac{\partial f}{\partial x_1} & \cdots & \frac{\partial f}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n} \end{bmatrix}$ (2.19)
The Jacobian matrix is the matrix of all first order partial derivatives of a vector valued function. [6]
The motion model f for a differential drive has already been described and is given by:

$f(x, y, \theta, \Delta s_r, \Delta s_l) = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \frac{\Delta s_r + \Delta s_l}{2} \cos\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r + \Delta s_l}{2} \sin\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r - \Delta s_l}{b} \end{bmatrix}$ (2.20)
The Jacobian matrix F of the motion model with respect to robot pose:

$F_{k-1} = \nabla_p f = \begin{bmatrix} 1 & 0 & -\Delta s \sin(\theta + \Delta\theta/2) \\ 0 & 1 & \Delta s \cos(\theta + \Delta\theta/2) \\ 0 & 0 & 1 \end{bmatrix}$ (2.21)
The Jacobian matrix G of the motion model with respect to control input:

$G_{k-1} = \nabla_{\Delta_{rl}} f = \begin{bmatrix} \frac{1}{2}\cos(\theta + \frac{\Delta\theta}{2}) - \frac{\Delta s}{2b}\sin(\theta + \frac{\Delta\theta}{2}) & \frac{1}{2}\cos(\theta + \frac{\Delta\theta}{2}) + \frac{\Delta s}{2b}\sin(\theta + \frac{\Delta\theta}{2}) \\ \frac{1}{2}\sin(\theta + \frac{\Delta\theta}{2}) + \frac{\Delta s}{2b}\cos(\theta + \frac{\Delta\theta}{2}) & \frac{1}{2}\sin(\theta + \frac{\Delta\theta}{2}) - \frac{\Delta s}{2b}\cos(\theta + \frac{\Delta\theta}{2}) \\ \frac{1}{b} & -\frac{1}{b} \end{bmatrix}$ (2.22)

$\Delta s = \frac{\Delta s_r + \Delta s_l}{2}$ (2.23)

$\Delta\theta = \frac{\Delta s_r - \Delta s_l}{b}$ (2.24)
We see that the Jacobians F and G contain the variables θ, Δs and Δθ. We will therefore have to compute these Jacobians
whenever the robot pose or the control input changes. As before, the constant b is the robot's wheelbase.
This concludes the prediction step of EKF SLAM: we now have a predicted position given a control input, and a
predicted covariance matrix to reflect the uncertainty of the prediction.
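The prediction step can be sketched as follows in Python with NumPy (the project implements this in a LabVIEW MathScript node; this sketch only mirrors the structure of equations 2.16 and 2.20 to 2.22, with only the pose-related blocks of the covariance touched):

import numpy as np

def ekf_predict(x, P, ds_r, ds_l, b, kr, kl):
    # x: state vector [xr, yr, theta, xl1, yl1, ...]; P: covariance matrix.
    ds = (ds_r + ds_l) / 2
    dth = (ds_r - ds_l) / b
    a = x[2] + dth / 2
    # Motion model f applied to the robot part of the state (eq. 2.20).
    x = x.copy()
    x[0] += ds * np.cos(a)
    x[1] += ds * np.sin(a)
    x[2] += dth
    # Jacobians with respect to pose (eq. 2.21) and control input (eq. 2.22).
    Fp = np.array([[1, 0, -ds * np.sin(a)],
                   [0, 1,  ds * np.cos(a)],
                   [0, 0,  1]])
    G = np.array([[np.cos(a) / 2 - ds * np.sin(a) / (2 * b),
                   np.cos(a) / 2 + ds * np.sin(a) / (2 * b)],
                  [np.sin(a) / 2 + ds * np.cos(a) / (2 * b),
                   np.sin(a) / 2 - ds * np.cos(a) / (2 * b)],
                  [1 / b, -1 / b]])
    # Encoder noise grows with the distance travelled by each wheel (eq. 2.16).
    Q = np.diag([kr * abs(ds_r), kl * abs(ds_l)])
    # The full F is the identity except for the upper-left 3x3 pose block.
    F = np.eye(len(x))
    F[:3, :3] = Fp
    P = F @ P @ F.T
    P[:3, :3] += G @ Q @ G.T
    return x, P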
For the update step we need the measurement model h. This defines how we calculate the range and bearing to
an expected landmark given by our map in the state vector $x_k$. In other words, the filter makes a guess as to what
the measurement from our sensor $z_k$ should be, based on previously observed landmarks stored in the map and the
robot's predicted pose.
We obtain $z_k$ from data association on the form $[\rho_{\mathrm{range}}, \alpha_{\mathrm{bearing}}]^T$. The offset of the sensor's coordinate frame
relative to the robot's coordinate frame has already been accounted for in the data association algorithm, thus we
get the measurement $z_k$ in the robot's coordinate frame {R}.
Figure 6: Sensor frame {S} offset relative to robot frame {R}.
The actual measurement z and the predicted measurement h have to be in the same coordinate frame. We therefore
construct h to transform the current landmark's position, given in x, y coordinates in the global frame {W}, to a
range and bearing to the landmark given in the robot's coordinate frame {R}.

$h_k = \begin{bmatrix} \sqrt{q} \\ \mathrm{atan2}(m_{j,y} - x_{k,y},\; m_{j,x} - x_{k,x}) - x_{k,\theta} \end{bmatrix}$ (2.25)
The Jacobian of the measurement model H:

$H_k = \begin{bmatrix} -\frac{m_{j,x} - x_{k,x}}{\sqrt{q}} & -\frac{m_{j,y} - x_{k,y}}{\sqrt{q}} & 0 \\ \frac{m_{j,y} - x_{k,y}}{q} & -\frac{m_{j,x} - x_{k,x}}{q} & -1 \end{bmatrix}$ (2.26)
$H_k$ also needs some information with respect to the landmark we are currently indexing. As an example, let's say
we have three landmarks in our state vector and we are currently working on landmark number j = 2. That would
give us this $H_k$:

$H_k^j = \begin{bmatrix} -\frac{m_{j,x} - x_{k,x}}{\sqrt{q}} & -\frac{m_{j,y} - x_{k,y}}{\sqrt{q}} & 0 & 0 & 0 & \frac{m_{j,x} - x_{k,x}}{\sqrt{q}} & \frac{m_{j,y} - x_{k,y}}{\sqrt{q}} & 0 & 0 \\ \frac{m_{j,y} - x_{k,y}}{q} & -\frac{m_{j,x} - x_{k,x}}{q} & -1 & 0 & 0 & -\frac{m_{j,y} - x_{k,y}}{q} & \frac{m_{j,x} - x_{k,x}}{q} & 0 & 0 \end{bmatrix}$ (2.27)
We only fill out the part of the matrix that concerns the robot (the left 2x3 block) and the part concerning the current
landmark. In this case we are interested in landmark number two, hence the cells for landmarks number one and
three are set to zero. Also note that the cells filled in for the current landmark are simply the leftmost 2x2 block
negated. We do not use the entire 2x3 block because the landmark has no orientation, only a position.

$q = (m_{j,x} - x_{k,x})^2 + (m_{j,y} - x_{k,y})^2$ (2.28)
Where m is the map (the landmark coordinates in the state vector) and j is the index of the re-observed landmark.
Then we simply compute the innovation covariance matrix S.

$S_k = H_k P_k H_k^T + R_k$ (2.29)

$R_k = \begin{bmatrix} k_{\mathrm{range}} \cdot z_r & 0 \\ 0 & k_\alpha \end{bmatrix}$ (2.30)

$R_k$ is the error model for our observation sensor. $k_{\mathrm{range}}$ and $k_\alpha$ are constants describing the error in the measurement
from our range and bearing sensor. We observe that the error in range is proportional to the range measured, while
the error in bearing is constant. These error constants can be found experimentally or be found in the data sheet
of the sensor.
The near-optimal Kalman gain K:

$K_k = \bar{P}_k H_k^T S_k^{-1}$ (2.31)
We update the state vector with the computed Kalman gain multiplied by the innovation of the current landmark.

$\bar{x}_k = \bar{x}_k + K_k (z_k - h_k)$ (2.32)

Update the covariance matrix.

$\bar{P}_k = (I - K_k H_k) \bar{P}_k$ (2.33)
This entire update process is performed for each re-observed landmark.
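The per-landmark update can be sketched in Python/NumPy as below (again a sketch rather than the project's MathScript code; the landmark index j is assumed to come from data association, and the bearing innovation is wrapped to ±π):

import numpy as np

def ekf_update_landmark(x, P, j, z, R):
    # x, P: full state and covariance; j: index of the re-observed landmark;
    # z = [range, bearing] from data association; R: 2x2 measurement noise (eq. 2.30).
    n = len(x)
    li = 3 + 2 * j                       # landmark j's position in the state vector
    dx, dy = x[li] - x[0], x[li + 1] - x[1]
    q = dx**2 + dy**2
    sq = np.sqrt(q)
    # Predicted measurement h (eq. 2.25).
    h = np.array([sq, np.arctan2(dy, dx) - x[2]])
    # Sparse Jacobian H: robot block plus the block for landmark j (eq. 2.27).
    H = np.zeros((2, n))
    H[:, :3] = np.array([[-dx / sq, -dy / sq,  0],
                         [ dy / q,  -dx / q,  -1]])
    H[:, li:li + 2] = -H[:, :2]
    # Innovation covariance, Kalman gain and update (eq. 2.29, 2.31-2.33).
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    innovation = z - h
    innovation[1] = (innovation[1] + np.pi) % (2 * np.pi) - np.pi
    x = x + K @ innovation
    P = (np.eye(n) - K @ H) @ P
    return x, P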
Now that the re-observed landmarks have been taken care of, we want to initialize any new landmarks.
We expand the state vector with the landmark's estimated global position in {W}, using the inverse observation
model.

$\begin{bmatrix} m_{n+1,x} \\ m_{n+1,y} \end{bmatrix} = g(z_k, x_k) = \begin{bmatrix} z_{k,\mathrm{range}} \cos(z_{k,\alpha} + x_{k,\theta}) + x_{k,x} \\ z_{k,\mathrm{range}} \sin(z_{k,\alpha} + x_{k,\theta}) + x_{k,y} \end{bmatrix}$ (2.34)
Where n is the number of landmarks in the state vector $x_k$.
Then we expand the covariance matrix P and add some initial values. We have several options on how to go about
this. The simpler option is to initialize the 2x2 covariance matrix for the new landmark with some high values,
and then let the update step compute the correct values during the next iteration of the filter when the landmark
is re-observed.
Another path is to compute values for all the new cells in the covariance matrix P with respect to the new landmark.
This requires the two Jacobians $J_{xr}$ and $J_z$.

$J_{xr} = \frac{\partial g(z_k, x_k)}{\partial x_{k,(x,y,\theta)}}$ (2.35)

$J_z = \frac{\partial g(z_k, x_k)}{\partial z_{k,(\rho,\alpha)}}$ (2.36)
The 2x2 covariance matrix for the landmark itself:

$\bar{P}^{(n+1)(n+1)} = J_{xr} \bar{P}^{rr}_k J_{xr}^T + J_z R_k J_z^T$ (2.37)

Robot-landmark covariance:

$\bar{P}^{r(n+1)} = \bar{P}^{rr} J_{xr}^T$ (2.38)

Landmark-robot covariance, other side of the diagonal:

$\bar{P}^{(n+1)r} = (\bar{P}^{r(n+1)})^T$ (2.39)

Landmark-landmark covariance:

$\bar{P}^{(n+1)i} = J_{xr} (\bar{P}^{ri})^T$ (2.40)

Landmark-landmark covariance, other side of the diagonal:

$\bar{P}^{i(n+1)} = (\bar{P}^{(n+1)i})^T$ (2.41)

$\bar{P}^{rr}_k$ is the 3x3 covariance matrix over the robot's state.
$\bar{P}^{r(n+1)}_k$ is the 3x2 robot-landmark covariance.
$\bar{P}^{(n+1)i}_k$ is the 2x2 landmark-landmark covariance.
i is the i-th landmark, meaning the covariance is computed for each landmark.
n is again the number of landmarks.
This concludes the EKF SLAM. The algorithm is now ready to start at the beginning with new control input.
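A sketch of the simpler initialization option described above (the new covariance block is seeded with a large value and refined by the filter the next time the landmark is observed; the initial variance is an arbitrary placeholder):

import numpy as np

def add_landmark(x, P, z, initial_var=1e6):
    # z = [range, bearing] for a landmark that was not matched by data association.
    gx = x[0] + z[0] * np.cos(z[1] + x[2])   # inverse observation model (eq. 2.34)
    gy = x[1] + z[0] * np.sin(z[1] + x[2])
    x = np.append(x, [gx, gy])
    # Grow P by two rows and columns; the new diagonal block gets a large
    # uncertainty and the cross terms start at zero.
    n = P.shape[0]
    P_new = np.zeros((n + 2, n + 2))
    P_new[:n, :n] = P
    P_new[n:, n:] = np.eye(2) * initial_var
    return x, P_new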
2.7 Laser scanner
The laser scanner used for this project is the Hokuyo URG-04LX. It emits a laser beam with a wavelength of 785 nm,
which is on the boundary between visible and infrared light. [7] The light wave moves in a straight line and
reflects off an obstacle before returning to the laser scanner. The principle behind the distance measurement is
based on calculating the phase difference between the sent and received signal. This makes it possible to obtain
stable measurements with minimal influence from the objects' color and reflectance. [8] The emitted wave has the
following equation:
$A \sin(\omega t)$ (2.42)

Where A is the amplitude, ω is the angular frequency and t is time. The equation for the received wave is as follows:
$B \sin(\omega (t - d\varphi))$ (2.43)

Where B is the new amplitude and dϕ is the round-trip travel time, so that ω·dϕ is the phase difference between
the two signals. The signals are blended in a signal multiplier and the wave equation is now:

$\frac{A \cdot B}{2} \left( \cos(\omega\, d\varphi) - \cos(2\omega t - \omega\, d\varphi) \right)$ (2.44)
A low-pass filter then removes the high-frequency part $\cos(2\omega t - \omega\, d\varphi)$ and keeps the constant part
$\cos(\omega\, d\varphi)$, revealing the phase difference $\omega\, d\varphi$. The distance covered is calculated by:

$D = \frac{c\, d\varphi}{2} = \frac{c\, (\omega\, d\varphi)}{2\omega} = \frac{\lambda\, (\omega\, d\varphi)}{4\pi}$ (2.45)
However, there is still one major problem. With the current setup, the theoretical range of the laser scanner is limited
to half the wavelength of the emitted laser, because it can only use phase differences less than 360°. The
signal needs to be modulated with a carrier wave of lower frequency to increase the wavelength. This is done by
turning the laser on and off with the desired frequency.
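To make the relationship concrete, the short sketch below computes the distance from a measured phase difference; the modulation frequency is purely hypothetical and not taken from the URG-04LX datasheet.

import math

c = 3.0e8                    # speed of light [m/s]
f_mod = 30e6                 # hypothetical modulation frequency [Hz]
wavelength = c / f_mod       # 10 m, giving an unambiguous range of 5 m
phase_diff = math.pi / 2     # measured phase difference omega*dphi [rad]

distance = wavelength * phase_diff / (4 * math.pi)   # equation 2.45
max_range = wavelength / 2                            # phase difference limited to 2*pi

print(distance, max_range)   # 1.25 and 5.0 metres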
The laser scanning method explained above only describes how to perform point distance measurements. To take
measurements in the plane, the sensor uses a mirror that rotates by 0.36° between each measurement.
Figure 7: A rotating mirror is used to take distance measurements in the plane.
2.8 Line extraction
Least squares fitting is a method for linear fitting that minimizes the sum of squared residuals. The problem with this
method is that the model is heavily penalized by outliers because the residuals are squared; hence it cannot be used
directly for line extraction. There are several methods for line extraction that do not have this problem. Split and merge
and RANSAC were explored in this project.
Figure 8: The effect of outliers on the least squares method.
2.8.1 RANSAC
RANSAC is an iterative algorithm for line fitting in the presence of outliers. It has the following parameters:
d Inlier threshold
T Minimum inliers
N Points left in the dataset for early exit
L Number of lines extracted for early exit
M Number of trials
The algorithm reads as follows:
1. Select two random points from the dataset and find the line equation between them.
Figure 9: Line between two randomly selected points.
2. Calculate the distance between each point and the line. If a point is within the distance d, consider it an
inlier.
Figure 10: Amount of inliers are counted for the line.
3. If number of inliers is greater than or equal to T, perform a least square fit on the inliers and accept the line.
Remove the inliers from the dataset.
Figure 11: The correct line will eventually be found.
4. Repeat steps 1 to 3 M times, or until only N points are left in the dataset or L lines have been accepted.
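A compact Python sketch of the algorithm above follows (default parameter values are arbitrary; vertical candidate lines are skipped to keep the slope-intercept representation simple):

import math
import random

def ransac_lines(points, d=0.1, T=10, N=5, L=4, M=1000):
    # points: list of (x, y) tuples. Returns accepted lines as (slope, intercept).
    points = list(points)
    lines = []
    for _ in range(M):
        if len(points) <= N or len(lines) >= L:
            break
        (x1, y1), (x2, y2) = random.sample(points, 2)
        if x1 == x2:
            continue                      # skip vertical candidates for simplicity
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # Points closer to the candidate line than d are inliers.
        inliers = [(px, py) for px, py in points
                   if abs(a * px - py + b) / math.sqrt(a * a + 1) <= d]
        if len(inliers) >= T:
            # Least squares fit on the inliers, then remove them from the dataset.
            n = len(inliers)
            sx = sum(p[0] for p in inliers)
            sy = sum(p[1] for p in inliers)
            sxx = sum(p[0] ** 2 for p in inliers)
            sxy = sum(p[0] * p[1] for p in inliers)
            a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
            b = (sy - a * sx) / n
            lines.append((a, b))
            points = [p for p in points if p not in inliers]
    return lines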
2.8.2 Split and merge
Split and merge is an algorithm for reducing the number of points in a curve that is approximated by a series of
points. [9] The algorithm reads as follows:
1. Find the line between the two most distant points.
Figure 12: A line is drawn between the two most distant points.
2. Find the point farthest away from the line. If the distance from the point to the line is more than a given
threshold, perform a split.
Figure 13: The most distant point from the line exceeds the given threshold, so a split is performed.
3. For each new line segment, repeat step 2, and continue until none of the new segments can be split any further.
Figure 14: The contour after no more splits can be made.
4. If two consecutive line segments are collinear enough, merge them.
Figure 15: Two of the line segments are collinear enough to merge them.
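A recursive sketch of the split step is given below (the merging of nearly collinear neighbouring segments is omitted for brevity; the threshold is in the same units as the point coordinates):

import math

def split(points, threshold):
    # points: ordered list of (x, y) along the scanned contour.
    # Returns the indices of the points kept as segment endpoints.
    if len(points) < 3:
        return list(range(len(points)))
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1) or 1.0
    # Perpendicular distance from every point to the line through the endpoints.
    dist = [abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1) / norm
            for px, py in points]
    i = max(range(len(points)), key=dist.__getitem__)
    if dist[i] <= threshold:
        return [0, len(points) - 1]       # no split needed, keep a single segment
    # Split at the most distant point and recurse on both halves.
    left = split(points[:i + 1], threshold)
    right = split(points[i:], threshold)
    return left + [i + k for k in right[1:]]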
2.9 Loop closure
Loop closure is the problem of determining whether or not the robot has returned to a previously visited place. This
is one of the hardest problems in SLAM, but it is essential for robust long-term operation. In order to solve it, the
robot needs to be able to recognise when it has returned to a previously mapped region. When two regions in the
map are found to be the same, but their positions are incompatible because of the uncertainty of the map, the robot
needs to calculate the transformations needed to align these regions and close the loop.
Figure 16: A previously visited location is not registered at the same position, illustrating the loop closure problem.
Three types of solutions to this problem are map-to-map, image-to-image and image-to-map matching. Map-to-map
looks for similarities between the current local map and earlier observed maps. Both visual similarities and distances
between features can be used to find a set of common features between the two maps. Once this set has been
found, the transformation needed to align the two maps can be computed. Image-to-image looks for similarities between the
latest captured image and previous images; this obviously requires a camera mounted on the robot. Distinct and
identical features are given a high probability of being the same. Image-to-map looks for features in the latest
captured image that it can compare with features in the map. [10]
2.10 A* search algorithm
A* (pronounced “a star”) is an algorithm for finding the shortest path between nodes in a graph. It builds on the
principles of breadth-first search and Dijkstra's algorithm.
The breadth-first search algorithm uses a FIFO (first in, first out) data structure, or queue, to explore the
nodes. [11] Each node also stores whether it has been visited or not; this information is updated by the
algorithm. The search starts at the start node and adds its neighboring nodes to the queue if they have not already
been visited, then continues with the next node in the queue until the queue is empty. This results in the nodes
being visited in breadth before depth, as illustrated in figure 17.
Figure 17: The number on the nodes tells in which order they were visited.
The number on the nodes tells in which order the nodes were discovered.
Dijkstra's algorithm is a pathfinding algorithm conceived by the Dutch computer scientist Edsger W. Dijkstra in
1956. [12] It uses breadth-first search, but instead of using an ordinary queue, it uses a priority queue: a queue
where the element with the highest priority is picked first. For each node it visits, it keeps
track of where it came from and the cost of reaching that particular node. Initially, the cost is set to infinity. If the
cost of reaching a neighboring node through the current node is less than its previously recorded cost, the neighbor
is added to the queue with a priority equal to that cost. The output from
this algorithm is not the path itself, but rather the cost of the route.
Figure 18: A graph being explored by Dijkstra's algorithm. The second letter is the node it came from.
To obtain the route, select the goal node and fetch the node it came from, and continue until you reach the start
node. Reverse the path to get the path from the start node to the goal node.
A* is a pathfinding algorithm that uses a heuristic to guide its search. [13] A heuristic is the use of guidelines to solve
a problem, as opposed to following strict steps like an algorithm. A* focuses only on reaching the goal node from the
current node, not on reaching all other nodes. The algorithm computes the following function:
f(n) = g(n) + h(n) (2.46)
Where f(n) is the evaluation cost, g(n) is the cost from the starting node to reach node n and h(n) is the estimate
of the cost of the cheapest route from n to the goal node. A* generates an optimal solution if h(n) is an admissible
heuristic. h(n) is admissible if
$\forall n, \; h(n) \leq h^*(n)$ (2.47)
Where $h^*(n)$ is the actual cost to reach the goal node from node n. In other words, the heuristic function is
admissible if it never overestimates the cost to reach the goal node. [14]
The implementation of the A* algorithm is very similar to Dijkstra's, but the priority is now set to the
evaluation cost f(n).
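A compact sketch of A* over a graph stored as an adjacency dictionary follows (the heuristic function h is supplied by the caller and must be admissible for the returned path to be optimal):

import heapq

def a_star(graph, start, goal, h):
    # graph: dict node -> list of (neighbour, edge_cost) pairs.
    frontier = [(h(start), start)]        # priority queue ordered by f(n)
    came_from = {start: None}
    g = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Walk backwards through came_from and reverse to obtain the path.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        for neighbour, cost in graph[current]:
            new_g = g[current] + cost
            if neighbour not in g or new_g < g[neighbour]:
                g[neighbour] = new_g
                came_from[neighbour] = current
                # Priority is the evaluation cost f(n) = g(n) + h(n) (eq. 2.46).
                heapq.heappush(frontier, (new_g + h(neighbour), neighbour))
    return None                           # goal not reachable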
2.11 Navigation
2.11.1 Unicycle model
While we need to control each wheel of a differential drive robot independently, envisioning how the robot behaves
based on independent wheel velocities can be tricky. We therefore want to simplify the way we design controllers,
making it easier to imagine the robot's movement and thus easier to write control laws and navigation algorithms.
To achieve this we employ the unicycle model.
The unicycle model lets us design our controllers with respect to the translational velocity v and rotational velocity ω of
the robot frame, instead of designing for the rotational velocity of each wheel, $v_l$ and $v_r$.
The equations that convert motion from the unicycle model to the differential drive are as follows:

$v_l = \frac{2v - \omega L}{2R}$ (2.48)

$v_r = \frac{2v + \omega L}{2R}$ (2.49)
vl is the rotational velocity of the left wheel.
vr is the rotational velocity of the right wheel.
v is the linear velocity of the robot frame.
ω is the rotational velocity of the robot frame.
R is the radius of the wheel.
L is the wheelbase of the robot's drive wheels.
These two equations let us design navigation controllers for v and ω directly, and then use the two parameters as
inputs to the unicycle model to calculate the independent rotational wheel velocities, which are then handed over to
the motor controller.
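The conversion is a direct translation of equations 2.48 and 2.49; the wheel radius and wheelbase below are placeholder values, not the AGV's actual dimensions.

def unicycle_to_wheels(v, omega, R=0.05, L=0.30):
    # v: linear velocity of the robot frame [m/s].
    # omega: rotational velocity of the robot frame [rad/s].
    v_l = (2 * v - omega * L) / (2 * R)   # left wheel angular velocity (eq. 2.48)
    v_r = (2 * v + omega * L) / (2 * R)   # right wheel angular velocity (eq. 2.49)
    return v_l, v_r

# Pure forward motion gives equal wheel speeds, pure rotation gives
# equal and opposite speeds:
# unicycle_to_wheels(0.2, 0.0) -> (4.0, 4.0)
# unicycle_to_wheels(0.0, 1.0) -> (-3.0, 3.0)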
2.11.2 Behaviours
Rather than programming explicit control sequences for the AGV for each kind of environment and obstacle it
encounters (for example a door algorithm, wall algorithm, moving obstacle algorithm, corridor algorithm, open space
algorithm and so on), we instead want to give the AGV a set of more general behaviors that may be used for dynamic
navigation in varying environments, where the AGV decides which behavior to apply given its perception of the
environment.
Fundamental behaviors to implement:
• Avoid Obstacle
• Go To Goal
• Follow Wall Left
• Follow Wall Right
These behaviors essentially compute vectors to be used by the lower level parts of the navigation system. The Avoid
Obstacle vector is simply a vector that points in the opposite direction of the obstacle closest to the AGV. The Go
To Goal vector is a vector that points from the AGV’s current position to the goal position. Follow Wall Left and
Right are vectors perpendicular to the wall sensed on the respective side of the AGV.
The navigation system decides which one of the vectors to use, or if blending some of the vectors is a more effective
solution to reach the goal.
As an example, let the AGV sense an obstacle between its current position and the goal position. The navigation
system may then decide to use the avoid obstacle vector alone, or a blend of the avoid obstacle and go to goal vectors,
to safely and efficiently navigate around the obstacle. This depends on the proximity of the detected obstacle.
A different example is when the AGV is stuck in a concave obstacle, for instance a blind corridor or an inside
corner. It will then change from the current go to goal behavior to following either the left or right wall.
In this manner, the AGV will be able to clear the obstacle by following a wall until some set amount of progress
towards the goal position has been made. At this point the navigation system returns to the go to goal
behavior.
GoVector – the most used behavior, for direct navigation and convex obstacles:
1. Rotate and align robot with goal vector.
2. Use PID to drive the robot straight along goal vector.
(a) If obstacle within panic range, stop and reverse robot.
(b) Else If an obstacle is within avoid range, use obstacle avoidance vector.
(c) Else If an obstacle is within blending range, blend goal vector and obstacle avoidance vector.
(d) Else go straight to goal.
3. Stop at goal position and rotate to goal pose orientation.
4. If next Goal Vector is available, use this new goal vector and go back to 1.
Follow Wall (Left/Right):
1. Detect line to follow.
2. Calculate vector parallel to wall in direction of travel.
3. Calculate vector perpendicular to wall from AGV reference frame origin.
4. Employ a PID to control the AGV, keeping the direction of travel parallel to the wall and the perpendicular
vector to the wall at a set length.
5. If no progress has been made towards goal position, keep following wall.
Roaming – initial behavior when searching for the required number of landmarks to initialize SLAM:
1. Detect the largest gap in the LIDAR scan by use of a vector field histogram.
2. Orient the robot towards the gap.
3. Drive towards the gap.
4. Go back to 1.
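As an illustration of how the behaviors above can be combined, the sketch below blends the go-to-goal and avoid-obstacle vectors based on obstacle proximity. The ranges and weighting scheme are illustrative only, not the tuned values used on the AGV.

import math

def blended_heading(robot, goal, obstacle, avoid_range=0.5, blend_range=1.5):
    # robot, goal, obstacle: (x, y) positions in the world frame.
    # Returns the heading [rad] the AGV should steer towards.
    def unit(vx, vy):
        n = math.hypot(vx, vy) or 1.0
        return vx / n, vy / n
    gx, gy = unit(goal[0] - robot[0], goal[1] - robot[1])          # go-to-goal vector
    ax, ay = unit(robot[0] - obstacle[0], robot[1] - obstacle[1])  # avoid-obstacle vector
    dist = math.hypot(robot[0] - obstacle[0], robot[1] - obstacle[1])
    if dist < avoid_range:
        vx, vy = ax, ay                   # pure obstacle avoidance
    elif dist < blend_range:
        # Weight the avoidance vector more the closer the obstacle is.
        w = (blend_range - dist) / (blend_range - avoid_range)
        vx, vy = (1 - w) * gx + w * ax, (1 - w) * gy + w * ay
    else:
        vx, vy = gx, gy                   # pure go-to-goal
    return math.atan2(vy, vx)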
3 Design
The AGV is designed for an indoor environment. The motors and rubber wheels are mounted on an aluminum
base. The outer shell is designed to withstand light impacts, protect against dust and grit collected by the wheels,
and of course for aesthetics.
3.1 Drive system
3.2 Chassis
The initial design idea for the chassis was a vacuum-formed cylinder end cap encompassing all the mechanics and
electronics. However, due to the complexity of first milling a negative form to be used in the vacuum
forming process, this solution was abandoned. Figure 19 shows the sketch for this design.
Figure 19: First robot design.
Instead, a chassis constructed of multiple parts was chosen to stick with the principles of rapid prototyping, keeping
the design modular and easier to modify when needed. The different parts were designed in 3D using SolidWorks.
The models were then exported as STL files, which were in turn imported into the CubePro software suite and
converted to a format that could be fed directly into the 3D Systems CubePro printer. The material chosen for the
chassis is ABS plastic due to its availability and its sufficient strength for the task at hand, as shown during testing.
The final design can be seen in figure 20.
At the front of the AGV are the bumper and the IR module. The bumper protects the LIDAR from impacts. The
module also holds three infrared sensors aimed down towards the floor; as the AGV moves around, these detect
stairs and other cliff features.
Figure 20: Second robot design.
Figure 21: Computer generated imagery.
3.3 Remote control
In certain situations it is convenient for the operator to have direct control of the AGV's movement, known
as remote control or tele-operation. A Sony DualShock 3 controller was selected for this task. The controller is
connected by USB to the operator's computer. A LabVIEW VI running on this host computer converts the control
inputs given by the operator to values stored in network shared variables. The real-time VI running on the AGV's
myRIO has access to these variables over Wi-Fi and acts according to their values.
4 Components
A variety of components are required to solve the SLAM problem. Motor encoders to estimate position and a
LIDAR for mapping are the most essential. There is also a need for a battery, motors, a motor driver
and a microcontroller to control and drive the AGV. An assortment of components was already supplied for this
project, which removed the need to design, dimension and acquire robot hardware. However, certain additions were
deemed necessary for successful and safe navigation.
4.1 DC drive motor
The RB-Dfr-444 is a 12 volt brushed DC motor. It comes preassembled with a metal spur-gear gearbox and
integrated quadrature encoder.
Rated torque at 12V 0.147 Nm
Gear ratio 43.7:1
Encoder resolution at motor shaft 64 CPR (counts per revolution)
Encoder resolution at gearbox shaft 2797 CPR
Table 3: Motor specifications.
4.2 DC motor controller
The Sabertooth 2x25 motor controller has two channels and is rated at a maximum of 25A continuous per channel. It offers several modes of control: a traditional RC receiver, a 0-5V analog signal for each motor, or a byte (0-255) sent over a serial line. The required form of input is selected with a DIP switch on the motor controller. One will observe that the specifications of this motor controller far exceed the requirements posed by the AGV discussed in this paper. Had the controller not already been supplied, a cheaper, lower-spec controller would have been a better option from an efficient design point of view.
4.3 NI myRIO
myRIO is a microcontroller developed by National Instruments based on their RIO architecture. RIO is an acronym for reconfigurable input/output and is based on four components: a processor, a reconfigurable FPGA, measurement I/O hardware, and LabVIEW. [15] The microcontroller runs a real-time version of the Linux operating system. [16]
4.4 LIDAR
The Hokuyo URG-04LX is a scanning laser range finder. It has a scan angle of 240 degrees and a maximum radius of 4000mm. The angular resolution is 0.36 degrees. The sensor outputs a maximum of 683 data points per scan at a maximum scan rate of 10Hz. Rated accuracy is ±1% of the measurement in the range 1000mm-4000mm; for ranges less than 1000mm the rated accuracy is ±10mm. The LIDAR scan is read as an array of ranges with corresponding angle data. This is useful for obstacle avoidance, mapping and navigation.
4.5 Infrared sensor
The Sharp 2Y0A21 sensor returns an analog voltage representing the range to a detected surface. An LED in the device emits infrared light which is reflected by the surface the device is aimed at; a PSD in the device receives the reflected light, resulting in an analog voltage being transmitted from the sensor to a connected controller. An increase in distance results in a drop in voltage. The recommended range for this specific sensor is 10cm to 80cm. It will be used for cliff detection, i.e. determining whether the AGV is about to drive off a staircase or other platform.
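A hedged sketch of how such a cliff check could look is shown below; the inverse-distance approximation and every constant in it are placeholders that would have to be calibrated against the actual sensor, not datasheet values.

% Sketch of a cliff check from the IR sensor voltage. The inverse-distance
% approximation and all constants are placeholder calibration values.
V = 1.2;              % measured analog voltage [V] (example)
k = 27; c = 0.1;      % placeholder calibration constants
d = k/(V + c);        % approximate distance to the floor [cm]
cliff = d > 25;       % flag a cliff if the floor is farther away than expected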
4.6 LiPo battery
A 4-cell 14.8V 5000mAh battery was supplied for the project. Relative to the size of the AGV, this is a large and
heavy battery. Seeing that one of the possible uses for the AGV might be to deliver parcels and documents between
offices in a large building throughout a full working day of 8 hours, a generous battery capacity is advantageous.
4.7 Voltage regulator
The LiPo battery supplies a varying voltage to the system, in the range of ~15V to ~12V depending on its state of charge. However, not all components have a built-in voltage regulator that accepts this range. For instance, the Hokuyo LIDAR requires an external supply of 800mA at 5V. We also realize that external utilities like lights and cameras might be added to the system to expand its functionality. Therefore, two voltage regulators are used.
Input Output
VR #1 12V - 24V 5V, 2.1A
VR #2 22.2V - 12V 12V, 1A
Table 4: Overview of input and output voltage of the voltage regulators.
5 Program
The software for the AGV is written in LabVIEW in the visual programming language G. G has a flowchart-like programming model and works by connecting wires, which transfer data, between nodes. LabVIEW supports modularity with the use of "SubVIs" that let one create custom nodes. The program consists of the following major parts, which will be described in more detail in their respective sub-chapters: motor drive, data acquisition, landmark extraction, landmark association and mapping.
5.1 Scheduling
The program is structured into an initialization module, periodic tasks and a termination module. The initialization module takes care of initializing the LIDAR and defining global constants. The termination module takes care of turning off the LIDAR and stopping the motors. The periodic tasks module contains the main robot logic and has multiple parallel loops executing at different frequencies. At 100Hz, the task controlling the motor controller is executed. At 33.3Hz, the task controlling the navigation system is executed. At 10Hz, the task controlling the localization and mapping is executed.
5.2 Motor control
Motor control is accomplished by utilizing two PID controllers, one for each motor. The set point for each PID is the reference rotational velocity for the gearbox output shaft, and the process variable is the measured shaft rotational velocity. The PIDs are experimentally tuned during testing to give a quick and smooth response to changes in reference velocity within the desired working envelope.
The output of each PID is conditioned before being sent as a byte to the motor hardware controller. The Sabertooth controller offers different modes of signal input for controlling each motor. Among these is the popular analog signal mode, which uses a 0-5 volt signal for each motor, where 2.5V is stop, 0V is full reverse and 5V is full forward. During testing we experienced undesirable behavior in analog mode: when the myRIO controller was started or restarted there would be a short moment where the analog output channels sent a 0V signal to the motor controller, which caused each motor to jerk violently. This could have been circumvented by using a relay, activated and disabled from the myRIO, to switch power between the Sabertooth unit and the battery. However, the Sabertooth also offers a mode where a byte is sent over UART to control each motor instead of the traditional analog mode. In this mode, the motors stay disabled during boot-up of the myRIO.
Reference velocity for the gearbox shaft is calculated by the unicycle model to differential drive VI.
The UniToDiff.vi converts the unicycle model velocities into individual wheel velocities:

vl = (2v − ωL) / (2R)
vr = (2v + ωL) / (2R)

We can simplify the control, navigation and obstacle avoidance algorithms by employing the principles of the unicycle model. This lets us design navigation controllers with respect to only two easy-to-understand variables: translational velocity (v) and rotational velocity (ω).

R: AGV wheel radius
L: AGV wheelbase
v: translational velocity
ω: rotational velocity
vl: rotational velocity of the left motor
vr: rotational velocity of the right motor

Figure 22: The UniToDiff.vi converting unicycle model velocities to individual wheel velocities.
Figure 23: Inputs and outputs of the motor control VI.
The byte is unsigned and has a range from 0-255, where 0 stops both motors, 1-127 controls motor 1 and 128-255 controls motor 2.
The measured shaft rotational velocity is acquired from the wheel encoders. The accumulated ticks since the last iteration of the motor control VI are used to calculate the rotational velocity. For increased precision and response, this loop is set to run every 10 milliseconds (100 Hz).
ω_m1 = encTicks · (2π / N) / Δt    (5.1)

ω_m1 is the rotational velocity of motor one.
encTicks are the encoder ticks since the last iteration.
N is the resolution of the encoder with respect to the gearbox output shaft.
∆t is the time in seconds since last iteration.
In the example VI below, we can see how the inputs (encoder ticks, wheel radius and time since the last iteration) are used to control the motor velocity. The VI also outputs useful measurements to be used elsewhere in the program: encoder ticks, rotational velocity of the output shaft and linear velocity of the wheel mounted to the output shaft. We also see how the PID output is conditioned to a byte ranging from 1-127 to control the direction and velocity of motor one, with the midpoint 64 being the command to stop it.
(The block diagram reads the left encoder counter, converts the accumulated ticks to radians with (x/2797)·2π, uses the loop period and the wheel radius R to calculate the rotational and linear velocity of the wheel, and passes the measured velocity to the PID, whose conditioned output byte (1-127) is written to the motor controller over UART.)
Figure 24: Code for controlling a motor.
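The same conditioning can be sketched textually. The following MathScript-style fragment shows the encoder-to-velocity calculation of equation 5.1 and one possible mapping from a normalized PID output to the serial byte for motor 1; the clamping, scaling and the assumption that motor 2 mirrors the scheme around 192 are illustrative, not taken from the Sabertooth documentation.

% Rotational velocity from accumulated encoder ticks (eq. 5.1), example values.
encTicks = 250;  N = 2797;  dt = 0.010;   % ticks, CPR at gearbox shaft, loop period [s]
w = encTicks*(2*pi/N)/dt;                 % gearbox shaft velocity [rad/s]

% Conditioning of a normalized PID output u in [-1, 1] to the motor 1 byte.
u = 0.35;                                 % example PID output
u = max(-1, min(1, u));                   % clamp
byteM1 = round(64 + 63*u);                % 1 = full reverse, 64 = stop, 127 = full forward
% Assuming motor 2 mirrors the same scheme in the 128-255 range:
byteM2 = round(192 + 63*u);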
5.3 Sensor data acquisition
The Hokuyo URG drivers included with LabVIEW are used to communicate with the device. The initialization function is called once, at the beginning of the program. It needs to know which device to initialize, with an identifier specified by the Virtual Instrument Software Architecture (VISA) standard. The VISA format is Interface Type[board index]::Address::INSTR and in the case of the Hokuyo URG-04LX connected to the USB port on the myRIO, it is USB0::0x15D1:. The close function is called once, at the end of the program, and attempts to turn off the laser before terminating the software connection to the instrument. The data acquisition function takes a start point and an end point in degrees and a cluster count as parameters. Cluster count refers to the number of measurements that are clustered together. The function returns associated magnitude and direction arrays. To fetch the data from the first measurement, for instance, the 0th index of each array is accessed.
Figure 25: LabView code for LIDAR data acquisition.
This alone does not allow LabVIEW to interface with the laser scanner. The USB module in the Linux operating system takes control of the laser scanner when it is connected to the USB port, preventing LabVIEW from accessing it. To fix this, we logged into the Linux filesystem and removed the USB module, preventing it from taking control of the laser scanner. [17]
5.4 Line extraction
A variety of methods were tried to program a fast and robust line extraction algorithm.
RANSAC is one such algorithm and is explained in 2.8.1 on page 23. Instead of returning extracted lines directly from the RANSAC algorithm, we return a list of fitted points for the lines. Clustering is then performed on the points to find the actual start and end points of the lines. To find gaps, we check the distance between each point and the next.
d = sqrt((x2 − x1)² + (y2 − y1)²)    (5.2)
If the distance is greater than a threshold, extract a line with a start point at the end of the last gap and an end
point at the gap.
Figure 26: Lines extracted from sample data with the RANSAC algorithm.
Split and merge is another line extraction algorithm. It has no notion of gaps, so the data points need to be clustered first. The clustering is similar to the implementation in the RANSAC algorithm, but now we are after the clustered points themselves. Each cluster of points is passed to the Split and Merge module.
The Split and Merge module aims to find the corner points making up the extracted lines. The algorithm outline
is explained in chapter 2.8.2 on page 25. Two SubVIs are defined in the program, one for split and one for merge.
The split SubVI picks the two extreme points and finds the line between them. Then it finds the point which lies
the farthest away from the line using 5.3.
d = |−ax + y − b| / sqrt(a² + 1)    (5.3)
Where a is the slope of the line, b is where the line intercepts the y-axis, and x and y are the coordinates of the point. If the distance is large enough, a split is performed. The split SubVI is then called recursively on the left subset of the split, the point of the split is inserted into the list of corner points, and the split SubVI is called on the right subset. The tasks are done in this exact order to ensure both that the corner points appear in consecutive order in the list and that the right subset is selected for recursion. The recursion ends when there are no more subsets that can be split. The found line segments are then sent into the merge SubVI. This SubVI checks consecutive lines for collinearity and removes the point between them if they are collinear enough, effectively merging the lines.
The collinearity is found by checking the angle between the lines using 5.4.
α = arctan((m2 − m1) / (1 + m1·m2))    (5.4)
Where m1 and m2 are the slopes of the two lines. The split and merge module returns a linestrip. A linestrip is a
list of vertices defining one or multiple lines connected together at its vertices.
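As an illustration of the split test described above (equation 5.3), the following sketch fits a line through the two extreme points of a cluster and finds the candidate split point. The data and threshold are examples, and a non-vertical endpoint line is assumed for brevity.

% Sketch of a single split step (eq. 5.3): fit a line through the two extreme
% points of a cluster and find the point farthest from it. Example values only.
xs = [0 1 2 3 4 5];
ys = [0 0.1 0.9 1.0 0.2 0];
splitThreshold = 0.5;

n = length(xs);
a = (ys(n) - ys(1)) / (xs(n) - xs(1));     % slope of the line through the endpoints
b = ys(1) - a*xs(1);                       % intercept
dMax = 0; iMax = 1;
for i = 2:n-1
    d = abs(-a*xs(i) + ys(i) - b) / sqrt(a^2 + 1);
    if d > dMax
        dMax = d; iMax = i;
    end
end
doSplit = dMax > splitThreshold;           % if true, recurse on 1..iMax and iMax..n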
Figure 27: Lines extracted from sample data with the split and merge algorithm.
With the simulated data, both RANSAC and Split and Merge gave promising results, as seen in figures 26 and 27. To decide which algorithm to use, we tested the two algorithms with real data from the LIDAR in a real environment. The test is described in 6.7 on page 54.
5.5 Mapping
For the robot to communicate what it has learned to the user, a topological map is constructed. The map is updated each time the line extraction algorithm has found some lines. This means that a single wall will be drawn multiple times in the same place. The line extraction algorithm finds lines in the robot frame, and these need to be transformed into the global frame to be drawn correctly in the map.
l⃗_G = l⃗_R [cos θ  −sin θ; sin θ  cos θ] + R⃗    (5.5)

Where l⃗_R is a vertex in the robot frame, θ is the robot heading, R⃗ is the robot position and l⃗_G is a vertex in the global frame.
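A small sketch of this transformation, with an example pose and two vertices, is shown below.

% Sketch: transform extracted line vertices from the robot frame to the global
% frame (eq. 5.5). The pose and vertices are example values.
xr = 1.0; yr = 2.0; th = pi/4;             % robot pose in the global frame
Lr = [0.5 0.0; 1.5 0.0];                   % vertices in the robot frame, one per row
Rm = [cos(th) -sin(th); sin(th) cos(th)];  % rotation matrix
Lg = (Rm*Lr')' + repmat([xr yr], size(Lr,1), 1);   % vertices in the global frame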
5.6 Landmark extraction
Corners are good, distinct landmarks that the robot can easily extract from the environment. The only problem is that a typical laser scan will contain corners that are not really corners, but just limits of the LIDAR range or walls concealed by other walls. Because Split and Merge returns linestrips, we can simply consider all vertices in the linestrip, except the first and last, as corners.
(The figure shows extracted lines around the robot; vertices at real wall corners are marked as landmarks, while endpoints caused by the LIDAR range limit or by walls outside the LIDAR range are not landmarks.)
Figure 28: How landmarks are selected.
5.7 Data association
The robot maintains a list of observed landmarks. This list is updated when new landmarks are observed, but the robot needs some way to distinguish between new landmarks and earlier observed landmarks. We handle this in a rather simple way: for each landmark in the current observation, we go through all the earlier observed landmarks and update the closest one if it is closer than a given threshold. If the landmark is outside the threshold, it is added as a new landmark to the list.
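The following sketch illustrates this nearest-neighbour association; the threshold and coordinates are example values.

% Sketch of the nearest-neighbour association described above. Example values.
landmarks = [1.0 2.0; 3.0 0.5];     % known landmark positions, one per row
obs = [1.05 1.9];                   % newly observed landmark (global frame)
assocThreshold = 0.3;               % [m] (assumed)

best = 0; bestDist = inf;
for i = 1:size(landmarks,1)
    d = sqrt((landmarks(i,1)-obs(1))^2 + (landmarks(i,2)-obs(2))^2);
    if d < bestDist
        bestDist = d; best = i;
    end
end
if bestDist <= assocThreshold
    matchedIndex = best;            % treat as a re-observation of landmark 'best'
else
    landmarks = [landmarks; obs];   % otherwise add it as a new landmark
end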
5.8 EKF SLAM
SLAM will be implemented in LabVIEW for the myRIO using wheel encoders and LIDAR by following this outline
of SLAM discussed earlier in the theory chapter.
Figure 29: Block diagram illustration of the SLAM process.
In the implementation used in this paper, we will expand the state vector with a landmark identifier for each
landmark. This identifier will store information about the landmark. Is it a new landmark? Is it re-observed?
What type of feature, corner, line, gap etc.?
Figure 30: Graphical representation of the state vector used in EKF SLAM
The state vector will be represented by a one column matrix of size 3 + N*3, where N is the number of landmarks.
x = [ r_x, r_y, r_θ, m_1,x, m_1,y, m_1,s, m_2,x, m_2,y, m_2,s, … , m_n,x, m_n,y, m_n,s ]^T

The first three entries are the robot pose and each landmark contributes three entries. The corresponding covariance matrix consists of the 3×3 robot covariance P_rr, the robot-landmark cross covariances P_rl_i and their transposes P_l_i r, and the landmark-landmark covariances P_l_i l_j.
Figure 31: Dimensions of the state vector and covariance matrix.
In the MathScript node, arrays and matrices are indexed differently: an array's first index is 0, while a matrix's first index is 1. It might be convenient to use matrices for all inputs to the MathScript node in order to avoid indexing confusion. The EKF is written in a MathScript node, as the writer of this code is more familiar with traditional programming languages like C and C++ than with the graphical programming language of LabVIEW. The MathScript node offers some degree of compilation feedback, errors and warnings, by attaching an indicator to the error output node.
We will implement the prediction step, the update step, as well as the addition of new landmarks. We begin with the prediction step. The inputs for this step are the previous state vector, the previous covariance matrix and the control input.
EKFSLAMprediction(x_{k−1}, P_{k−1}, u_k):

x̄_k = f_{k−1}(x_{k−1}, u_k)    (5.6)

P̄_k = F_{k−1} P_{k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T    (5.7)
First we set up our matrices with initial values and build our error matrices according to the error models discussed in the theory chapter. Initially the state vector is empty apart from the robot pose.
x_{k−1} = [0; 0; 0]    (5.8)
The covariance matrix is initialized accordingly: there are no landmarks and we are fully certain of our initial position, which gives the initial covariance matrix.
P_{k−1} = P^{rr}_{k−1} = [0 0 0; 0 0 0; 0 0 0]    (5.9)
With initialization complete, we write the recursive part of the EKF SLAM algorithm. Each iteration receives the control input recorded from the wheel encoders since the last iteration.
u_k = [Δs_r; Δs_l]    (5.10)
f(x, y, θ, Δs_r, Δs_l) = [x; y; θ] + [ ((Δs_r + Δs_l)/2) cos(θ + (Δs_r − Δs_l)/(2b)) ;
                                       ((Δs_r + Δs_l)/2) sin(θ + (Δs_r − Δs_l)/(2b)) ;
                                       (Δs_r − Δs_l)/b ]    (5.11)
We compute the error based on the model in the theory chapter.
Q_{k−1} = covar(Δs_r, Δs_l) = [ k_r|Δs_r|  0 ; 0  k_l|Δs_l| ]    (5.12)
The Jacobian matrices F of the motion model with respect to robot pose, and G of the motion model with respect
to control input are defined directly in the mathscript code.
F_{k−1} = ∇_p f = [ 1  0  −Δs sin(θ + Δθ/2) ;
                    0  1   Δs cos(θ + Δθ/2) ;
                    0  0   1 ]    (5.13)
G_{k−1} = ∇_{Δs_r,Δs_l} f =
[ ½cos(θ + Δθ/2) − (Δs/2b)sin(θ + Δθ/2)    ½cos(θ + Δθ/2) + (Δs/2b)sin(θ + Δθ/2) ;
  ½sin(θ + Δθ/2) + (Δs/2b)cos(θ + Δθ/2)    ½sin(θ + Δθ/2) − (Δs/2b)cos(θ + Δθ/2) ;
  1/b                                      −1/b ]    (5.14)
Δs = (Δs_r + Δs_l) / 2    (5.15)
Δθ = (Δs_r − Δs_l) / b    (5.16)
Finally, we compute the predicted pose ¯xk and covariance matrix ¯Pk. We are only interested in updating the robot
pose in the state vector, that is the top three rows. Similarly we only update the 3x3 robot covariance, and the
robot-landmark cross variances of the full covariance matrix. This can be described by the EKF sparse prediction
equations used for robot motion.
x̄_{x,y,θ} = f(x, y, θ, Δs_r, Δs_l)    (5.17)

P̄^{rr} = F_{k−1} P^{rr}_{k−1} F_{k−1}^T + G_{k−1} Q_{k−1} G_{k−1}^T    (5.18)

P̄^{rl} = F_{k−1} P^{rl}    (5.19)

P̄^{lr} = (P̄^{rl})^T    (5.20)
Figure 32: Prediction step applied to state vector and covariance matrix.
Below is an example MathScript node for the EKF SLAM prediction step; note the connected error constant and indicator, which are extremely useful for debugging the code.
b = 0.2;                        %robot wheelbase [m]
deltaSr = u(1,1);               %control inputs [m]
deltaSl = u(2,1);
dS = (deltaSr + deltaSl) / 2;   %delta S [m]
dT = (deltaSr - deltaSl) / b;   %delta THETA [rad], eq. 5.16

%Predict new position
x_new = [x(1,1); x(2,1); x(3,1)] + [dS*cos(x(3,1) + dT/2);
                                    dS*sin(x(3,1) + dT/2);
                                    dT];

%Wrap robot orientation/heading to [-PI, PI]
if x_new(3,1) > pi
    x_new(3,1) = x_new(3,1) - 2*pi;
end
if x_new(3,1) < -pi
    x_new(3,1) = x_new(3,1) + 2*pi;
end

kr = 0.3;                       %Error constant right wheel
kl = 0.3;                       %Error constant left wheel
Q = [kr*abs(deltaSr) 0;
     0 kl*abs(deltaSl)];

F = [1 0 -dS*sin(x(3,1) + dT/2);
     0 1  dS*cos(x(3,1) + dT/2);
     0 0  1];

G = [0.5*cos(x(3,1)+dT/2)-0.5*(dS/b)*sin(x(3,1)+dT/2)  0.5*cos(x(3,1)+dT/2)+0.5*(dS/b)*sin(x(3,1)+dT/2);
     0.5*sin(x(3,1)+dT/2)+0.5*(dS/b)*cos(x(3,1)+dT/2)  0.5*sin(x(3,1)+dT/2)-0.5*(dS/b)*cos(x(3,1)+dT/2);
     (1/b)                                             -(1/b)];

%Predict new covariance matrix using the EKF sparse prediction equations
P_new = P;                      %landmark-landmark covariances are kept unchanged
P_new(1:3, 1:3) = F*P(1:3,1:3)*F' + G*Q*G';
P_new(1:3, 4:length(P)) = F*P(1:3, 4:length(P));
P_new(4:length(P), 1:3) = (P_new(1:3, 4:length(P)))';

x = x_new;
P = P_new;
Figure 33: EKF SLAM prediction step, MathScript node.
The update step applies to the full state vector and the full covariance matrix.
Figure 34: Update step applied to state vector and covariance matrix.
For each re-observed landmark in z_k, compute the observation error covariance matrix.

R_k = [ k_range·z_r  0 ; 0  k_α ]    (5.21)
The Jacobian of the observation model, taken with respect to the robot pose and landmark j (all other columns are zero), where δ_x = m_{j,x} − x_{k,x}, δ_y = m_{j,y} − x_{k,y} and q = δ_x² + δ_y²:

H^j_k = [ −δ_x/√q   −δ_y/√q    0   0 … 0    δ_x/√q   δ_y/√q   0   0 … 0 ;
           δ_y/q    −δ_x/q    −1   0 … 0   −δ_y/q    δ_x/q    0   0 … 0 ]    (5.22)
The innovation covariance matrix.

S_k = H_k P_k H_k^T + R_k    (5.23)

The Kalman gain.

K_k = P_k H_k^T S_k^{−1}    (5.24)

Apply the Kalman gain to the state vector.

x_k = x_k + K_k (z_k − h_k)    (5.25)

Apply the Kalman gain to the covariance matrix.

P̄_k = (I − K_k H_k) P̄_k    (5.26)
Repeat until all re-observed landmarks have been handled.
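The update step is not reproduced as a code figure like the prediction step, but a minimal MathScript-style sketch for a single re-observed landmark j, assuming the state layout of figure 30 and illustrative noise constants (the signature entry m_j,s is left untouched), could look as follows.

% Sketch: EKF update for one re-observed landmark j (eqs. 5.21-5.26). The state
% below (robot pose plus one landmark) and the noise constants are examples.
x = [0; 0; 0; 2.0; 1.0; 0];            % [r_x; r_y; r_theta; m_1x; m_1y; m_1s]
P = 0.01*eye(6);                       % covariance after the prediction step
j = 1;                                 % index of the re-observed landmark
z = [2.3; 0.45];                       % measurement [range; bearing] (example)
krange = 0.01; kalpha = 0.01;          % observation noise constants (assumed)

li = 3 + 3*(j-1) + 1;                  % row of m_{j,x} in the state vector
dx = x(li,1)   - x(1,1);
dy = x(li+1,1) - x(2,1);
q  = dx^2 + dy^2;

h = [sqrt(q); atan2(dy,dx) - x(3,1)];  % predicted range and bearing

H = zeros(2, length(x));               % Jacobian of the observation model (eq. 5.22)
H(1,1) = -dx/sqrt(q);  H(1,2) = -dy/sqrt(q);
H(2,1) =  dy/q;        H(2,2) = -dx/q;   H(2,3) = -1;
H(1,li) =  dx/sqrt(q); H(1,li+1) =  dy/sqrt(q);
H(2,li) = -dy/q;       H(2,li+1) =  dx/q;

R = [krange*z(1,1) 0; 0 kalpha];       % observation error covariance (eq. 5.21)
S = H*P*H' + R;                        % innovation covariance (eq. 5.23)
K = P*H'*inv(S);                       % Kalman gain (eq. 5.24)
nu = z - h;                            % innovation
nu(2,1) = mod(nu(2,1) + pi, 2*pi) - pi;% wrap bearing innovation to [-pi, pi]
x = x + K*nu;                          % eq. 5.25
P = (eye(length(x)) - K*H)*P;          % eq. 5.26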
When handling new landmarks, the state vector is expanded, as is the covariance matrix.
Figure 35: New landmark expanding the state vector and covariance matrix.
The mean (location) of the new landmark is computed with the inverse of the observation function.
g(z_k, x_k) = [ z_{k,range} cos(z_{k,α} + x_{k,θ}) + x_{k,x} ;
               z_{k,range} sin(z_{k,α} + x_{k,θ}) + x_{k,y} ]    (5.27)
The covariance of the new landmark is either set to a high value and then refined when the landmark is re-observed, or initial covariance values are calculated with the Jacobians J_xr and J_z of g with respect to the robot pose and the measurement:

J_xr = ∂g(z_k, x_k)/∂x_{x,y,θ} = [ 1  0  −z_{k,range} sin(z_{k,α} + x_{k,θ}) ;
                                   0  1   z_{k,range} cos(z_{k,α} + x_{k,θ}) ]    (5.28)

J_z = ∂g(z_k, x_k)/∂z_{ρ,α} = [ cos(z_{k,α} + x_{k,θ})  −z_{k,range} sin(z_{k,α} + x_{k,θ}) ;
                                sin(z_{k,α} + x_{k,θ})   z_{k,range} cos(z_{k,α} + x_{k,θ}) ]    (5.29)
The state vector and covariance matrix are now fully updated. The EKF is ready for the next iteration.
5.9 Occupancy grid map
In order to use common pathfinding algorithms like A*, the map had to be converted into an occupancy grid map. That is a binary map with a given metric resolution that tells whether a space is occupied or not.
We started with a set of corner points, each with a corresponding vector to the end of the wall. These are collected from the line extraction algorithm. For each corner point, we make a parameterization of the line between the corner and the end.
l⃗(t) = P⃗_end − t(P⃗_end − P⃗_start),   t ∈ [0, 1]    (5.30)

Where P⃗_end is the endpoint, P⃗_start is the start point and t is the parameter. Traverse the line by increasing t with a step t_step and check which grid cell each point lies in by dividing by the resolution and rounding down to the nearest integer.
Figure 36: An occupancy grid map represented visually.
The occupancy grid map was not implemented in the robot, but was only simulated on the computer.
6 Testing
6.1 White-box testing
To test the software of the robot, white-box testing was used. White-box testing is a method of testing software that tests the internal structure of the program. It can be applied at the unit and integration level. [18] Unit testing is the testing of individual modules: does a given input yield the expected output? Integration testing is the testing of modules in groups. Unit tests are used to ensure that the code meets its design and behaves as intended.
6.2 Motor accuracy and performance
A good place to start testing is the performance and accuracy of the motors, as this affects how the actuators react to control inputs. We begin with tuning the PID controllers for each motor. The reference input, or set point, for the motor PID controller is the rotational velocity we would like to achieve at the gearbox axle, given in radians per second. The positive rotational direction is counter-clockwise when looking straight at the gearbox output shaft. The feedback, or process variable, for the PID is the rotational velocity at the gearbox output shaft. The rotational velocity is computed with data from the motor encoder, accounting for the gearbox ratio. We want the motor to achieve the set velocity as quickly as possible while maintaining smooth, jitter-free acceleration and little to no overshoot of the set point, and then settle at this reference velocity. For safety and peace of mind the AGV is placed on a platform with the wheels rotating freely without contact with the ground. Keep in mind, however, that the PID parameters gained from this initial stage will have to be re-tuned once the AGV is placed on the ground, as the motors then have to accelerate the mass of the complete system, not just the wheels spinning freely in midair.
Recipe for practical PID tuning:
While this process will not give the perfect PID parameters, it will get you close to it and ensure you get a decent
controller and hence a responsive system. [19]
1. Set Kp = 1, Ki = 0, Kd = 0.
2. Increase Kp (proportional gain) until the motor achieves steady oscillation when given change in input. Then
set Kp to 80% of this value.
3. Increase Ki (integral gain) to eliminate the steady-state error as quickly as possible. If Ki is too high the motor will oscillate around the set point; reduce it.
4. Keep going back and forth between Kp and Ki, making small changes for further improvement to overshoot, performance and stability.
5. If possible, introduce some disturbance to the system; in the case of a wheeled robot, simply grab the wheel to restrain its motion, then let go of the grip. Observe the response and adjust your controller if an unstable response is observed.
6. Experiment with low values of Kd (derivative gain) for increased responsiveness. Often a PI controller will suffice and Kd can be left at 0.
Remember, we want stability and reliability first, and performance secondary to that. Below is a result from tuning the two motors to give a smooth response within the working envelope of this specific differential drive, -6 to 6 radians/second measured at the gearbox output shaft.
(Front panel during tuning: both wheels track reference velocities between -6 and 6 rad/s; the gains used for both wheels were Kc = 100, Ti = 0.002 min, Td = 0, with the motor control loop running at approximately 91 Hz.)
Figure 37: PID tuning two motors on a differential drive.
6.3 Straight line test
We wanted to test how well the robot managed to drive in a straight line. The purpose of this test is to check if
PID parameters are tuned well and if the physical robot parameters are correct.
6.3.1 Method
A starting point was marked on the floor and an end point was marked 3m away from it. The robot was placed on
top of the starting point and its x-axis was aligned with the end point. The robot was then programmed to drive
what it believed would be 3m. The distance between the end mark and where the robot actually stopped was then
measured.
6.3.2 Test criteria
An error of less than or equal to 10mm for the 3m driven is desired.
6.3.3 Results
Distance (cm) Error (cm)
Test 1 302 2
Test 2 302.5 2.5
Test 3 302.6 2.6
Test 4 302.8 2.8
Test 5 302.5 2.5
Table 5: Results from the straight line test.
Assuming there is not a systematic error in the encoders, the consistent nature of the error in straight-line distance points to an error in the measured radius of each wheel, due to the equation

p = θr    (6.1)

The programmed radius of each wheel will have to be increased by (302.5 − 300)/300 ≈ 1% from its current value.
6.4 Rotation test
The radius of the wheels was corrected in the preceding step. Another physical parameter of the robot crucial to accurate navigation is the wheelbase, that is, the distance between each drive wheel's contact point with the underlying surface.
6.4.1 Method
The wheelbase is initially measured with a ruler. Then the robot is commanded to rotate four full revolutions and
the error in final pose is measured.
6.4.2 Test criteria
Ideally, the error should be as close to zero as possible. However, due to soft pneumatic tires and linear slack in
each gearbox this is obviously not feasible.
6.4.3 Results
Three tests in each rotational direction were performed, for a total of six runs.
Angle (degrees) Error (degrees)
Test 1 CW -1430 10
Test 2 CW -1428 12
Test 3 CW -1428 12
Test 4 CCW 1426 -14
Test 5 CCW 1428 -12
Test 6 CCW 1425 -15
Table 6: Results from rotation test.
The results are consistent and suggest an underestimation of the robot wheelbase, which leaves it stopping short of its target in every test by approximately 12 degrees. The programmed wheelbase will therefore be increased by (1440 − 1428)/1440 ≈ 1% from its current value.
6.5 Square drive test
We wanted to test how well the robot managed to drive in a rectangle. The purpose of this test is the same as for the straight line test, as well as testing how accurate the rotation is.
6.5.1 Method
A plate with the dimensions 1.8m × 0.8m was placed on the floor and its outline was marked with adhesive tape, resulting in a path with the aforementioned dimensions. The robot was placed in one of the corners and was programmed to drive in a rectangle of the same dimensions. The distance error and angle error were then measured.
6.5.2 Test criteria
A distance error of less than or equal to 10mm and an angular error of less than 5° is desired.
6.5.3 Results
Angle error Distance error (cm)
Test 1 12° 27.5
Test 2 17° 38.8
Test 3 17° 44
Test 4 20° 37.2
Test 5 14° 40.5
Table 7: Results from the square drive test.
The square drive test shows the robot driving in the pattern of a skewed rectangle and confirms what was indicated during the rotation test; again we see a somewhat consistent error over several tests, indicating underestimation of the robot's wheelbase.
6.6 LIDAR test
We trust the specifications provided by Hokuyo to be more accurate than we could possibly measure and test, given that they probably have a much better test setup. But to ensure there is nothing wrong with this specific sensor unit, we wanted to perform a simple test.
6.6.1 Method
The LIDAR was placed 1m from a 1.8m long wall. We then performed a scan to test whether the point sensed at θ = 0 was 1m from the origin and that the two outermost points were 1.8m apart. The width of the wall is calculated using 6.2.

d = sqrt(r1² + r2² − 2·r1·r2·cos(θ2 − θ1))    (6.2)

Where d is the distance between two polar coordinates, r is the magnitude and θ is the direction.
6.6.2 Results
Distance at θ = 0 (mm) The leftmost point (mm) The rightmost point (mm) Width of wall (mm)
Test 1 1018 1279∠ − 40.547° 1356∠43.125° 1758
Test 2 1005 1266∠ − 41.953° 1356∠41.016° 1774
Test 3 1008 1262∠ − 41.25° 1360∠41.719° 1738
Table 8: Results from the LIDAR test.
We observe the results are within the tolerances stated in the data sheet by the manufacturer.
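As a quick sanity check of equation 6.2, the wall width of Test 1 can be recomputed from the two outermost points.

% Recomputing the Test 1 wall width from the two outermost points (eq. 6.2).
r1 = 1279;  th1 = -40.547*pi/180;     % leftmost point  [mm, rad]
r2 = 1356;  th2 =  43.125*pi/180;     % rightmost point [mm, rad]
d = sqrt(r1^2 + r2^2 - 2*r1*r2*cos(th2 - th1));   % approximately 1758 mm, matching Table 8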
6.7 Line extraction algorithm test
Two algorithms for line extraction have been developed for the project. We want to test which of the implementations, RANSAC or split and merge, performs most accurately and robustly.
6.7.1 Method
The robot was placed in a typical environment with multiple openings and walls at different angles. LIDAR data was then acquired and run through both algorithms. The results were inspected visually. The parameters for RANSAC were found experimentally and are as follows:
Inlier treshold (d) 20
Minimum points (T) 20
Points left for exit (N) 20
Number of lines for exit (L) 10
Number of attempts for exit (M) 200
Minimum points for cluster 10
Gap threshold for clustering 500
Table 9: RANSAC parameters used during testing.
The parameters used for Split and Merge were found experimentally as well and are as follows:
Split threshold 100
Angle difference for merging 0.5
Gap Threshold 300
Minimum points for cluster 50
Table 10: Split and Merge parameters used during testing.
6.7.2 Result
The results returned by RANSAC were very accurate, but not very reliable, while Split and Merge, on the contrary, gave reliable results with a bit less accuracy.
Figure 38: Comparison between RANSAC and Split and Merge.
This could have been caused by not having the optimal parameters. RANSAC has a complexity of S × N × N_trials, while Split and Merge has a complexity of N × log N, where N is the number of points in the dataset, S is the number of lines extracted and N_trials is the number of trials for RANSAC. [20] The split and merge algorithm requires the data to be sorted by angle, which is exactly what the LIDAR returns. We therefore conclude that Split and Merge is the obvious choice.
6.8 System test
This is the final test, where we test how well the system works as a whole. The robot is placed in a rectangular corridor that meets itself at the end. It is supposed to drive through this corridor to see how well it manages to simultaneously determine its own position and map the environment.
6.8.1 Result
Figure 39: Results from testing the robot
The mapping gave promising results, but because of increased uncertainty in the mapping over time, an earlier visited place was placed wrongly on the map. This is the effect of not handling loop closure, as explained in 2.9 on page 26. To prevent this, a loop closure algorithm has to be implemented.
7 Wrap-up
7.1 Conclusion
This paper presented a possible real-life implementation of online SLAM. Two flavors were presented, one of which employed a complementary filter while the other used the popular EKF approach. While the complementary filter is simpler to understand and implement for those less experienced in statistics and probability, the EKF outperforms it in accuracy once you get it to work.
Much literature was researched in the hunt for a solution to online EKF SLAM, and many algorithms were studied. An unfortunate situation that often presented itself was algorithms lacking essential steps, where a simplified version of SLAM was presented, too simple to be implemented on a real system. Errata in both published articles and highly regarded literature also made the algorithms fail when attempts were made to implement them.
To one not familiar with the field of autonomous robots this is the biggest hurdle to overcome, and being able to sift out the good parts from many sources and combine them into a working solution to the SLAM problem is the path one must travel when tackling this challenge. In this paper a complete algorithm has been presented. It has not, however, been successfully implemented on a physical device, due to unrelated hardware issues. Rest assured, however, work continues.
The beginnings of a navigation system have also been implemented. Employing, among other techniques, a vector field histogram and behavior-based navigation, the robot is able to navigate safely in its environment and avoid static as well as dynamic obstacles. Waypoints may also be sent to the robot, and it will attempt to find a safe route to the various waypoints in the world. Dealing with convex obstacles remains to be implemented.
A differential drive robot has also been built for this thesis. The drive train and base plate were already given; a body was yet to be designed. This was initially drawn in SolidWorks, and the models were then exported for 3D printing. Lastly, a series of test cases were written and performed, beginning with unit testing (e.g. encoders) through module testing (e.g. driving in a straight line). Full system testing of online SLAM was planned but not possible due to reliability and accuracy issues still to be resolved. Suggestions for further work and improvement will follow.
7.2 Further work
As the project was approaching its end, we still had some ideas that could further improve the performance of the
robot.
The occupancy map is currently stored in a grid data structure. To use the occupancy map together with the A* module in LabVIEW, it had to be stored in a 64-bit double-precision floating-point format. With a resolution of 10mm, a 100m² map would use 8MB of memory. By using a quadtree structure we could have low resolution where the robot has not mapped and high resolution where it has, decreasing the memory used by the map. Empty areas it has visited do not need high resolution. A quadtree is a tree data structure where each node has exactly four children.
Array manipulation is used extensively in the code running on the robot. The standard array manipulation functions in LabVIEW make a copy of the array in memory each time it is altered. Using the in-place element structure, the operations can be done on the data directly, avoiding unnecessary copies of the array and thus increasing performance.
Figure 40: Converting an array of cartesian coordinates to polar coordinates using the in-place element.
Landmarks may mistakenly be placed in the wrong position if the robot pose is determined wrongly. This would further confuse the localization algorithm. By using some sort of algorithm that removes landmarks that do not seem to exist, this would be less of a problem. It could also be an idea to
The gear motor that was provided for the project had a good amount of backlash. In addition, the tires were very
soft and flexible. In combination, these two problems made the odometry less accurate. By upgrading the gear
motors and wheels, we predict that the odometry will be improved.
References
[1] R. Maxwell. (2013, January) Robotic mapping: Simultaneous localization and mapping (slam). [Online].
Available: https://www.gislounge.com/robotic-mapping-simultaneous-localization-and-mapping-slam/
[2] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005, ch. 5, p. 118.
[3] P. C. Glasser. (2015, May) An introduction to the use of complementary filters for fusion of sensor data.
[Online]. Available: http://glassercommunications.com/paul/samples/filters_for_fusion.pdf
[4] H. M. Choset, Principles of robot motion: theory, algorithms, and implementation. MIT press, 2005, ch. 8.
[5] Wikipedia. (2016) Extended kalman filter — Wikipedia, the free encyclopedia. [Online]. Available:
https://en.wikipedia.org/wiki/Extended_Kalman_filter
[6] Wolfram MathWorld. (2016, May) Jacobian. [Online]. Available: http://mathworld.wolfram.com/Jacobian.
html
[7] G. Robots. (2016) Various techniques for positioning a mobile robot in a space. [Online]. Available:
http://www.generationrobots.com/en/content/52-localize-with-a-hokuyo-laser-range-finder
[8] Hokuyo. (2005, October) Scanning laser range finder urg-04lx specifications. [Online]. Available:
http://www.hokuyo-aut.jp/02sensor/07scanner/download/pdf/URG-04LX_spec_en.pdf
[9] K. O. Arras. (2009, June) Introduction to mobile robotics. [Online]. Available: http://ais.informatik.uni-
freiburg.de/teaching/ss09/robotics/slides/feature_extraction.pdf
[10] B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardós, “A comparison of loop closing
techniques in monocular slam,” Robotics and Autonomous Systems, vol. 57, no. 12, pp. 1188–1197, 2009.
[11] U. Uttersrud. (2014, November) Datastrukturer for grafer. [Online]. Available: http://www.cs.hioa.no/~ulfu/
appolonius/kap11/1/kap111.html#11.1.4
[12] M. Yan. (2014, January) Dijkstra’s algorithm. [Online]. Available: http://math.mit.edu/~rothvoss/18.304.
3PM/Presentations/1-Melissa.pdf
[13] H. Muñoz-Avila. (2005, October) The a* algorithm. [Online]. Available: http://www.cse.lehigh.edu/~munoz/
CSE497/classes/Astar.ppt
[14] Wikipedia. (2015) Admissible heuristic — Wikipedia, the free encyclopedia. [Online]. Available:
https://en.wikipedia.org/wiki/Admissible_heuristic
[15] National Instruments. (2014, April) From student to engineer: Preparing future innovators with the ni
labview rio architecture. [Online]. Available: http://www.ni.com/white-paper/52093/en/
[16] ——. (2015, March) Introduction to ni linux real-time. [Online]. Available: http://www.ni.com/white-
paper/14627/en/
[17] X. Smith and N. Asgari. (2014) Robot localisation, navigation and control using national intruments myrio.
[Online]. Available: https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf
[18] Wikipedia. (2016, April) White-box testing — Wikipedia, the free encyclopedia. [Online]. Available:
https://en.wikipedia.org/wiki/White-box_testing
[19] F. Haugen. (2010, April) Tuning of pid controllers. [Online]. Available: http://home.hit.no/~hansha/
documents/control/theory/tuning_pid_controller.pdf
[20] V. Nguyen, A. Martinelli, N. Tomatis, and R. Siegwart. (2005) Comparison of line extraction algorithms using 2d
laser rangefinder for indoor mobile robotics. [Online]. Available: http://infoscience.epfl.ch/record/97571/files/
nguyen_2005_a_comparison_of.pdf
A Diagrams
Wire diagram

(The wire diagram shows a 24V power supply feeding the Sabertooth 2x25 motor controller (B+/B-), whose outputs M1A/M1B and M2A/M2B drive the two DC motors; the motor encoder signals and the Sabertooth S1/S2 inputs are wired to the myRIO-1900 connectors A, B and C; a 24V→12V adapter and a 12V→5V adapter provide the regulated supplies, with the 5V rail powering the Hokuyo URG-04LX, which is connected to the myRIO over USB.)
. . . . . . . . . . . . . . . . . . . . 36 5.3 Sensor data acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 5.4 Line extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39 5.5 Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 5.6 Landmark extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41 5.7 Data association . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.8 EKF SLAM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 5.9 Occupancy grid map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6
  • 7. CONTENTS CONTENTS 6 Testing 51 6.1 White-box testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 6.2 Motor accuracy and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 6.3 Straight line test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 6.3.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 6.3.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 6.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52 6.4 Rotation test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.4.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.4.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.5 Square drive test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.5.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.5.2 Test criteria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53 6.5.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.6 LIDAR test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.6.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.6.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.7 Line extraction algorithm test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54 6.7.1 Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 6.7.2 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 6.8 System test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 6.8.1 Result . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 7 Wrap-up 57 7.1 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 7.2 Further work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57 A Diagrams 60 B 3D models 61 C Datasheets 63 7
  • 8. 1 INTRODUCTION 1 Introduction The University of Agder wants to automate mapping of areas for the use of robots, and has decided to start this project, namely UiA SLAM. SLAM is an acronym for simultaneous localization and mapping and is a technique for building a map of an unknown environment while at the same time determining the robot's own position. This could be used for self-driving cars, autonomous cleaning robots and electronic service dogs among others. Imagine a person waking up in an unknown location. They will start looking around for familiar signs. Once the person observes a landmark, they can figure out where they are in relation to it. The more the person observes of the environment, the better the person's mental image of the place will be. The same applies for a SLAM robot. It tries to map an unknown environment while figuring out its position. When it starts, it is just like waking up at an unknown location without any prior knowledge about the place. The complexity comes from doing both localization and mapping at once. It is essentially a chicken-and-egg problem. The robot needs to know its position before answering the question of what the environment looks like. The robot also has to figure out its position without the benefit of already having a map. Two required components for SLAM are a motion sensor and a range sensor. The robot needs good odometry performance. Odometry is the use of motion sensors to estimate the change in position over time. There is normally a small margin of error with odometry readings. The odometry readings are not absolute, and the error makes the estimated position drift over time. A range sensor also needs to be present to sense the surroundings. The most common range sensor in a SLAM context is a laser scanner such as a LIDAR. A robot determines its position through a sensor fusion technique such as the complementary filter or the Kalman filter. It will use the odometry in combination with landmarks extracted from the range measurement device. Landmarks are distinct and static features in the environment. A robot also needs to be able to identify and associate an observed landmark with a landmark observed earlier. SLAM is the mapping of an environment using the continual interplay between the mapping device, the robot and its position. As the robot interacts with the environment, it maps the area and determines its own position simultaneously. SLAM is a tool for exploring the environment around the robot and is constantly undergoing improvement. [1] 8
  • 9. 2 THEORY 2 Theory Simultaneous Localization and Mapping is a field in robotics concerned about the navigation of a robot in a unknown environment. The problem is divided into the four following parts: Landmark extraction, data association, state estimation, and state update. 2.1 Locomotion Ever since its invention the actively powered wheel has been the go-to locomotion mechanism utilized for unbounded movement on flat surfaces. Needless to say, this is how the SLAM platform in this paper will be locomoted. Our specific implementation is a 3-wheeled differential drive system as depicted in figure 1. Where two independent actively powered wheels are dedicated to locomotion while a third passive castor wheel tags along. This gives us three points of contact with the ground, resulting in great stability. The drive wheels are of the standard type of wheels offering two degrees of freedom, rotation about the wheels axle and rotation about the wheels contact point with the underlying surface. The castor wheel also offers two degrees of freedom, rotation about its free running axle and rotation around an offset steering joint. 2.2 Kinematics 2.2.1 Kinematics Gy Gx θ Rx Ry Figure 1: Global coordinate system and reference frame for the robot. Kinematics describe the effect of control actions on the configuration of a robot. The configuration of a rigid mobile robot is commonly described by six variables, its three-dimensional Cartesian coordinates and its three Euler angles (roll, pitch, yaw) relative to an external coordinate frame. [2] 9
  • 10. 2.2 Kinematics 2 THEORY In this paper we are merely interested in kinematics for a differential drive robot operating in a planar environment, hence its kinematic state can be described by three variables referred to as the robot’s pose. The pose of the robot thus consist of two-dimensional coordinates (x, y) and its angular rotation (θ), all with respect to an external coordinate frame. The following vector is often used to describe a pose.   x y θ   (2.1) The orientation (θ) is often referred to as bearing or heading, while the (x, y) coordinates are referred to as the robots location. 2.2.2 Probabilistic kinematics p (xt|ut, xt−1) (2.2) Where xt and xt−1 are the robots pose. xt is the new pose while xt−1 is the pose at the time when the motion command ut is given. This is called the probabilistic kinematic model and it describes the posterior distribution over kinematic states that a robot assumes. xk-1 xest zu ˆxk Figure 2: Estimating position from previous position, motion model, encoder position and observation data. 2.2.3 Motion models The equations that mathematically describe the motion of the robot is usually referred to as the motion model. While being further divided into odometry motion model or velocity motion model which are the predominant motion models for wheel based mobile robots in literature. Odometry To predict position and calculate wheel velocities from odometry we will make use of the rotary encoders integrated in the electric brushed motors. 10
  • 11. 2.2 Kinematics 2 THEORY The encoders used are incremental quadrature encoders measuring angular displacement of the motor shaft. In- cremental meaning the information we obtain is relative, describing the movement of the shaft. As opposed to the absolute encoder popular in robotic arms that offer precise position of the shaft by use of for instance grey code, however these units come at an increased cost. Quadrature meaning the encoders has four separate phases where the change from one phase to the other results in a tick. Phase Channel A Channel B 1 0 0 2 0 1 3 1 1 4 1 0 Table 1: Clockwise rotations Phase Channel A Channel B 1 1 0 2 1 1 3 0 1 4 0 0 Table 2: Counterclockwise rotations The incremental quadrature encoder in our motors produces pulses on two separate channels that are 90° out of phase. By detecting the order of change we can tell whether the motor shaft is rotating clockwise or counterclockwise. 11
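To make the phase tables concrete, the decoding principle can be sketched in a few lines. This is only an illustration of tables 1 and 2; on the robot the tick counting is handled by the myRIO and the LabVIEW code, so the Python form and names below are purely illustrative.

# Quadrature decoding sketch (illustrative only). The state is the (A, B)
# phase pair; stepping forward or backward through the phase sequence
# gives +1 or -1 tick.

# Phase sequence for clockwise rotation (table 1): (A, B)
_CW_SEQUENCE = [(0, 0), (0, 1), (1, 1), (1, 0)]

def decode_step(prev_ab, new_ab):
    """Return +1 for a clockwise step, -1 for counterclockwise, 0 otherwise."""
    i = _CW_SEQUENCE.index(prev_ab)
    if new_ab == _CW_SEQUENCE[(i + 1) % 4]:
        return +1          # next phase in clockwise order
    if new_ab == _CW_SEQUENCE[(i - 1) % 4]:
        return -1          # next phase in counterclockwise order
    return 0               # no change, or an invalid (skipped) transition

# Integrating ticks gives the total angular displacement of the shaft.
ticks = 0
for prev, new in [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 1), (0, 1))]:
    ticks += decode_step(prev, new)
# ticks == 1 here: two clockwise steps followed by one counterclockwise step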
  • 12. 2.2 Kinematics 2 THEORY Figure 3: Illustration of quadrature encoder channels. (Robot Builder’s Bonanza, 4th Edition - Application Notes & Bonus Projects Copyright © 2011, Gordon McComb) Furthermore, we can integrate these encoder ticks to keep track of total angular displacement of the shaft. We may also observe the amount of ticks over a known period of time to obtain angular velocity as well as the rate of change to get acceleration. This information can be used in conjunction with physical properties of the robot to calculate its position, orientation and velocity. Position update The equations that mathematically describe the motion of the robot is usually referred to as the motion model. While being further divided into odometry motion model or velocity motion model which are the predominant motion models for wheel based mobile robots in literature. We will implement the odometry motion model to predict the robots pose and hence it will be the only one presented. The equations are as follows. x =   x y θ   =   x y θ   +   ∆sr+∆sl 2 cos θ + ∆sr−∆sl 2b ∆sr+∆sl 2 sin θ + ∆sr−∆sl 2b ∆sr−∆sl 2b   (2.3) x is the predicted pose given by the vector   x y θ  . x is the current pose given by the vector   x y θ  . 12
  • 13. 2.3 Complementary filter 2 THEORY ∆sr is the distance traveled by right wheel. ∆sl is the distance traveled by left wheel. b is the robot wheelbase. When the robot is initialized x, y and θ are set to 0. Hence x =   0 0 0   . b is the wheelbase and a physical characteristic of the robot and we find it by measuring the distance between each drive wheels contact point with the underlying surface. ∆sr and ∆sl can be found with the following equations. ∆sr = 2πR ∆tickr N (2.4) ∆sl = 2πR ∆tickl N (2.5) R is the radius of the wheel. N is the resolution of the encoder. In our case N = 2797. From that the encoder mounted on the motor gives 2797 ticks per revolution of the gearbox output shaft. ∆tickr is the number of ticks from the right motor encoder since our last measurement. ∆tickl is the number of ticks from the left motor encoder since our last measurement. Now that we have a motion model and taken care of all the unknowns with respect to position prediction, we are able to predict the robot’s new position given control input from each motor’s encoder. 2.3 Complementary filter The complementary filter is in fact two filters where one complements the other. That is they both add up to one creating an all pass filter. Often illustrated as in the figure below. S1 S2 Low-Pass Filter High-Pass Filter + + x Figure 4: Block diagram illustration of the complementary filter. 13
  • 14. 2.4 SLAM with complementary filter 2 THEORY ˆx = a · S1 + (1 − a) · S2 0 < a < 1 (2.6) S1 and S2 are noisy measurements of the variable x. Most often the complementary filter is configured in such a way that we get a low-pass filter on one measurement and a high-pass filter on the second measurement, resulting in an all-pass filter giving the estimation ˆx. When the low-pass and high-pass filter are mathematical complements of each other we get a complete reconstruction of the variable being sensed minus the noise related to each measurement. [3] 2.4 SLAM with complementary filter Because there are multiple ways to implement SLAM, this will be but one simple example employing wheel encoders and a LIDAR. The basic steps that has to be performed are the prediction step and the update step. During the prediction step the AGV calculates its position and orientation solely from encoder data. When landmarks have been detected the update step is performed to further refine the accuracy of robot position as well as update information regarding landmarks. Our example algorithm can be described in pseudo code in figure 5. 14
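Before that walkthrough, the complementary filter itself (equation 2.6) can be illustrated with a short numerical sketch. The filter constant and the measurement values used here are arbitrary example values, not the ones used on the robot.

# Complementary filter sketch for equation 2.6: x_hat = a*S1 + (1 - a)*S2.
# The constant a is an example value; in practice it reflects how much
# confidence is placed in each measurement source.

def complementary_filter(s1, s2, a):
    """Blend two noisy measurements of the same variable."""
    assert 0.0 < a < 1.0
    return a * s1 + (1.0 - a) * s2

# Example: blending a heading estimate from odometry (slow drift,
# low-frequency error) with one derived from landmark observations
# (noisy but drift-free).
heading_odometry = 0.52   # rad
heading_landmark = 0.49   # rad
heading_estimate = complementary_filter(heading_odometry, heading_landmark, a=0.8)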
  • 15. 2.4 SLAM with complementary filter 2 THEORY Data Association Extracted Landmarks Position Prediction Encoder Measurements Observation Prediction Estimation Matched Predictions and Observations Estimated Pose Predicted Position Predicted Landmarks State Vector (Pose and Map) Landmark Positions Robot Pose YES SLAM Estimated Landmark New Landmark Prediction Step UpdateStep Sensor Observations Figure 5: Block diagram illustration of the SLAM process. Prediction step: • Measure wheel displacement given by encoder data, this will be our control input (u). 15
  • 16. 2.5 Extended Kalman filter 2 THEORY • Calculate pose estimate by use of control input and the motion model for our system. In this case we use the encoder based motion model for a differential drive robot. Update step: • If there are no landmarks detected this iteration, meaning the measurement vector (z) is empty, then this step is omitted. • If however data association gives us one or more landmarks this iteration, we perform the update step. • For each NEW landmark, we add it to the state vector. • For each ASSOCIATED landmark, we perform the following steps: 1. Calculate the predicted position of the detected landmark, given the LMKs position in the map (state vector) from the previous iteration and the robots predicted pose. The result (h) should be range and bearing to the landmark within the robot’s reference frame. 2. Calculate the innovation which is the difference between the measured and the predicted position of the landmark. (z-h) 3. Apply some initial gain to the innovation as we do not want to correct for the entirety of the error each iteration due to noise in the measurement. 4. Utilize a complimentary filter to divide the error - Most of the resulting innovation from the previous step will be used to correct robot position while a small percentage will be used to correct the landmarks position within the map. For example 98% robot pose and 2% LMK position. 5. Apply a small correction to the current LMK in the map (state vector). 6. Sum the error in robot position with respect to the detected LMK and previous LMK this iteration in order to take an average after iterating trough all detected LMKs before correcting the robot’s pose. • Calculate the average of the accumulated errors from previous “for-loop”. • Utilize a complimentary filter to correct error in pose. Putting a large amount of confidence into the average of the accumulated error from the update step and a smaller amount of confidence in the predicted state from the previous prediction step. In our case this complimentary filter governs the influence of the encoder measurement versus the LIDAR measurement with respect to the robot pose. For example 20% encoder measurement and 80% LIDAR measurement. The algorithm loops continuously. 2.5 Extended Kalman filter The extended Kalman filter is a specific example of a more general technique known as probabilistic estimation, see Thrun et al, Probabilistic Robotics for more on the subject of probabilistic estimation. By simplifying we can say that the Kalman filter provides a recursive method of estimating the state of a dynamical system in the presence of noise. [4] In the extended Kalman filter, the state transition and observation models don’t need to be linear functions of the state but may instead be differentiable functions. xk = f(xk−1, uk) + wk (2.7) zk = h(xk) + vk (2.8) 16
  • 17. 2.6 SLAM with extended Kalman filter 2 THEORY Where wk and vk are the process and observation noises which are both assumed to be zero mean multivariate Gaussian noises with covariance Qk and Rk respectively. uk is the control vector. The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed. At each time step, the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearize the non-linear function around the current estimate. [5] 2.6 SLAM with extended Kalman filter The Kalman filter simultaneously maintains an estimate of the state vector (ˆx) and the estimate covariance matrix (P). This results in the output of the Kalman filter being a Gaussian probability density function with mean (ˆx) and covariance P. For our purpose of localizing an AGV, the output will be a distribution of likely robot positions. The equations of the basic EKF is as follows. ¯xk|k−1 = fk−1(xk−1|k−1, uk) (2.9) ¯Pk|k−1 = Fk−1Pk−1FT k−1 + Gk−1Qk−1GT k−1 (2.10) Sk = Rk + Hk ¯Pk|k−1HT k (2.11) Kk = ¯Pk|k−1HT k S−1 k (2.12) ¯xk|k = ˆxk|k−1 + Kk(zk−hk(¯xk|k−1)) (2.13) ¯Pk|k = ¯Pk|k−1−KkSkKT k (2.14) The state vector (ˆx) is defined to be the location and orientation of the AGV along with the location of each landmark. ¯x = xr yr θr xl1 yl1 xl2 yl2 · · · xlnl ylnl T (2.15) Where (xryrθr) denotes the AGV’s position and orientation in the world frame {W}. xl1 yl1 xl2 yl2 · · · xlnl ylnl denotes the position of individual landmarks where (nl) is the total number of landmarks. xk−1 is the previous state, ¯xk is the new state. Pk−1 is the previous covariance matrix, ¯Pk is the new covariance matrix. fk−1 is the motion model, used to predict robot pose. hk is the measurement model, used to predict landmark position. zk is the measurement from the range and bearing sensor (LIDAR), used in combination with h to calculate the innovation. Qk−1 is the control input error covariance matrix, noise in encoder data. 17
  • 18. 2.6 SLAM with extended Kalman filter 2 THEORY Rk is the measurement error covariance matrix, noise in LIDAR data. Gk−1 is the Jacobian of the motion model f with resect to control input. Fk−1 is the Jacobian of f with respect to the pose x, y and θ and is a sparse matrix employed to update purely the upper 3x3 matrix in Pk. The rest of Fk−1 is the identity matrix. Hk is the Jacobian of the measurement model h. Sk is the innovation covariance matrix. Kk is the computed Kalman gain. The input error covariance matrix Qk−1 is a 2x2 matrix. Qk−1 = covar(∆sr, ∆sl) = kr|∆sr| 0 0 kl|∆sl| (2.16) Where kr and kl are constants ∆sr and ∆sl are the motion increments. We observe that the error is proportional to the absolute value of the input and that the errors for the two wheels are independent. The constants kr and kl are parameters that decides how much of the motion increment should be added as error. These can be determined experimentally trough testing of the wheel encoders accuracy. The covariance matrix Pk is given by the previous covariance matrix Pk−1, initially given the value 0. Pinitial =   0 0 0 0 0 0 0 0 0   (2.17) ¯Pk = Fk−1Pk−1FT k−1 + Gk−1Qk−1GT k−1 = pf · Pk−1 · pfT + ∆rl f · Qk−1 · ∆rl fT (2.18) To update the covariance matrix at the end of the prediction step in SLAM, we need the two Jacobian matrices Fk−1 and Gk−1. Where Fk−1 is the partial derivate of f with respect to the robot pose x, y and θ. Gk−1 is the partial derivative of f with respect to control input ∆sr and ∆sl. J = df dx = ∂f ∂x1 · · · ∂f ∂xn =    ∂f1 ∂x1 · · · ∂f1 ∂xn ... ... ... ∂fm ∂x1 · · · ∂fm ∂xn    (2.19) The Jacobian matrix is the matrix of all first order partial derivatives of a vector valued function. [6] The motion model for a differential drive f has already been described and is given by: f(x, y, θ, ∆sr, ∆sl) =   x y θ   +   ∆sr+∆sl 2 cos θ + ∆sr−∆sl 2b ∆sr+∆sl 2 sin θ + ∆sr−∆sl 2b ∆sr−∆sl b   (2.20) The Jacobian matrix F of the motion model with respect to robot pose: 18
  • 19. 2.6 SLAM with extended Kalman filter 2 THEORY Fk−1 = pf =   1 0 −∆s sin(θ + ∆θ/2) 0 1 ∆s cos(θ + ∆θ/2) 0 0 1   (2.21) The Jacobian matrix G of the motion model with respect to control input Gk−1 = ∆rl =   1 2 cos(θ + ∆θ 2 ) − ∆s 2b sin(θ + ∆θ 2 ) 1 2 cos(θ + ∆θ 2 ) + ∆s 2b sin(θ + ∆θ 2 ) 1 2 sin(θ + ∆θ 2 ) + ∆s 2b cos(θ + ∆θ 2 ) 1 2 sin(θ + ∆θ 2 ) − ∆s 2b cos(θ + ∆θ 2 ) 1 b −1 b   (2.22) ∆s = ∆sr + ∆sl 2 (2.23) ∆θ = ∆sr − ∆sl b (2.24) We see that the Jacobians F and G has the variables θ, ∆s, ∆θ. We will therefore have to compute these Jacobians whenever the robot pose or the control input changes. As before, the constant b is the robot’s wheelbase. This concludes the prediction step of EKF SLAM, we now have a predicted postion given a control input and we have a predicted covariance matrix to reflect the uncertainty of the prediction. For the update step we need the measurement model h. This defines how we calculate the range and bearing to an expected landmark given by our map in the state vector xk. In other words, the filter makes a guess as to what the measurement from our sensor zk should be based on previously observed landmarks stored in the map and the robot’s predicted pose. We obtain zk from data association on the form [ρrange, αbearing]T . The offset of the sensors’ coordinate frame relative to the robot’s coordinate frame has already been accounted for in the data association algorithm, thus we get the measurement zk in the robots coordinate frame R. Wy Wx Sensor frame {S} offset relative to Robot frame {R} Figure 6: Sensor frame {S offset relative to robot frame {R}. 19
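As a compact illustration of the prediction step just derived (equations 2.9-2.10 together with the Jacobians in 2.21-2.24), the sketch below applies the odometry motion model to the robot part of the state vector and propagates the covariance. Only the robot block and the robot-landmark cross-covariances are touched, which follows from Fk−1 being the identity outside the upper 3x3 block. The numpy form, the function name and the bookkeeping details are illustrative; the thesis implementation is a LabVIEW MathScript node.

import numpy as np

def ekf_predict(x, P, ds_r, ds_l, b, kr, kl):
    """EKF SLAM prediction step sketch (only the robot pose is affected)."""
    theta = x[2]
    ds = (ds_r + ds_l) / 2.0
    dtheta = (ds_r - ds_l) / b

    # Odometry motion model f (equation 2.20) applied to the robot pose.
    x = x.copy()
    x[0] += ds * np.cos(theta + dtheta / 2.0)
    x[1] += ds * np.sin(theta + dtheta / 2.0)
    x[2] += dtheta

    # Jacobians F (w.r.t. pose, eq. 2.21) and G (w.r.t. control input, eq. 2.22).
    c = np.cos(theta + dtheta / 2.0)
    s = np.sin(theta + dtheta / 2.0)
    F = np.array([[1.0, 0.0, -ds * s],
                  [0.0, 1.0,  ds * c],
                  [0.0, 0.0,  1.0]])
    G = np.array([[0.5 * c - ds / (2 * b) * s, 0.5 * c + ds / (2 * b) * s],
                  [0.5 * s + ds / (2 * b) * c, 0.5 * s - ds / (2 * b) * c],
                  [1.0 / b, -1.0 / b]])

    # Control input error covariance Q (equation 2.16).
    Q = np.diag([kr * abs(ds_r), kl * abs(ds_l)])

    # Propagate the robot block of the covariance; the landmark block is
    # unchanged because F is the identity there.
    P = P.copy()
    P[:3, :3] = F @ P[:3, :3] @ F.T + G @ Q @ G.T
    if P.shape[0] > 3:
        P[:3, 3:] = F @ P[:3, 3:]          # robot-landmark cross covariance
        P[3:, :3] = P[:3, 3:].T
    return x, P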
  • 20. 2.6 SLAM with extended Kalman filter 2 THEORY The actual measurement z and the predicted measurement h has to be in same coordinate frame, we therefore construct h to transform the current landmarks position, given in x, y coordinates in the global frame W. To range and bearing to the landmark given in the robots coordinate frame R. hk = √ q atan2(mj,y − xk,y, mj,x − xk,x) − xk,θ (2.25) The Jacobian of the measurement model H. Hk = − mj,x−xk,x √ q − mj,y−xk,y √ q 0 mj,y−xk,y q − mj,x−xk,x q −1 (2.26) Hk also needs some information with respect to the landmark we are currently indexing. As an example, lets say we have three landmarks in our state vector and we are currently working on landmark number j = 2. That would give us this Hk Hj k = − mj,x−xk,x √ q − mj,y−xk,y √ q 0 0 0 mj,x−xk,x √ q mj,y−xk,y √ q 0 0 mj,y−xk,y q − mj,x−xk,x q −1 0 0 − mj,y−xk,y q mj,x−xk,x q 0 0 (2.27) We only fill out the part of the matrix that concerns the robot (left 2x3 matrix) and the part concerning the current landmark. In this case we are interested in landmark number two. Hence the cells for landmark number one and three are set to zero. Also note that the cells filled in for the current landmark is simply the leftmost 2x2 matrix that has been negated. We do not use the entire 2x3 matrix because the landmark has no orientation, only a position. q = (mj,x − xk,x)2 + (mj,y − xk,y)2 (2.28) Where m is the map (landmark coordinates in the state vector) and j is the index of the re-observed landmark. Then we simply compute the innovation covariance matrix S. Sk = HkPkHT k + Rk (2.29) Rk = krange · zr 0 0 kα (2.30) Rk is the error model for our observation sensor. krange and kα are constants describing the error in the measurement from our range and bearing sensor. We observe that the error in range is proportional to the range measured, while the error in bearing is constant. These error constants can be found experimentally or be found in the data sheet of the sensor. The near optimal kalman gain K. A robot will use Kk = ¯PkHT k S−1 k (2.31) 20
  • 21. 2.6 SLAM with extended Kalman filter 2 THEORY We update the state vector with the computed Kalman gain multiplied by the innovation of the current landmark. ¯xk = ¯xk + Kk(zk − hk) (2.32) Update the covariance matrix. ¯Pk = (I − KkHk) ¯Pk (2.33) This entire update process is performed for each re-observed landmark. Now that the re-observed landmarks have been taken care of, we want to initialize any new landmarks. We expand the state vector with the landmarks’ estimated global position {W}, using the inverse observation model. mn+1,x mn+1,y = g(zk, xk) = zk,range cos(zk,α + xk,θ) + xk,x zk,range sin(zk,α + xk,θ) + xk,y (2.34) Where n is the number of landmarks in the state vector xk. Then we expand the co variance matrix P, and add some initial values. We have several options on how to go about this. The simpler option is to initialize the 2x2 covariance matrix for the new landmark with some high values. Then we let the update step compute the correct values during the next iteration of the filter when the landmark is re-observed. Another path is to compute values for all the new cells in the covariance matrix P with respect to the new landmark. This requires the two jacobians Jxr and Jz. Jxr = ∂g(zk, xk) xk,x,y,θ (2.35) Jz = ∂g(zk, xk) zk,ρ,α (2.36) 2x2 Covariance matrix for the landmark itself. ¯P(n+1)(n+1) = Jxr ¯Prr k JT xr + JzRkJT z (2.37) Robot-Landmark covariance. ¯Pr(N+1) = ¯Prr JT xr (2.38) Landmark-Robot covariance, other side of diagonal. ¯P(N+1)r = ( ¯Pr(N+1) )T (2.39) 21
  • 22. 2.7 Laser scanner 2 THEORY Landmark-Landmark covariance. ¯P(N+1)i = Jxr( ¯Pri )T (2.40) Landmark-Landmark covariance, other side of diagonal. ¯Pi(N+1) = ( ¯P(N+1)i )T (2.41) ¯Prr k is the 3x3 covariance matrix over the robots state. ¯P r(n+1) k is the 3x2 robot-landmark covariance. ¯P (n+1)i k is the 2x2 landmark-landmark covariance. i is the ith landmark, meaning the covariance is computed for each landmark. n is again the number of landmarks. This concludes the EKF SLAM. The algorithm is now ready to start at the beginning with new control input. 2.7 Laser scanner The laser scanner used for this project is Hokuyo URG-04LX. It emits a laser beam with a wavelength of 785nm, which is on the boundary between visible light and infrared light. [7] The light wave moves in a straight line and reflects off of an obstacle before returning to the laser scanner. The principle behind the distance measurement is based on calculation of the phase difference between the sent and received signal. This makes it possible to obtain stable measurements with minimum influence from objects’ color and reflectance. [8] The emitted wave has the following equation: A sin(ωt) (2.42) Where A is the amplitude, ω is the frequency and t is the phase. The equation for the received wave is as follows: B sin(ω(t − dϕ)) (2.43) Where B is the new amplitude and dϕ is the travel time. The signals are blended in a signal multiplier and the wave equation is now: A · B 2 (cos(ωdϕ) − cos(2ωt − ωdϕ)) (2.44) A low-pass filter then enables it to get rid of the high-frequency part cos(2ωt − ωdϕ) and keep the constant part cos(ωdϕ), revealing the phase difference. The distance covered is calculated by: D = ct 2 = cdϕ 2ω = λdϕ 2 (2.45) However, there is still one major problem. With the current setup, the theoretic range of the laser scanner is limited to half the wavelength of the emitted laser. This is because it can only use phase differences less than 360°. The signal needs to be modulated with a carrier wave of lower frequency to increase the wavelength. This is done by turning the laser on and off with the desired frequency. 22
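The range computation in equations 2.42-2.45 reduces to a couple of lines once the phase difference has been recovered. The modulation frequency below is an assumed example value used only to show the half-wavelength range limit; it is not taken from the URG-04LX documentation.

import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_phase(dphi_rad, f_mod_hz):
    """D = c * dphi / (2 * omega), with omega = 2*pi*f_mod (equation 2.45)."""
    omega = 2.0 * math.pi * f_mod_hz
    return C * dphi_rad / (2.0 * omega)

# The maximum unambiguous range is reached at a phase difference of 2*pi,
# i.e. half the wavelength of the modulated carrier.
f_mod = 46.55e6                                      # Hz, assumed example value
d_max = distance_from_phase(2.0 * math.pi, f_mod)    # = c / (2 * f_mod), ~3.2 m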
  • 23. 2.8 Line extraction 2 THEORY The laser scanning method explained above only describes how to perform point distance scans. To take measurements in the plane, the sensor uses a rotating mirror that rotates by 0.36° between each measurement. Figure 7: A rotating mirror is used to take distance measurements in the plane. 2.8 Line extraction Least squares fit is a method for linear fitting. It minimizes the sum of squared residuals. The problem with this method is that the model is heavily penalized by outliers because the residuals are squared, hence it cannot be used for line extraction. There are several methods for line extraction that do not have this problem. Split and merge and RANSAC were explored in this project. Figure 8: The effect of outliers on the least squares method. 2.8.1 RANSAC RANSAC is an iterative algorithm for line fitting in the presence of outliers. It has the following parameters: d Inlier threshold T Minimum inliers N Points left in the dataset for early exit L Number of lines extracted for early exit 23
  • 24. 2.8 Line extraction 2 THEORY M Number of trials The algorithm reads as follows: 1. Select two random points from the dataset and find the line equation between them. Figure 9: Line between two randomly selected points. 2. Calculate the distance between each point and the line. If a point is within the distance d, consider it an inlier. Figure 10: Amount of inliers are counted for the line. 3. If the number of inliers is greater than or equal to T, perform a least squares fit on the inliers and accept the line. Remove the inliers from the dataset. Figure 11: The correct line will eventually be found. 4. Repeat steps 1 to 3 M times, or until there are N points left in the dataset or L lines have been accepted. 24
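A compact sketch of steps 1-4 is given below. The parameter values, the random source and the use of numpy's polyfit for the least squares refit are illustrative choices; a vertical line would additionally require replacing the (slope, intercept) representation with a polar line form.

import numpy as np

def ransac_lines(points, d, T, N, L, M, rng=np.random.default_rng()):
    """Extract lines from a 2D point set (rows of x, y) as in steps 1-4."""
    points = np.asarray(points, dtype=float)
    lines = []
    for _ in range(M):
        if len(points) <= N or len(lines) >= L:
            break                                    # early exit conditions
        # 1. Pick two random points and form the line through them.
        i, j = rng.choice(len(points), size=2, replace=False)
        p1, p2 = points[i], points[j]
        direction = p2 - p1
        norm = np.linalg.norm(direction)
        if norm == 0:
            continue
        normal = np.array([-direction[1], direction[0]]) / norm
        # 2. Distance from every point to the line; inliers are within d.
        dist = np.abs((points - p1) @ normal)
        inliers = dist <= d
        # 3. Accept the line if it has at least T inliers, refit it with
        #    least squares, then remove the inliers from the dataset.
        if inliers.sum() >= T:
            xs, ys = points[inliers, 0], points[inliers, 1]
            slope, intercept = np.polyfit(xs, ys, 1)
            lines.append((slope, intercept))
            points = points[~inliers]
    return lines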
  • 25. 2.8 Line extraction 2 THEORY 2.8.2 Split and merge Split and merge is an algorithm for reducing the number of points in a curve that is approximated by a series of points. [9] The algorithm reads as follows: 1. Find the line between the two most distant points. Figure 12: A line is drawn between the two most distant points. 2. Find the point farthest away from the line. If the distance from the point to the line is more than a given threshold, perform a split. Figure 13: The most distant point from the line is within a given threshold, so a split is performed. 3. For each new line segment, repeat step 2, and continue until none of the new segments can be split any further. Figure 14: The contour after no more splits can be made. 4. If two consecutive line segments are collinear enough, merge them. 25
  • 26. 2.9 Loop closure 2 THEORY Figure 15: Two of the line segments are collinear enough to merge them. 2.9 Loop closure Loop closure is the problem of determining whether or not the robot has returned to a previously visited place. This is one of the hardest problems in SLAM, but it is essential for robust long-term operation. In order to solve this problem, the robot needs to be able to recognise when it has returned to a previously mapped region. When two regions in the map are found to be the same, but their positions are incompatible because of the uncertainty of the map, the robot needs to calculate the transformation required to align these regions and close the loop. Figure 16: A previously visited location is not registered at the same position, illustrating the loop closure problem. Three types of solutions to this problem are map-to-map, image-to-image and image-to-map matching. Map-to-map looks for similarities between the current local map and earlier observed maps. Both visual similarities and distances between features can be used to find a set of common features between the two maps. Once this set has been found, the transformation needed to align the two maps can be computed. Image-to-image looks for similarities between the latest captured image and previous images. This obviously requires a camera mounted on the robot. Distinct and identical features are given a high probability of being the same. Image-to-map looks for features in the latest captured image that can be compared with features in the map. [10] 26
  • 27. 2.10 A* search algorithm 2 THEORY 2.10 A* search algorithm A* (pronounced “a star”) is an algorithm for finding the shortest path between nodes in a graph. It builds on the principle of breadth-first search and Djikstra’s algorithm. The breadth-first search algorithm uses a FIFO (First In First Out) data structure or a queue to explore the nodes.[11] The nodes also stores whether it has been visited or not. This information is being updated by the algorithm. The search starts at the start node and adds its neighboring nodes to the queue if they have not already been visited. Then continues with the next node in the queue until the queue is empty. This results in the nodes being visited in the breadth before the depth as illustrated in figure 17. 1 2 3 4 5 6 7 Figure 17: The number on the nodes tells in which order they were visited. The number on the nodes tells in which order the nodes were discovered. Dijkstra’s algorithm is a pathfinding algorithm conceived by the dutch computer scientist Edsger W. Dijkstra in 1956.[12] It uses breadth-first search, but instead of using an ordinary queue, it uses a priority queue. A priority queue is a queue where the element with the highest priority is picked first. And for each node it visit, it keeps track of where it came from and how many nodes has been traveled to get to that particular node, or often called the cost. Initially, the cost is set to be infinite. If the cost for the current node is equal or less than the cost for the next node, then add the next node to the queue with a priority equal to the cost for the node. The output from this algorithm is not the path itself, but rather the cost of the route. 27
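The bookkeeping described above, a priority queue ordered by accumulated cost together with a record of which node each node was reached from, can be sketched as follows. The adjacency-dictionary graph format is an illustrative choice. Replacing the priority with the evaluation cost f(n) = g(n) + h(n) described in the following paragraphs turns the same loop into A*.

import heapq

def dijkstra(graph, start):
    """graph: dict mapping node -> list of (neighbour, edge_cost) pairs.
    Returns (cost, came_from) dictionaries for every reached node."""
    cost = {start: 0}            # cost so far; unknown nodes are treated as infinite
    came_from = {start: None}    # which node each node was reached from
    queue = [(0, start)]         # priority queue ordered by accumulated cost
    while queue:
        current_cost, current = heapq.heappop(queue)
        if current_cost > cost.get(current, float("inf")):
            continue             # stale entry; a cheaper path was already found
        for neighbour, edge_cost in graph.get(current, []):
            new_cost = current_cost + edge_cost
            if new_cost < cost.get(neighbour, float("inf")):
                cost[neighbour] = new_cost
                came_from[neighbour] = current
                heapq.heappush(queue, (new_cost, neighbour))
    return cost, came_from

The route itself is recovered by walking came_from from the goal node back to the start node and reversing the result, as described next.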
  • 28. 2.11 Navigation 2 THEORY A B, A D , AC, A E, B F, C I, E G , C H , D J, F Figure 18: A graph being explored by Dijkstras. The second letter is the node it came from. To obtain the route, select the goal node and fetch the node it came from, and continue until you reach the start node. Reverse the path to get the path from the start node to the goal node. A* is a pathfinding algorithm that uses heuristic to guide its search. [13] Heuristic is the use of guidelines to solve a problem, contrary to having strict steps like an algorithm. A* only focuses on reaching the goal node from the current node, not to reach all other nodes. The algorithm computes the following function. f(n) = g(n) + h(n) (2.46) Where f(n) is the evaluation cost, g(n) is the cost from the starting node to reach node n and h(n) is the estimate of the cost of the cheapest route from n to the goal node. A* generates an optimal solution if h(n) is an admissible heuristic. h(n) is admissible if ∀n, h(n) ≤ h∗ (n) (2.47) Where h∗ (n) is the actual cost to reach the goal node from node n. Or in other words, the heuristic function is admissible if it never overestimates the cost to reach the goal node. [14] The implementation of the A* star algorithm is very similar to Dijkstra’s, but the priority is now being set to the evaluation cost f(n). 2.11 Navigation 2.11.1 Unicycle model While we need to control each wheel of a differential drive robot independently, envisioning how the robot behaves based on independent wheel velocities might be a little tricky. Therefore we might want to simplify the way we 28
  • 29. 2.11 Navigation 2 THEORY design controllers and making it easier to imagine the robot’s movement and thus easier to write control laws or navigation algorithms. To achieve this we will employ the unicycle model. The unicycle model let us design our controllers with respect to transitional velocity v and rotational velocity ω of the robot frame instead of designing for the rotational velocity of each wheel vl and vr. The equations that convert motion from the unicycle model to the differential drive are as follows: vl = 2v − ωL 2R (2.48) vr = 2v + ωL 2R (2.49) vl is the rotational velocity of the left wheel. vr is the rotational velocity of the right wheel. v is the linear velocity of the robot frame. ω is the rotational velocity of the robot frame. R is the radius of the wheel. L is the wheelbase of the robots drive wheels. These two equations lets us design navigation controllers for v and ω directly. Then use the two parameters as inputs for the unicycle model to calculate independent rotational wheel velocities which are then handed over to the motor controller. 2.11.2 Behaviours Rather than program explicit control sequences for the AGV when it enconters different kind of environments and obstacles; in example door algorithm, wall algorithm, moving obstacle algorithm, corridor algorithm, open space algorithm etc. We rather want to give the AGV a set of more general behaviors that may be used for dynamic navigation in varying environments, where the AGV decides what behavior is to be applied given its perception of environment. Fundamental behaviors to implement: • Avoid Obstacle • Go To Goal • Follow Wall Left • Follow Wall Right These behaviors essentially compute vectors to be used by the lower level parts of the navigation system. The Avoid Obstacle vector is simply a vector that points in the opposite direction of the obstacle closest to the AGV. The Go To Goal vector is a vector that points from the AGV’s current position to the goal position. Follow Wall Left and Right are vectors perpendicular to the wall sensed on the respective side of the AGV. 29
  • 30. 2.11 Navigation 2 THEORY The navigation system decides which one of the vectors to use, or if blending some of the vectors is a more effective solution to reach the goal. As an example let the AGV sense an obstacle between its current position and the goal position, then the navigation system may decide to purely use the avoid obstacle vector or a blend of the avoid obstacle and the go to goal vector to safely and efficiently navigate around the obstacle. This will depend on the proximity of the obstacle detected. A different example is when the AGV is stuck in a convex obstacle. For instance in a blind corridor or an inside corner. It will then change its behavior from the current go to goal behavior to follow either the left or right wall. In this manner, the AGV will be able to clear the obstacle by following a wall until some set amount of progress towards the goal position has been obtained. At this point the navigation system will return to the go to goal behavior. GoVector – Most used behavior for direct navigation and concave obstacles: 1. Rotate and align robot with goal vector. 2. Use PID to drive the robot straight along goal vector. (a) If obstacle within panic range, stop and reverse robot. (b) Else If an obstacle is within avoid range, use obstacle avoidance vector. (c) Else If an obstacle is within blending range, blend goal vector and obstacle avoidance vector. (d) Else go straight to goal. 3. Stop at goal position and rotate to goal pose orientation. 4. If next Goal Vector is available, use this new goal vector and go back to 1. Follow Wall (Left/Right): 1. Detect line to follow. 2. Calculate vector parallel to wall in direction of travel. 3. Calculate vector perpendicular to wall from AGV reference frame origin. 4. Employ a PID to control the AGV, keep the direction of travel parallel to the wall. And keep the vector perpendicular to the wall at a set length. 5. If no progress has been made towards goal position, keep following wall. Roaming – Initial behavior when searching for the required amount of Landmarks to initialize SLAM: 1. Detect largest gap with LIDAR scan by use of vector field histogram. 2. Orient robot towards gap. 3. Drive towards gap 4. Go back to 1. 30
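The GoVector behaviour above blends the go-to-goal and avoid-obstacle vectors depending on how close the nearest obstacle is. A sketch of that blending is shown below; the range thresholds and the blending weight are illustrative example values, not the parameters used on the AGV.

import math

def blend_behaviours(pose, goal, nearest_obstacle,
                     panic_range=0.15, avoid_range=0.40, blend_range=0.80,
                     alpha=0.7):
    """Return a unit heading vector and a stop/reverse flag from the
    go-to-goal and avoid-obstacle vectors. Ranges are in metres (examples)."""
    x, y, _theta = pose
    gx, gy = goal
    ox, oy = nearest_obstacle

    go_to_goal = (gx - x, gy - y)
    avoid = (x - ox, y - oy)                 # points away from the obstacle
    d_obs = math.hypot(*avoid)

    if d_obs < panic_range:
        return (0.0, 0.0), True              # stop and reverse
    if d_obs < avoid_range:
        vx, vy = avoid                       # pure obstacle avoidance
    elif d_obs < blend_range:
        vx = alpha * go_to_goal[0] + (1 - alpha) * avoid[0]
        vy = alpha * go_to_goal[1] + (1 - alpha) * avoid[1]
    else:
        vx, vy = go_to_goal                  # drive straight towards the goal
    norm = math.hypot(vx, vy) or 1.0
    return (vx / norm, vy / norm), False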
  • 31. 3 DESIGN 3 Design The AGV is designed for an indoor environment. The motors and rubber wheels are mounted on an aluminum base. The outer shell is designed to withstand light impacts, protect against dust and grit collected by the wheels and of course for the aesthetics. 3.1 Drive system 3.2 Chassis Initial design idea for the chassis were a vacuum formed cylinder end cap that encompass all the mechanics and electronics. However, due to the complexity of first milling a negative form that would then be used in the vacuum forming process, this solution was abandoned. Figure 19 shows the sketch for this design. Figure 19: First robot design. Instead a chassis constructed of multiple parts were chosen to stick with the principles of rapid prototyping, keeping the design modular and easier to modify when needed. The different parts were designed in 3D using SolidWorks. The models were then exported as STL files. These files were in turn imported into CubePro software suite and converted to a format that could be fed directly into the 3D Systems CubePro printer. Material chosen for the chassis is ABS plastic due to its availability and sufficient strength during testing for the task at hand. Final design can be seen in figure 20. In the front of the AGV is the bumper and the IR modules. This will protect the LIDAR from impacts. As the AGV moves around, the module will also hold three infrared sensor aimed down towards the floor. These will detect 31
  • 32. 3.2 Chassis 3 DESIGN stairs and other cliff features. Figure 20: Second robot design. 32
  • 33. 3.3 Remote control 3 DESIGN Figure 21: Computer generated imagery. 3.3 Remote control In certain situations it would be convenient for the operator to have direct control of the AGV's movement, known as remote control or tele-operation. A Sony DualShock 3 controller was selected for this task. This controller is connected by USB to the operator's computer. A LabVIEW VI running on this host computer converts the control inputs given by the operator to values stored in network shared variables. The real-time VI running on the AGV's myRIO has access to these variables over Wi-Fi and acts according to their values. 33
  • 34. 4 COMPONENTS 4 Components A variety of components are required to solve the SLAM problem. Motor encoders to estimate position and a LIDAR for mapping are the most essential components. There is also a need for a battery, motors, a motor drive and a microcontroller to control and drive the AGV. An assortment of components were already supplied for this project which removed the need to design, dimension and acquire robot hardware. However, certain additions were deemed necessary for successfull and safe navigation. 4.1 DC drive motor The RB-Dfr-444 is a 12 volt brushed DC motor. It comes preassembled with a metal spur-gear gearbox and integrated quadrature encoder. Rated torque at 12V 0.147 Nm Gear ratio 43.7:1 Encoder resolution at motor shaft 64 CPR (counts per revolution) Encoder resolution at gearbox shaft 2797 CPR Table 3: Motor specifications. 4.2 DC motor controller Sabertooth 2x25 motor controller has two channels and is rated at a maximum of 25A per channel continuous. It also offers different modes of controlling it, by traditional RC receiver, 0-5V analog signals for each motor or by supplying it with a byte (0-255) sent over serial line. The required form of input is selected on a DIP switch on the motor controller. One will observe that the specifications of this motor controller far outperforms the requirements posed by the AGV discussed in this paper. If the controller had not already been supplied a lower spec and cheaper controller would have been a better option seen from an efficient design point of view. 4.3 NI myRIO myRIO is a microcontroller developed by National Instruments based on their RIO architecture. RIO is an acronym for reconfigurable input/output and is based on four components: a processor, a reconfigurable FPGA, measurement I/O hardware, and LabVIEW. [15] The microcontroller run a real-time version of the Linux operating system.[16] 4.4 LIDAR Hukoyu URG-04LX is a scanning laser range finder. It has a scan angle of 240 degrees and maximum radius of 4000mm. Pitch angle is 0.36 degree (resolution). The sensor outputs a maximum of 683 datapoints per scan at a maximum scan rate of 10Hz. Rated accuracy is ±1% of measurement in the range 1000mm-4000mm, for ranges less than 1000mm rated accuracy is ±10mm. The LIDAR scan is read as an array consisting of range with corresponding angle data. This will be useful for obstacle avoidance, mapping and navigation. 4.5 Infrared sensor Sharp 2Y0A21 sensor returns an analog voltage representing the range to a detected surface. An LED in the device emits infrared light which is reflected by the surface that the device is aimed at, a PSD in the device receives this reflected light resulting in an analog voltage being transmitted from the sensor to a connected controller. An increase in distance is determined by a drop in voltage. Recommended range for this specific sensor is 10cm to 80cm. This will be used for cliff detection, ie. determine if the AGV is about to drive off a staircase or other platform. 34
  • 35. 4.6 LiPo battery 4 COMPONENTS 4.6 LiPo battery A 4-cell 14.8V 5000mAh battery was supplied for the project. Relative to the size of the AGV, this is a large and heavy battery. Seeing that one of the possible uses for the AGV might be to deliver parcels and documents between offices in a large building throughout a full working day of 8 hours, a generous battery capacity is advantageous. 4.7 Voltage regulator The LiPo battery supplies a varying voltage to the system in the range between ~15V and ~12V depending on its charge state. However, not all components have a built-in voltage regulator to accept this range. For instance, the Hokuyo LIDAR requires an external supply of 800mA at 5V. We also realize that external utilities like lights and cameras might be added to the system to expand its functionality. Therefore, two voltage regulators are used. Input Output VR #1 12V - 24V 5V, 2.1A VR #2 22.2V - 12V 12V, 1A Table 4: Overview of input and output voltage of the voltage regulators. 35
  • 36. 5 PROGRAM 5 Program The software for the AGV is made in LabVIEW in the visual programming language G. G has a flowchart like pro- gramming model and works by connecting wires, that transfer data, between nodes. LabVIEW supports modularity with the use of "Sub-VI’s" that lets one create custom nodes. The program consists of the following major parts, which will be described in better detail in their respective sub chapters. Motor drive, data acquisition, landmark extraction, landmark association and mapping. 5.1 Scheduling The program is structured into an initialization, periodic tasks and an termination module. The initialization module takes care of initializing the LIDAR and defining global constants. The termination module takes care of turning off the LIDAR and stopping the motors. The periodic tasks module contains the main robot logic and has multiple parallel loops executing at different frequencies. At 100Hz, the task controlling the motor controller is executed. At 33, ¯3Hz, the task controlling the navigation system is executed. At 10Hz, the task controlling the localization and mapping is executed. 5.2 Motor control Motor control is accomplished by utilizing two PID controllers, one for each motor. Where the set point for each PID is the reference rotational velocity for the gearbox output shaft. The process variable is the measured shaft rotational velocity. The PID’s are experimentally tuned during testing to give quick and smooth response to change in reference velocity within the desired working envelope. The output of each PID is conditioned before being sent as a byte to the motor hardware controller. The sabertooth controller offers different modes of signal input in order to control each motor. Among the different modes are analog signal mode which is the most popular one employing a 0-5 volt signal for each motor. Where 2.5V is stop, 0V is full reverse and 5V is full forward. During testing, we experienced undesirable behavior in analog mode, when the myRIO controller was started or restarted there would be a short moment where the analog output channels sent a 0V signal to the motor controller which caused each motor to jerk violently. This could be circumvented by using a relay to control power between the sabertooth unit and the battery by activating and disabling the relay from the myRIO unit. However the sabretooth offered a mode where a byte could be sent over UART to control each motor instead of the traditional analog mode. In this mode, the motors would stay disabled during boot up of the myRIO. Reference velocity for the gearbox shaft is calculated by the unicycle model to differential drive VI. 36
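The conversion performed by that VI follows equations 2.48 and 2.49 from the unicycle model. A minimal sketch is shown below; the dimensions in the usage example are assumed values, not the AGV's actual wheel radius and wheelbase.

def unicycle_to_differential(v, omega, R, L):
    """Convert robot-frame commands to wheel speeds (equations 2.48 and 2.49).

    v     : translational velocity of the robot frame [m/s]
    omega : rotational velocity of the robot frame [rad/s]
    R     : wheel radius [m]
    L     : wheelbase [m]
    Returns (vl, vr), the rotational velocities of the left and right wheels
    [rad/s], which become the set points for the two motor PID controllers."""
    vl = (2.0 * v - omega * L) / (2.0 * R)
    vr = (2.0 * v + omega * L) / (2.0 * R)
    return vl, vr

# Example with assumed dimensions: a pure rotation command (v = 0) gives
# equal and opposite wheel velocities.
vl, vr = unicycle_to_differential(v=0.0, omega=1.0, R=0.05, L=0.30)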
  • 37. 5.2 Motor control 5 PROGRAM We can simplify the control, navigation and obstacle avoidance algorithms by employing the principles of the unicycle model. This lets us design navigation controllers with respect to only two easy-to-understand variables: translational velocity (v) and rotational velocity (omega). R: AGV wheel radius. L: AGV wheelbase. v: translational velocity. omega: rotational velocity. vl: rotational velocity of the left motor. vr: rotational velocity of the right motor. Figure 22: The UniToDiff.vi, which computes vl = (2*v - omega*L) / (2*R) and vr = (2*v + omega*L) / (2*R). Figure 23: Inputs and outputs of the motor control VI. The byte is unsigned and has a range from 0-255, where 0 stops both motors, 1-127 controls motor one and 128-255 controls motor two. The measured shaft rotational velocity is acquired from the wheel encoders. The accumulated ticks since the last iteration of the motor control VI are used to calculate the rotational velocity. For increased precision and response, this loop is set to run every 10 milliseconds (100 Hz). ωm1 = (encTicks · 2π) / (N · ∆t) (5.1) ωm1 is the rotational velocity of motor one. encTicks are encoder ticks since last iteration. N is the resolution of the encoder with respect to the gearbox output shaft. 37
In the example VI shown in figure 24, the inputs (encoder ticks, wheel radius and time since the last iteration) are used to control the motor velocity. The VI also outputs useful measurements to be used elsewhere in the program: encoder ticks, rotational velocity of the output shaft and linear velocity of the wheel mounted to the output shaft. The PID output is conditioned to a byte in the range 1-127 to control direction and velocity of the motor, with the midpoint of 64 being the command to stop motor one.

Figure 24: Code for controlling a motor.

5.3 Sensor data acquisition

The Hokuyo URG drivers included with LabVIEW are used to communicate with the device. The initialization function is called once, at the beginning of the program. It needs to know which device to initialize through an identifier specified by the Virtual Instrument Software Architecture (VISA) standard. The VISA format is Interface Type[board index]::Address::INSTR, and in the case of the Hokuyo URG-04LX connected to the USB port on the myRIO, it is USB0::0x15D1:. The close function is called once, at the end of the program, and attempts to turn off the laser before terminating the software connection to the instrument. The data acquisition function takes a start point and end point in degrees and a cluster count as parameters. Cluster count refers to the number of measurements that are clustered together. The function returns a magnitude array and a direction array which are associated: to fetch the first measurement, for instance, the 0th index of each array is accessed.
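Since the driver returns associated direction and magnitude arrays, the downstream modules typically convert each scan point to Cartesian coordinates in the sensor frame before further processing. A minimal Python sketch of that conversion is given below; the argument names are illustrative, not the driver's actual output names.

import math

def scan_to_points(directions_deg, magnitudes_mm):
    """Convert associated LIDAR direction/magnitude arrays to (x, y) points in the
    sensor frame. Index i of both arrays describes the same measurement."""
    points = []
    for angle_deg, r in zip(directions_deg, magnitudes_mm):
        theta = math.radians(angle_deg)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Example: three measurements at -30, 0 and +30 degrees
pts = scan_to_points([-30.0, 0.0, 30.0], [995.0, 1018.0, 1003.0])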
Figure 25: LabVIEW code for LIDAR data acquisition.

The driver functions alone do not, however, allow LabVIEW to interface with the laser scanner. A USB module in the Linux operating system takes control of the laser scanner when it is connected to the USB port, preventing LabVIEW from accessing it. To fix this, we logged into the Linux filesystem and removed the USB module so that it could no longer claim the laser scanner. [17]

5.4 Line extraction

A variety of methods were tried in order to program a fast and robust line extraction algorithm. RANSAC is one such algorithm and is explained in 2.8.1 on page 23. Instead of returning extracted lines directly from the RANSAC algorithm, we return a list of fitted points for the lines. Clustering is then performed on the points to find the actual start and end points of the lines. To find gaps, we check the distance between each point and the next point:

d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}   (5.2)

If the distance is greater than a threshold, we extract a line with its start point at the end of the last gap and its end point at the gap.
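A minimal Python sketch of this gap-based splitting, assuming the scan points are already ordered by angle as the LIDAR returns them; the threshold value and names are illustrative.

import math

def split_at_gaps(points, gap_threshold=0.5):
    """Split an angle-ordered list of (x, y) points into clusters wherever the
    Euclidean distance between consecutive points (eq. 5.2) exceeds gap_threshold."""
    clusters = []
    current = [points[0]]
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        if math.hypot(x2 - x1, y2 - y1) > gap_threshold:
            clusters.append(current)   # gap found: close the current cluster
            current = []
        current.append((x2, y2))
    clusters.append(current)
    return clusters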
Figure 26: Lines extracted from sample data with the RANSAC algorithm.

Split and merge is another line extraction algorithm. It has no notion of gaps, so the data points need to be clustered first. The clustering is similar to the one used in the RANSAC implementation, but here we are interested in the clustered points themselves. Each cluster of points is passed to the split and merge module, which aims to find the corner points making up the extracted lines. The algorithm outline is explained in chapter 2.8.2 on page 25. Two SubVIs are defined in the program, one for split and one for merge.

The split SubVI picks the two extreme points and finds the line between them. It then finds the point which lies farthest away from the line using equation (5.3).

d = \frac{|-ax + y - b|}{\sqrt{a^2 + 1}}   (5.3)

where a is the slope of the line, b is the intercept of the line with the y-axis, and (x, y) is the point. If the distance is large enough, a split is performed. The split SubVI is then called recursively on the left subset of the split, the split point is inserted into the list of corner points, and finally the split SubVI is called on the right subset of the split. The tasks are done in this exact order to ensure that the corner points appear in consecutive order in the list and that the right subset is selected for recursion. The recursion ends when no more subsets can be split.

The found line segments are then sent into the merge SubVI. This SubVI checks consecutive lines for collinearity and removes the point between them if they are collinear enough, effectively merging the lines. The collinearity is found by checking the angle between the lines using equation (5.4).

\alpha = \arctan\frac{m_2 - m_1}{1 + m_1 m_2}   (5.4)

where m_1 and m_2 are the slopes of the two lines. The split and merge module returns a linestrip: a list of vertices defining one or more lines connected at the vertices.
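The recursion described above can be sketched in Python as follows. This is a simplified illustration rather than the SubVI code: the point-to-line distance of equation (5.3) is written in two-point form, the collinearity check of equation (5.4) is computed from segment directions (which avoids infinite slopes but expresses the same angle), and the threshold values are illustrative.

import math

def point_line_distance(p, a, b):
    """Distance from point p = (x, y) to the infinite line through points a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    num = abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den if den > 0 else math.hypot(x - x1, y - y1)

def split(points, threshold):
    """Recursively split the ordered cluster, returning its corner points (a linestrip)."""
    first, last = points[0], points[-1]
    dists = [point_line_distance(p, first, last) for p in points]
    i = max(range(len(points)), key=lambda k: dists[k])
    if dists[i] > threshold and 0 < i < len(points) - 1:
        left = split(points[:i + 1], threshold)
        right = split(points[i:], threshold)
        return left[:-1] + right          # the shared split point is kept once
    return [first, last]

def merge(linestrip, angle_threshold):
    """Remove vertices where two consecutive segments are nearly collinear."""
    out = [linestrip[0]]
    for prev, mid, nxt in zip(linestrip, linestrip[1:], linestrip[2:]):
        a1 = math.atan2(mid[1] - prev[1], mid[0] - prev[0])
        a2 = math.atan2(nxt[1] - mid[1], nxt[0] - mid[0])
        diff = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        if diff > angle_threshold:
            out.append(mid)               # keep the corner: segments are not collinear
    out.append(linestrip[-1])
    return out

# Example: an L-shaped cluster of points
cluster = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0)]
linestrip = merge(split(cluster, threshold=0.05), angle_threshold=0.1)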
Figure 27: Lines extracted from sample data with the split and merge algorithm.

With the simulated data, both RANSAC and split and merge gave promising results, as seen in figures 26 and 27. To decide which algorithm to use, we tested both with real data from the LIDAR in a real environment. The test is described in 6.7 on page 54.

5.5 Mapping

For the robot to communicate what it has learned to the user, a topological map is constructed. The map is updated each time the line extraction algorithm has found lines, which means that a single wall will be drawn multiple times in the same place. The line extraction algorithm finds lines in the robot frame, and these need to be transformed into the global frame to be drawn correctly in the map.

\vec{l}_G = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \vec{l}_R + \vec{R}   (5.5)

where $\vec{l}_R$ is a vertex in the robot frame, $\theta$ is the robot heading, $\vec{R}$ is the robot position and $\vec{l}_G$ is the same vertex in the global frame.
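A small Python sketch of the transform in equation (5.5), applied to every vertex of an extracted linestrip; the function and variable names are illustrative.

import math

def robot_to_global(vertex, robot_pose):
    """Transform a vertex (x, y) from the robot frame to the global frame.
    robot_pose is (rx, ry, theta): robot position and heading in the global frame."""
    x, y = vertex
    rx, ry, theta = robot_pose
    gx = x * math.cos(theta) - y * math.sin(theta) + rx
    gy = x * math.sin(theta) + y * math.cos(theta) + ry
    return gx, gy

def linestrip_to_global(linestrip, robot_pose):
    """Transform every vertex of a linestrip before drawing it into the map."""
    return [robot_to_global(v, robot_pose) for v in linestrip]

# Example: a wall seen 1 m straight ahead of a robot at (2, 3) heading 90 degrees
wall = linestrip_to_global([(1.0, -0.5), (1.0, 0.5)], (2.0, 3.0, math.pi / 2))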
5.6 Landmark extraction

Corners are good, distinct landmarks that the robot can easily extract from the environment. The only problem is that a typical laser scan will contain corners that are not really corners, but merely limits of the LIDAR range or walls concealed by other walls. Because split and merge returns linestrips, we can simply treat every vertex in the linestrip except the first and the last as a corner.

Figure 28: How landmarks are selected.

5.7 Data association

The robot maintains a list of observed landmarks. This list is updated whenever new landmarks are observed, so the robot needs some way to distinguish between new landmarks and previously observed ones. We handle this in a rather simple way: for each landmark in the current observation, the robot goes through all previously observed landmarks and updates the closest one if it lies within a given threshold. If no previously observed landmark is within the threshold, the observation is added to the list as a new landmark.
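A minimal Python sketch of this nearest-neighbour association; the threshold value and the landmark record layout are illustrative.

import math

def associate(observation, landmarks, threshold=0.3):
    """Match an observed landmark position (x, y) against the list of known landmark
    positions. Returns the index of the matched landmark, or None if the observation
    should be added to the list as a new landmark."""
    best_index, best_dist = None, threshold
    for i, (lx, ly) in enumerate(landmarks):
        d = math.hypot(observation[0] - lx, observation[1] - ly)
        if d < best_dist:
            best_index, best_dist = i, d
    return best_index

landmarks = [(1.0, 2.0), (4.0, 0.5)]
match = associate((1.05, 2.1), landmarks)   # -> 0 (re-observed landmark)
new = associate((6.0, 6.0), landmarks)      # -> None (new landmark)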
5.8 EKF SLAM

SLAM is implemented in LabVIEW for the myRIO, using the wheel encoders and the LIDAR, by following the outline of SLAM discussed earlier in the theory chapter.

Figure 29: Block diagram illustration of the SLAM process (prediction step from the encoder measurements, followed by the update step with observation prediction, data association of the extracted landmarks and estimation of the state vector containing pose and map; new landmarks are added to the state).

In the implementation used in this paper, the state vector is expanded with a landmark identifier for each landmark. This identifier stores information about the landmark: whether it is new or re-observed, and what type of feature it is (corner, line, gap, etc.).
Figure 30: Graphical representation of the state vector used in EKF SLAM, containing the robot pose (r_x, r_y, r_\theta) and the landmark states (m_{i,x}, m_{i,y}, m_{i,s}).

The state vector is represented by a column vector of size 3 + 3N, where N is the number of landmarks:

x = \begin{bmatrix} r_x & r_y & r_\theta & m_{1,x} & m_{1,y} & m_{1,s} & m_{2,x} & m_{2,y} & m_{2,s} & \cdots & m_{N,x} & m_{N,y} & m_{N,s} \end{bmatrix}^T

The first three elements are the robot pose and the remaining elements are the landmarks. The covariance matrix contains the robot pose covariance P_{rr}, the robot-landmark cross covariances P_{rl_i} and P_{l_ir}, and the landmark covariances P_{l_il_j} as blocks.

Figure 31: Dimensions of the state vector and covariance matrix.
In the MathScript node, arrays and matrices are indexed differently: an array's first index is 0, while a matrix's first index is 1. It can therefore be convenient to use matrices for all inputs to the MathScript node in order to avoid indexing confusion. The EKF is written in a MathScript node because the writer of this code is more familiar with traditional programming languages like C and C++ than with the graphical programming language of LabVIEW. The MathScript node offers some degree of compilation feedback, errors and warnings, through an indicator attached to the error output node.

We will implement the prediction step, the update step and the addition of new landmarks. We begin with the prediction step. The inputs to this step are the previous state vector, the previous covariance matrix and the control input:

EKFSLAMprediction(x_{k-1}, P_{k-1}, u_k)

\bar{x}_k = f_{k-1}(x_{k-1}, u_k)   (5.6)

\bar{P}_k = F_{k-1} P_{k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T   (5.7)

First we set up the matrices with initial values and build the error matrices according to the error models discussed in the theory chapter. Initially the state vector is empty apart from the robot pose:

x_{k-1} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}   (5.8)

The covariance matrix is initialized accordingly: there are no landmarks and we are fully certain of the initial position, so the initial covariance matrix is

P_{k-1} = P^{rr}_{k-1} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}   (5.9)

With initialization complete, we can write the recursive part of the EKF SLAM algorithm. The control input is the wheel travel recorded from the wheel encoders since the last iteration:

u_k = \begin{bmatrix} \Delta s_r \\ \Delta s_l \end{bmatrix}   (5.10)

f(x, y, \theta, \Delta s_r, \Delta s_l) = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} + \begin{bmatrix} \frac{\Delta s_r + \Delta s_l}{2} \cos\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r + \Delta s_l}{2} \sin\!\left(\theta + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ \frac{\Delta s_r - \Delta s_l}{b} \end{bmatrix}   (5.11)

We compute the error based on the model in the theory chapter:

Q_{k-1} = \mathrm{covar}(\Delta s_r, \Delta s_l) = \begin{bmatrix} k_r|\Delta s_r| & 0 \\ 0 & k_l|\Delta s_l| \end{bmatrix}   (5.12)

The Jacobian matrices F of the motion model with respect to the robot pose, and G of the motion model with respect to the control input, are defined directly in the MathScript code:

F_{k-1} = \nabla_{p} f = \begin{bmatrix} 1 & 0 & -\Delta s \sin(\theta + \Delta\theta/2) \\ 0 & 1 & \Delta s \cos(\theta + \Delta\theta/2) \\ 0 & 0 & 1 \end{bmatrix}   (5.13)

G_{k-1} = \nabla_{\Delta_{rl}} f = \begin{bmatrix} \frac{1}{2}\cos(\theta + \frac{\Delta\theta}{2}) - \frac{\Delta s}{2b}\sin(\theta + \frac{\Delta\theta}{2}) & \frac{1}{2}\cos(\theta + \frac{\Delta\theta}{2}) + \frac{\Delta s}{2b}\sin(\theta + \frac{\Delta\theta}{2}) \\ \frac{1}{2}\sin(\theta + \frac{\Delta\theta}{2}) + \frac{\Delta s}{2b}\cos(\theta + \frac{\Delta\theta}{2}) & \frac{1}{2}\sin(\theta + \frac{\Delta\theta}{2}) - \frac{\Delta s}{2b}\cos(\theta + \frac{\Delta\theta}{2}) \\ \frac{1}{b} & -\frac{1}{b} \end{bmatrix}   (5.14)

\Delta s = \frac{\Delta s_r + \Delta s_l}{2}   (5.15)
\Delta\theta = \frac{\Delta s_r - \Delta s_l}{b}   (5.16)

Finally, we compute the predicted pose $\bar{x}_k$ and covariance matrix $\bar{P}_k$. We are only interested in updating the robot pose in the state vector, that is, the top three rows. Similarly, we only update the 3 x 3 robot covariance and the robot-landmark cross covariances of the full covariance matrix. This is described by the EKF sparse prediction equations used for robot motion:

\bar{x}_{x,y,\theta} = f(x, y, \theta, \Delta s_r, \Delta s_l)   (5.17)

\bar{P}_{rr} = F_{k-1} P^{rr}_{k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T   (5.18)

\bar{P}_{rl} = F_{k-1} P_{rl}   (5.19)

\bar{P}_{lr} = \bar{P}_{rl}^T   (5.20)

Figure 32: Prediction step applied to state vector and covariance matrix.

Figure 33 shows an example MathScript node implementing the EKF SLAM prediction step. Note the connected error constant and indicator, which are very useful for debugging the code.
b = 0.2;                    %robot wheelbase [m]
deltaSr = u(1,1);           %control inputs [m]
deltaSl = u(2,1);
dS = (deltaSr+deltaSl) / 2; %delta S [m]
dT = (deltaSr-deltaSl) / b; %delta THETA [rad], equation (5.16)

%Predict new position
x_new = [x(1,1); x(2,1); x(3,1)] + [dS*cos(x(3,1) + dT/2);
                                    dS*sin(x(3,1) + dT/2);
                                    dT];

%Wrap robot orientation/heading to [-PI,PI]
if x_new(3,1) > pi
    x_new(3,1) = x_new(3,1) - 2*pi;
end
if x_new(3,1) < -pi
    x_new(3,1) = x_new(3,1) + 2*pi;
end

kr = 0.3;                   %Error constant right wheel
kl = 0.3;                   %Error constant left wheel
Q = [kr*abs(deltaSr) 0; 0 kl*abs(deltaSl)];

F = [1 0 -dS*sin(x(3,1) + dT/2);
     0 1  dS*cos(x(3,1) + dT/2);
     0 0  1];

G = [0.5*cos(x(3,1)+dT/2)-0.5*(dS/b)*sin(x(3,1)+dT/2)  0.5*cos(x(3,1)+dT/2)+0.5*(dS/b)*sin(x(3,1)+dT/2);
     0.5*sin(x(3,1)+dT/2)+0.5*(dS/b)*cos(x(3,1)+dT/2)  0.5*sin(x(3,1)+dT/2)-0.5*(dS/b)*cos(x(3,1)+dT/2);
     (1/b)                                              -(1/b)];

%Predict new covariance matrix using the EKF sparse prediction equations
P_new = P;
P_new(1:3, 1:3) = F*P(1:3,1:3)*F' + G*Q*G';
P_new(1:3, 4:length(P)) = F*P(1:3, 4:length(P));
P_new(4:length(P), 1:3) = (P_new(1:3, 4:length(P)))';

x = x_new;
P = P_new;

Figure 33: EKF SLAM prediction step, MathScript node.

The update step applies to the full state vector and the full covariance matrix.
Figure 34: Update step applied to state vector and covariance matrix.

For each re-observed landmark in $z_k$, compute the observation error covariance matrix

R_k = \begin{bmatrix} k_{range} \cdot z_r & 0 \\ 0 & k_\alpha \end{bmatrix}   (5.21)

the Jacobian of the observation model for landmark j

H^j_k = \begin{bmatrix} -\frac{m_{j,x}-x_{k,x}}{\sqrt{q}} & -\frac{m_{j,y}-x_{k,y}}{\sqrt{q}} & 0 & \cdots & \frac{m_{j,x}-x_{k,x}}{\sqrt{q}} & \frac{m_{j,y}-x_{k,y}}{\sqrt{q}} & 0 & \cdots \\ \frac{m_{j,y}-x_{k,y}}{q} & -\frac{m_{j,x}-x_{k,x}}{q} & -1 & \cdots & -\frac{m_{j,y}-x_{k,y}}{q} & \frac{m_{j,x}-x_{k,x}}{q} & 0 & \cdots \end{bmatrix}   (5.22)

where q = (m_{j,x}-x_{k,x})^2 + (m_{j,y}-x_{k,y})^2, the first three columns correspond to the robot pose and the second group of columns is placed at the position of landmark j in the state vector, and the innovation covariance matrix

S_k = H_k \bar{P}_k H_k^T + R_k   (5.23)

The Kalman gain is

K_k = \bar{P}_k H_k^T S_k^{-1}   (5.24)

Apply the Kalman gain to the state vector,

x_k = \bar{x}_k + K_k(z_k - h_k)   (5.25)

and to the covariance matrix,

P_k = (I - K_k H_k) \bar{P}_k   (5.26)

Repeat until all re-observed landmarks have been handled.
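The update loop above can be sketched in Python with NumPy. This is a schematic illustration of equations (5.21)-(5.26) under the assumptions that each landmark occupies three state entries (x, y, signature) starting at index 3 + 3j, that the predicted observation is the usual range/bearing function, and that the error constants are placeholders; it is not the MathScript node used on the robot.

import numpy as np

def ekf_update_landmark(x, P, z, j, k_range=0.01, k_alpha=0.01):
    """One EKF update for a re-observed landmark j, with measurement z = (range, bearing)."""
    z = np.asarray(z, dtype=float)
    lm = 3 + 3 * j                        # index of landmark j in the state vector
    dx = x[lm] - x[0]
    dy = x[lm + 1] - x[1]
    q = dx**2 + dy**2
    sq = np.sqrt(q)

    # Predicted observation h(x): expected range and bearing to landmark j
    h = np.array([sq, np.arctan2(dy, dx) - x[2]])

    # Observation Jacobian H (eq. 5.22), non-zero only for the robot pose and landmark j
    H = np.zeros((2, len(x)))
    H[:, 0:3] = [[-dx / sq, -dy / sq, 0.0],
                 [ dy / q,  -dx / q, -1.0]]
    H[:, lm:lm + 3] = [[ dx / sq,  dy / sq, 0.0],
                       [-dy / q,   dx / q,  0.0]]

    R = np.diag([k_range * z[0], k_alpha])        # eq. (5.21)
    S = H @ P @ H.T + R                           # eq. (5.23)
    K = P @ H.T @ np.linalg.inv(S)                # eq. (5.24)

    innovation = z - h
    innovation[1] = np.arctan2(np.sin(innovation[1]), np.cos(innovation[1]))  # wrap bearing

    x = x + K @ innovation                        # eq. (5.25)
    P = (np.eye(len(x)) - K @ H) @ P              # eq. (5.26)
    return x, P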
When handling new landmarks, the state vector is expanded, as is the covariance matrix.

Figure 35: New landmark expanding the state vector and covariance matrix.

The mean (location) of the new landmark is computed with the inverse of the observation function:

g(z_k, x_k) = \begin{bmatrix} z_{k,range} \cos(z_{k,\alpha} + x_{k,\theta}) + x_{k,x} \\ z_{k,range} \sin(z_{k,\alpha} + x_{k,\theta}) + x_{k,y} \end{bmatrix}   (5.27)

The covariance of the new landmark is either set to a high value and then refined when the landmark is re-observed, or initial covariance values are calculated with the Jacobians $J_{xr}$ and $J_z$ of g with respect to the robot pose and the measurement:

J_{xr} = \frac{\partial g(z_k, x_k)}{\partial x_{k,(x,y,\theta)}} = \begin{bmatrix} 1 & 0 & -z_{k,range} \sin(z_{k,\alpha} + x_{k,\theta}) \\ 0 & 1 & z_{k,range} \cos(z_{k,\alpha} + x_{k,\theta}) \end{bmatrix}   (5.28)

J_z = \frac{\partial g(z_k, x_k)}{\partial z_{k,(\rho,\alpha)}} = \begin{bmatrix} \cos(z_{k,\alpha} + x_{k,\theta}) & -z_{k,range} \sin(z_{k,\alpha} + x_{k,\theta}) \\ \sin(z_{k,\alpha} + x_{k,\theta}) & z_{k,range} \cos(z_{k,\alpha} + x_{k,\theta}) \end{bmatrix}   (5.29)

The state vector and the covariance matrix are now fully updated, and the EKF is ready for the next iteration.

5.9 Occupancy grid map

In order to use common pathfinding algorithms like A*, the map has to be converted into an occupancy grid map: a binary map with a given metric resolution that tells whether a cell is occupied or not. We start with a set of corner points, each with a corresponding vector to the end of the wall, collected from the line extraction algorithm. For each corner point, the line between the corner and the end point is parameterized as

\vec{l}(t) = \vec{P}_{start} + t(\vec{P}_{end} - \vec{P}_{start}), \quad t \in [0, 1]   (5.30)

where $\vec{P}_{start}$ is the start point, $\vec{P}_{end}$ is the end point and t is the parameter. The line is traversed by increasing t in small steps, and the grid cell each point lies in is found by dividing the coordinates by the resolution and rounding down to the nearest integer.
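A minimal Python sketch of this rasterization, assuming a fixed cell resolution and a parameter step small enough that no cell along the wall is skipped; both values are illustrative.

import math

def rasterize_wall(p_start, p_end, resolution=0.01, t_step=0.005):
    """Mark the grid cells covered by a wall segment. Returns a set of (col, row) cell
    indices obtained by stepping along the line of equation (5.30) and flooring each
    sample point divided by the cell resolution."""
    cells = set()
    steps = int(1.0 / t_step)
    for i in range(steps + 1):
        t = i * t_step
        x = p_start[0] + t * (p_end[0] - p_start[0])
        y = p_start[1] + t * (p_end[1] - p_start[1])
        cells.add((math.floor(x / resolution), math.floor(y / resolution)))
    return cells

# Example: a 1 m wall along the x-axis, 10 mm cells
occupied = rasterize_wall((0.0, 0.0), (1.0, 0.0))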
Figure 36: An occupancy grid map represented visually.

The occupancy grid map was not implemented in the robot, but was only simulated on the computer.
6 Testing

6.1 White-box testing

To test the software of the robot, white-box testing was used. White-box testing is a method of testing software that tests the internal structure of the program, and it can be applied at the unit and integration level. [18] Unit testing is the testing of individual modules: does a given input yield the expected output? Integration testing is the testing of modules in groups. Unit tests are used to ensure that the code meets its design and behaves as intended.

6.2 Motor accuracy and performance

A good place to start testing is the performance and accuracy of the motors, as this affects how the actuators react to control inputs. We begin by tuning the PID controllers for each motor. The reference input, or set point, for the motor PID controller is the rotational velocity we would like to achieve at the gearbox axle, given in radians per second. The positive rotational direction is counter-clockwise when looking straight at the gearbox output shaft. The feedback, or process variable, is the rotational velocity at the gearbox output shaft, computed from the motor encoder data while accounting for the gearbox ratio. We want the motor to reach the set velocity as quickly as possible while maintaining smooth, jitter-free acceleration with little to no overshoot, and then settle at the reference velocity. For safety and peace of mind the AGV is placed on a platform with the wheels rotating freely without contact with the ground. Keep in mind, however, that the PID parameters obtained at this initial stage will have to be retuned once the AGV is placed on the ground, as the motors then have to accelerate the mass of the complete system and not just the freely spinning wheels.

Recipe for practical PID tuning. While this process will not give the perfect PID parameters, it will get close to them and ensure a decent controller and hence a responsive system: [19]

1. Set Kp = 1, Ki = 0, Kd = 0.
2. Increase Kp (proportional gain) until the motor reaches steady oscillation when given a change in input. Then set Kp to 80% of this value.
3. Increase Ki (integral gain) to eliminate the steady-state error as quickly as possible. Too high a Ki will make the motor oscillate around the set point; if so, reduce it.
4. Keep going back and forth between Kp and Ki, making small changes to further improve overshoot, performance and stability.
5. If possible, introduce a disturbance to the system; in the case of a wheeled robot, simply grab the wheel to restrain its motion and then let go. Observe the response and adjust the controller if an unstable response is observed.
6. Experiment with low values of Kd (derivative gain) for increased responsiveness. Often a PI controller will suffice and Kd can be left at 0. Remember that we want stability and reliability first, and performance second.

Below is a result from tuning the two motors to give a smooth response within the working envelope of this specific differential drive. The boundaries are -6 to 6 radians per second measured on the gearbox output shaft.
Figure 37: PID tuning two motors on a differential drive (front panel; both wheel PIDs ended at proportional gain Kc = 100, integral time Ti = 0.002 min and derivative time Td = 0).

6.3 Straight line test

We wanted to test how well the robot manages to drive in a straight line. The purpose of this test is to check whether the PID parameters are tuned well and whether the physical robot parameters are correct.

6.3.1 Method

A starting point was marked on the floor and an end point was marked 3 m away from it. The robot was placed on top of the starting point with its x-axis aligned with the end point. The robot was then programmed to drive what it believed would be 3 m. The distance between the end mark and where the robot actually stopped was then measured.

6.3.2 Test criteria

An error of less than or equal to 10 mm over the 3 m driven is desired.

6.3.3 Results

         Distance (cm)   Error (cm)
Test 1   302             2
Test 2   302.5           2.5
Test 3   302.6           2.6
Test 4   302.8           2.8
Test 5   302.5           2.5

Table 5: Results from the straight line test.

Assuming there is no systematic error in the encoders, the consistent nature of the error in driven distance points to an error in the programmed radius of each wheel, due to the relation

p = \theta r   (6.1)

The programmed radius of each wheel will therefore have to be increased by (302.5 - 300)/300, approximately 1%, from its current value.
6.4 Rotation test

The radius of the wheels was corrected in the preceding step. Another physical parameter of the robot crucial to accurate navigation is the wheelbase, that is, the distance between each drive wheel's contact point with the underlying surface.

6.4.1 Method

The wheelbase is initially measured with a ruler. The robot is then commanded to rotate four full revolutions, and the error in the final pose is measured.

6.4.2 Test criteria

Ideally, the error should be as close to zero as possible. However, due to the soft pneumatic tires and the linear slack in each gearbox, this is not feasible.

6.4.3 Results

Three tests in each rotational direction were performed, for a total of six runs.

              Angle (degrees)   Error (degrees)
Test 1 CW     -1430             10
Test 2 CW     -1428             12
Test 3 CW     -1428             12
Test 4 CCW    1426              -14
Test 5 CCW    1428              -12
Test 6 CCW    1425              -15

Table 6: Results from the rotation test.

The results are consistent and suggest an underestimation of the robot wheelbase, which makes the robot stop shy of its target in every test by approximately 12 degrees. The programmed wheelbase will therefore be increased by (1440 - 1428)/1440, approximately 1%, from its current value.

6.5 Square drive test

We wanted to test how well the robot manages to drive in a rectangle. The purpose of this test is the same as for the straight line test, in addition to testing how accurate the rotation is.

6.5.1 Method

A plate with the dimensions 1.8 m x 0.8 m was placed on the floor and its outline was marked with adhesive tape, resulting in a path with the aforementioned dimensions. The robot was placed in one of the corners and was programmed to drive in a rectangle of the same dimensions. The distance error and angle error were then measured.

6.5.2 Test criteria

A distance error of less than or equal to 10 mm and an angular error of less than 5° is desired.
6.5.3 Results

         Angle error   Distance error (cm)
Test 1   12°           27.5
Test 2   17°           38.8
Test 3   17°           44
Test 4   20°           37.2
Test 5   14°           40.5

Table 7: Results from the square drive test.

The square drive test shows the robot driving in the pattern of a skewed rectangle and confirms what was indicated during the rotation test: again we see a fairly consistent error over several tests, indicating an underestimation of the robot's wheelbase.

6.6 LIDAR test

We trust the specifications provided by Hokuyo to be more accurate than we could possibly measure and test, given that they have a far better testing setup. But to ensure there is nothing wrong with this specific sensor unit, we wanted to perform a simple test.

6.6.1 Method

The LIDAR was placed 1 m from a 1.8 m long wall. We then performed a scan to test whether the point sensed at θ = 0 was 1 m from the origin and whether the two outermost points were 1.8 m apart. The width of the wall is calculated using equation (6.2).

d = \sqrt{r_1^2 + r_2^2 - 2 r_1 r_2 \cos(\theta_2 - \theta_1)}   (6.2)

where d is the distance between two polar coordinates, r is the magnitude and θ is the direction.

6.6.2 Results

         Distance at θ = 0 (mm)   Leftmost point (mm)   Rightmost point (mm)   Width of wall (mm)
Test 1   1018                     1279∠-40.547°         1356∠43.125°           1758
Test 2   1005                     1266∠-41.953°         1356∠41.016°           1774
Test 3   1008                     1262∠-41.25°          1360∠41.719°           1738

Table 8: Results from the LIDAR test.

We observe that the results are within the tolerances stated in the manufacturer's data sheet.
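Equation (6.2) is the law of cosines applied directly to the polar measurements. A small Python check against the Test 1 numbers:

import math

def polar_distance(r1, theta1_deg, r2, theta2_deg):
    """Distance between two points given in polar form (eq. 6.2)."""
    dtheta = math.radians(theta2_deg - theta1_deg)
    return math.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * math.cos(dtheta))

# Test 1: leftmost 1279 mm at -40.547 deg, rightmost 1356 mm at 43.125 deg
width = polar_distance(1279, -40.547, 1356, 43.125)   # approximately 1758 mm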
6.7 Line extraction algorithm test

Two algorithms for line extraction have been developed for the project. We want to test which of the two implementations, RANSAC and split and merge, performs more accurately and robustly.

6.7.1 Method

The robot was placed in a typical environment with multiple openings and walls at different angles. LIDAR data was then acquired and run through both algorithms, and the results were inspected visually.

The parameters for RANSAC were found experimentally and are as follows:

Inlier threshold (d)              20
Minimum points (T)                20
Points left for exit (N)          20
Number of lines for exit (L)      10
Number of attempts for exit (M)   200
Minimum points for cluster        10
Gap threshold for clustering      500

Table 9: RANSAC parameters used during testing.

The parameters for split and merge were also found experimentally and are as follows:

Split threshold                   100
Angle difference for merging      0.5
Gap threshold                     300
Minimum points for cluster        50

Table 10: Split and merge parameters used during testing.

6.7.2 Result

The results returned by RANSAC were very accurate but not very reliable, while split and merge, on the contrary, gave reliable results with somewhat lower accuracy.

Figure 38: Comparison between RANSAC and split and merge.
This could have been caused by not having optimal parameters. RANSAC has a complexity of S x N x N_trials, while split and merge has a complexity of N x log N, where N is the number of points in the dataset, S is the number of extracted lines and N_trials is the number of trials for RANSAC. [20] The split and merge algorithm requires the data to be sorted by angle, which is exactly what the LIDAR returns. We therefore conclude that split and merge is the obvious choice.

6.8 System test

This is the final test, where we test how well the system works as a whole. The robot is placed in a rectangular corridor that meets itself at the end. It is supposed to drive through this corridor to see how well it manages to simultaneously determine its own position and map the environment.

6.8.1 Result

Figure 39: Results from testing the robot.

The mapping gave promising results, but because of the increasing uncertainty in the mapping over time, a previously visited place was placed wrongly on the map. This is the effect of not handling loop closure, as explained in 2.9 on page 26. To prevent this, a loop closure algorithm has to be implemented.
7 Wrap-up

7.1 Conclusion

This paper has presented a possible real-life implementation of online SLAM. Two flavors were presented: one employing a complementary filter, the other using the popular EKF approach. While the complementary filter is simpler to understand and implement for those less experienced in statistics and probability, the EKF outperforms it in accuracy once it is working. Much literature was researched in the hunt for a solution to online EKF SLAM, and many algorithms were studied. An unfortunate situation that often presented itself was algorithms lacking essential steps, where a simplified version of SLAM was presented, too simple to be implemented on a real system. Errata in both published articles and highly regarded literature also made algorithms fail when attempts were made to implement them. To one not familiar with the field of autonomous robots, this is the biggest hurdle to overcome; being able to sift out the good parts from many sources and combine them into a working solution to the SLAM problem is the path one must travel when tackling this challenge. In this paper a complete algorithm has been presented. It has not, however, been successfully implemented on a physical device, due to unrelated hardware issues; work continues.

The beginnings of a navigation system have also been implemented. Employing, among other techniques, a vector field histogram and behavior-based navigation, the robot is able to navigate safely in its environment and avoid static as well as dynamic obstacles. Waypoints may also be sent to the robot, and it will attempt to find a safe route to the various waypoints in the world. Dealing with convex obstacles remains to be implemented.

A differential drive robot has also been built for this thesis. The drive train and base plate were already given, while the body had yet to be designed. It was initially drawn in SolidWorks, and the models were then exported for 3D printing. Lastly, a series of test cases were written and performed, ranging from unit testing (e.g. the encoders) to module testing (e.g. driving in a straight line). Full system testing of online SLAM was planned but not possible due to reliability and accuracy issues still to be resolved. Suggestions for further work and improvement follow.

7.2 Further work

As the project approached its end, we still had some ideas that could further improve the performance of the robot.

The occupancy map is currently stored in a grid data structure. To use the occupancy map together with the A* module in LabVIEW, it had to be stored in a 64-bit double-precision floating-point format. With a resolution of 10 mm, a 100 m² map would use 8 MB of memory. By using a quadtree structure we could have low resolution where the robot has not mapped and high resolution where it has, decreasing the memory used by the map; empty areas it has visited do not need high resolution. A quadtree is a tree data structure where each node has exactly four children.

Array manipulation is used extensively in the code running on the robot. The standard array manipulation functions in LabVIEW make a copy of the array in memory each time it is altered. Using the in-place element structure, the operations can be done on the data directly, avoiding unnecessary copies of the array and thus increasing performance.
Figure 40: Converting an array of Cartesian coordinates to polar coordinates using the in-place element structure.

Landmarks may mistakenly be placed in the wrong location if the robot pose is determined wrongly, which would further confuse the localization algorithm. An algorithm that removes landmarks which no longer appear to exist would make this less of a problem.

The gear motors provided for the project had a considerable amount of backlash. In addition, the tires were very soft and flexible. In combination, these two problems made the odometry less accurate. By upgrading the gear motors and wheels, we predict that the odometry would be improved.
References

[1] R. Maxwell. (2013, January) Robotic mapping: Simultaneous localization and mapping (SLAM). [Online]. Available: https://www.gislounge.com/robotic-mapping-simultaneous-localization-and-mapping-slam/
[2] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. MIT Press, 2005, ch. 5, p. 118.
[3] P. C. Glasser. (2015, May) An introduction to the use of complementary filters for fusion of sensor data. [Online]. Available: http://glassercommunications.com/paul/samples/filters_for_fusion.pdf
[4] H. M. Choset, Principles of Robot Motion: Theory, Algorithms, and Implementation. MIT Press, 2005, ch. 8.
[5] Wikipedia. (2016) Extended Kalman filter — Wikipedia, the free encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/Extended_Kalman_filter
[6] Wolfram MathWorld. (2016, May) Jacobian. [Online]. Available: http://mathworld.wolfram.com/Jacobian.html
[7] Generation Robots. (2016) Various techniques for positioning a mobile robot in a space. [Online]. Available: http://www.generationrobots.com/en/content/52-localize-with-a-hokuyo-laser-range-finder
[8] Hokuyo. (2005, October) Scanning laser range finder URG-04LX specifications. [Online]. Available: http://www.hokuyo-aut.jp/02sensor/07scanner/download/pdf/URG-04LX_spec_en.pdf
[9] K. O. Arras. (2009, June) Introduction to mobile robotics. [Online]. Available: http://ais.informatik.uni-freiburg.de/teaching/ss09/robotics/slides/feature_extraction.pdf
[10] B. Williams, M. Cummins, J. Neira, P. Newman, I. Reid, and J. Tardós, "A comparison of loop closing techniques in monocular SLAM," Robotics and Autonomous Systems, vol. 57, no. 12, pp. 1188–1197, 2009.
[11] U. Uttersrud. (2014, November) Datastrukturer for grafer [Data structures for graphs]. [Online]. Available: http://www.cs.hioa.no/~ulfu/appolonius/kap11/1/kap111.html#11.1.4
[12] M. Yan. (2014, January) Dijkstra's algorithm. [Online]. Available: http://math.mit.edu/~rothvoss/18.304.3PM/Presentations/1-Melissa.pdf
[13] H. Muñoz-Avila. (2005, October) The A* algorithm. [Online]. Available: http://www.cse.lehigh.edu/~munoz/CSE497/classes/Astar.ppt
[14] Wikipedia. (2015) Admissible heuristic — Wikipedia, the free encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/Admissible_heuristic
[15] National Instruments. (2014, April) From student to engineer: Preparing future innovators with the NI LabVIEW RIO architecture. [Online]. Available: http://www.ni.com/white-paper/52093/en/
[16] National Instruments. (2015, March) Introduction to NI Linux Real-Time. [Online]. Available: http://www.ni.com/white-paper/14627/en/
[17] X. Smith and N. Asgari. (2014) Robot localisation, navigation and control using National Instruments myRIO. [Online]. Available: https://wiki.csem.flinders.edu.au/pub/CSEMThesisProjects/ProjectSmit0949/Thesis.pdf
[18] Wikipedia. (2016, April) White-box testing — Wikipedia, the free encyclopedia. [Online]. Available: https://en.wikipedia.org/wiki/White-box_testing
[19] F. Haugen. (2010, April) Tuning of PID controllers. [Online]. Available: http://home.hit.no/~hansha/documents/control/theory/tuning_pid_controller.pdf
[20] V. Nguyen, A. Martinelli, N. Tomatis, and R. Siegwart. (2005) A comparison of line extraction algorithms using 2D laser rangefinder for indoor mobile robotics. [Online]. Available: http://infoscience.epfl.ch/record/97571/files/nguyen_2005_a_comparison_of.pdf
A Diagrams

Wire diagram, showing the 24 V power supply, the Sabertooth 2x25 motor controller, the two DC motors with quadrature encoders (channels A and B), the Hokuyo URG-04LX laser scanner connected over USB, the 24V-to-12V and 12V-to-5V adapters, and the myRIO-1900 controller with its A, B and C connectors.