Untethered Autonomous Flight Using Visual and Inertial Sensing on an Indoor Blimp
Andrew Mathieson, supervised by Dr Toby Breckon
Abstract - A prototype autonomous airship and control system
was developed and tested. Despite being among the earliest types
of unmanned aerial vehicles to receive research attention,
airships remain attractive as a platform because they can attain
longer flight times and are inherently more stable in air than
equivalent helicopters or multirotors. Previous autonomous
blimps have required a connection to a computer on the ground,
or carefully calibrated multi-camera rigs in order to calculate
their pose relative to their target. This project instead utilizes
currently available low-cost lightweight hardware and open-
source software to perform all processing on board the blimp.
The controller, running on a Linux microcomputer, combines
pose estimates from a downward looking camera, motion vector
estimates from video and an eleven degree-of-freedom inertial
measurement unit to control the vehicle altitude, heading and
speed in station-keeping and follow-me tests.
The system was tuned and evaluated using indoor test flights.
Index Terms - Autonomous airship, blimp, pose estimation,
unmanned aerial vehicle, visual servoing.
I. INTRODUCTION
UNMANNED AERIAL VEHICLES, popularly referred to as
'drones' or UAVs, first emerged in the 1950s, but in the
last 15 years have become a topic of growing research,
commercial and public interest.
Applications of UAV technology are widespread and ever
increasing, from military and surveillance uses to agriculture
and package delivery. Two different perspectives on civilian
UAV usage are provided by [1] and [2]; numerous other
technology reviews have been written exploring uses in
specific fields [3].
Several factors have combined to drive this expansion. Key
technologies (including System-on-a-Chip (SoC) processors,
lightweight lithium polymer batteries and microelectronic
sensors) have matured and become mass-market as a result of
widespread adoption of smartphone-like devices. These
components are now able to meet the twin requirements for
UAV technologies: real-time performance and small physical
size and weight [4], [5]. Building UAVs was initially only
feasible for well-resourced organisations but today the
open-source, crowd-funding and hobbyist movements have
lowered the barrier to entry. Smaller companies, researchers
and individuals can now share their systems, designs, research
and code with the community [6].
Capitalising on the now widespread availability of UAV
components and associated computing platforms, this project
set out to deliver a blimp system suitable for developing visual
controls, with applications in wide area persistent surveillance
and mapping missions.
The indoor prototype, presented in Figure 1, has a flight
duration of four hours and uses imagery and an Inertial
Measurement Unit (IMU) to station-keep relative to an image
target beneath. Major components are all commercially
available items, resulting in a low cost, lightweight system.
Figure 1 – Prototype blimp system in flight in the lab.
Blimp is 3.5 m long, flying around 4 m above the ground.
In the rest of this section, prior work in UAV airships is
briefly reviewed and the objectives for this project are
outlined. The main sections of this paper then present the
blimp, review a dynamic model suitable for closed-loop
control and outline the autopilot software. The chosen control
techniques are evaluated on the blimp platform before
applications and areas for future work are suggested.
A. Airships and blimps
Airships are traditionally defined as craft whose main
source of lift is buoyancy from an envelope of lighter-than-air
gas, e.g. helium. Blimps are airships without a rigid skeleton.
An ideal blimp is neutrally buoyant in air, only requiring
powered thrust to move or change direction. As a result,
compared to multirotor drones of equivalent payloads, an ideal
blimp has far smaller inflight power consumption.
Lithium Polymer batteries, the current de-facto standard for
UAV applications, have an energy storage density of
~200 Wh/kg [7], so reduced power consumption directly
corresponds to longer flight durations, mitigating one of the
major drawbacks of current research UAVs.
B. Review of selected previous UAV blimps
Blimps as UAVs have received research interest since the
late-1990s, when AURORA, a 9 m long blimp carrying radios,
a forward-looking camera and a PC104 format computer was
built and used for environmental monitoring [8]. The autopilot
for that blimp was developed in a Matlab environment using a
detailed model of the non-linear blimp dynamics.
For navigation, AURORA primarily used differential-GPS
to fly between waypoints but the forward-looking camera was
also utilised to follow parallel lines such as road markings [9].
That project was successful and contributed important
applications of control theory to blimps.
The complications of outdoor flight, in particular wind
gusts and aviation regulations, meant that, to date, most
research blimps have been designed for indoor flight and have
used the well-established model of a ground controller to
perform computations, relaying instructions to the in-flight
hardware.
The usual position location method for drones, GPS or
similar, is known to be unreliable indoors [10], so research has
instead explored vision based navigation.
In [11] a blimp was localised using data from static
cameras; this work was repeated in [12] using a roof-mounted
camera that looked down on the blimp to estimate its pose
relative to the floor. Both achieved a stable closed loop
response.
Building on this, the multicamera motion-capture system
used in [13] was able to achieve path-planning around
multiple objects and through doors, maintaining knowledge of
the blimp position to within ±30 mm. Whilst such fine
accuracy is very desirable in an indoor blimp, outside of such
carefully controlled environments this approach cannot navigate at all.
The other approach, as favoured by this project and many
others, is to place a small lightweight camera on-board the
blimp and estimate the pose of the blimp based on features in
the environment.
In [14], a 1.5 m long remote-controlled blimp platform able
to carry a pan/tilt wireless camera was built and flown. That
project included consideration of component weight and lift
capacity. Low-cost servomotors, required due to weight
constraints, caused some electrical noise problems.
C. Objectives for this project
This project built on the extensive literature to deliver a
blimp platform suitable for control engineering and UAV
vision development.
The test case for the blimp is station-keeping relative to a
target. If the target is stationary this behaviour corresponds to
position holding and is an important prerequisite for docking
and rendezvousing [15]. If the target moves, ability to follow
it is useful for tasks such as wildlife monitoring and crowd
observation. Following a moving target is also a prerequisite
for more complex behaviours including search and vehicle
tracking.
Previous projects relied on communication to a ground
station to augment their on-board systems and the blimp itself
was unable to fly without the help of equipment on the
ground. In this project, the control system is realised
completely on board the blimp. Moving away from reliance on
ground-based computation/location will allow β€˜untethered’
teleoperation, operation in potentially hazardous
environments, and operation in environments where good
positioning may be hard to acquire.
II. THE BLIMP
The contribution of this project has been to demonstrate
untethered autonomous flight of a blimp using entry-level
components. The work culminated in building a prototype
whose specifications are summarised in Table 1:
Envelope: 3.5 m long x 2 m ⌀ x 0.8 mm thick PVC, weight 3.73 kg. Encloses 4.5 m³; net lift 1 kg.
Propulsion & steering: Two 200 mm ⌀ propellers, each with an 8 V brushless DC motor; vectored thrust.
Power supply: 5500 mAh 11.1 V Lithium Polymer battery.
Control: Raspberry Pi 2, teleoperated via Wi-Fi.
Sensing & electronics: IMU; 720p HD camera; custom PCB stackup integrating all components; optical isolation to reduce electrical noise.
Table 1 – Blimp as-built specification
Using commodity parts where possible resulted in a total
component cost of £140; the gas-retaining envelope cost a
further £200. A graphical breakdown is given in Appendix A.
A. Envelope and weight budget
In the literature, most blimps built for indoor research were
around 2 m in size. Procurement issues with small blimp
envelopes required this project to use a slightly larger 3.5 m
envelope, which weighs 3.73 kg and can carry a 1 kg payload.
Variable atmospheric conditions can affect the lifting capacity
by ±75 g between flights, so ballast is employed to trim the
blimp to neutral buoyancy prior to each flight.
Components were selected to minimise weight, with some
custom work where off-the-shelf parts would have been
unacceptably bulky. The payload weight (excluding envelope
self-weight) is broken down by component in Figure 2.
Figure 2 – Payload weight breakdown, units kg: 5500 mAh LiPo battery 0.45; frame, bolts and other 0.31; Raspberry Pi, Wi-Fi and camera 0.05; motor and servos 0.07; case and other ABS parts 0.16; PCBs and electronics 0.06; IMU and PWM driver 0.01; ballast 0.02.
B. Microcomputer choice
Numerous lightweight embedded platforms are now
available. A recent review of some of these platforms with
relevance to UAVs may be found in [4]. This project selected
the Raspberry Pi 2 Linux microcomputer, primarily for its
matching camera module and hardware support. The software
architecture, described in Section IV, is split into four
concurrent threads to utilise the quad-core ARM CPU.
The camera module connects directly to the Broadcom SoC
using a CSI bus, so some of the video processing
workload can be handled in the GPU. In Section V this
capability is used to estimate the motion of the blimp
alongside the main CPU-bound pose estimation routine. Other
peripherals are connected to an I2C bus and General Purpose
Input/Outputs (GPIOs).
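As an illustration of this bus layout, the sketch below polls one of the IMU's sensors over I2C using the smbus2 Python library. The register map shown (WHO_AM_I at 0x0F returning 0xD7 on an L3GD20H at address 0x6B) follows the ST datasheet defaults for this breakout; the flight code itself uses the RTIMULib driver [24] rather than raw register reads.

```python
from smbus2 import SMBus

GYRO_ADDR = 0x6B   # default I2C address of the L3GD20H on this breakout
WHO_AM_I = 0x0F    # identity register; an L3GD20H reads back 0xD7

with SMBus(1) as bus:   # I2C bus 1 is exposed on the Raspberry Pi header
    ident = bus.read_byte_data(GYRO_ADDR, WHO_AM_I)
    print(f"gyro WHO_AM_I = {ident:#04x}")
```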
C. Electronics
The payload electronics design for this project is shown as a
block diagram in Figure 3. The camera module and
microcomputer are located at the bottom of the stackup and
receive 5 V power from the logic power regulator. Above this,
a custom PCB integrates the IMU and Pulse Width
Modulation (PWM) output driver. A second custom PCB
locates the optical isolation and buffering required for control
of the motors and servos.
Figure 3 – Electronics block diagram
The two primary propulsion units are brushless DC motor
driven propellers as used in quadcopters. Directional control is
achieved by vectoring the thrust direction using servo motors
with a range of ±90°. A secondary board provides sixteen extra
channels of 12-bit PWM output for the microcomputer
(NXP Semiconductors PCA9685) [30].
The 11 degree-of-freedom inertial measurement unit
(Adafruit Industries [16]) integrates three microelectronic
sensors: L3GD20H 3-axis gyroscope; LSM303DLHC 3-axis
accelerometer and 3-axis magnetometer; BMP180 barometric
pressure and temperature sensor. Both boards communicate
with the microcomputer via an I2C bus.
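To make the PWM arithmetic concrete, the sketch below maps a thrust-vector angle onto a 12-bit off-count for the PCA9685 running at a 50 Hz servo frame rate. The 1.0-2.0 ms pulse endpoints are typical hobby-servo values assumed for illustration, not measured from this blimp's servos.

```python
FRAME_HZ = 50                                # standard servo frame rate
COUNTS = 4096                                # 12-bit PWM resolution
US_PER_COUNT = 1e6 / (FRAME_HZ * COUNTS)     # ~4.88 us per count

def vector_angle_to_counts(angle_deg: float) -> int:
    """Map a thrust-vector angle in [-90, +90] deg to a PCA9685 off-count."""
    angle_deg = max(-90.0, min(90.0, angle_deg))
    pulse_us = 1500 + (angle_deg / 90.0) * 500    # 1.0-2.0 ms pulse width
    return round(pulse_us / US_PER_COUNT)

# e.g. vector_angle_to_counts(0) == 307 counts for a centred servo
```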
Logic systems are supplied with a regulated 5 V supply to
prevent interference from electrically noisy actuators.
The microcomputer takes around 30 s to reboot after an
interruption, during which time actuator control cannot be
assured, so loss of power in flight must be avoided.
It was noted in [14] that electrical noise, particularly from
micro servos, could interfere with effective video capture
under certain circumstances. To prevent this, this project
implemented optocouplers on the PWM and GPIO signal
lines. Six twin-channel optocouplers are included; each
buffered to the correct voltage for driving servos and motors
using an op-amp. Component values were chosen to achieve
an almost linear throttle response; a circuit diagram is in
Appendix A. Optocouplers also prevent excess current
damage to the microcontroller pins which could occur in fault
conditions such as an accidental short circuit of the motors.
Blimps flown outdoors frequently utilise additional
actuators on the tail for greater manoeuvrability [17]. In
readiness for future work where these may be required, the
design also included two isolated bidirectional DC drivers.
D. Power
The main flight battery, powering the motors and servos, is
a 5500 mAh Lithium Polymer battery, weight 0.448 kg. A
secondary battery (capacity 1000 mAh, weight 58 g) was
provided to allow use of the logic systems independently of
the motors, allowing communication and remote control
functionality to be confirmed before the motors are powered.
The main battery can power the electronics for a theoretical
maximum duration of greater than 7 hrs. During flight tests of
manoeuvring and active station keeping, a low level of motor
thrust was maintained continuously, so the estimated real-
world battery life of the system is around 4 hrs.
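These endurance figures are consistent with a rough energy budget (a sanity check from the quoted numbers, not a measured figure):

$$E = 5.5\,\mathrm{Ah} \times 11.1\,\mathrm{V} \approx 61\,\mathrm{Wh}, \qquad P_{avg} < \frac{61\,\mathrm{Wh}}{7\,\mathrm{h}} \approx 8.7\,\mathrm{W}$$

so the >7 hr theoretical maximum corresponds to an average electrical draw below about 9 W, rising to roughly 15 W for the 4 hr estimate with the motors continuously producing low thrust.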
Current mid-market multirotors, by comparison, typically
have quoted flight times less than 1 hr with little ability to
carry spare batteries so in this respect the advantage of lighter-
than-air platforms remains clear. The complete system is
housed in a 3D-printed case, with aluminium arms that
support the propeller units, as shown in Figure 4.
Figure 4 – Blimp payload: vectoring servos, brushless motors, downward-looking camera and 3D-printed case with PCBs inside; the frame attaches to the envelope.
Inset – Logic PCB showing IMU and PWM breakouts
III. BLIMP DYNAMICS
Developing a controller for the blimp begins with
understanding its flight characteristics. In this section, the
coordinate frames used are defined and then the method
of [15] is followed to formulate an equation of motion for the
blimp. The focus throughout is on the sources of simplifying
assumptions and how they affect the controller design.
A. Coordinate frames
A blimp has a minimum of six degrees of freedom: the
X, Y, Z position and yaw, pitch, roll rotations relative to the
ground, plus any extra degrees of freedom required to model
changes in shape during flight. These extra deformational
degrees of freedom can be neglected by assuming that the
envelope of the blimp is inflated taut, so does not deform,
and that the payload is rigidly affixed to the envelope.
For the rigid body motions, an ego-centric coordinate
system is used, with its origin at the centre of mass of the
payload. Instead of tracking the motions of the blimp relative
to the world, the system tracks the motion of objects in the
world relative to itself. In the egocentric frame, the X direction is
positive forwards, Z is positive upwards, and Y is positive to
the right. Yaw, pitch and roll are defined in the conventional
way (Figure 5). The principal axes of the IMU are aligned
with the egocentric coordinate system; transformation of
camera pose estimates into the ego frame is discussed below.
Figure 5 – Blimp coordinate frame and envelope dimensions
Hereafter, the pose of the blimp, these six coordinates, is
denoted by the 6x1 vector {x}.
B. Aerodynamics
The blimp is equipped with two main propellers which are
driven at variable speeds (from 0 to 8000 rpm) and directed at
an angle between ±90° to provide vectored thrust. Common-
mode thrust provides forward/backward and up/down control;
differential thrust provides control of yaw. The settings of
these four actuators are the outputs of the controller.
This blimp therefore has more degrees of freedom than
actuators – it is under-actuated. To resolve this, further
simplifying assumptions are required before developing a
controller. The first of these is that π‘π‘–π‘‘π‘β„Ž is decoupled from 𝑋
velocity, and effectively constant. This assumption amounts to
neglecting the effect of aerodynamic lift or downforce as the
blimp flies forward. Aero forces are proportional to velocity
squared, so at the low speeds of flight (typically less than
1 m/s indoors) the symmetrical shape of the blimp envelope
means lift is small and so this assumption is valid. At higher
speeds and in strong prevailing winds, as experienced by
outdoor blimps, these forces do become considerable, so
aerilons and rudders are used on commercial outdoor blimps.
The blimp envelope used here was originally intended for
static advertising use so its four inflatable fins serve no
aerodynamic purpose.
An aerodynamic effect which is not negligible, however, is
drag. The literature on airship drag has used wind tunnel data
to estimate drag coefficients for large, outdoor flying
envelopes of various aspect ratios at various Reynolds
numbers [18]. Authors investigating small indoor blimps such
as this have assumed that drag forces can be assumed to be a
simple function of velocity, in effect assuming that any
transitional, turbulent or form drags can be neglected and that
the only important source of drag is skin friction [15]. As this
blimp is larger than the 2 m model used in [15], this
assumption has been tested by comparing the flight Reynolds
number (i) of this blimp to an estimated critical Reynolds
number Re_crit for blimps in air.
π‘…π‘’π‘“π‘™π‘–π‘”β„Žπ‘‘ =
𝜌 𝑒 π‘₯ 𝐿
πœ‡
=
(1.22 π‘˜π‘” π‘š3⁄ )(1 π‘š 𝑠⁄ )(3.5 π‘š)
(1.84 Γ— 10βˆ’5 π‘ƒπ‘Žπ‘ )
= 2.3 Γ— 105
(i)
Wind tunnel tests of the HALROP stratospheric blimp [18]
suggested that transition from laminar to turbulent boundary
layer (and hence from skin-friction dominated to pressure
dominated drag) occurred at Re_crit = 7.5 x 10⁵. That blimp,
like those analysed in the literature related to transitional flows
around blimp hulls, has a streamlined shape with an aspect
ratio of 1/6. In comparison, Re_crit for a sphere (aspect ratio 1)
is 3.8 x 10⁵ [19]: compared to a sphere, the streamlined blimp
shape reduces the adverse pressure gradient experienced by
the boundary layer, and prevents it transitioning until a higher
Reynolds number.
When inflated, the blimp in this project has an aspect ratio
closer to 1/2, with an almost hemispherical front. As the aspect
ratio of this blimp lies between that of a sphere and that of
HALROP, it is reasonable to assume their respective critical
Reynolds numbers are lower and upper bounds for Re_crit of
this blimp. In any case, as Re_flight is less than the assumed
lower bound for Re_crit, the boundary layer may be assumed to
remain laminar, and the assumption made by [15] of drag
being proportional only to velocity squared holds.
For larger blimps, blimps that fly faster than this, or blimps
with additional control surfaces such as ailerons, the laminar
drag assumption should be re-examined.
Laminar drag coefficients for planar and rotational motions
of a model blimp (with aspect ratio 1/3, Re_flight = 1 x 10⁵)
were obtained experimentally in [17]; in the absence of any
more applicable low-speed data, these values may be utilised.
Neglecting drag coefficients which represent interactions
between degrees of freedom (justified by the low flight
speed), and absorbing constant factors such as the blimp cross-
sectional area and air density into the terms, a 6x6 drag
coefficient matrix [C_d] may be obtained. Multiplying this by
the velocities squared ({ẋ}²) gives the 6x1 vector of drag
forces in the ego frame.
The second simplifying assumption required is that roll and
rolling motion are constant and negligible. This assumption is
validated by concentrating the centre of mass of the
blimp/payload below the centre of buoyancy, which helps
return the blimp to zero roll and pitch.
C. Mass, inertia and centripetal effects
The all-up mass, m, of this blimp is 4.6 kg: 3.73 kg
envelope material m_e, 0.874 kg payload, plus ballast to
balance. Neglecting gas leakage, which is very slow, this mass
is assumed to be constant throughout each flight.
Moments of inertia about this origin for each of the three rotational
motions (J_yaw, J_pitch, J_roll) can be calculated using standard
formulae if the blimp is approximated as an ellipsoid with
major axis 3.5 m and minor axes 2 m. A real blimp also
displaces a large volume of air as it flies or rotates, so its
effective mass and inertia are much greater than would be
expected from the formulae [15]. Coefficients for this
additional mass and inertia (m_a_{x,y,z}, J_a_{yaw,pitch,roll}) are given by
Figure 6.10 in [20], with the aspect ratio b/a = 2/3.5.
Due to the symmetry of modelling the blimp as an ellipsoid,
m_a_y = m_a_z and J_yaw = J_pitch; likewise J_a_yaw = J_a_pitch. J_a_roll is
considered negligible. These form the diagonal terms of the
6x6 mass matrix [M]. The effects of off-diagonal terms, which
would represent inertial coupling between degrees of
freedom, and of centripetal forces, have been ignored
since flight speed is low.
D. External forces
Buoyancy, gravity, forces and moments from the thrusters,
and any other disturbances (e.g. gusts) act as external forces
on the blimp. For an envelope of volume V_e, with air and helium
at laboratory conditions, the lift is calculated with (ii):

$$\mathrm{Payload\ mass} = V_e(\rho_{air} - \rho_{He}) - m_e = 4.5\,\mathrm{m^3} \times (1.22 - 0.17)\,\mathrm{kg/m^3} - 3.73\,\mathrm{kg} \approx 1\,\mathrm{kg} \quad \text{(ii)}$$
Ideally, the weight is balanced by the buoyancy. During
flights this is rarely perfect, so the thrusters either provide a
constant downforce (if too heavy), or the blimp is held at a
constant height using ballast (if too buoyant). By controlling
the vertical component of the thrust, closed-loop regulation of
height is achieved during flights.
The thrusters create a turning moment if a differential thrust
is applied, allowing control of the yaw and roll degrees of
freedom. In common with most airships, there is no ability to
directly actuate in the Y direction; however, the non-linearity
present in the blimp response to thrust means this can usually
be overcome with a combination of X and yaw motions.
In indoor flight gusts can usually be prevented; the
remaining forces are denoted as the 6x1 vector {F_ext}.
E. Dynamic model
The assumptions related to each term in the blimp dynamics
equation (iii) have now been examined.
$$[M]\{\ddot{x}\} + [C_d]\{\dot{x}\}^2 + \{F_{ext}\} = \{0\}, \ \text{from [15]} \quad \text{(iii)}$$
Even this simplified model of the blimp is non-linear. In
Section VI the real-world stability of the blimp is compared to
the predictions of classical control methods using this model.
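For controller prototyping, (iii) can be integrated numerically. The sketch below is a minimal forward-Euler simulation of the simplified model; the diagonal matrices are placeholder values rather than the identified coefficients from [17] and [20], and the signed form |ẋ|ẋ is assumed so that drag always opposes motion.

```python
import numpy as np

# Placeholder 6-DOF diagonal mass and drag matrices (illustrative only)
M = np.diag([5.0, 5.0, 5.0, 8.0, 8.0, 1.0])
Cd = np.diag([0.8, 1.2, 1.2, 0.5, 0.5, 0.1])

def step(x, xdot, f_ext, dt):
    """One forward-Euler step of [M]{x''} + [Cd]{x'}^2 + {F_ext} = {0}."""
    drag = Cd @ (np.abs(xdot) * xdot)          # signed quadratic drag
    xddot = np.linalg.solve(M, -drag - f_ext)  # solve [M]{x''} = -drag - F_ext
    return x + dt * xdot, xdot + dt * xddot
```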
IV. SOFTWARE
Software for controlling blimps was first explored in detail
by the AURORA project [9], which, in common with many
robots, used a hierarchical control model.
In this project, a similar controller has been implemented,
with the enhancement that an additional PC on the ground is
not required. All control work, including handling pilot input
and providing a GUI, is carried out by the microcomputer on-
board the blimp.
In the current implementation, the blimp may be piloted
either by sending instructions from a (Bluetooth) keyboard, or
by connecting wirelessly using Virtual Network Computing
(VNC) to view the GUI. For routine flights VNC is preferred
as it gives the pilot a full overview of the software; however,
important quality-of-service and flight safety concerns arise
when relying on a wireless network for control [21], so direct
control using the keyboard is retained as a fall-back option.
A. Software architecture
The software, written in Python, is structured into four
threads, each of which has a scope of responsibility as outlined
in Table 2. Threads interact by setting and polling flags and
data structures in the global namespace. Mutual exclusion
operators control access to shared data and resources.
Thread | Initialisation | Runtime | Termination
Graphical User Interface Thread | Load data; initialise GUI; start other threads | Respond to pilot input; queue jobs for hardware | Verify termination; save log files
Hardware Abstraction Thread | Set motor home states; get initial IMU pose | Control motors as required by job queue; poll the IMU | Stop and home motors
Autopilot Thread | - | Fuse pose information from camera and IMU; generate autopilot jobs | -
Vision Thread | Open camera, start recording | Visual and motion pose estimation | Close camera; save video
Table 2 – Four-thread software architecture
In operational conditions the GUI Thread is responsible for
starting each of the other threads. When all threads have
successfully initialised, operation may begin. Interactions
between the threads are shown in a flowchart in Appendix B.
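A minimal sketch of this startup pattern is shown below, assuming a shared readiness flag per worker thread protected by a lock; the names are illustrative rather than taken from the project code.

```python
import threading

ready = {"hardware": False, "autopilot": False, "vision": False}
ready_lock = threading.Lock()

def worker(name, loop):
    with ready_lock:
        ready[name] = True   # announce successful initialisation
    loop()                   # enter the thread's runtime loop

def all_ready():
    with ready_lock:
        return all(ready.values())

# The GUI thread starts the workers, then polls all_ready() before
# allowing operation to begin, e.g.:
# threading.Thread(target=worker, args=("vision", vision_loop),
#                  daemon=True).start()
```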
B. Hardware abstraction and error handling
Actuators are represented by objects containing state
variables for direction and thrust. These reside in the
namespace of the Hardware Abstraction Thread, and only that
thread has direct access to the hardware, to manage race
conditions and prevent actuators receiving conflicting
instructions. Other code, such as the GUI, keyboard and
autopilot, generates jobs (instructions describing what
hardware changes are required when) into a queue managed
by the Hardware Abstraction Thread.
Jobs may start immediately or be scheduled for the
future; jobs may have variable priorities; jobs may be blocking
(having a defined execution time) or non-blocking
(assumed to take place as soon as they are due); and they may
or may not recur at a future time. When jobs are added
to the queue, the logic in Figure 6 is applied to decide if a
hardware update is required; otherwise, the thread polls the
IMU for new pose measurements.
Figure 6 – Job prioritisation logic
This design ensures good pose estimates are available when
required, and also ensures a high-priority job (e.g. an
emergency stop) is able to override any lower priority jobs
that might otherwise block the queue.
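The sketch below illustrates these queue semantics with Python's heapq, ordering jobs by (priority, start time) so that an urgent job (lower priority value) is popped ahead of routine ones; field names are illustrative, and the real queue also handles blocking and recurring jobs.

```python
import heapq
import itertools
import time

queue = []              # heap ordered by (priority, start_time); 0 = most urgent
_seq = itertools.count()  # tie-breaker so tuples never compare the action itself

def put_job(priority, delay_s, action):
    heapq.heappush(queue, (priority, time.time() + delay_s, next(_seq), action))

def next_due_job():
    """Pop the most urgent job whose start time has arrived, else None.
    Simplified: a future-dated urgent job shadows due routine jobs."""
    if queue and queue[0][1] <= time.time():
        return heapq.heappop(queue)
    return None         # nothing due: the thread polls the IMU instead
```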
In the event of an unexpected runtime problem that is not
caught by its parent thread, the main thread issues stop
instructions to the hardware job queue and terminates the other
threads. This ensures that the actuator outputs (particularly the
brushless motors) are not left in an undetermined state
post-termination.
C. Multitasking performance
By splitting compute-intensive work such as vision
processing into separate threads, the job queue can remain
responsive to pilot input even if another thread is busy.
The four-threaded structure is also well suited to the quad-
core processor of this microcomputer allowing the system to
achieve a maximum throughput of 5.8 fps using 320 x 240 px
images. In comparison, the single core predecessor Model B+
could only achieve around 1 fps at this image size.
V. POSE ESTIMATION & CONTROL
In the previous three sections the prototype vehicle has been
presented, a dynamic model for blimps in flight outlined, and
the high level software stack described. The challenge of
controlling the prototype is now addressed. In the context of
the four-thread architecture, the following methods are
implemented by the Vision Thread and the Autopilot Thread.
A. Pose estimation from camera
For any control problem, reliable information about the
current system state is essential. Here, this is the pose vector
{x} = {X, Y, Z, yaw, pitch, roll}^T.
As noted above, this blimp is not equipped with GPS
(unreliable indoors) or a differential location method (useless
without calibrated equipment independent of the blimp).
Instead, the most effective sensor for pose estimation is the
on-board camera. The camera was calibrated using the
technique of [22] to calculate its distortion matrix – before
each flight this is loaded from a file. Incoming video frames
are converted to grayscale and de-distorted using the
preloaded camera matrix. The image is then checked for the
presence of the target – a planar chessboard pattern. Detection
of the chessboard during flight uses the same method as in
calibration – if found, the pose may then be estimated.
The camera is located in the centre of the payload facing
straight down onto the floor and target, allowing easy
calculation of the rotation [R] and translation {t} relative to
the target using the perspective-n-point solution of [23].
Requiring the floor and image plane to remain parallel is
equivalent to assuming that roll and pitch disturbances can
be neglected. In Section III it was seen that the slow flight speed
and high inertia of the blimp ensure this is usually the case.
Expressing the rotation as a 3x3 Rodrigues-form matrix, the
rotational pose coordinates are recovered using (iv) and the
positional pose coordinates {X, Y, Z} using (v):
$$\begin{Bmatrix} yaw \\ pitch \\ roll \end{Bmatrix} =
\begin{Bmatrix}
\tan^{-1}\left(R_{1,2}/R_{1,1}\right) \\
-\tan^{-1}\left(R_{3,2}/R_{3,3}\right) \\
\tan^{-1}\left(\dfrac{-R_{3,1}}{\sqrt{(R_{3,2})^2 + (R_{3,3})^2}}\right)
\end{Bmatrix} \quad \text{(iv)}$$

$$\{X, Y, Z\}^T = -[R]^T \{t\} \quad \text{(v)}$$

where R_{1,2} etc. refer to terms of the Rodrigues matrix.
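A per-frame sketch of this pipeline using OpenCV's standard calls is given below; the board size and square pitch are placeholders, the distortion coefficients are passed to solvePnP rather than de-distorting the whole frame first, and the flight configuration (two subpixel refinement iterations on a 320x240 px image) is discussed next.

```python
import cv2
import numpy as np

def frame_pose(gray, K, dist, board=(9, 6), square_m=0.025):
    """Return ((X, Y, Z), (yaw, pitch, roll)) per (iv) and (v), or None."""
    found, corners = cv2.findChessboardCorners(gray, board)
    if not found:
        return None
    corners = cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1),
                               (cv2.TERM_CRITERIA_COUNT, 2, 0.0))
    obj = np.zeros((board[0] * board[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)                      # 3x3 rotation matrix
    yaw = np.arctan2(R[0, 1], R[0, 0])              # (iv)
    pitch = -np.arctan2(R[2, 1], R[2, 2])
    roll = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))
    return (-R.T @ tvec).ravel(), (yaw, pitch, roll)   # (v)
```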
The pose estimation method is run on a small image,
320x240 px, with two iterations of subpixel refinement to
improve reliability of the pose data supplied to the autopilot
controller. The trade-off between reliability and framerate in a
bench test is assessed in Table 3. In this test, a β€˜bad pose
estimate’ is defined as a disagreement of greater than 20Β° from
ground truth in any of the three Euler angles, while the blimp
is stationary over the target for one minute.
As expected, larger images give superior accuracy; however,
a very slow frame rate is undesirable for controller stability.
Equally, the marginal decrease in bad pose rate using
unlimited refinement does not justify the frame rate penalty
incurred. The selected configuration, two refinement iterations
on the small image, allows refined images to be processed at
4.8 fps. Of the good estimates, 90% are within ±5° of ground truth.
To try to counter this 5° variability by continuously
operating the actuators would be inefficient and would shorten
component lifetimes, so instead the controllers ignore fluctuations
in this small range. Once the blimp has been controlled to within
tolerance of the target heading, the natural stability of
the blimp in the air helps it to remain in place.
Rate of bad pose estimates and frame rate
Image size | No subpixel refinement | Two iterations of refinement | Unlimited iterations
Small image 320 x 240 px | 0.72 bad poses/s at 5.8 fps | 0.43 bad poses/s at 4.8 fps | 0.39 bad poses/s at 2.2 fps
Large image 640 x 480 px | 0.11 bad poses/s at 1.8 fps | 0.09 bad poses/s at 1.8 fps | 0.08 bad poses/s at 0.7 fps
Table 3 – Reducing the rate of bad pose estimates by changing image size and number of refinement iterations. In the 'unlimited iterations' case, refinement stops when the change in corner location is less than 0.001 px.
B. Pose estimation from Inertial Measurement Unit
The IMU, as described above, contains five sensors:
accelerometer, magnetometer (compass), gyroscope (each
with 3 orthogonal axes), barometric pressure and temperature.
The pressure sensor gives a direct reading of the height above
sea level of the blimp, using the ISA barometric formula. The
nominal resolution of this sensor is ±0.1 m, although in test
conditions this variability was found to be ±0.25 m. Tests on
the bench indicated that height readings are normally
distributed about the true height, so in flight, variability of
height estimates is smoothed by taking the mean of the most
recent ten samples. Flight altitude Z is calculated by
subtracting the height above sea level at takeoff from the
current height above sea level.
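A minimal sketch of this altitude estimate, assuming the standard ISA constants and abstracting away the sensor driver:

```python
from collections import deque

P0 = 101325.0   # ISA sea-level standard pressure, Pa

def isa_altitude(p_pa):
    """Height above sea level from pressure (ISA barometric formula)."""
    return 44330.0 * (1.0 - (p_pa / P0) ** 0.1903)

window = deque(maxlen=10)   # ten most recent height samples
h_takeoff = None

def flight_altitude(p_pa):
    """Smoothed altitude Z relative to the takeoff height."""
    global h_takeoff
    window.append(isa_altitude(p_pa))
    h = sum(window) / len(window)
    if h_takeoff is None:
        h_takeoff = h           # reference captured on the first sample
    return h - h_takeoff
```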
Similarly, with the magnetometer giving a measurement of
yaw relative to the North direction and the accelerometer
giving measurements for pitch and roll relative to gravity, the
current flight pose relative to the start is also calculated by the
IMU software [24]. Calculating positional coordinates by
double-integrating IMU accelerations is generally considered
to be ineffective because measurement noise is greatly
exacerbated by the integration process, so the IMU is currently
only used for rotational pose estimation.
C. Velocity estimation using H.264
The final source of pose information is the optic flow in the
scene, which in steady flight is correlated to the motion of the
camera over the scene. Unlike the direct pose estimation, this
method does not require a specific target in the scene; instead,
it can work using features in the natural environment.
The camera used natively records video in H.264 (MPEG
AVC) format. An intermediate step in the H.264 encoding
process is to calculate the optic flow in each 16x16 pixel
macroblock. In common with many modern SoCs, this
encoding process is implemented using the graphics processing
unit instead of the CPU; happily, the camera drivers allow
access to these intermediates, permitting analysis of the
motion domain as well as the image domain.
Each time the camera encodes a video frame, the motion
field is copied to memory using a HSV colour space to
represent the direction of the motion field, sum-of-absolute-
differences measure and magnitude of the motion field at each
pixel, in a similar manner as used in [25]. In this application it
was found that the GPU works about five times faster than
the CPU, so to avoid unnecessary computation, HSV motion
arrays are only processed when camera pose data is also ready.
The aim of the motion processing is to reduce the incoming
motion field into four estimates: the direction and magnitude
of the foreground and background motion.
To achieve station keeping relative to a target in the
foreground, the controller tries to minimise the motion relative
to the foreground, and keep the motion relative to the
background constant. In the case of a stationary target, the
background motion should additionally be zero. The MPEG
block-matcher that achieves this motion encoding is not as
robust as a specialised optic flow routine, so the produced
motion fields tend to be very noisy. Furthermore, the level of
motion detected is not always independent of scene features.
This is demonstrated in Figure 7, where the brightest colours
indicate largest motion and the mapping from motion direction
to hue is given in the key. The ground truth here is the camera
moving forwards over the chessboard. In the HSV array, the
predominant hue is red, corresponding to a direction of around
0° as expected. However, in some macroblocks the motion has
been classified incorrectly (green).
Figure 7 – Captured image with located chessboard axes overlaid;
Motion HSV representation of same frame, strongest motion is
detected in the chessboard area and along the bottom edge; Key.
In many frames the signal-to-noise ratio is low, so a
clustering approach would frequently misclassify noisy
background pixels as foreground and vice versa. Instead, the
components of the motion field are considered, and the noise
is modelled as Gaussian white noise present at a low level in
every macroblock.
Otsu’s thresholding [26] is used to binarise the magnitude
of the motion field, removing small-magnitude noise from the
frame, leaving the larger magnitude motion which must be
classified as either foreground or background. A histogram of
the directions of these motions, with bin widths corresponding
to 5-degree intervals, is computed, and the modal class is taken
to be the average direction of the foreground's motion. For the
HSV motion array shown above, this method estimated the
foreground motion direction to be 353°, which is in good
agreement with the ground truth.
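A compact sketch of this segmentation step, assuming the magnitude field arrives as an 8-bit array (as OpenCV's Otsu implementation requires) and the direction field in degrees:

```python
import cv2
import numpy as np

def foreground_direction(mag_u8, ang_deg):
    """Modal 5-degree direction bin of macroblocks passing Otsu's threshold."""
    _, mask = cv2.threshold(mag_u8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    moving = ang_deg[mask > 0]
    if moving.size == 0:
        return None                 # no significant motion this frame
    hist, edges = np.histogram(moving, bins=72, range=(0.0, 360.0))
    mode = int(np.argmax(hist))
    return 0.5 * (edges[mode] + edges[mode + 1])   # bin centre, degrees
```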
The performance of this motion segmenter implementation
on the blimp is evaluated over a larger number of tests in
Table 4. In general, performance is satisfactory, although
better performance is seen when the blimp is moving relative
to a stationary target than when both the target and the
background are experiencing large magnitude motion. In the
final column of Table 4, if the target has already been located,
a region-of-interest around the target is used to denote the
foreground. In the problematic moving background case, this
method helps, however it relies on the target being in view.
Scenario | Ground truth motion direction: foreground, background | Frames where motion estimate is within ±10° of ground truth: foreground, background, using ROI
Flight over stationary target | 0°, none | 86/177, 91/177, 38/41
Flight over moving target | none, 180° | 53/124, 66/124, 20/53
Table 4 – Experimental verification of H.264 motion direction segmentation, acquired in the lab.
A weakness of this current implementation of motion
direction estimation is that it is not able to correctly detect or
compensate for out-of-plane motion. The motion classifier
presented in [25] solves this by comparing the incoming data to
a large library of pre-computed motion fields.
D. Sensor Fusion
Once all sources of pose estimates have been resolved into
the ego frame, the simplest way to combine them into a single
best guess is to take a weighted average. Here, the weights
awarded to each component were optimised experimentally to
minimise the disagreement between the different pose sources
over a large number of samples in controlled conditions.
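A sketch of this fusion for the yaw channel is shown below; the weights are placeholders for the experimentally optimised values, and angles are averaged via unit vectors (a detail assumed here to avoid wrap-around at ±180°):

```python
import numpy as np

def fuse_yaw(estimates, weights):
    """Weighted average of yaw estimates (radians) from camera, IMU, ..."""
    w = np.asarray(weights, dtype=float)
    vecs = np.exp(1j * np.asarray(estimates, dtype=float))
    return float(np.angle(np.sum(w * vecs) / w.sum()))

# e.g. fused = fuse_yaw([camera_yaw, imu_yaw], [0.7, 0.3])
```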
In autopilot mode sensor fusion runs at 5 Hz, limited by the
vision system refresh rate. Improved techniques for combining
sensor data have been widely explored in the literature, where
Kalman filtering is a commonly used technique. In [27], two
filter formulations are combined to fuse IMU- and chessboard-
derived pose information. An opportunity for future work
would be to apply that work to this blimp, and extend it by
incorporating the motion direction estimates and their
estimated uncertainties into the state transitions. Such a 'motion
augmented' fusion method should give more reliable pose
estimates than the current system.
E. De-coupled control
The system now has an estimate of its current pose which is
good to within measurement error. The final step in realising
autonomous station-keeping is to transform the error between
the current pose estimate and the target pose into the actuator
movements required to get to the target.
In Section III the dynamics of the blimp were considered,
and the simplifying assumptions permitted by low speed flight
were explored. In particular, coupling between degrees of
freedom due to aerodynamics and moment effects was
neglected. Furthermore, motion in several degrees of freedom,
notably roll and pitch, was also neglected due to the flight
characteristics of the large blimp envelope and concentrated
payload mass. These simplifications mean that a separate
controller can be implemented for each actuatable degree of
freedom. Following the model of [9], higher-level controllers
may then be implemented to adjust the setpoints of these
controllers in accordance with mission requirements.
F. Z controller
Regulates the steady-state height of the blimp. Primary
input source is the barometric pressure sensor, supplemented
by the camera's distance-to-target information when the
chessboard target is in view. Output is the vertical component
of the common-mode thrust required to balance the weight of
the blimp with the buoyancy.
Good control of the flying height is required to ensure that
the planar motion detection works properly. This is primarily
provided by the neutral buoyancy of the envelope; using the Z
controller at a low thrust level (10%) is sufficient to control
height and also helps improve responsiveness of the other
controllers, as the motors are already spinning continuously.
G. Yaw controller
Regulates the heading of the blimp. Primary input source is
the fused angle provided by the IMU and vision
system. When the chessboard target is in view, stable heading
hold relative to the chessboard with no operator intervention
was recorded. Without the vision system periodically
correcting the target heading, stability was limited by the
tendency of the IMU yaw measurement to drift. Once the
blimp had lost the original target heading, oscillation between
±60° of the target direction was observed.
The yaw controller was tested and tuned with the blimp
tethered at a fixed height above the chessboard target, but
allowed to yaw freely. With the autopilot engaged, the target
was displaced by angles between 0° and 60°. The system step
response was calculated from the recorded log data. In the
next section the tuning of this controller is outlined.
H. X controller, X velocity controller
Regulates the position of the blimp relative to the target or,
in the case of a moving target, tries to match the speed of the
target. Primary input source for the X controller is the location of
the target centre relative to the image. If H.264 motion
directions are available, these are used to determine whether to
speed up or slow down. As there are no absolute or inertial
sources of reliable X pose information, this controller can only
function when the target is in view. Output is the horizontal
component of the common-mode thrust.
The X controller was tested by ballasting the envelope to
neutral buoyancy and flying over a chessboard moving
forwards below. In Figure 8 the X position of the target points
(whose locations in the first frame are shown in the inset)
during one of these tests is plotted. The controller is able to
keep the position of each target point in the frame within
±25 pixels of the starting value which, given the properties of
the camera and distance to the target, equates to ~±90 mm in
the real world. The lines do not remain parallel at all times
because the blimp does not move purely in the X direction; in
particular, if Z and yaw fluctuations were not also controlled,
the accuracy of the X controller was reduced to ~±200 mm.
The large inertia of the blimp envelope when flying forward
made this controller more susceptible to overshoot than the
other controllers.
Figure 8 – Position of target points during X controller test
VI. RESULTS & DISCUSSION
Each controller has a proportional and an integral gain. As in
the conventional PI controller, the proportional gain K_p controls
the magnitude of the thrust output, and the gains have been
selected to provide a sub-critical response for good stability.
Multirotors require fast responses to errors for aerobatic flight,
however in a blimp the low speed of flight and large inertia
means that there is no great advantage to trading off stability
for a quicker response.
Unlike in a conventional controller which is evaluated at a
fixed update frequency, this application requires that the
controller can be updated irregularly whenever new pose
information becomes available. Furthermore, thruster
instructions put into the job queue by the autopilot may be
interrupted and overridden by other higher priority
instructions. For these reasons, the integration time in the PI
controller is implemented as the runtime of each instruction.
Large errors therefore result in a large correcting force
being applied for a long time; if, as the blimp approaches its
set point, it receives a new pose estimate, it refines the
instructions to the actuators accordingly. Even if no new
information is received, the actuators will at the latest be
stopped at the first estimated end time, which will be closer to
the target than before. Once the blimp has settled to be within
tolerance of the target it returns to its usual hovering state.
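A minimal sketch of such a controller, with the integral term accumulated over the actual elapsed runtime of each instruction rather than a fixed sample period (the gains and output clamp are illustrative):

```python
class RuntimePI:
    def __init__(self, kp, ki, limit=100.0):
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def update(self, error, dt):
        """dt: runtime of the previous instruction, s (irregular, ~5 Hz)."""
        self.integral += error * dt
        u = self.kp * error + self.ki * self.integral
        return max(-self.limit, min(self.limit, u))   # thrust command, +/- %
```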
A. Tuning by experiment
To verify this approach experimentally, the normalised
time-domain unit step response of the yaw controller in the
tuning tests is plotted in Figure 9. It is seen that whilst there is
good convergence to the target, during the rise period the
variability between tests (shown by the grey ±2σ bounds) is
more than double that during steady state.
With K_p = -5, this is a subcritical response. The median
5 to 95% rise time of the responses is 4.94 s.
The autopilot was configured to ignore pose estimates
containing Euler angles outside of expected ranges to mitigate
against incorrect pose estimates that could cause very large
actuator commands to be issued. It was noticed that bad pose
estimates were more likely immediately after a rapid motion
of the servos (e.g. going from hard left to hard right), when the
reaction caused the blimp gondola to briefly swing in the roll
direction. In test flights, this out-of-plane motion was damped
by the large inertia of the blimp envelope so did not affect
control stability. Performing Kalman filtering on pose
estimates prior to their use in the autopilot would further help
protect against this source of errors.
B. Comparison with analytical methods
Standard methods for controller tuning require that the
system transfer function be linear and second-order [28]. Even
after the simplification process outlined above was applied to
the blimp dynamics, the result was non-linear; a complete
treatment which does not omit coupling between degrees of
freedom would also likely be higher than second order.
Tuning results from standard methods must therefore be
approached with caution; this project used them to inform the
range of values of K_p to test.
Approximating the system transfer function as second order
with time delay using Harriott's method (see [28]) allowed
Nyquist analysis to find the gain margin from the step
response tests. The gain margin of the median line in Figure 9
was 26 dB, suggesting K_p should be set to 19 for a critical
response.
Figure 9 – Normalised time-domain unit step response of the yaw controller at K_p = -5. Responses have been normalised by
timeshifting such that the step input occurs at t = 0, and scaling such that the average yaw before the step = 0 and the average yaw
after the step = 1. Median and ±2 standard deviation bounds of the normalised responses are also plotted at each time instant.
In reality this was found to be an over-estimate, frequently
causing the system to overshoot the target or develop unstable
oscillations which required manual intervention to correct.
In contrast, the widespread Ziegler-Nichols time-domain
tuning method underpredicted K_p: Z-N tuning suggested K_p
should be set to 5.71 for a critical response. Gains up to 15
were tested and found to be stable, albeit with some overshoot
causing oscillation about the target.
Both analytical tuning methods estimate system properties
from the transient part of the step response, so their accuracy
will have suffered from the high variability of the data during
this important period. Reducing this variability proved difficult
due to the non-linear dynamics of the real blimp and
environmental disturbances such as gusts which, although
considered to be small, were not controlled for.
The final weakness in this tuning method is that interactions
between degrees of freedom are not well modelled by using
independent controllers for yaw, X and Z.
Nevertheless, setting K_p to 10 resulted in much improved
performance. The step response was twice as fast as in tuning
(2 s median rise time), and overshoot oscillation was < ±10°.
The long-term performance of the tuned yaw controller
is plotted in Figure 10. In these tests, the blimp was
positioned at rest out of alignment with the target, started up,
and the autopilot engaged.
As desired, the estimated yaw moves towards the setpoint,
and then good control stability is seen over the rest of the test
duration. Corrective action, shown by the actuator differential
thrust, was only required when the blimp drifted more than 5°
away from the target.
Figure 10 – Yaw station-keeping long-duration test, K_p = 10, following a step change in target position of 55°
VII. CONCLUSION
An indoor blimp capable of untethered autonomous flight
has been designed, built and tuned. Development of the
control scheme began by considering the dynamics of the
blimp, and utilised pose estimates from an IMU and a visual
target. The controller was tuned, and the experimental values
compared to values suggested by analytical tuning methods.
Indoor flight testing proved that the system works well.
Manual control of the blimp is sufficiently dexterous to pilot it
towards the target and, once the target is in visual range, the
stabilising autopilot holds the blimp position within ±10° and
±0.2 m. Provided the target remains in view of the blimp, the
system also realises 'follow me' behaviour if the target moves.
To achieve these results, the approach of [15] was followed:
the main actuatable degrees of freedom were decoupled to
allow a separate controller to be implemented for each one.
This approach proved to be simple but effective – once
tuned, it achieved stable performance in the test cases for this
project. With the groundwork of a usable platform now in
place, opportunities to extend it abound, both in controls
engineering and beyond. Improving the motion classification
and integrating it better with the other pose estimate sources,
e.g. by implementing Kalman filtering on board the blimp,
should lead to better stability when the target is not in view.
A method of adjusting controller gains adaptively, as
described by [29], could replace the manually tuned controllers
used here, further improving the stability of the blimp in
autopilot mode by making it more resilient to disturbances.
In wider applications, prior to this project Durham
University did not have a blimp so work requiring a persistent
aerial viewpoint was performed by other types of UAV. In
some of these applications the advantages of long flight
durations and aerodynamic stability possessed by a blimp may
be beneficial.
This project has included optical isolation and segregated
power supplies to prevent problems of electrical noise suffered
by some similar previous projects, at negligible extra cost.
Similarly, using off-the-shelf and open-source components in
preference to custom solutions should prolong the useful life
of the system by enabling continuous improvement.
Unlike in previous work, all pose estimation and control
processing is performed on board the blimp, demonstrating
that autonomous station keeping of a UAV blimp no longer
requires vast resources or reliance on ground-based equipment.
ACKNOWLEDGMENT
The author would like to thank Dr T Breckon for his
support throughout; I Hutchinson, N Clarey and the
Electronics Workshop staff for their assistance procuring
electronics and fabricating PCBs; C Wintrip and the
Mechanical Workshop staff for 3D printing work; J Gibson for
arranging helium supply; and S Apps and the Thermo Lab
staff for assistance with filling the blimp before flights.
NOMENCLATURE
GPIO General Purpose Input/Outputs
BLDC Brushless DC motor
X, Y, Z Blimp positional coordinates
yaw, pitch, roll Blimp rotational coordinates [Euler angles]
{x} Pose vector = {X, Y, Z, yaw, pitch, roll}^T
{ẋ}, {ẍ} Velocity vector, acceleration vector
m, J Total mass, moment of inertia
m_e, V_e Mass of envelope, volume of envelope
ρ_air, ρ_He Standard densities of air and helium
m_a_{X,Y,Z} Added mass in the X, Y, Z directions respectively
J_a_{yaw,pitch,roll} Added inertia in the yaw, pitch, roll directions
[M] Matrix of masses, inertias & added masses
[C_d] Matrix of drag coefficients (absorbing ρ_air, etc.)
{F_ext} Vector of external forces, e.g. gravity
ISA International Standard Atmosphere
[R] 3x3 Rodrigues-form rotation matrix
{t} Translation vector = {ΔX, ΔY, ΔZ}^T
H.264 MPEG Advanced Video Codec
REFERENCES
[1] J. Irizarry and E. N. Johnson, β€œFeasibility Study to Determine the
Economic and Operational Benefits of Utilizing Unmanned Aerial
Vehicles,” Georgia Institute of Technology, Atlanta, USA, 2014.
[2] D. Jenkins and B. Vasigh, β€œThe economic impact of unmanned
aircraft systems integration in the United States,” Association for
Unmanned Vehicle Systems International, Arlington, VA, USA,
2013.
[3] A. Mathieson, β€œLiterature Reveiw - An Autotonous Flying Blimp,”
Durham University, Durham, UK, 2015.
[4] A. Bjalemarkl and H. Bergkvist, β€œQuadcopter control using Android
based sensing,” Lund University, Lund, Denmark, 2014.
[5] R. Brockers, M. Humenberger, S. Weiss and L. Matthies, β€œTowards
autonomous navigation of miniature UAV,” in IEEE Conference on
Computer Vision and Pattern Recognition Workshops, Columbus
Ohio, USA, 2014.
[6] S. Shah, β€œReal-time Image Processing on Low Cost Embedded
Computers,” EECS Department UCBC , Berkeley, USA, 2014.
[7] C. S. Clark and E. Simon, β€œEvaluation of Lithium Polymer
Technology for Small Satellite Applications,” in Small Satellites,
AIAA/USU 21st Ann. Conf. on, North Logan, Utah, USA, 2007.
[8] A. Elfes, S. S. Bueno, M. Bergerman and J. G. Ramos, β€œA semi-
autonomous robotic airship for environmental monitoring missions,”
in 1998 IEEE International Conference on Robotics and Automation,
Leuven, 1998.
[9] A. Elfes, S. S. Bueno1, J. G. Ramos, E. C. d. Paiva, M. Bergerman,
J. R. H. Carvalho, S. M. Maeta, L. G. B. Mirisola, B. G. Faria and J.
R. Azinheira, β€œModelling, Control and Perception for an
Autonomous Robotic Airship,” in Sensor Based Intelligent Robots,
Int. Workshop on, Dagstuhl, Germany, 2000.
[10] J. Pestana, J. Sanchez-Lopez, P. Campoy and S. Saripalli, β€œVision
based GPS-denied Object Tracking and Following for Unmanned
Aerial Vehicles,” in IEEE International Symposium on Safety,
Security, and Rescue Robotics (SSRR), Linkoping, Sweden, 2013.
[11] Y. Kawai, S. Kitagawa, S. Izoe and M. Fujita, β€œAn Unmanned
Planar Blimp on Visual Feedback Control: Experimental Results,”
Faculty of Engineering, Kanazawa University, Kanazawa, Japan,
2004.
[12] L. M. Alkurdi and R. B. Fisher, β€œVisual Control of an Autonomous
Indoor Robotic Blimp,” Edinburgh University, Edinburgh, UK,
2011.
[13] J. Muller, N. Kohler and W. Burgard, β€œAutonomous Miniature
Blimp Navigation with Online Motion Planning and Re-planning,”
in IEEE/RSJ International Conference on Inteligent Robots and
Systems, San Francisco, USA, 2011.
[14] C. Shucksmith, β€œDesign and Implementation of an Airbourne
Surveillance Vehicle,” Oxford University, Oxford, UK, 2006.
[15] S. van der Zwaan, A. Bernardino and J. Santos-Victor, β€œVision based
station keeping and docking for an aerial blimp,” in IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS
2000) (Volume:1 ) , Takamatsu, 2000.
[16] Adafruit Industries, "Adafruit 10-DOF IMU Breakout - L3GD20H + LSM303 + BMP180," 15 10 2014. [Online]. Available: http://www.adafruit.com/product/1604. [Accessed 20 10 2014].
[17] S. B. V. Gomes, "An Investigation of the Flight Dynamics of Airships," Cranfield Institute of Technology, Aero Department, Cranfield, UK, 1990.
[18] N. Yamamura, K. Matsuuchi, M. Onda, S. Yamazaki and A. Sasaki, "Drag Reduction of High Altitude Airships by Active Boundary Layer Control," JSME International Journal Series B: Fluids and Thermal Engineering, vol. 42, no. 1, pp. 230-237, 2008.
[19] E. L. Houghton, P. W. Carpenter, S. Collicott and D. Valentine, "Turbulence on Spheres," in Aerodynamics for Engineering Students, Elsevier, 2012, p. 507.
[20] P. Naaijen, "Rigid Body Dynamics in TU Delft OCW," 2014. [Online]. Available: http://ocw.tudelft.nl/fileadmin/ocw/courses/OffshoreHydromechanics/res00032/!506172742032204f666673686f726520487964726f6d656368616e696373.pdf. [Accessed 03 06 2015].
[21] J. J. G. Ramos, S. M. Maeta, L. G. B. Mirisola, S. S. Bueno, M. Bergerman, B. G. Faria, G. E. M. Pinto and A. H. Bruciapaglia, "Internet-Based Solutions in the Development and Operation of an Unmanned Robotic Airship," Proceedings of the IEEE, vol. 91, no. 3, pp. 463-475, Mar. 2003.
[22] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Proc. 7th IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999.
[23] D. F. DeMenthon and L. S. Davis, "Model-Based Object Pose in 25 Lines of Code," International Journal of Computer Vision, vol. 23, no. 1, pp. 123-141, 1995.
[24] R. Barnett, "RTIMULib - a versatile C++ and Python 9-dof and 10-dof IMU library," 11 12 2014. [Online]. Available: https://github.com/richards-tech/RTIMULib/. [Accessed 01 01 2015].
[25] M. Narayana, A. Hanson and E. Learned-Miller, "Coherent Motion Segmentation in Moving Camera Videos using Optical Flow Orientations," in IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 2013.
[26] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[27] G. Ligorio and A. M. Sabatini, "Extended Kalman Filter-Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis," MDPI Sensors, vol. 13, no. 2, pp. 1919-1941, 2013.
[28] I. P. Jakoubek, "Experimental Identification of Stable Nonoscillatory Systems from Step-Responses by Selected Methods," 2008.
[29] J. Ko, D. Klein, D. Fox and D. Haehnel, "Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp," in IEEE International Conference on Robotics and Automation, Rome, Italy, 2007.
[30] Adafruit Industries, "Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface," 16 02 2015. [Online]. Available: http://www.adafruit.com/product/815. [Accessed 20 10 2014].
APPENDICES – SUPPLEMENTARY FIGURES
A. The Blimp
Figure 11 – Cost breakdown: blimp envelope £197; 3D printed enclosure* £30; Raspberry Pi, Wi-Fi and camera £28; BLDC motors, ESC and battery* £33; IMU and PWM driver £41; electronics and other* £10. * denotes free issue items.
Figure 12 – Schematic for two channels of optoisolation/buffering (MCT61 twin optocoupler feeding LM358A twin op amps), with signal flow from PWM output to thrust-vector servo highlighted. The logic side runs from a 5.0 V dc supply and the motor side from a 7.4 V dc supply. The blimp has twelve channels of optically isolated output available.
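As a supplementary illustration of the source end of this signal chain, the Python fragment below sketches commanding one thrust-vector servo channel on the PCA9685 driver. This is a minimal sketch only: the use of the legacy Adafruit_PCA9685 library, the channel assignment and the pulse widths are assumptions for illustration, not the project's exact code.

# Illustrative sketch; library choice, channel and pulse widths are assumptions.
import Adafruit_PCA9685

PWM_FREQ_HZ = 50         # standard 20 ms servo frame
TICKS_PER_FRAME = 4096   # PCA9685 12-bit resolution

def us_to_ticks(pulse_us):
    # Convert a pulse width in microseconds into PCA9685 counter ticks.
    frame_us = 1e6 / PWM_FREQ_HZ
    return int(round(pulse_us * TICKS_PER_FRAME / frame_us))

pwm = Adafruit_PCA9685.PCA9685()   # default I2C address 0x40
pwm.set_pwm_freq(PWM_FREQ_HZ)
SERVO_CHANNEL = 0                  # hypothetical channel assignment
pwm.set_pwm(SERVO_CHANNEL, 0, us_to_ticks(1500))   # centre the servo

The commanded pulse then passes through the optocoupler and op-amp buffer of Figure 12 before reaching the servo.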
B. Software
Figure 13 – Thread interactions during startup. Bold lines show flow of execution, dashed lines represent data exchange between functions.
Figure 14 – Thread interaction during runtime. Bold lines show flow of execution, dashed lines represent data exchange between functions.
At termination, Hardware Abstraction Thread stops all motors, GUI Thread writes log files to disc, and Vision Thread closes camera.
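A minimal sketch of this shutdown handshake is given below, using a shared threading.Event as the stop flag. All thread and function names are illustrative stand-ins, not the project's actual identifiers.

import queue
import threading
import time

stop_flag = threading.Event()   # polled by every worker thread
job_queue = queue.Queue()

def hardware_thread():
    while not stop_flag.is_set():
        try:
            job = job_queue.get(timeout=0.1)   # poll for actuator jobs
            print("executing job:", job)
        except queue.Empty:
            pass
    print("stopping all motors")    # leave actuators in a safe state on exit

def vision_thread():
    while not stop_flag.is_set():
        time.sleep(0.2)             # stand-in for frame capture and processing
    print("closing camera")

workers = [threading.Thread(target=hardware_thread),
           threading.Thread(target=vision_thread)]
for w in workers:
    w.start()
time.sleep(1.0)    # stand-in for a flight session
stop_flag.set()    # GUI thread initiates shutdown
for w in workers:
    w.join()       # wait for orderly cleanup before persisting logs
print("writing log files to disc")

Polling a shared flag, rather than forcibly killing threads, ensures each thread reaches its own cleanup code; this is what guarantees the motors are never left running after termination.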
C. Performance Results
Figure 15 – Comparison of the step response predicted by a 2nd-order-with-time-delay transfer function (Harriott's method) and the median step response of the real blimp, plotted against time relative to the step (s). Inset – correlation plot of median response points against Harriott approximation points; Spearman's R² = 0.967.
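To illustrate the comparison behind Figure 15, the fragment below evaluates the unit step response of a second-order-plus-time-delay transfer function and computes Spearman's correlation against a measured response. The time constants, the delay and the synthetic stand-in for the flight-log data are assumptions for illustration, not the identified values.

import numpy as np
from scipy.stats import spearmanr

def soptd_step(t, tau1=1.8, tau2=0.9, delay=0.6):
    # Unit step response of exp(-delay*s) / ((tau1*s + 1) * (tau2*s + 1)).
    ts = np.maximum(t - delay, 0.0)
    return 1.0 - (tau1 * np.exp(-ts / tau1)
                  - tau2 * np.exp(-ts / tau2)) / (tau1 - tau2)

t = np.linspace(-4.0, 10.0, 141)   # same span as the plotted time axis
model = soptd_step(t)
measured = model + np.random.normal(0.0, 0.02, t.size)   # synthetic stand-in

rho, _ = spearmanr(model, measured)
print("Spearman's R^2 = %.4f" % rho ** 2)   # the paper reports 0.967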
More Related Content

What's hot

20130520 sar-anuphao-products-final-s
20130520 sar-anuphao-products-final-s20130520 sar-anuphao-products-final-s
20130520 sar-anuphao-products-final-sVipaporn Chim
Β 
ELSA_Symposium_Poster_Final
ELSA_Symposium_Poster_FinalELSA_Symposium_Poster_Final
ELSA_Symposium_Poster_FinalDarren Combs
Β 
AIAA Exhibition Presentation_Final
AIAA Exhibition Presentation_FinalAIAA Exhibition Presentation_Final
AIAA Exhibition Presentation_FinalRohan Deshmukh
Β 
UAV Presentation
UAV PresentationUAV Presentation
UAV Presentationalex2neo
Β 
637124main staehle presentation
637124main staehle presentation637124main staehle presentation
637124main staehle presentationClifford Stone
Β 
Comparative Study of Indoor Navigation Systems for Autonomous Flight
Comparative Study of Indoor Navigation Systems for Autonomous FlightComparative Study of Indoor Navigation Systems for Autonomous Flight
Comparative Study of Indoor Navigation Systems for Autonomous FlightTELKOMNIKA JOURNAL
Β 
Diffoot fayful
Diffoot fayfulDiffoot fayful
Diffoot fayfulNASAPMC
Β 
1 IGARSS 2011 JPSS Monday Goldberg.pptx
1 IGARSS 2011 JPSS Monday Goldberg.pptx1 IGARSS 2011 JPSS Monday Goldberg.pptx
1 IGARSS 2011 JPSS Monday Goldberg.pptxgrssieee
Β 
"Click here" to build your UAV
"Click here" to build your UAV"Click here" to build your UAV
"Click here" to build your UAVDirk Gorissen
Β 
Unmanned Aircraft System Fundamentals
 Unmanned Aircraft System Fundamentals Unmanned Aircraft System Fundamentals
Unmanned Aircraft System FundamentalsJim Jenkins
Β 
Aerospace defensetechs
Aerospace  defensetechsAerospace  defensetechs
Aerospace defensetechsalancabe
Β 
UAV Presentation
UAV PresentationUAV Presentation
UAV PresentationRuyyan
Β 
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...Attila Takacs
Β 
The visual intelligence iris one airborne digital camera systems petrie geo...
The visual intelligence iris one airborne digital camera systems   petrie geo...The visual intelligence iris one airborne digital camera systems   petrie geo...
The visual intelligence iris one airborne digital camera systems petrie geo...Armando Guevara
Β 
Airships as an Earth and Space Science Platform - Jason Rhodes
Airships as an Earth and Space Science Platform - Jason RhodesAirships as an Earth and Space Science Platform - Jason Rhodes
Airships as an Earth and Space Science Platform - Jason RhodesAdvanced-Concepts-Team
Β 
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...Jim Jenkins
Β 
Study o horizontal flows in solar active regions
Study o horizontal flows in solar active regionsStudy o horizontal flows in solar active regions
Study o horizontal flows in solar active regionsastrosanti
Β 

What's hot (17)

20130520 sar-anuphao-products-final-s
20130520 sar-anuphao-products-final-s20130520 sar-anuphao-products-final-s
20130520 sar-anuphao-products-final-s
Β 
ELSA_Symposium_Poster_Final
ELSA_Symposium_Poster_FinalELSA_Symposium_Poster_Final
ELSA_Symposium_Poster_Final
Β 
AIAA Exhibition Presentation_Final
AIAA Exhibition Presentation_FinalAIAA Exhibition Presentation_Final
AIAA Exhibition Presentation_Final
Β 
UAV Presentation
UAV PresentationUAV Presentation
UAV Presentation
Β 
637124main staehle presentation
637124main staehle presentation637124main staehle presentation
637124main staehle presentation
Β 
Comparative Study of Indoor Navigation Systems for Autonomous Flight
Comparative Study of Indoor Navigation Systems for Autonomous FlightComparative Study of Indoor Navigation Systems for Autonomous Flight
Comparative Study of Indoor Navigation Systems for Autonomous Flight
Β 
Diffoot fayful
Diffoot fayfulDiffoot fayful
Diffoot fayful
Β 
1 IGARSS 2011 JPSS Monday Goldberg.pptx
1 IGARSS 2011 JPSS Monday Goldberg.pptx1 IGARSS 2011 JPSS Monday Goldberg.pptx
1 IGARSS 2011 JPSS Monday Goldberg.pptx
Β 
"Click here" to build your UAV
"Click here" to build your UAV"Click here" to build your UAV
"Click here" to build your UAV
Β 
Unmanned Aircraft System Fundamentals
 Unmanned Aircraft System Fundamentals Unmanned Aircraft System Fundamentals
Unmanned Aircraft System Fundamentals
Β 
Aerospace defensetechs
Aerospace  defensetechsAerospace  defensetechs
Aerospace defensetechs
Β 
UAV Presentation
UAV PresentationUAV Presentation
UAV Presentation
Β 
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...
Drone Traffic Management over Mobile Networks - Attila Takacs, VTC 2017 Fall,...
Β 
The visual intelligence iris one airborne digital camera systems petrie geo...
The visual intelligence iris one airborne digital camera systems   petrie geo...The visual intelligence iris one airborne digital camera systems   petrie geo...
The visual intelligence iris one airborne digital camera systems petrie geo...
Β 
Airships as an Earth and Space Science Platform - Jason Rhodes
Airships as an Earth and Space Science Platform - Jason RhodesAirships as an Earth and Space Science Platform - Jason Rhodes
Airships as an Earth and Space Science Platform - Jason Rhodes
Β 
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...
Ocean Optics: Fundamentals & Naval Applications Technical Training Short Cour...
Β 
Study o horizontal flows in solar active regions
Study o horizontal flows in solar active regionsStudy o horizontal flows in solar active regions
Study o horizontal flows in solar active regions
Β 

Viewers also liked

Substance Abuse Montcalm, Michigan
Substance Abuse Montcalm, MichiganSubstance Abuse Montcalm, Michigan
Substance Abuse Montcalm, Michiganrecoveryrestart2
Β 
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...AboutYouGmbH
Β 
1 7 прСзСнтация domena
1 7 прСзСнтация  domena1 7 прСзСнтация  domena
1 7 прСзСнтация domenakolesikmixer
Β 
Cts project 1 brief
Cts project 1 briefCts project 1 brief
Cts project 1 briefCrystal Chia
Β 
Substance Abuse Lapeer, Michigan
Substance Abuse Lapeer, MichiganSubstance Abuse Lapeer, Michigan
Substance Abuse Lapeer, Michiganrecoveryrestart2
Β 
VW Writing Sample Clr_Concise Writing
VW Writing Sample Clr_Concise WritingVW Writing Sample Clr_Concise Writing
VW Writing Sample Clr_Concise WritingVera Wallace
Β 
Substance Abuse Washtenaw, Michigan
Substance Abuse Washtenaw, MichiganSubstance Abuse Washtenaw, Michigan
Substance Abuse Washtenaw, Michiganrecoveryrestart2
Β 
аксСссуары
аксСссуарыаксСссуары
аксСссуарыkolesikmixer
Β 
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚kolesikmixer
Β 
Π€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimo
Π€Π°Π±Ρ€ΠΈΠΊΠ° BellisimoΠ€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimo
Π€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimokolesikmixer
Β 
LinkedIn Singapore Office by M Moser
LinkedIn Singapore Office by M MoserLinkedIn Singapore Office by M Moser
LinkedIn Singapore Office by M MoserNirmala Srinivasa
Β 
pdu-study-2015-1
pdu-study-2015-1pdu-study-2015-1
pdu-study-2015-1Teona surmava
Β 
1 1 прСзСнтация romo
1 1 прСзСнтация romo1 1 прСзСнтация romo
1 1 прСзСнтация romokolesikmixer
Β 
Substance Abuse Lake, Michigan
Substance Abuse Lake, MichiganSubstance Abuse Lake, Michigan
Substance Abuse Lake, Michiganrecoveryrestart2
Β 
LASPP Final Term Paper - Accesibility to Basic Education
LASPP Final Term Paper - Accesibility to Basic EducationLASPP Final Term Paper - Accesibility to Basic Education
LASPP Final Term Paper - Accesibility to Basic EducationBeverly Samayoa
Β 
DISC-Christian_Almodin
DISC-Christian_AlmodinDISC-Christian_Almodin
DISC-Christian_Almodinchristian almodin
Β 
Cts final project brief
Cts final project briefCts final project brief
Cts final project briefCrystal Chia
Β 
itd object 1
itd object 1itd object 1
itd object 1Crystal Chia
Β 

Viewers also liked (20)

Substance Abuse Montcalm, Michigan
Substance Abuse Montcalm, MichiganSubstance Abuse Montcalm, Michigan
Substance Abuse Montcalm, Michigan
Β 
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...
Dr. Andreas Lattner - Aufsetzen skalierbarer Prognose- und Analysedienste mit...
Β 
1 7 прСзСнтация domena
1 7 прСзСнтация  domena1 7 прСзСнтация  domena
1 7 прСзСнтация domena
Β 
Cts project 1 brief
Cts project 1 briefCts project 1 brief
Cts project 1 brief
Β 
Substance Abuse Lapeer, Michigan
Substance Abuse Lapeer, MichiganSubstance Abuse Lapeer, Michigan
Substance Abuse Lapeer, Michigan
Β 
VW Writing Sample Clr_Concise Writing
VW Writing Sample Clr_Concise WritingVW Writing Sample Clr_Concise Writing
VW Writing Sample Clr_Concise Writing
Β 
Substance Abuse Washtenaw, Michigan
Substance Abuse Washtenaw, MichiganSubstance Abuse Washtenaw, Michigan
Substance Abuse Washtenaw, Michigan
Β 
аксСссуары
аксСссуарыаксСссуары
аксСссуары
Β 
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚
8 1 прСзСнтация Π³ΠΎΡ€ΠΈΠ·ΠΎΠ½Ρ‚
Β 
Π€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimo
Π€Π°Π±Ρ€ΠΈΠΊΠ° BellisimoΠ€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimo
Π€Π°Π±Ρ€ΠΈΠΊΠ° Bellisimo
Β 
LinkedIn Singapore Office by M Moser
LinkedIn Singapore Office by M MoserLinkedIn Singapore Office by M Moser
LinkedIn Singapore Office by M Moser
Β 
JouwToekomst2016
JouwToekomst2016JouwToekomst2016
JouwToekomst2016
Β 
pdu-study-2015-1
pdu-study-2015-1pdu-study-2015-1
pdu-study-2015-1
Β 
1 1 прСзСнтация romo
1 1 прСзСнтация romo1 1 прСзСнтация romo
1 1 прСзСнтация romo
Β 
Substance Abuse Lake, Michigan
Substance Abuse Lake, MichiganSubstance Abuse Lake, Michigan
Substance Abuse Lake, Michigan
Β 
LASPP Final Term Paper - Accesibility to Basic Education
LASPP Final Term Paper - Accesibility to Basic EducationLASPP Final Term Paper - Accesibility to Basic Education
LASPP Final Term Paper - Accesibility to Basic Education
Β 
DISC-Christian_Almodin
DISC-Christian_AlmodinDISC-Christian_Almodin
DISC-Christian_Almodin
Β 
IDJ 4
IDJ 4IDJ 4
IDJ 4
Β 
Cts final project brief
Cts final project briefCts final project brief
Cts final project brief
Β 
itd object 1
itd object 1itd object 1
itd object 1
Β 

Similar to Autonomous Blimp Station Keeps Using Visual and Inertial Sensing

Helicopter With Gps
Helicopter With GpsHelicopter With Gps
Helicopter With Gpsesthershiang88
Β 
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-Autopilot
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-AutopilotDesign-and-Development-of-the-Hardware-for-Vision-based-UAV-Autopilot
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-AutopilotCraig Jee
Β 
digital_heritage
digital_heritagedigital_heritage
digital_heritageElioth Fraijo
Β 
UAV Final Poster
UAV Final PosterUAV Final Poster
UAV Final PosterKaleo Norman
Β 
End of Semester Design Report Final Version
End of Semester Design Report Final VersionEnd of Semester Design Report Final Version
End of Semester Design Report Final VersionDaniel Worts
Β 
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005Mark Hardesty
Β 
IJSRED-V2I2P68
IJSRED-V2I2P68IJSRED-V2I2P68
IJSRED-V2I2P68IJSRED
Β 
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...IRJET Journal
Β 
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAV
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAVA Review on Longitudinal Control Law Design for a Small Fixed-Wing UAV
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAVIRJET Journal
Β 
Control of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissionControl of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissiontheijes
Β 
Control of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissionControl of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissiontheijes
Β 
Application of Drones for Mining Ooperations
Application of Drones for Mining OoperationsApplication of Drones for Mining Ooperations
Application of Drones for Mining OoperationsDr. Alex Vyazmensky
Β 
Microsat Proximity and Docking Operations
Microsat Proximity and Docking OperationsMicrosat Proximity and Docking Operations
Microsat Proximity and Docking OperationsJeffrey Robinson
Β 
Microsat Ground Test Vehicle
Microsat  Ground Test VehicleMicrosat  Ground Test Vehicle
Microsat Ground Test VehicleJeffrey Robinson
Β 
ADROIT_IJAERD
ADROIT_IJAERDADROIT_IJAERD
ADROIT_IJAERDKAJAL PANDA
Β 
Mechatronics case study on Wireless Survillence Balloon
Mechatronics case study on Wireless Survillence BalloonMechatronics case study on Wireless Survillence Balloon
Mechatronics case study on Wireless Survillence BalloonVishnu RC Vijayan
Β 
Timmins how would you like to use ua vs
Timmins how would you like to use ua vsTimmins how would you like to use ua vs
Timmins how would you like to use ua vsGeCo in the Rockies
Β 
Design of Coaxial Rotor Micro Air Vehicle
Design of Coaxial Rotor Micro Air Vehicle Design of Coaxial Rotor Micro Air Vehicle
Design of Coaxial Rotor Micro Air Vehicle IJMER
Β 
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURES
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURESanaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURES
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURESNathaniel A. ADEWOLE
Β 

Similar to Autonomous Blimp Station Keeps Using Visual and Inertial Sensing (20)

Helicopter With Gps
Helicopter With GpsHelicopter With Gps
Helicopter With Gps
Β 
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-Autopilot
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-AutopilotDesign-and-Development-of-the-Hardware-for-Vision-based-UAV-Autopilot
Design-and-Development-of-the-Hardware-for-Vision-based-UAV-Autopilot
Β 
digital_heritage
digital_heritagedigital_heritage
digital_heritage
Β 
UAV Final Poster
UAV Final PosterUAV Final Poster
UAV Final Poster
Β 
End of Semester Design Report Final Version
End of Semester Design Report Final VersionEnd of Semester Design Report Final Version
End of Semester Design Report Final Version
Β 
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005
Rapid Development of a Rotorcraft UAV System - AHS Tech Specialists Meeting 2005
Β 
IJSRED-V2I2P68
IJSRED-V2I2P68IJSRED-V2I2P68
IJSRED-V2I2P68
Β 
Drone
DroneDrone
Drone
Β 
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...
Overview of Design and Integration of Unmanned Aerial Vehicle Aircraft for Su...
Β 
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAV
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAVA Review on Longitudinal Control Law Design for a Small Fixed-Wing UAV
A Review on Longitudinal Control Law Design for a Small Fixed-Wing UAV
Β 
Control of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissionControl of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmission
Β 
Control of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmissionControl of aircraft from the base station using eog siganl transmission
Control of aircraft from the base station using eog siganl transmission
Β 
Application of Drones for Mining Ooperations
Application of Drones for Mining OoperationsApplication of Drones for Mining Ooperations
Application of Drones for Mining Ooperations
Β 
Microsat Proximity and Docking Operations
Microsat Proximity and Docking OperationsMicrosat Proximity and Docking Operations
Microsat Proximity and Docking Operations
Β 
Microsat Ground Test Vehicle
Microsat  Ground Test VehicleMicrosat  Ground Test Vehicle
Microsat Ground Test Vehicle
Β 
ADROIT_IJAERD
ADROIT_IJAERDADROIT_IJAERD
ADROIT_IJAERD
Β 
Mechatronics case study on Wireless Survillence Balloon
Mechatronics case study on Wireless Survillence BalloonMechatronics case study on Wireless Survillence Balloon
Mechatronics case study on Wireless Survillence Balloon
Β 
Timmins how would you like to use ua vs
Timmins how would you like to use ua vsTimmins how would you like to use ua vs
Timmins how would you like to use ua vs
Β 
Design of Coaxial Rotor Micro Air Vehicle
Design of Coaxial Rotor Micro Air Vehicle Design of Coaxial Rotor Micro Air Vehicle
Design of Coaxial Rotor Micro Air Vehicle
Β 
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURES
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURESanaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURES
anaFLY01-05: A MULTIPURPOSE QUADCOPTER WITH DUALITY FEATURES
Β 

Autonomous Blimp Station Keeps Using Visual and Inertial Sensing

  • 1. M.ENG RESEARCH PROJECT APRIL 2015 1 ο€  Abstractβ€” A prototype autonomous airship and control system was developed and tested. Despite being among the earliest types of unmanned aerial vehicles to receive research attention, airships remain attractive as a platform because they can attain longer flight times and are inherently more stable in air than equivalent helicopters or multirotors. Previous autonomous blimps have required a connection to a computer on the ground, or carefully calibrated multi-camera rigs in order to calculate their pose relative to their target. This project instead utilizes currently available low-cost lightweight hardware and open- source software to perform all processing on board the blimp. The controller, running on a Linux microcomputer, combines pose estimates from a downward looking camera, motion vector estimates from video and an eleven degree-of-freedom inertial measurement unit to control the vehicle altitude, heading and speed in station-keeping and follow-me tests. The system was tuned and evaluated using indoor test flights. Index Termsβ€” Autonomous airship, blimp, pose estimation, unmanned aerial vehicle, visual servoing. I. INTRODUCTION NMANNED AERIAL VEHICLES, popularly referred to as β€˜drones’ or UAVs, first emerged in the 1950s, but in the last 15 years have become a topic of growing research, commercial and public interest. Applications of UAV technology are widespread and ever increasing, from military and surveillance uses to agriculture and package delivery. Two different perspectives on civilian UAV usage are provided by [1] and [2]; numerous other technology reviews have been written exploring uses in specific fields [3]. Several factors have combined to drive this expansion. Key technologies (including System-on-a-Chip (SoC) processors, lightweight lithium polymer batteries and microelectronic sensors) have matured and become mass-market as a result of widespread adoption of smartphone-like devices. These components are now able to meet the twin requirements for UAV technologies: real-time performance and small physical size and weight [4], [5]. Building UAVs was initially only feasible for well-resourced organisations but today the open-source, crowd-funding and hobbyist movements have lowered the barrier to entry. Smaller companies, researchers and individuals can now share their systems, designs, research and code with the community [6]. Capitalising on the now widespread availability of UAV components and associated computing platforms, this project set out to deliver a blimp system suitable for developing visual controls, with applications in wide area persistent surveillance and mapping missions. The indoor prototype, presented in Figure 1, has a flight duration of four hours and uses imagery and an Inertial Measurement Unit (IMU) to station-keep relative to an image target beneath. Major components are all commercially available items, resulting in a low cost, lightweight system. Figure 1 – Prototype blimp system in flight in the lab. Blimp is 3.5m long, flying around 4m above ground In the rest of this section, prior work in UAV airships is briefly reviewed and the objectives for this project are outlined. The main sections of this paper then present the blimp, review a dynamic model suitable for closed-loop control and outline the autopilot software. The chosen control techniques are evaluated on the blimp platform before applications and areas for future work are suggested. A. 
Airships and blimps Airships are traditionally defined as craft whose main source of lift is buoyancy from an envelope of lighter than air gas, i.e. Helium. Blimps are airships without a rigid skeleton. An ideal blimp is neutrally buoyant in air, only requiring powered thrust to move or change direction. As a result, compared to multirotor drones of equivalent payloads, an ideal blimp has far smaller inflight power consumption. Untethered Autonomous Flight Using Visual and Inertial Sensing on an Indoor Blimp Andrew Mathieson supervised by Dr Toby Breckon U
  • 2. M.ENG RESEARCH PROJECT APRIL 2015 2 Lithium Polymer batteries, the current de-facto standard for UAV applications, have an energy storage density of ~200 Wh/kg [7], so reduced power consumption directly corresponds to longer flight durations, mitigating one of the major drawbacks of current research UAVs. B. Review of selected previous UAV blimps Blimps as UAVs have received research interest since the late-1990s, when AURORA, a 9 m long blimp carrying radios, a forward-looking camera and a PC104 format computer was built and used for environmental monitoring [8]. The autopilot for that blimp was developed in a Matlab environment using a detailed model of the non-linear blimp dynamics For navigation, AURORA primarily used differential-GPS to fly between waypoints but the forward-looking camera was also utilised to follow parallel lines such as road markings [9]. That project was successful and contributed important applications of control theory to blimps. The complications of outdoor flight, in particular wind gusts and aviation regulations, meant that, to date, most research blimps have been designed for indoor flight and have used the well-established model of a ground controller to perform computations, relaying instructions to the in-flight hardware. The usual position location method for drones, GPS or similar, is known to be unreliable indoors [10], so research has instead explored vision based navigation. In [11] a blimp was localised using data from static cameras, this work was repeated in [12] using a roof mounted camera that looked down on the blimp to estimate its pose relative to the floor. Both achieved a stable closed loop response. Building on this, the multicamera motion-capture system used in [13] was able to achieve path-planning around multiple objects and through doors, maintaining knowledge of the blimp position to within Β±30 mm. Whilst such fine accuracy is very desirable in an indoor blimp, outside of the carefully controlled environments navigation is impossible. The other approach, as favoured by this project and many others, is to place a small lightweight camera on-board the blimp and estimate the pose of the blimp based on features in the environment. In [14], a 1.5 m long remote-controlled blimp platform able to carry a pan/tilt wireless camera was built and flown. That project included consideration of component weight and lift capacity. Low-cost servomotors, required due to weight constraints, caused some electrical noise problems. C. Objectives for his project This project built on the extensive literature to deliver a blimp platform suitable for control engineering and UAV vision development. The test case for the blimp is station-keeping relative to a target. If the target is stationary this behaviour corresponds to position holding and is an important prerequisite for docking and rendezvousing [15]. If the target moves, ability to follow it is useful for tasks such as wildlife monitoring and crowd observation. Following a moving target is also a prerequisite for more complex behaviours including search and vehicle tracking. Previous projects relied on communication to a ground station to augment their on-board systems and the blimp itself was unable to fly without the help of equipment on the ground. In this project, the control system is realised completely on board the blimp. 
Moving away from reliance on ground-based computation/location will allow β€˜untethered’ teleoperation, operation in potentially hazardous environments, and operation in environments where good positioning may be hard to acquire. II. THE BLIMP The contribution of this project has been to demonstrate untethered autonomous flight of a blimp using entry-level components. The work culminated in building a prototype whose specifications are summarised in Table 1: Envelope 3.5 m x 2 m βŒ€ x 0.8 mm thick PVC, weight 3.73 kg. Encloses 4.5 m 3 , net lift 1 kg. Propulsion & steering Two-off 200 mm βŒ€ propellers, each with 8 V brushless dc motor, vectored thrust. Power supply 5500 mAh 11.1V Lithium Polymer battery Control Raspberry Pi 2 teleoperated via Wi-Fi Sensing & Electronics IMU; 720p HD camera; Custom PCB stackup integrating all components. Optical isolation to reduce electrical noise Table 1 – Blimp as-built specification Using commodity parts where possible resulted in a total component cost of Β£140, and the gas-retaining envelope cost a further Β£200. A graphic breakdown is found in Appendix A. A. Envelope and weight budget In the literature, most blimps built for indoor research were around 2 m in size. Procurement issues with small blimp envelopes required this project to use a slightly larger 3.5 m envelope, which weighs 3.73 kg and can carry a 1 kg payload. Variable atmospheric conditions can affect the lifting capacity by Β±75 g between flights so ballast is employed to trim the blimp to neutral buoyancy prior to each flight. Components were selected to minimise weight, with some custom work where off-the-shelf parts would have been unacceptably bulky. The payload weight (excluding envelope self-weight) is broken down by component in Figure 2. Figure 2 –Payload weight breakdown, units kg. 0.45 0.31 0.05 0.07 0.16 0.06 0.01 0.02 5500mAh LiPo battery Frame; bolts; other Raspberry Pi; Wi-Fi; Camera Motor; servos Case; other ABS parts PCBs; electronics IMU; PWM driver Ballast
  • 3. M.ENG RESEARCH PROJECT APRIL 2015 3 B. Microcomputer choice Numerous lightweight embedded platforms are now available. A recent review of some of these platforms with relevance to UAVs may be found in [4]. This project selected the Raspberry Pi 2 Linux microcomputer, primarily for its matching camera module and hardware support. The software architecture, described in Section IV, is split into four concurrent threads to utilise the quad-core ARM CPU. The camera module connects directly to the Broadcom SoC using a wide SCI bus, so some of the video processing workload can be handled in the GPU. In Section V this capability is used to estimate the motion of the blimp alongside the main CPU-bound pose estimation routine. Other peripherals are connected to an I2C bus and General Purpose Input/Outputs (GPIOs). C. Electronics The payload electronics design for this project is shown as a block diagram in Figure 3. The camera module and microcomputer are located at the bottom of the stackup and receive 5 V power from the logic power regulator. Above this, a custom PCB integrates the IMU and Pulse Width Modulation (PWM) output driver. A second custom PCB locates the optical isolation and buffering required for control of the motors and servos. Figure 3 – Electronics block diagram The two primary propulsion units are brushless DC motor driven propellers as used in quadcopters. Directional control is achieved by vectoring the thrust direction using servo motors with a range of Β±90Β° A secondary board provides sixteen extra channels of 12 bit PWM output for the microcontroller (NXP Semiconductor PCA9685). The 11 degree-of-freedom inertial measurement unit (Adafruit Industries, [17]) integrates three microelectronic sensors: L3DG20H 3-axis gyroscope; LSM303DLHC 3-axis accelerometer and 3-axis magnetometer; BMP180 barometric pressure & temperature sensors. Both boards communicate with the microcomputer via an I2C bus. Logic systems are supplied with a regulated 5 V supply to prevent interference from electrically noisy actuators. The microcomputer takes around 30 s to reboot after an interruption, during which time actuator control cannot be assured, loss of power in flight is something to be avoided. It was noted in [14] that electrical noise, particularly from micro servos could interfere with effective video capture under certain circumstances. To prevent this, this project implemented optocouplers on the PWM and GPIO signal lines. Six twin-channel optocouplers are included; each buffered to the correct voltage for driving servos and motors using an op-amp. Component values were chosen to achieve an almost linear throttle response; a circuit diagram is in Appendix A. Optocouplers also prevent excess current damage to the microcontroller pins which could occur in fault conditions such as an accidental short circuit of the motors. Blimps flown outdoors frequently utilise additional actuators on the tail for greater manoeuvrability [18]. In readiness for future work where these may be required, the design also included two isolated bidirectional DC drivers. D. Power The main flight battery, powering the motors and servos, is a 5500 mAh Lithium Polymer battery, weight 0.448 kg. A secondary battery (capacity 1000 mAh, weight 58 g) was provided to allow use of the logic systems independently of the motors, allowing communication and remote control functionality to be confirmed before the motors are powered. 
The main battery can power the electronics for a theoretical maximum duration of greater than 7 hrs. During flight tests of manoeuvring and active station keeping, a low level of motor thrust was maintained continuously, so the estimated real- world battery life of the system is around 4 hrs. Current mid-market multirotors, by comparison, typically have quoted flight times less than 1 hr with little ability to carry spare batteries so in this respect the advantage of lighter- than-air platforms remains clear. The complete system is housed in a 3D-printed case, with aluminium arms that support the propeller units, as shown in Figure 4. Figure 4 – Blimp payload. Inset – Logic PCB showing IMU and PWM breakouts III. BLIMP DYNAMICS Developing a controller for the blimp begins with understanding its flight characteristics. In this section, the coordinate frames used are defined and then the method of [15] is followed to formulate an equation of motion for the blimp. The focus throughout is on the sources of simplifying assumptions and how they affect the controller design. vectoring servos frame attaches to envelope downward looking camera brushless motors 3D-printed case PCBs inside IMU PWM
  • 4. M.ENG RESEARCH PROJECT APRIL 2015 4 A. Coordinate frames A blimp has a minimum of six degrees of freedom: the 𝑋, π‘Œ, 𝑍 position and π‘¦π‘Žπ‘€, π‘π‘–π‘‘π‘β„Ž, π‘Ÿπ‘œπ‘™π‘™ rotations relative to the ground; plus any extra degrees of freedom required to model changes in shape during flight. These extra deformational degrees of freedom can be neglected by assuming that the envelope of the blimp is inflated taught so does not deform, and that the payload is rigidly affixed to the envelope. For the rigid body motions, an ego-centric coordinate system is used, with its origin at the centre of mass of the payload. Instead of tracking the motions of the blimp relative to the world, the system tracks the motion of objects in the world relative to itself. In the egocentric frame, 𝑋 direction is positive forwards, 𝑍 is positive upwards, and π‘Œ is positive to the right. π‘¦π‘Žπ‘€, π‘π‘–π‘‘π‘β„Ž and π‘Ÿπ‘œπ‘™π‘™ are defined in the conventional way (Figure 5). The principal axes of the IMU are aligned with the egocentric coordinate system; transformation of camera pose estimates into the ego frame is discussed below. Figure 5 – Blimp coordinate frame and envelope dimensions Hereafter, the pose of the blimp, these six coordinates, is denoted by the 6x1 vector {x}. B. Aerodynamics The blimp is equipped with two main propellers which are driven at variable speeds (from 0 to 8000 rpm), and directed at an angle between Β±90Β° to provide vectored thrust. Common mode thrust provides forward/backward and up/down control; differential thrust provides control of π‘¦π‘Žπ‘€. The settings of these four actuators are the output of the controller. This blimp therefore has more degrees of freedom than actuators – it is under-actuated. To resolve this, further simplifying assumptions are required before developing a controller. The first of these is that π‘π‘–π‘‘π‘β„Ž is decoupled from 𝑋 velocity, and effectively constant. This assumption amounts to neglecting the effect of aerodynamic lift or downforce as the blimp flies forward. Aero forces are proportional to velocity squared, so at the low speeds of flight (typically less than 1 m/s indoors) the symmetrical shape of the blimp envelope means lift is small and so this assumption is valid. At higher speeds and in strong prevailing winds, as experienced by outdoor blimps, these forces do become considerable, so aerilons and rudders are used on commercial outdoor blimps. The blimp envelope used here was originally intended for static advertising use so its four inflatable fins serve no aerodynamic purpose. An aerodynamic effect which is not negligible, however, is drag. The literature on airship drag has used wind tunnel data to estimate drag coefficients for large, outdoor flying envelopes of various aspect ratios at various Reynolds numbers [19]. Authors investigating small indoor blimps such as this have assumed that drag forces can be assumed to be a simple function of velocity, in effect assuming that any transitional, turbulent or form drags can be neglected and that the only important source of drag is skin friction [15]. As this blimp is larger than the 2 m model used in [15], this assumption has been tested by comparing the flight Reynolds number (i) of this blimp to an estimated critical Reynolds number 𝑅𝑒 π‘π‘Ÿπ‘–π‘‘for blimps in air. 
π‘…π‘’π‘“π‘™π‘–π‘”β„Žπ‘‘ = 𝜌 𝑒 π‘₯ 𝐿 πœ‡ = (1.22 π‘˜π‘” π‘š3⁄ )(1 π‘š 𝑠⁄ )(3.5 π‘š) (1.84 Γ— 10βˆ’5 π‘ƒπ‘Žπ‘ ) = 2.3 Γ— 105 (i) Wind tunnel tests of the HALROP stratospheric blimp [19] suggested that transition from laminar to turbulent boundary layer (and hence from skin friction dominated to pressure dominated drag) occurred at 𝑅𝑒 π‘π‘Ÿπ‘–π‘‘ = 7.5 Γ— 105 . That blimp, like those analysed in the literature related to transitional flows around blimp hulls, has a streamlined shape with an aspect ratio of 1 /6. In comparison, 𝑅𝑒 π‘π‘Ÿπ‘–π‘‘ for a sphere (aspect ratio 1) is 3.8 Γ— 105 [20]: compared to a sphere, the streamlined blimp shape reduces the adverse pressure gradient experienced by the boundary layer, and prevents it transitioning until a higher Reynolds number. When inflated, the blimp in this project has an aspect ratio closer to Β½, with an almost hemispherical front. As the aspect ratio of this blimp lies between that of a sphere and that of HALROP, it is reasonable to assume their respective critical Reynolds numbers are lower and upper bounds for 𝑅𝑒 π‘π‘Ÿπ‘–π‘‘ of this blimp. In any case, as π‘…π‘’π‘“π‘™π‘–π‘”β„Žπ‘‘ is less than the assumed lower bound for 𝑅𝑒 π‘π‘Ÿπ‘–π‘‘, the boundary layer may be assumed to remain laminar, and the assumption made by [15] of drag being proportional only to velocity squared holds. For larger blimps, blimps that fly faster than this or blimps with additional aerilons the laminar drag assumption should be re-examined. Laminar drag coefficients for planar and rotational motions of a model blimp (with aspect ratio 1 /3, π‘…π‘’π‘“π‘™π‘–π‘”β„Žπ‘‘ = 1 Γ— 105 ) were obtained experimentally in [18] – in the absence of any more applicable low speed data, these values may be utilised. Neglecting drag coefficients which represent interactions between degrees of freedom, (justified by the low flight speed), and absorbing constant factors such as the blimp cross- sectional area and air density into the terms, a 6x6 drag coefficient matrix, [𝐢 𝑑] may be obtained. Multiplying this by the velocities squared ({xΜ‡}2 ) gives the 6x1 vector of drag forces in the ego frame. The second simplifying assumption required is that roll and rolling motion is constant and negligible. This assumption is validated by concentrating the centre of mass of the blimp/payload below the centre of buoyancy which helps return the blimp to zero π‘Ÿπ‘œπ‘™π‘™ and π‘π‘–π‘‘π‘β„Ž. β„Ž Payload centre envelopeβŒ€2m Port Stbd
  • 5. M.ENG RESEARCH PROJECT APRIL 2015 5 C. Mass, inertia and centripetal effects The all-up mass, π‘š, of this blimp is 4.6 kg: 3.73 kg envelope material π‘š 𝑒, 0.874 kg payload, plus ballast to balance. Neglecting gas leakage which is very slow, this mass is assumed to be constant throughout each flight. Moments of inertia about the for each of the three rotational motions ( 𝐽 π‘¦π‘Žπ‘€, 𝐽 π‘π‘–π‘‘π‘β„Ž, π½π‘Ÿπ‘œπ‘™π‘™) can be calculated using standard formulae if the blimp is approximated as an ellipsoid with major axis 3.5 m and minor axes 2 m. A real blimp also displaces a large volume of air as it flies or rotates so its effective mass and inertia are much greater than would be expected from the formulae [15]. Coefficients for this additional mass and inertia ( π‘š π‘Ž π‘₯,𝑦,𝑧 , 𝐽 π‘Ž π‘¦π‘Žπ‘€,π‘π‘–π‘‘π‘β„Ž,π‘Ÿπ‘œπ‘™π‘™ ) are given by Figure 6.10 in [21], with b/a, the aspect ratio = 2/3.5. Due to the symmetry of modelling the blimp as an ellipsoid, π‘š π‘Ž 𝑦 = π‘š π‘Ž 𝑧 , 𝐽 π‘¦π‘Žπ‘€ = 𝐽 π‘π‘–π‘‘π‘β„Ž, likewise 𝐽 π‘Ž π‘¦π‘Žπ‘€ = 𝐽 π‘Ž π‘π‘–π‘‘π‘β„Ž . 𝐽 π‘Ž π‘Ÿπ‘œπ‘™π‘™ is considered negligible. These form the diagonal terms of the 6x6 mass matrix [𝑀]. The effect of off-diagonal terms, which would represent the inertia on coupling between degrees of freedom, and the effect of centripetal forces, has been ignored since flight speed is low. D. External forces Buoyancy, gravity, forces and moments from the thrusters, and any other disturbances (e.g. gusts) act as external forces on the blimp. For an envelope of volume 𝑉𝑒and air and helium at laboratory conditions the lift is calculated with (ii): Payload, kg = 𝑉𝑒(𝜌 π‘Žπ‘–π‘Ÿ βˆ’ 𝜌 𝐻𝑒) βˆ’ π‘š 𝑒 4.5 m3(1.22 kg/m3 βˆ’ 0.17 kg/m3) βˆ’ 3.73 kg = 1 kg (ii) Ideally, the weight is balanced by the buoyancy. During flights this is rarely perfect, so the thrusters either provide a constant downforce (if too heavy), or the blimp is held at a constant height using ballast (if too buoyant). By controlling the vertical component of the thrust, closed-loop regulation of height is achieved during flights. The thrusters create a turning moment if a differential thrust is applied, allowing control of the π‘¦π‘Žπ‘€ and π‘Ÿπ‘œπ‘™π‘™ degrees of freedom. In common with most airships, there is no ability to directly actuate in the π‘Œ direction, however the non-linearity present in the blimp response to thrust means this can usually be overcome with a combination of 𝑋 and π‘¦π‘Žπ‘€ motions. In indoor flight gusts can usually be prevented; the remaining forces are denoted as the 6x1 vector {𝐹𝑒π‘₯𝑑}. E. Dynamic model The assumptions related to each term in the blimp dynamics equation (iii) have now been examined. [𝑀]{ẍ} + [𝐢 𝑑]{xΜ‡}2 + {𝐹𝑒π‘₯𝑑} = {0} , from [15] (iii) Even this simplified model of the blimp is non-linear. In section VI the real world stability of the blimp is compared to predictions of classical control methods using this model. IV. SOFTWARE Software for controlling blimps was first explored in detail by the AURORA project [9], which, in common with many robots, used a hierarchical control model. In this project, a similar controller has been implemented, with the enhancement that an additional PC on the ground is not required. All control work, including handling pilot input and providing a GUI, is carried out by the microcomputer on- board the blimp. 
In the current implementation, the blimp may be piloted either by sending instructions from a (Bluetooth) keyboard, or by connecting wirelessly using Virtual Network Computing (VNC) to view the GUI. For routine flights VNC is preferred as it gives the pilot a full overview of the software; however important quality-of-service and flight safety concerns arise when relying on a wireless network for control, [22], so direct control using the keyboard is retained as a fall-back option. A. Software architecture The software, written in Python, is structured into four threads, each of which has a scope of responsibility as outlined in Table 2. Threads interact by setting and polling flags and data structures in the global namespace. Mutual exclusion operators control access to shared data and resources. thread initialisation runtime termination Graphical User Interface Thread Load data Initialise GUI Start other threads Respond to pilot input, Queue jobs for hardware Verify termination; Save log files. Hardware Abstraction Thread Set motor home states; Get initial IMU pose Control motors as required by job queue Poll the IMU Stop and home motors Autopilot Thread Fuse pose information from camera and IMU; Generate autopilot jobs Vision Thread Open camera, start recording Visual and motion pose estimation Close camera; save video. Table 2 – Four-thread software architecture In operational conditions the GUI Thread is responsible for starting each of the other threads. When all threads have successfully initialised, operation may begin. Interactions between the threads are shown in a flowchart in Appendix B. B. Hardware abstraction and error handling Actuators are represented by objects containing state variables for direction and thrust. These reside in the namespace of the Hardware Abstraction Thread and only that thread gets direct access to the hardware to manage race conditions and prevent actuators receiving conflicting instructions. Other code, such as the GUI, keyboard and autopilot, generates jobs (instructions describing what hardware changes are required when) into a queue managed by the Hardware Abstraction Thread. Jobs may have start times immediately or scheduled for the future; jobs may have variable priorities; jobs may be blocking (having a defined execution time) or non-blocking
  • 6. M.ENG RESEARCH PROJECT APRIL 2015 6 (assumed to take place as soon as they are due) and they may or may not recur again at a future time. When jobs are added to the queue, the logic in Figure 6 is applied to decide if a hardware update is required; otherwise, the thread polls the IMU for new pose measurements. Figure 6 – Job prioritisation logic This design ensures good pose estimates are available when required, and also ensures a high-priority job (e.g. an emergency stop) is able to override any lower priority jobs that might otherwise block the queue. In the event of an unexpected runtime problem that is not caught by its parent thread, the main thread issues stop instructions to the hardware job queue and terminates the other threads. This ensures that the actuator outputs (particularly the brushless motors) are not left in an undetermined state post-termination. C. Multitasking performance By splitting compute-intensive work such as vision processing into separate threads the job queue can remain responsive to pilot input even if another thread is busy. The four-threaded structure is also well suited to the quad- core processor of this microcomputer allowing the system to achieve a maximum throughput of 5.8 fps using 320 x 240 px images. In comparison, the single core predecessor Model B+ could only achieve around 1 fps at this image size. V. POSE ESTIMATION & CONTROL In the previous three sections the prototype vehicle has been presented, a dynamic model for blimps in flight outlined, and the high level software stack described. The challenge of controlling the prototype is now addressed. In the context of the four-thread architecture, the following methods are implemented by the Vision Thread and the Autopilot Thread. A. Pose estimation from camera For any control problem, reliable information about the current system state is essential. Here, this is the pose vector {x} = {𝑋, π‘Œ, 𝑍, π‘¦π‘Žπ‘€, π‘π‘–π‘‘π‘β„Ž, π‘Ÿπ‘œπ‘™π‘™} 𝑇 . As noted above, this blimp is not equipped with GPS (unreliable indoors) or a differential location method (useless without calibrated equipment independent of the blimp). Instead, the most effective sensor for pose estimation is the on-board camera. The camera was calibrated using the technique of [23] to calculate its distortion matrix – before each flight this is loaded from a file. Incoming video frames are converted to grayscale and de-distorted using the preloaded camera matrix. The image is then checked for the presence of the target – a planar chessboard pattern. Detection of the chessboard during flight uses the same method as in calibration – if found, the pose may then be estimated. The camera is located in the centre of the payload facing straight down onto the floor and target, allowing easy calculation of the rotation [𝑅] and translation {𝑑} relative to the target using the perspective-n-point solution of [24]. Requiring the floor and image plane to remain parallel is equivalent to assuming that π‘Ÿπ‘œπ‘™π‘™ and π‘π‘–π‘‘π‘β„Ž disturbances can be neglected. In Section III it was seen that slow flight speeds and high inertia of the blimp ensures this is usually the case. 
Expressing the rotation as a 3x3 Rodrigues form matrix, the rotational pose coordinates are recovered using(iv) and the positional pose coordinates {𝑋, π‘Œ, 𝑍} using (v): { π‘¦π‘Žπ‘€ π‘π‘–π‘‘π‘β„Ž π‘Ÿπ‘œπ‘™π‘™ } = { tanβˆ’1 ( 𝑅1,2 𝑅1,1 ) βˆ’tanβˆ’1 ( 𝑅3,2 𝑅3,3 ) tanβˆ’1 ( βˆ’π‘…3,1 √(𝑅3,2)2 + (𝑅3,3)2 ) } {𝑋, π‘Œ, 𝑍} 𝑇 = βˆ’[𝑅] 𝑇 {𝑑} (iv) (v) Where 𝑅1,2 etc. refer to terms of the Rodrigues matrix. The pose estimation method is run on a small image, 320x240 px, with two iterations of subpixel refinement to improve reliability of the pose data supplied to the autopilot controller. The trade-off between reliability and framerate in a bench test is assessed in Table 3. In this test, a β€˜bad pose estimate’ is defined as a disagreement of greater than 20Β° from ground truth in any of the three Euler angles, while the blimp is stationary over the target for one minute. As expected, larger images give superior accuracy, however a very slow frame rate is undesirable for controller stability. Equally, the marginal decrease in bad pose rate using unlimited refinement does not justify the frame rate penalty incurred. The selected configuration is highlighted in bold; it allows refined images to be processed at 4.8 fps. Of the good estimates, 90% are within Β±5Β° of ground truth. To try to counter this 5Β° variability by continuously functioning actuators would be inefficient and shorten component lifetimes, so instead controllers ignore fluctuations in this small range. Once the blimp has been controlled to be within tolerance of the target heading the natural stability of the blimp in the air helps it to remain in place.
  • 7. M.ENG RESEARCH PROJECT APRIL 2015 7 Rate of bad pose estimates and frame rate No subpixel refinement Two iterations of refinement Unlimited iterations Small image 320 x 240px 0.72 bad poses/s at 5.8 fps 0.43 bad poses/s at 4.8 fps 0.39 bad poses/s at 2.2 fps Large image 640 x 480px 0.11 bad poses/s at 1.8 fps 0.09 bad poses/s at 1.8 fps 0.08 bad poses/s at 0.7 fps Table 3 – Reducing rate of bad pose estimates by changing image size and number of refinement iterations. In the β€˜unlimited iterations’ case, refinement stops when the change in corner location is less than 0.001 px. B. Pose estimation from Inertial Measurement Unit The IMU, as described above, contains five sensors: accelerometer, magnetometer (compass), gyroscope (each with 3 orthogonal axes), barometric pressure and temperature. The pressure sensor gives a direct reading the height above sea level of the blimp, using the ISA barometric formula. The nominal resolution of this sensor is Β±0.1 m, although in test conditions this variability was found to be Β±0.25 m. Tests on the bench indicated that height readings are normally distributed about the true height, so in flight, variability of height estimates is smoothed by taking the mean of the most recent ten samples. Flight altitude 𝑍 is calculated by subtracting the current height above sea level from the height above sea level at takeoff. Similarly, with the magnetometer giving a measurement of π‘¦π‘Žπ‘€ relative to the North direction and the accelerometer giving measurements for π‘π‘–π‘‘π‘β„Ž and π‘Ÿπ‘œπ‘™π‘™ relative to gravity, the current flight pose relative to the start is also calculated by the IMU software [24]. Calculating positional coordinates by double-integrating IMU accelerations is generally considered to be ineffective because measurement noise is greatly exacerbated by the integration process, so the IMU is currently only used for rotational pose estimation. C. Velocity estimation using H.264 The final source of pose information is the optic flow in the scene, which in steady flight is correlated to the motion of the camera over the scene. Unlike the direct pose estimation, this method does not require a specific target in the scene, instead it can work using features in the natural environment. The camera used natively records video in H.264 (MPEG AVC) format. An intermediate step in the H.264 encoding process is to calculate the optic flow in each 16x16 pixel macroblock. In common with many modern SoCs, this encoding process is implemented using the graphic processing unit instead of the CPU, happily, the camera drivers allow access to these intermediates, permitting analysis of the motion domain as well as the image domain. Each time the camera encodes a video frame, the motion field is copied to memory using a HSV colour space to represent the direction of the motion field, sum-of-absolute- differences measure and magnitude of the motion field at each pixel, in a similar manner as used in [25]. In this application it was found that the GPU can work about five times faster than the CPU so to avoid unnecessary computation HSV motion arrays are only processed when camera pose data is also ready. The aim of the motion processing is to reduce the incoming motion field into four estimates: the direction and magnitude of the foreground and background motion. 
To achieve station keeping relative to a target in the foreground, the controller tries to minimise the motion relative to the foreground, and keep the motion relative to the background constant. In the case of a stationary target, the background motion should additionally be zero. The MPEG block-matcher that achieves this motion encoding is not as robust as a specialised optic flow routine, so the produced motion fields tend to be very noisy. Furthermore, the level of motion detected is not always independent of scene features. This is demonstrated in Figure 7, where the brightest colours indicate largest motion and the mapping from motion direction to hue is given in the key. The ground truth here is the camera moving forwards over the chessboard. In the HSV array, the predominant hue is red, corresponding to a direction of around 0Β° as expected. However, in some macroblocks the motion has been incorrectly classified the motion incorrectly (green). Figure 7 – Captured image with located chessboard axes overlaid; Motion HSV representation of same frame, strongest motion is detected in the chessboard area and along the bottom edge; Key. In many frames the signal-to-noise ratio is high so a clustering approach would frequently misclassify noisy background pixels as foreground and vice versa. Instead, the components of the motion field are considered, and the noise is modelled as Gaussian white noise present at a low level in every macroblock. Otsu’s thresholding [26] is used to binarise the magnitude of the motion field, removing small-magnitude noise from the frame, leaving the larger magnitude motion which must be classified as either foreground or background. A histogram of the directions of these motions with bin widths corresponding to 5 degree intervals is computed, and the modal class is taken to be the average direction of the foreground’s motion. For the HSV motion array shown above, this method estimated the foreground motion direction to be at 353 Β° which is in good agreement with the ground truth. The performance of this motion segmenter implementation on the blimp is evaluated over a larger number of tests in Table 4. In general, performance is satisfactory, although better performance is seen when the blimp is moving relative to a stationary target than when both the target and the background are experiencing large magnitude motion. In the final column of Table 4, if the target has already been located, a region-of-interest around the target is used to denote the foreground. In the problematic moving background case, this method helps, however it relies on the target being in view.
D. Sensor Fusion
Once all sources of pose estimates have been resolved into the ego frame, the simplest way to combine them into a single best guess is to take a weighted average. Here, the weight awarded to each component was optimised experimentally to minimise the disagreement between the different pose sources over a large number of samples in controlled conditions. In autopilot mode sensor fusion runs at 5 Hz, limited by the vision system refresh rate.
Improved techniques for combining sensor data have been widely explored in the literature, where Kalman filtering is a commonly used technique. In [27], two filter formulations are combined to fuse IMU and chessboard-derived pose information. An opportunity for future work would be to apply that work to this blimp, and to extend it by incorporating the motion direction estimates and their estimated uncertainties into the state transitions. Such a 'motion augmented' fusion method should give more reliable pose estimates than the current system.

E. De-coupled control
The system now has an estimate of its current pose which is good to within measurement error. The final step in realising autonomous station-keeping is to transform the error between the current pose estimate and the target pose into the actuator movements required to reach the target.
In Section III the dynamics of the blimp were considered, and the simplifying assumptions permitted by low-speed flight were explored. In particular, coupling between degrees of freedom due to aerodynamic and moment effects was neglected. Furthermore, motion in several degrees of freedom, notably roll and pitch, was also neglected due to the flight characteristics of the large blimp envelope and concentrated payload mass. These simplifications mean that a separate controller can be implemented for each actuatable degree of freedom. Following the model of [9], higher-level controllers may then be implemented to adjust the setpoints of these controllers in accordance with mission requirements.

F. Z controller
Regulates the steady-state height of the blimp. The primary input source is the barometric pressure sensor, supplemented by distance-to-image information from the camera pose when the chessboard target is in view. The output is the vertical component of the common-mode thrust required to balance the weight of the blimp against its buoyancy.
Good control of the flying height is required to ensure that the planar motion detection works properly. This is primarily provided by the neutral buoyancy of the envelope; running the Z controller at a low thrust level (10%) is sufficient to control height and also improves the responsiveness of the other controllers, as the motors are already spinning continuously.
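By way of illustration, a minimal sketch of the proportional part of the Z control law just described is given below, assuming a fused height estimate is available at each update; the gain and limits are illustrative placeholders rather than tuned values, and the deployed controller also carries the integral term described in Section VI.

BASE_THRUST = 10.0  # % common-mode thrust; keeps the motors spinning
KP_Z = 20.0         # illustrative proportional gain, % thrust per metre

def z_thrust(z_setpoint, z_estimate):
    # Vertical component of the common-mode thrust for height regulation
    error = z_setpoint - z_estimate           # metres
    command = BASE_THRUST + KP_Z * error      # proportional correction
    return max(-100.0, min(100.0, command))   # clamp to actuator range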
G. yaw controller
Regulates the heading of the blimp. The primary input source is the sensor-fused angle provided by the IMU and vision system. When the chessboard target was in view, stable heading hold relative to the chessboard with no operator intervention was recorded. Without the vision system periodically correcting the target heading, stability was limited by the tendency of the IMU yaw measurement to drift: once the blimp had lost the original target heading, oscillation within ±60° of the target direction was observed.
The yaw controller was tested and tuned with the blimp tethered at a fixed height above the chessboard target, but allowed to yaw freely. With the autopilot engaged, the target was displaced by angles between 0° and 60°, and the system step response was calculated from the recorded log data. The tuning of this controller is outlined in the next section.

H. X controller, X velocity controller
Regulates the position of the blimp relative to the target or, in the case of a moving target, tries to match the speed of the target. The primary input source for the X controller is the location of the target centre relative to the image. If H.264 motion directions are available, these are used to determine whether to speed up or slow down. As there are no absolute or inertial sources of reliable X pose information, this controller can only function when the target is in view. The output is the horizontal component of the common-mode thrust.
The X controller was tested by ballasting the envelope to neutral buoyancy and flying over a chessboard moving forwards below. In Figure 8 the X position of the target points (whose locations in the first frame are shown in the inset) during one of these tests is plotted. The controller is able to keep the position of each target point in the frame within ±25 pixels of its starting value which, given the properties of the camera and the distance to the target, equates to approximately ±90 mm in the real world (this conversion is sketched below). The lines do not remain parallel at all times because the blimp does not move purely in the X direction; in particular, if Z and yaw fluctuations were not also controlled, the accuracy of the X controller was reduced to approximately ±200 mm. The large inertia of the blimp envelope when flying forward made this controller more susceptible to overshoot than the other controllers.

Figure 8 – Tracked X positions (px) of the four target points x1–x4 against time during the X controller test. Inset – initial point positions and ego-frame directions.
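For reference, the quoted pixel tolerances convert to world units through the pinhole approximation below, where f is the camera focal length in pixels and Z the distance to the target; this is a back-of-envelope check implied by the quoted figures, not a calibration result.

\[
\Delta X_{\mathrm{world}} \approx \Delta x_{\mathrm{px}} \cdot \frac{Z}{f},
\qquad
\frac{Z}{f} \approx \frac{90\ \mathrm{mm}}{25\ \mathrm{px}} = 3.6\ \mathrm{mm/px}.
\]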
VI. RESULTS & DISCUSSION
Each controller has a proportional and an integral gain. As in a conventional PI controller, the proportional gain Kp controls the magnitude of the thrust output, and the gains have been selected to provide a sub-critical response for good stability. Multirotors require fast responses to errors for aerobatic flight; in a blimp, however, the low flight speed and large inertia mean there is little advantage in trading off stability for a quicker response.
Unlike a conventional controller evaluated at a fixed update frequency, this application requires that the controller can be updated irregularly, whenever new pose information becomes available. Furthermore, thruster instructions put into the job queue by the autopilot may be interrupted and overridden by other, higher-priority instructions. For these reasons, the integration time in the PI controller is implemented as the runtime of each instruction. Large errors therefore result in a large correcting force being applied for a long time; if the blimp receives a new pose estimate as it approaches its setpoint, the instructions to the actuators are refined accordingly. Even if no new information is received, the actuators will at the latest be stopped at the first estimated end time, which will be closer to the target than before. Once the blimp has settled to within tolerance of the target, it returns to its usual hovering state.
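To illustrate this scheme, a minimal Python sketch of an irregularly-updated PI loop is given below; the class and method names are illustrative, and the real autopilot additionally queues thruster instructions with estimated end times and overrides them as described above.

import time

class RuntimePI:
    # PI controller whose integration time is the elapsed runtime
    # between successive pose estimates, rather than a fixed period.

    def __init__(self, kp, ki, setpoint):
        self.kp, self.ki, self.setpoint = kp, ki, setpoint
        self.integral = 0.0
        self.last_update = time.monotonic()

    def update(self, measurement):
        # Called whenever a new pose estimate becomes available
        now = time.monotonic()
        dt = now - self.last_update  # runtime of the previous instruction
        self.last_update = now
        error = self.setpoint - measurement
        self.integral += error * dt
        # Large errors give a large correcting command; the command is
        # refined when the next estimate arrives
        return self.kp * error + self.ki * self.integral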
A. Tuning by experiment
To verify this approach experimentally, the normalised time-domain unit step responses of the yaw controller in the tuning tests are plotted in Figure 9. Whilst there is good convergence to the target, during the rise period the variability between tests (shown by the grey ±2σ bounds) is more than double that during steady state. With Kp = −5, this is a subcritical response; the median 5 to 95% rise time of the responses is 4.94 s.
The autopilot was configured to ignore pose estimates containing Euler angles outside the expected ranges, to mitigate against incorrect pose estimates that could cause very large actuator commands to be issued. It was noticed that bad pose estimates were more likely immediately after a rapid motion of the servos (e.g. going from hard left to hard right), when the reaction caused the blimp gondola to swing briefly in the roll direction. In test flights this out-of-plane motion was damped by the large inertia of the blimp envelope, so it did not affect control stability. Performing Kalman filtering on pose estimates prior to their use in the autopilot would further help protect against this source of errors.

B. Comparison with analytical methods
Standard methods for controller tuning require that the system transfer function be linear and second-order [28]. Even after the simplification process outlined above was applied to the blimp dynamics, the result was non-linear; a complete treatment which does not omit coupling between degrees of freedom would also likely be higher than second order. Tuning results from standard methods must therefore be approached with caution; this project used them to inform the range of values of Kp to test.
Approximating the system transfer function as second order with time delay using Harriott's method (see [28]) allowed Nyquist analysis to find the gain margin from the step response tests. The gain margin of the median line in Figure 9 was 26 dB (a gain factor of 10^(26/20) ≈ 20), suggesting Kp should be set to 19 for a critical response.

Figure 9 – Normalised time-domain unit step responses of the yaw controller at Kp = −5. The eight individual test responses have been normalised by timeshifting such that the step input occurs at t = 0, and scaling such that the average yaw before the step is 0 and the average yaw after the step is 1. The median and ±2 standard deviation bounds of the normalised responses are also plotted at each time instant.

In reality this was found to be an over-estimate, frequently causing the system to overshoot the target or develop unstable oscillations which required manual intervention to correct. In contrast, the widespread Ziegler-Nichols time-domain tuning method underpredicted Kp: Z-N tuning suggested Kp should be set to 5.71 for a critical response, yet gains up to 15 were tested and found to be stable, albeit with some overshoot causing oscillation about the target.
Both analytical tuning methods estimate system properties from the transient part of the step response, so their accuracy will have suffered from the high variability of the data during this important period. Reducing this variability proved difficult owing to the non-linear dynamics of the real blimp and environmental disturbances such as gusts which, although considered small, were not controlled for. The final weakness of this tuning method is that interactions between degrees of freedom are not well modelled by independent controllers for yaw, X and Z. Nevertheless, setting Kp to 10 resulted in much improved performance: the step response was twice as fast as in tuning (2 s median rise time), and overshoot oscillation was < ±10°.
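For completeness, the reaction-curve form of the Ziegler-Nichols PI rules used for this comparison is sketched below, assuming the process gain K, apparent dead time L and apparent time constant T have already been read off the median open-loop step response; the extraction of these parameters is not shown and the names are illustrative.

def zn_pi_gains(K, L, T):
    # Classic Ziegler-Nichols reaction-curve rules for a PI controller.
    # K: process steady-state gain, L: apparent dead time (s),
    # T: apparent time constant (s). Returns (Kp, Ti).
    kp = 0.9 * T / (K * L)
    ti = L / 0.3  # integral time, seconds
    return kp, ti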
The long-term performance of the tuned yaw controller is plotted in Figure 10. In these tests the blimp was positioned at rest out of alignment with the target, started up, and the autopilot engaged. As desired, the estimated yaw moves towards the setpoint, and good control stability is then seen over the rest of the test duration. Corrective action, shown by the actuator differential thrust, was only required when the blimp drifted more than 5° away from the target.

Figure 10 – yaw station-keeping long-duration test, Kp = 10, following a step change in target position of 55°. Upper plot: setpoint and SSF yaw (°) against time; lower plot: actuator differential thrust (%) against time.

VII. CONCLUSION
An indoor blimp capable of untethered autonomous flight has been designed, built and tuned. Development of the control scheme began by considering the dynamics of the blimp, and utilised pose estimates from an IMU and a visual target. The controller was tuned experimentally, and the resulting values compared with those suggested by analytical tuning methods.
Indoor flight testing showed that the system works well. Manual control of the blimp is sufficiently dexterous to pilot it towards the target and, once the target is in visual range, the stabilising autopilot holds the blimp position within ±10° and ±0.2 m. Provided the target remains in view of the blimp, the system also realises 'follow me' behaviour if the target moves.
To achieve these results, the approach of [15] was followed: the main actuatable degrees of freedom were decoupled, allowing a separate controller to be implemented for each one. This approach proved simple but effective – once tuned, it achieved stable performance in the test cases for this project.
With the groundwork of a usable platform now in place, opportunities to extend it abound, both in controls engineering and beyond. Improving the motion classification and better integrating it with the other pose estimate sources, e.g. by implementing Kalman filtering on board the blimp, should lead to better stability when the target is not in view. A method of adjusting controller gains adaptively, as described in [29], could replace the manually tuned controllers used here to further improve the stability of the blimp in autopilot mode by making it more resilient to disturbances.
In wider applications, prior to this project Durham University did not have a blimp, so work requiring a persistent aerial viewpoint was performed by other types of UAV. In some of these applications the long flight duration and aerodynamic stability of a blimp may be beneficial.
This project has included optical isolation and segregated power supplies to prevent the electrical noise problems suffered by some similar previous projects, at negligible extra cost. Similarly, using off-the-shelf and open-source components in preference to custom solutions should prolong the useful life of the system by enabling continuous improvement.
Unlike in previous work, all pose estimation and control processing is performed on board the blimp, demonstrating that autonomous station keeping of a UAV blimp no longer requires vast resources or reliance on ground-based equipment.

ACKNOWLEDGMENT
The author would like to thank Dr T Breckon for his support throughout; I Hutchinson, N Clarey and the Electronics Workshop staff for their assistance procuring electronics and fabricating PCBs; C Wintrip and the Mechanical Workshop staff for 3D printing work; J Gibson for arranging the helium supply; and S Apps and the Thermo Lab staff for assistance with filling the blimp before flights.
NOMENCLATURE
GPIO                 General Purpose Input/Output
BLDC                 Brushless DC motor
X, Y, Z              Blimp positional coordinates
yaw, pitch, roll     Blimp rotational coordinates [Euler angles]
{x}                  Pose vector = {X, Y, Z, yaw, pitch, roll}^T
{ẋ}, {ẍ}             Velocity vector, acceleration vector
m, J                 Total mass, moment of inertia
m_e, V_e             Mass of envelope, volume of envelope
ρ_air, ρ_He          Standard densities of air and helium
m_a(X,Y,Z)           Added mass in the X, Y, Z directions respectively
J_a(yaw,pitch,roll)  Added inertia in the yaw, pitch, roll directions
[M]                  Matrix of masses, inertias and added masses
[C_d]                Matrix of drag coefficients, ρ_air, etc.
{F_ext}              Vector of external forces, e.g. gravity
ISA                  International Standard Atmosphere
[R]                  3x3 Rodrigues-form rotation matrix
{t}                  Translation vector = {ΔX, ΔY, ΔZ}^T
H.264                MPEG Advanced Video Codec

REFERENCES
[1] J. Irizarry and E. N. Johnson, "Feasibility Study to Determine the Economic and Operational Benefits of Utilizing Unmanned Aerial Vehicles," Georgia Institute of Technology, Atlanta, GA, USA, 2014.
[2] D. Jenkins and B. Vasigh, "The Economic Impact of Unmanned Aircraft Systems Integration in the United States," Association for Unmanned Vehicle Systems International, Arlington, VA, USA, 2013.
[3] A. Mathieson, "Literature Review – An Autonomous Flying Blimp," Durham University, Durham, UK, 2015.
[4] A. Bjalemark and H. Bergkvist, "Quadcopter control using Android based sensing," Lund University, Lund, Sweden, 2014.
[5] R. Brockers, M. Humenberger, S. Weiss and L. Matthies, "Towards autonomous navigation of miniature UAV," in IEEE Conference on Computer Vision and Pattern Recognition Workshops, Columbus, OH, USA, 2014.
[6] S. Shah, "Real-time Image Processing on Low Cost Embedded Computers," EECS Department, UC Berkeley, Berkeley, CA, USA, 2014.
[7] C. S. Clark and E. Simon, "Evaluation of Lithium Polymer Technology for Small Satellite Applications," in Small Satellites, AIAA/USU 21st Ann. Conf. on, North Logan, UT, USA, 2007.
[8] A. Elfes, S. S. Bueno, M. Bergerman and J. G. Ramos, "A semi-autonomous robotic airship for environmental monitoring missions," in IEEE International Conference on Robotics and Automation, Leuven, Belgium, 1998.
[9] A. Elfes, S. S. Bueno, J. G. Ramos, E. C. de Paiva, M. Bergerman, J. R. H. Carvalho, S. M. Maeta, L. G. B. Mirisola, B. G. Faria and J. R. Azinheira, "Modelling, Control and Perception for an Autonomous Robotic Airship," in Sensor Based Intelligent Robots, Int. Workshop on, Dagstuhl, Germany, 2000.
[10] J. Pestana, J. Sanchez-Lopez, P. Campoy and S. Saripalli, "Vision based GPS-denied Object Tracking and Following for Unmanned Aerial Vehicles," in IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), Linkoping, Sweden, 2013.
[11] Y. Kawai, S. Kitagawa, S. Izoe and M. Fujita, "An Unmanned Planar Blimp on Visual Feedback Control: Experimental Results," Faculty of Engineering, Kanazawa University, Kanazawa, Japan, 2004.
[12] L. M. Alkurdi and R. B. Fisher, "Visual Control of an Autonomous Indoor Robotic Blimp," University of Edinburgh, Edinburgh, UK, 2011.
[13] J. Muller, N. Kohler and W. Burgard, "Autonomous Miniature Blimp Navigation with Online Motion Planning and Re-planning," in IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 2011.
[14] C. Shucksmith, "Design and Implementation of an Airborne Surveillance Vehicle," Oxford University, Oxford, UK, 2006.
[15] S. van der Zwaan, A. Bernardino and J. Santos-Victor, "Vision based station keeping and docking for an aerial blimp," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2000), vol. 1, Takamatsu, Japan, 2000.
[16] Adafruit Industries, "Adafruit 10-DOF IMU Breakout – L3GD20H + LSM303 + BMP180," 15 10 2014. [Online]. Available: http://www.adafruit.com/product/1604. [Accessed 20 10 2014].
[17] S. B. V. Gomes, "An Investigation of the Flight Dynamics of Airships," Cranfield Institute of Technology Aero Department, Cranfield, UK, 1990.
[18] N. Yamamura, K. Matsuuchi, M. Onda, S. Yamazaki and A. Sasaki, "Drag Reduction of High Altitude Airships by Active Boundary Layer Control," Fluids and Thermal Engineering, JSME Int. Journal Series B, vol. 42, no. 1, pp. 230-237, 2008.
[19] E. L. Houghton, P. W. Carpenter, S. Collicott and D. Valentine, "Turbulence on Spheres," in Aerodynamics for Engineering Students, Elsevier, 2012, p. 507.
[20] P. Naaijen, "Rigid Body Dynamics," in TU Delft OCW, 2014. [Online]. Available: http://ocw.tudelft.nl/fileadmin/ocw/courses/OffshoreHydromechanics/res00032/!506172742032204f666673686f726520487964726f6d656368616e696373.pdf. [Accessed 03 06 2015].
[21] J. J. G. Ramos, S. M. Maeta, L. G. B. Mirisola, S. S. Bueno, M. Bergerman, B. G. Faria, G. E. M. Pinto and A. H. Bruciapaglia, "Internet-Based Solutions in the Development and Operation of an Unmanned Robotic Airship," Proceedings of the IEEE, vol. 91, no. 3, pp. 463-475, 2003.
[22] Z. Zhang, "Flexible camera calibration by viewing a plane from unknown orientations," in Computer Vision, Proc. 7th IEEE Int. Conf. on, Kerkyra, Greece, 1999.
[23] D. F. DeMenthon and L. S. Davis, "Model-Based Object Pose in 25 Lines of Code," Computer Vision, Int. Jrn. of, vol. 15, no. 1-2, pp. 123-141, 1995.
[24] R. Barnett, "RTIMULib – a versatile C++ and Python 9-dof and 10-dof IMU library," 11 12 2014. [Online]. Available: https://github.com/richards-tech/RTIMULib/. [Accessed 01 01 2015].
[25] M. Narayana, A. Hanson and E. Learned-Miller, "Coherent Motion Segmentation in Moving Camera Videos using Optical Flow Orientations," in Computer Vision (ICCV), 2013 IEEE Int. Conf. on, Sydney, Australia, 2013.
[26] N. Otsu, "A Threshold Selection Method from Gray-Level Histograms," Systems, Man & Cybernetics, IEEE Trans. on, vol. 9, no. 1, pp. 62-66, 1979.
[27] G. Ligorio and A. M. Sabatini, "Extended Kalman Filter-Based Methods for Pose Estimation Using Visual, Inertial and Magnetic Sensors: Comparative Analysis," MDPI Sensors, vol. 13, no. 2, pp. 1919-1941, 2013.
[28] I. P. Jakoubek, "Experimental Identification of Stable Nonoscillatory Systems from Step-Responses by Selected Methods," 2008.
[29] J. Ko, D. Klein, D. Fox and D. Haehnel, "Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp," in IEEE International Conference on Robotics and Automation, Rome, Italy, 2007.
[30] Adafruit Industries, "Adafruit 16-Channel 12-bit PWM/Servo Driver – I2C interface," 16 02 2015. [Online]. Available: http://www.adafruit.com/product/815. [Accessed 20 10 2014].
APPENDICES – SUPPLEMENTARY FIGURES
A. The Blimp

Figure 11 – Cost breakdown: blimp envelope £197; 3D printed enclosure* £30; Raspberry Pi, Wi-Fi and camera £28; BLDC motors, ESC and battery* £33; IMU and PWM driver £41; electronics and other* £10. * denotes free-issue items.

Figure 12 – Schematic for two channels of optoisolation/buffering, with the signal flow from PWM output to servo highlighted (PWM signal, MCT61 twin optocoupler (DIP), LM358A twin op-amps, thrust vector servo signal; logic side: 5.0 V dc supply, motor side: 7.4 V dc supply). The blimp has twelve channels of optically isolated output available.

B. Software

Figure 13 – Thread interactions during startup. Bold lines show the flow of execution; dashed lines represent data exchange between functions.
Figure 14 – Thread interactions during runtime. Bold lines show the flow of execution; dashed lines represent data exchange between functions. At termination, the Hardware Abstraction Thread stops all motors, the GUI Thread writes log files to disc, and the Vision Thread closes the camera.

C. Performance Results

Figure 15 – Comparison of the step response predicted by the second-order-with-time-delay modelled transfer function (Harriott's method) and the median step response of the real blimp. Inset – correlation plot between the two step responses; Spearman's R² = 0.967.