The University of Manchester
School of Mechanical, Aerospace and Civil Engineering
TOWARDS 3D OBJECT CAPTURE FOR
INTERACTIVE CFD
WITH AUTOMOTIVE APPLICATIONS
A dissertation submitted to The University of Manchester for the degree of
Master of Science in the Faculty of Science and Engineering.
2016
Malcolm Olav Dias
9803763
Supervisor: Dr Alistair Revell
Towards 3D Object Capture for Interactive CFD with Automotive Applications Malcolm O. Dias | 9803763
Table of Contents
List of Figures..................................................................................................................................................................4
List of Tables....................................................................................................................................................................5
Abstract..............................................................................................................................................................................6
Declaration.......................................................................................................................................................................7
Copyright Statement ....................................................................................................................................................7
Acknowledgements .......................................................................................................................................................9
1. Introduction.........................................................................................................................................................10
1.1. Motivation ...................................................................................................................................................10
1.2. Computational Fluid Dynamics..........................................................................................................12
1.2.1. Lattice-Boltzmann method (LBM)...........................................................................................16
1.3. Geometry Capture ....................................................................................................................................20
1.3.1. Microsoft Kinect Camera..............................................................................................................25
1.4. Object Capture Pipeline (OCP)............................................................................................................26
1.5. Objectives.....................................................................................................................................................27
2. Literature Review..............................................................................................................................................28
2.1. 3D Scanning................................................................................................................................................28
2.2. Post-processing Point Cloud ................................................................................................................35
3. Methodology........................................................................................................................................................43
3.1. Laboratory upgrade................................................................................................................................43
3.2. Rough Alignment......................................................................................................................................44
3.3. Varying SOR parameters.......................................................................................................................47
3.4. Varying Registration Parameters .....................................................................................................48
3.5. Setting up Clip Box...................................................................................................................................50
3.6. Modifying Point Cloud using Axis Alignment - Bounding Box (AABB)..............................50
4. Results and Discussion.....................................................................................................................................52
4.1. Laboratory Upgrade ...............................................................................................................................52
4.2. Rough Alignment......................................................................................................................................55
4.3. SOR Filtering...............................................................................................................................................61
4.3.1. Time Analysis ....................................................................................................................................61
4.3.2. Qualitative Analysis........................................................................................................................65
4.4. Registration ................................................................................................................................................70
4.4.1. Time Analysis ....................................................................................................................................70
4.4.2. Qualitative Analysis........................................................................................................................75
4.5. Clip Box.........................................................................................................................................................79
4.6. Axis Alignment – Bounding Box (AABB).........................................................................................81
5. Conclusions...........................................................................................................................................................83
5.1. Future Improvements.............................................................................................................................86
6. References.............................................................................................................................................................88
Appendix A: Machine Specifications and Software Used ...........................................................................91
Word Count: 13862
List of Figures
Figure 1.1.1: Outline of the research carried out by Harwood et al. ....................................11
Figure 1.2.1: Nodal points for FDM, FEM and FVM......................................................................................15
Figure 1.2.2: Meshing on a complex geometry [7]........................................................................................15
Figure 1.2.3: Techniques of simulation [2].......................................................................................................16
Figure 1.2.4: Lattice arrangements [2].............................................................................................................19
Figure 1.4.1: General Flow Chart of an OCP used for the research [Harwood A., 2016]..............27
Figure 2.1.1: Scanner accuracy (Small parabola: triangulation scanner with short base. Large
parabola: triangulation scanner with long base. Straight line: Time of flight) [9] .......................29
Figure 2.1.2: The calibration board in the IR .................................................................................................33
Figure 2.2.1: Feature Histograms for corresponding points on different point cloud datasets [19] ....38
Figure 3.1.1: Original laboratory set-up (before upgrade) ......................................................................43
Figure 3.2.1: Kinect Camera Local Axes [21] ..................................................................................................45
Figure 3.2.2: Translation Parameters for Rough Alignment ...................................................................46
Figure 3.2.3: Rotation Parameters for Rough Alignment..........................................................................46
Figure 3.3.1: SOR filtering (For neighbours = 10).........................................................................................47
Figure 3.4.1: Registration........................................................................................................................................49
Figure 4.1.1: Schematic of proposed laboratory design.............................................................................53
Figure 4.1.2: Schematic of the aluminium frame ordered ........................................................................53
Figure 4.1.3: Kinect camera clamped to the aluminium frame ..............................................................54
Figure 4.1.4: Laboratory setup after upgrading ...........................................................................................54
Figure 4.1.5: Upgraded car model base.............................................................................................................55
Figure 4.2.1: Point Cloud raw data from Kinect (Before Processing)..................................................56
Figure 4.2.2: Laboratory arrangement with global axes ..........................................................................57
Figure 4.2.3: Point Cloud Data after aligning Camera axes to Global axes.......................................58
Figure 4.2.4: Point Cloud Data after Rough Alignment..............................................................................60
Figure 4.2.5: Zoomed in view to locate the rough alignment of Car model.......................................61
Figure 4.3.1: Variation of Time (T/T_SD-max) for ‘Standard Deviation’ for various ‘Neighbours’ ....63
Figure 4.3.2: Variation of Time (1 - T/T_avg@N) for ‘Neighbours’ for various ‘Standard Deviation’ .....
Figure 4.3.3: SOR Filtered – Camera 1 for Std.Dev. = 1.0 and various Neighbours ........................66
Figure 4.3.4: SOR Filtered – Camera 1 for Neighbours = 10 and various Standard Deviation .68
Figure 4.3.5: SOR Filtered – All Cameras for Neighbours = 5 and Standard Deviation = 1.0.....69
Figure 4.4.1: Effect of selecting low Correspondence Distance value ..................................................72
Figure 4.4.2: Variation of Time (T/T_min@MI,RT) with respect to ‘Correspondence Distance’ for
different values of ‘Registration Tolerance’ and ‘Maximum Iterations’..............................................73
Figure 4.4.3: Variation of Time (T/T_avg@MI,CD) with respect to ‘Registration Tolerance’ for
different ‘Correspondence Distance’ and ‘Maximum Iterations’ values..............................................74
Figure 4.4.4: Point Cloud obtained from the CAD design of the car model........................................75
Figure 4.4.5: Registered Point Cloud for Registration Tolerance = 0.01, Maximum Iterations of
10 and 20 and Correspondence Distance of 0.1m, 1m and 10m.............................................................77
Figure 4.4.6: Registered Point Cloud for Correspondence Distance =10m, Registration
Tolerance = 0.1 and 0.001 and Maximum Iterations of 10 and 20 .......................................................78
Figure 4.4.7: Effect of selecting a high Correspondence distance value..............................................78
Figure 4.5.1: Effect of using different Clip Box limits ..................................................................................80
Figure 4.6.1: Point cloud after AABB..................................................................................................................82
List of Tables
Table 1.3.1: Stereoscopic scanner ‘Real-View 3D’ [13]...............................................................................22
Table 1.3.2: Scanning technology principles [9]............................................................................................24
Table 2.1.1: Device failure ratios for two application modes for the major error sources
discussed [16] ...............................................................................................................................................................35
Table 4.2.1: Camera colour code legend for SOR point cloud data .......................................................56
Table 4.2.2: Values used for Translation input file for Rough Alignment ..........................................59
Table 4.2.3: Values for Rotation input file for Rough Alignment ...........................................................59
Table 4.3.1: Time Taken for SOR filtering........................................................................................................62
Table 4.4.1: Time taken for registration at Maximum Iteration =10...................................................70
Table 4.4.2: Time taken for registration at Maximum Iteration =20...................................................70
Table 4.5.1: Clip Box limits selected....................................................................................................................81
Table 4.6.1: Parameters used for AABB ............................................................................................................82
Abstract
Computational Fluid Dynamics (CFD) is an essential engineering activity, with most
engineering universities and industries carrying out research to improve it. To this
end, research is being carried out by Harwood et al. at The University of Manchester to innovate the
way CFD is being used. Their research uses depth-sensing cameras to capture the
geometry of an object which can then be post-processed, and a CFD analysis can be
carried out using the lattice-Boltzmann method (LBM). The work done in this report
will form a part of the main research and deal with upgrading the scanning laboratory
and carrying out a study on the effect of the input parameters used in the object
reconstruction. A wooden base was designed, built and then fixed to the car model to
attain completeness. The laboratory was upgraded from four cameras to six cameras,
and an aluminium frame was installed to mount the cameras instead of tripods to make
the laboratory setup more steady and robust. The object capture software was updated
to incorporate the new laboratory setup to perform rough alignment. The effect of
varying the input values for noise filtering was studied and it was found that the
‘neighbours’ parameter had the maximum influence on time taken. However, both
neighbours and standard deviation affected the point cloud data obtained after filtering.
Varying the registration parameters suggested that the physical parameter
(correspondence distance) had the major effect on the time taken for registration and
also on the resulting point cloud. The effect of the other two criteria used for
registration was negligible. A brief analysis on rough alignment, clip box and axis
alignment bounding box stages of the capture software was carried out as well.
Declaration
No portion of the work referred to in the dissertation has been submitted in support
of an application for another degree or qualification of this or any other university
or other institute of learning.
Copyright Statement
i. The author of this dissertation (including any appendices and/or
schedules to this dissertation) owns certain copyright or related rights in
it (the “Copyright”) and s/he has given The University of Manchester
certain rights to use such Copyright, including for administrative
purposes.
ii. Copies of this dissertation, either in full or in extracts and whether in hard
or electronic copy, may be made only in accordance with the Copyright,
Designs and Patents Act 1988 (as amended) and regulations issued under
it or, where appropriate, in accordance with licensing agreements which
the University has entered into. This page must form part of any such
copies made.
iii. The ownership of certain Copyright, patents, designs, trademarks and
other intellectual property (the “Intellectual Property”) and any
reproductions of copyright works in the dissertation, for example graphs
and tables (“Reproductions”), which may be described in this dissertation,
may not be owned by the author and may be owned by third parties. Such
Intellectual Property and Reproductions cannot and must not be made
available for use without the prior written permission of the owner(s) of
the relevant Intellectual Property and/or Reproductions.
iv. Further information on the conditions under which disclosure, publication
and commercialisation of this dissertation, the Copyright and any
Intellectual Property and/or Reproductions described in it may take place
is available in the University IP Policy, in any relevant Dissertation
restriction declarations deposited in the University Library, and The
University Library’s regulations.
Acknowledgements
I take this opportunity to express my sincere gratitude to my supervisor Dr Alistair
Revell for his support and encouragement for this dissertation. I would also like to
express my gratitude to Dr Adrian Harwood for his cordial support, valuable
information and guidance.
I would like to thank Mr Thomas Lawton, Mrs Natalie Parish and all the laboratory
technical support staff at George Begg, for helping me while setting up the
laboratory.
I would also like to thank my family and friends for their continuous encouragement
without which this dissertation would not have been possible.
1. Introduction
The availability of new devices, faster computers and other resources makes it
easier to develop new technologies and innovative ideas, and modern tools and
devices can likewise be used to advance engineering methods and practices.
Engineering design, one of the first stages of product development, is evolving
with technology as researchers and engineers work to improve it. A detailed
understanding of complex fluid flow is an essential aspect of many modern
engineering systems and is hence integral to engineering design. The growth of
Computational Fluid Dynamics (CFD) has made the study of fluid flow much more
accurate, and CFD simulations can also be used to perform thermal investigations
(such as heat transfer). CFD therefore forms a crucial stage of the design phase,
but it is far from exact, and accuracy comes at the expense of computational
effort.
1.1. Motivation
Aesthetics is often given great importance, for example when designing a new car
model. Designers may produce the best ‘looking’ design, but that design may
compromise engineering performance. To evaluate it, engineers need to carry out
various analyses and tests, including CFD. Mesh construction is a prerequisite for the typical CFD
techniques, which can be a time-consuming process; simulations can take days or
even weeks to complete. What if an object is modified physically, by adding or
removing a part or perhaps changing the entire shape (e.g. as in clay-constructed
models), and the engineers are asked to carry out the analysis at the same time?
They may have to redesign the entire object in CAD and then reconstruct the
mesh, a tedious and repetitive process on top of the time meshing already takes.
To improve this process, it may be beneficial to run faster, lower-accuracy
simulations to steer design. Research is being carried out at The University of
Manchester in which depth-sensing cameras are used to capture the geometric
parameters of a tangible object, which is then post-processed so that real-time
interactive CFD analyses can be carried out using the lattice-Boltzmann method.
Figure 1.1.1: Outline of the research carried out by Harwood et al. [Flow chart: OCP (Kinect Capture, Registration, Filtering, AABB) → LBM (Voxeliser, BC Config, Solver) → Vis (Particle Generator, Field Mapping)]
The research detailed in this report will concentrate on improving the software
used to reconstruct captured objects. This software is referred to as the Object
Capture Pipeline (OCP). The effects of varying different input parameters on the
results obtained are studied.
1.2. Computational Fluid Dynamics
Performing experiments and tests can be helpful in gathering important information,
but experiments usually turn out to be expensive and time-consuming. Moreover,
scaling the parameters correctly may be difficult, measuring devices or probes
inserted into the flow may disturb its properties, and accessing complex locations
in the flow may be difficult. Replicating experiments involving explosions or blasts
may pose safety risks to the environment and to the individuals performing them.
Some simple problems can be tackled with empirical correlations; however, these are
not applicable (or not available) for complex flows. In such situations, simulations
must be performed to provide insight.
The Navier-Stokes (NS) equations (Equations 1.1 and 1.2) may be used to describe
the behaviour of a viscous flow of a Newtonian fluid.
Continuity:
\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = 0 \qquad (1.1) \]

Momentum:
\[ \rho \left( \frac{\partial \vec{u}}{\partial t} + \vec{u} \cdot \nabla \vec{u} \right) = -\nabla p + \mu \nabla^{2} \vec{u} + \rho \vec{g} \qquad (1.2) \]
The equations above are derived from the fundamental laws of continuum mechanics,
i.e. the conservation of mass and the conservation of momentum, and hence solving
them allows us to understand the flow physics. In practical applications,
additional terms and equations may be needed to account for heat transfer,
turbulence, etc. However, the above equations are impossible to solve analytically
for realistic boundary conditions, and hence a discrete representation and a
computational solver are needed. The test domain is divided into smaller sections
(sub-domains) called elements or cells (or volumes in 3D). Usually, these sections
are constructed from geometric primitives such as triangles and quadrilaterals in
2D, and cubes, prisms, tetrahedra, etc. in 3D [4]. A group of cells or elements
arranged in a particular test domain is referred to as a ‘mesh’. The partial
differential equations (PDEs) are then approximated over each element using a
discretisation scheme. Common schemes are the Finite Element Method (FEM), the
Finite Difference Method (FDM) and the Finite Volume Method (FVM).
In the 1960s the finite element method was used to solve structural analysis
problems, and around the same time the finite difference method was used to some
extent to solve the fluid dynamics equations. In the 1980s the finite volume method
was developed and became extensively used to solve the fluid transport equations [2].
In the finite difference method, derivatives are represented using Taylor series
expansions and the dependent values are stored at the nodes; the terms are
represented as a fixed set of nodal quantities which are then solved for. In the
finite element method, the function is integrated over each finite element and the
dependent variables are stored at the centre of the element; the solution is
represented as values from each element weighted by a shape function, and requires
computing the mass, stiffness and damping coefficients. In the finite volume
method, the integrations are carried out over the control volume, and the dependent
variables are stored at the node at the centre of the control volume; the flux
terms on the faces are reconstructed to obtain conservation equations, which are
solved first for the fluxes and then for the remaining variables. The discretised
equations are solved iteratively until the convergence criteria are met. Complex
geometries and structures increase the difficulty of mesh construction, and
establishing a mesh with suitable accuracy can therefore be a time-consuming
process.
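As a concrete illustration of the Taylor-series idea behind the finite difference method, the short sketch below (an illustrative example, not code from this project) approximates a first derivative with a second-order central difference:

```python
import numpy as np

def central_difference(f, x, h=1e-5):
    """Second-order central finite difference approximation of df/dx.

    Derived from the Taylor expansions of f(x + h) and f(x - h); the
    leading truncation error is O(h^2).
    """
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Example: d/dx sin(x) at x = 0 is cos(0) = 1.
approx = central_difference(np.sin, 0.0)
```

On a mesh, the same stencil is applied at every node, turning the PDE into a coupled algebraic system of nodal quantities as described above.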
(a) : FDM (b) : FEM (c) : FVM
Figure 1.2.1: Nodal points for FDM, FEM and FVM
Figure 1.2.2: Meshing on a complex geometry [7]
Some modern ‘meshless’ methods, such as Smoothed Particle Hydrodynamics (SPH),
have been developed and are gaining importance [22]. Another recent method, which
works on Cartesian grids (lattices), is the lattice-Boltzmann method (LBM); it is
gaining popularity for its speed of operation and its range of applications (e.g.
multiphase flows). The equations used in the LBM to model fluid flow are local,
i.e. they are explicit, which makes the lattice-Boltzmann method well suited to
programming on parallel processing computers [1]. Of the methods currently
available, the LBM is therefore highly appropriate for interactive CFD.
1.2.1. Lattice-Boltzmann method (LBM)
The lattice-Boltzmann method was originally introduced in 1988 by McNamara and
Zanetti [2][6] to overcome the weaknesses (e.g. statistical noise [6]) of cellular
gas automata¹. It was only during the late 1990s and early 2000s, after continuous
development, that the method came into prominence. The lattice-Boltzmann method is
a mesoscale method, lying between molecular dynamics and macroscale dynamics: in
molecular dynamics each collision is considered explicitly, while the macroscale
assumes a continuous medium; the mesoscale considers a group of particles and
analyses their motion using particle dynamics.
Figure 1.2.3: Techniques of simulation [2]
In the lattice-Boltzmann method, the properties of a collection of particles may be
statistically represented by using a distribution function. The main idea of the
¹ Cellular gas automata: a method used to model fluid flow by assuming the fluid is made up of particles
which undergo binary collisions (i.e. only two particles can collide at a time).
lattice-Boltzmann method is that a fluid can be imagined as comprising a large
number of small particles in random motion, with momentum and energy transferred
through streaming and collision. The Boltzmann equation (Equation 1.3) [6] is used
to model this movement.
\[ \frac{\partial f}{\partial t} + \vec{c} \cdot \nabla f = \Omega \qquad (1.3) \]
where \(f(\vec{x}, \vec{c}, t)\) is the particle distribution function², \(\vec{c}\) is the particle velocity and \(\Omega\) is
the collision operator (the rate of change of \(f\) due to collisions), which redistributes the particle
momenta. The collision operator can be simplified using the Bhatnagar-Gross-Krook
(BGK) approximation, which increases the efficiency of the simulations and makes
the transport coefficients more flexible [2].
\[ \Omega = -\frac{1}{\tau} \left( f - f^{eq} \right) \qquad (1.4) \]
The collision and streaming processes can therefore be discretised as follows [6]:

Collision: \[ f_i^{*}(\vec{x}, t) = f_i(\vec{x}, t) - \frac{\Delta t}{\tau} \left( f_i(\vec{x}, t) - f_i^{eq}(\vec{x}, t) \right) \qquad (1.5) \]

Streaming: \[ f_i(\vec{x} + \vec{c}_i \Delta t,\; t + \Delta t) = f_i^{*}(\vec{x}, t) \qquad (1.6) \]
where \(f_i^{*}\) is the distribution function after collision, \(f_i^{eq}\) is the equilibrium
distribution function (the Maxwell-Boltzmann distribution), \(\tau\) is the relaxation
time, i.e. the time required for the distribution function to return to its
equilibrium state, \(\Delta\vec{x} = \vec{c}_i \Delta t\) is the distance between neighbouring lattice
nodes, \(\vec{c}_i\) are the molecular velocities, in which the subscript \(i\) denotes the
direction based on the lattice arrangement, \(t\) is the time and \(\Delta t\) is the time step.

² Particle Distribution Function – it represents the proportion of particles at a particular lattice site moving in a
given lattice direction.
More complex relaxation-time schemes are also available:
 TRT (Two-Relaxation-Time): for two-component flow (multiphase)
 MRT (Multiple-Relaxation-Time): to provide stability at high Reynolds
numbers (necessary for turbulent flow)
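To make the single-relaxation-time (BGK) update of Equations 1.5 and 1.6 concrete, the following minimal sketch applies one collide-and-stream step on a one-dimensional, periodic D1Q3 lattice. This is an illustrative example, not the solver used in the research; the relaxation time, weights and linearised equilibrium are assumed values for demonstration.

```python
import numpy as np

c = np.array([0, 1, -1])           # D1Q3 discrete velocities (rest, right, left)
w = np.array([2/3, 1/6, 1/6])      # corresponding lattice weights
tau = 0.8                          # relaxation time (assumed; must exceed 0.5)

def equilibrium(rho, u):
    """Linearised equilibrium distribution, f_i^eq = w_i rho (1 + c_i u / c_s^2),
    with lattice speed of sound c_s^2 = 1/3 for D1Q3."""
    return w[None, :] * rho[:, None] * (1.0 + 3.0 * u[:, None] * c[None, :])

def step(f):
    """One LBM time step: collision (Eq. 1.5) then streaming (Eq. 1.6)."""
    rho = f.sum(axis=1)                    # zeroth moment: density
    u = (f @ c) / rho                      # first moment: velocity
    feq = equilibrium(rho, u)
    f_star = f - (f - feq) / tau           # collision: relax towards equilibrium
    f_new = np.empty_like(f)
    for i, ci in enumerate(c):             # streaming: shift along lattice links
        f_new[:, i] = np.roll(f_star[:, i], ci)
    return f_new
```

Because the collision conserves the zeroth and first moments and the periodic streaming merely relocates distributions, total mass is preserved exactly from step to step, which is one of the attractive structural properties of the method.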
In the LBM, the solution domain is divided into one or more lattices, and at each
lattice node the components of the particle distribution function f are stored. The
distributions move along specified directions to the neighbouring nodes; the
number of directions depends on the lattice arrangement. Lattice arrangements are
usually classified by the DnQm scheme, where ‘n’ refers to the number of physical
dimensions and ‘m’ refers to the number of discretised velocities (which becomes
more relevant as one considers particle energies). For example, D2Q9 has nine
discretised velocities and three speeds (0, 1, √2), while a D3Q27 configuration has
27 velocities and four speeds (0, 1, √2, √3). Using a higher-order lattice
arrangement includes more information and therefore makes the solution more
accurate; however, selecting a larger configuration increases the time taken to
compute the solution.
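The D2Q9 arrangement described above can be written down explicitly. The sketch below (illustrative, not project code) lists the nine discrete velocity vectors and confirms that they yield exactly the three speeds 0, 1 and √2:

```python
import numpy as np

# D2Q9 discrete velocities: rest particle, four axis-aligned
# links and four diagonal links.
c = np.array([[ 0,  0],
              [ 1,  0], [ 0,  1], [-1,  0], [ 0, -1],   # speed 1
              [ 1,  1], [-1,  1], [-1, -1], [ 1, -1]])  # speed sqrt(2)

# The distinct speeds of the lattice: magnitudes of the velocity vectors.
speeds = np.unique(np.linalg.norm(c, axis=1))  # three speeds: 0, 1, sqrt(2)
```

The same construction extends to D3Q15 or D3Q27 by enumerating the corresponding 3D link vectors, which is why larger configurations carry more information at greater cost.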
(a) : For 1D Problems D1Q3
(b) : For 2D Problems D2Q9
(c) : For 3D problems D3Q15
Figure 1.2.4: Lattice arrangements [2]
The macroscopic quantities can be related to the above equations through the
following statistical moments:

\[ \rho = \int f \, d\vec{c} \qquad (1.7) \]

\[ \rho \vec{u} = \int \vec{c} \, f \, d\vec{c} \qquad (1.8) \]

where \(\rho\) is the density of the fluid and \(\vec{u}\) is the fluid velocity vector.
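On a lattice the integrals of Equations 1.7 and 1.8 become sums over the discrete velocity directions. A possible sketch for a D2Q9 lattice (illustrative only; the function name `macroscopic` is not from the dissertation):

```python
import numpy as np

# D2Q9 discrete velocity vectors (one row per direction).
c = np.array([[0, 0],
              [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def macroscopic(f):
    """Recover density and velocity from the distributions f.

    f has shape (..., 9): one distribution value per lattice direction
    at each node. Density is the zeroth moment (Eq. 1.7, as a sum);
    momentum is the first moment (Eq. 1.8).
    """
    rho = f.sum(axis=-1)              # rho = sum_i f_i
    u = (f @ c) / rho[..., None]      # u = (sum_i c_i f_i) / rho
    return rho, u
```

For a uniform distribution the recovered velocity is zero, since the nine lattice velocities sum to the zero vector.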
It is also interesting to note that the Navier-Stokes equations can be recovered
from the lattice-Boltzmann equations with the help of the Chapman-Enskog expansion
[2][6]. Like every method, the LBM has its limitations, one of which is that it is
valid only at very low Mach numbers; it is also difficult to model
high-Reynolds-number flows. The main advantages of the LBM over the NS equations
are that its operations are linear rather than nonlinear, there is no
pressure-velocity coupling as in the NS equations, and the LBM equations are local
rather than elliptic. Additionally, the local nature of the LBM equations makes
them efficient to program on parallel processing machines (especially on Graphics
Processing Units (GPUs)). In the case of a moving boundary, traditional CFD methods
need to trace the boundary, which is not needed in the LBM, making it a suitable
technique for multiphase problems such as solidification and melting. The speed of
the LBM is what makes it ideal for attempting real-time³ CFD simulations [1].
1.3. Geometry Capture
In classic CFD methods, the subject on which the study is carried out is usually
obtained from a 3D CAD design. This design is then fed into a mesh generating
software (e.g. ANSYS etc.), and the mesh is constructed. However, the essence of the
research carried out by Harwood et. al. is to replace this step by establishing an
³ Computational time equal to physical time.
alternative for constructing the geometry of the subject. One method which is
widely used for 3D object capture in general, though rarely for CFD, is 3D scanning.
3D scanning is the method (or technology) of converting the geometrical parameters
(and sometimes the appearance, such as colour) of a physical object and/or scene
into a set of numerical data which can be interpreted and post-processed using
suitable software. The devices which acquire the data are called 3D scanners
[9][11]. The data from a 3D scanner is usually called a ‘point cloud’. A 3D scanner
shares many features with a camera: both have a field of view, usually in the shape
of a cone, and both capture information about the planes and surfaces of a visible
object. However, while a camera captures the colour of a surface, a scanner
captures the distance of the surface from the sensor, i.e. the depth information.
Please note that the words camera and scanner are used interchangeably in this
report.
Different scanners use different scanning technologies, each with its own
advantages and drawbacks. These technologies can be broadly divided into two main
categories: contact and non-contact. In contact 3D scanning, the depth is measured
by physically touching the object while it rests on a precision flat surface; when
the surface is not flat, suitable fixtures are used to hold the object. Intuitively,
physically touching the object can damage it, so this method is not advisable for
sensitive or precious objects. A coordinate measuring machine (CMM) is a classic
example of this technique.
The non-contact 3D scanning technique is further divided into active and passive
methods. Non-contact passive devices use natural light or available radiation
instead of an emitter. This method is usually very cheap as it does not need any
expensive hardware; a simple camera suffices. A stereoscopic non-contact passive
3D scanner, which uses two cameras to capture the geometry [14], is shown in the
figure below.
Figure 1.3.1: Stereoscopic scanner ‘Real-View 3D’ [13]
Non-contact active scanners emit light or radiation (ultrasound, x-rays etc.) and
detect the reflection (or radiation through the object) in order to sense the object
and/or the environment. Some of the techniques used in 3D scanners are briefly
described below.
(a) Time-of-Flight: Light (usually laser) is emitted by the device in the
direction of the object to be probed and the time taken to receive the reflected
light is measured. Since the speed of light is known, the distance of the object
from the sensor can easily be computed. The accuracy of the instrument relies
mainly on the precision of the time measurement, because the speed of light is so
high [9]. Nevertheless, these devices are used to survey buildings and
geographical features, as they are capable of working over long distances (on
the order of kilometres).
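A minimal sketch of the time-of-flight calculation follows; the example numbers are illustrative, and the computation also shows why timing precision dominates accuracy:

```python
# Sketch: time-of-flight ranging. Distance = c * (round-trip time) / 2.
# A 1 ns timing error already corresponds to ~15 cm of range error.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_seconds):
    """Range of the target from the measured round-trip time."""
    return C * round_trip_seconds / 2.0

d = tof_distance(66.7e-9)   # a ~10 m target
err = tof_distance(1e-9)    # range error per 1 ns of timing error (~0.15 m)
```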
(b) Triangulation: In this technique, an emitter projects laser light onto the
object and a camera is used to find the location of the laser dot. The laser dot
appears at different locations in the camera's field of view depending on how far
the object is from the laser. The name triangulation comes from the triangle
formed by the laser emitter, the laser dot and the camera. The distance between
the emitter and the camera and the angle of the laser emitter are fixed in
advance, hence already known. The second angle, formed at the camera corner of the
triangle, can be obtained from the location of the laser spot in the camera's
field of view. With these three parameters, the distance (depth) of the object or
the environment can be computed [9]. Scanners using this technique can only be
used for short-range measurements, but they have high accuracy.
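Assuming both angles are measured from the baseline joining emitter and camera, the depth computation can be sketched with elementary trigonometry (the function name and example values are invented for illustration):

```python
# Sketch: laser triangulation. The emitter, laser dot and camera form a
# triangle; with the baseline b and the two base angles known, the depth
# of the dot is the intersection height of the two rays:
#   depth = b * tan(a) * tan(c) / (tan(a) + tan(c))
import math

def triangulated_depth(baseline_m, emitter_angle_rad, camera_angle_rad):
    """Perpendicular distance of the laser dot from the baseline."""
    ta = math.tan(emitter_angle_rad)
    tc = math.tan(camera_angle_rad)
    return baseline_m * ta * tc / (ta + tc)

# Symmetric 45-degree rays over a 1 m baseline meet 0.5 m away.
d = triangulated_depth(1.0, math.radians(45), math.radians(45))
```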
(a) : Time of Flight principle
(b) : Triangulation principle – single camera (c) : Triangulation principle – double cameras
Figure 1.3.2: Scanning technology principles [9]
3D scanning devices typically use one of these two techniques to collect object data.
Devices fall into one of two broad categories:
Hand-held laser scanners: These types of scanners work on the triangulation
technique and use an internal coordinate reference system. An additional reference
system is used to calibrate the data when the scanner is in motion.
Towards 3D Object Capture for Interactive CFD with Automotive Applications Malcolm O. Dias | 9803763
25
Structured-light scanners: A defined pattern of light is projected from a stable
light source (e.g. an LCD projector) onto the object to be probed, and a camera at
an offset from the pattern emitter is used to analyse the deformation of the
pattern. Since multiple points in the field of view are scanned at once, the time
taken for scanning is relatively low. Because the entire field of view is captured
in one shot, distortions due to motion are reduced and hence the precision of the
scanned image is high.
1.3.1. Microsoft Kinect Camera
The Kinect is a motion-sensing device developed by Microsoft in 2010 for gaming
with the Xbox 360, but it has attracted a lot of attention from researchers due to
the way its data is captured [12]. Microsoft later launched its new gaming system,
the Xbox One, accompanied by a redesigned Kinect camera (Kinect 2.0). The depth
sensor on the initial model operated on the principle of triangulation with the
structured-light approach, whereas the new one works on the time-of-flight
technique, using infra-red (IR) blasters and sensors for depth sensing. Moreover,
the Kinect 2.0 has a wider field of view and an increased depth-capture resolution
compared with its predecessor. As an affordable solution, these devices are used in
this research project for object capture.
1.4. Object Capture Pipeline (OCP)
The data obtained from a 3D scanner (or any other scanner) needs to be processed
so that the captured information can be used for the desired purpose. A sequence
of tasks must be carried out on the data before it attains the form and structure
required for use in a CFD simulation. A pipeline, in computer programming, refers
to a sequence of processing elements (functions, subroutines, etc.) organised such
that the input to each section is taken from the output of the previous section;
it is analogous to a physical pipeline. Our Object Capture Pipeline (OCP), as the
name suggests, is a pipeline which captures an image (point cloud) from a
depth-sensing device and then processes it (alignment, filtering, etc.) to achieve
the necessary output. Its design depends on the methods used, so OCPs vary in
design, and the processing performed varies with the end result needed. In
Figure 1.4.1, the red blocks indicate the main routine of the OCP, with green,
orange and blue blocks indicating the subroutines and functions used within the
main code. The current OCP has a number of free parameters which need to be
selected optimally to maximise performance.
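The pipeline idea itself can be sketched very compactly: each stage consumes the previous stage's output. The stage names below are invented for illustration and are not the actual OCP routines:

```python
# Sketch of the pipeline pattern: output of each stage feeds the next.
def capture():            # stand-in for grabbing a point cloud
    return [(0.10, 0.20, 1.50), (0.11, 0.21, 1.52), (9.0, 9.0, 9.0)]

def clip(cloud):          # keep only points inside a working volume
    return [p for p in cloud if all(abs(c) < 2.0 for c in p)]

def filter_noise(cloud):  # placeholder for a noise filter stage
    return cloud

def run_pipeline(stages):
    data = stages[0]()            # first stage produces the data
    for stage in stages[1:]:
        data = stage(data)        # each later stage transforms it
    return data

cloud = run_pipeline([capture, clip, filter_noise])
```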
Figure 1.4.1: General Flow Chart of an OCP used for the research [Harwood A., 2016]
1.5. Objectives
The objectives of this dissertation are,
 To upgrade the scanning laboratory from four cameras to six cameras and
make the arrangement more stable.
 To upgrade the object to be scanned by completing its surface construction.
 To optimise, and where possible, remove the empiricism in the Object
Capture Pipeline.
 To study the effects of input parameters used for aligning and clipping the
point cloud.
 To study the effects of input parameters used for noise filtering and
registration (point cloud alignment) on the time taken and compare the
quality of the resulting point clouds.
2. Literature Review
This section discusses prior work by other researchers in similar fields.
Attention is focused on different scanning technologies, with prime importance
given to the Kinect camera, followed by different methods and approaches used for
registration and processing of point cloud data. The scope is limited to
relatively recent approaches; however, older findings are considered where
necessary.
2.1. 3D Scanning
(Boehler and Marbs, 2008) discuss the principle of operation and accuracy
considerations of different close-range 3D scanning instruments, focusing
primarily on the recording of cultural heritage. The authors state that ranging
scanners measure the horizontal and vertical angles and then compute the distance
either by the phase comparison method or by the time-of-flight method. In the
phase comparison method the light is sent out as an organised harmonic wave, and
the phase difference between the transmitted light and the light obtained by the
receiver is used to compute the distance. This method may, however, produce errors
in the results, as a well-defined returning signal is required [9]. On the other
hand, ranging scanners working on the time-of-flight principle compute the
distance by calibrating it against the time required for the laser beam to travel
from the transmitter to the receiver. These types of scanners may also give poor
results, since they use simpler algorithms, and the angular pointing of the beam
can affect the 3D accuracy of the results obtained [9]. Some 3D scanners work on
the triangulation principle, where one or two cameras are used to identify the
location of the light spot on the object (see Figure 1.3.1). The triangle so
formed is then used to obtain the 3D position of the object.
Figure 2.1.1: Scanner accuracy (Small parabola: triangulation scanner with short base. Large parabola:
triangulation scanner with long base. Straight line: Time of flight) [9]
While discussing accuracy considerations, (Boehler and Marbs, 2008) mention that
if the captured surfaces are irregular, modelling them with a mesh may be
cumbersome, since a smoothing operation cannot be applied (due to the presence of
noisy points); hence they suggest that an accurate scanner is desirable. The
authors also briefly explain the importance of speed, spot-size resolution, range
limits, influence of radiation, field of view, registration devices, imaging
cameras, ease of transportation, power supply and scanning software when selecting
a 3D scanner, although the main focus is kept on accuracy and principles of
operation.
The discussion by (Boehler and Marbs, 2008) is brief, but the parameters they
consider, including those only touched upon, provide a useful guideline for
selecting and comparing different 3D scanning devices.
(Khoshelham et al., 2012) give a detailed study of the Kinect operating on the
structured-light triangulation principle. They provide a mathematical model for
the depth of a point with respect to the sensor, given as follows,

Zk = Z0 / (1 + (Z0 / (f·b)) d) (2.1)

Where Z0 is the distance of the reference plane from the sensor, Zk denotes the
distance (depth) of the point k in object space, b is the base length, f is the
focal length of the infrared camera, D is the displacement of the point k in
object space, and d is the observed disparity in image space [15].
The object coordinates along the plane are given by (Khoshelham et al., 2012) as
follows,

Xk = −(Zk / f)(xk − x0 + δx) (2.2)

Yk = −(Zk / f)(yk − y0 + δy) (2.3)

Where xk and yk are the image coordinates of the point, x0 and y0 are the
coordinates of the principal point, and δx and δy are corrections for lens
distortion [15].
A linear relation between the normalised disparity d′ and the inverse depth of a
point is shown in Equation 2.4 [15],

Zk⁻¹ = (m / (f·b)) d′ + (Z0⁻¹ + n / (f·b)) (2.4)

where m and n relate the normalised disparity d′ to the raw disparity, d = m d′ + n.
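The depth model of Equation 2.1 and the in-plane coordinates of Equations 2.2-2.3 can be sketched as follows; the numeric parameter values below are invented for illustration and are not calibration values from the study:

```python
# Sketch of the Khoshelham et al. depth model (Eqs. 2.1-2.3).
# All parameter values here are illustrative assumptions.
def kinect_depth(Z0, f, b, d):
    """Eq. 2.1: depth Zk of point k from reference-plane distance Z0,
    focal length f, base length b and observed disparity d."""
    return Z0 / (1.0 + (Z0 / (f * b)) * d)

def object_coords(Zk, f, xk, yk, x0=0.0, y0=0.0, dx=0.0, dy=0.0):
    """Eqs. 2.2-2.3: in-plane coordinates; lens-distortion corrections
    dx, dy are set to zero in this sketch."""
    Xk = -(Zk / f) * (xk - x0 + dx)
    Yk = -(Zk / f) * (yk - y0 + dy)
    return Xk, Yk

# Zero disparity means the point lies on the reference plane (Zk = Z0).
Zk = kinect_depth(Z0=2.0, f=580.0, b=0.075, d=0.0)
```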
(Khoshelham et al., 2012) suggest that the errors in a Kinect camera may originate
in the sensor, the measurement setup and/or the properties of the object surface.
The depth resolution decreases quadratically with increasing distance from the
sensor; the point spacing in the depth direction (along the optical axis of the
sensor) is as large as 7 cm at the maximum range of 5 metres [15]. The random
error of the depth measurements also increases quadratically with increasing
distance from the sensor, reaching 4 cm at the maximum range of 5 metres [15]. To
remove distortions in the point cloud and misalignments between the colour and
depth data, an accurate stereo calibration of the IR camera and the RGB camera is
necessary [15].
The analysis carried out by (Khoshelham et al., 2012) is crucial when setting up
the cameras in the workspace. The results also help in understanding the potential
errors that can creep into the data.
(Smisek, Jancosek, & Pajdla, 2013) give a geometrical investigation of the Kinect
camera, outline its geometrical model, suggest a calibration technique and
demonstrate its performance by comparison with a SwissRanger SR-4000 and a
3.5-megapixel SLR stereo rig. As the authors describe, the Kinect functions as
both a depth camera and a colour (RGB) camera, which can be used to recognise
image content and textured 3D points. They calibrated the Kinect by showing the
same calibration target to the IR and RGB cameras, so that both cameras are
calibrated with respect to the same 3D points. The results demonstrate that a much
better image is obtained for calibration purposes by blocking the IR projector and
illuminating the target with an incandescent (halogen) lamp [12]. The complex
residual errors were also studied.
(a) : IR image of a calibration
checkerboard illuminated by IR
pattern
(b) : IR image of the calibration checkerboard
illuminated by a halogen lamp with the IR
projection blocked
Figure 2.1.2: The calibration board in the IR
They mounted the Kinect and the SLR stereo rig firmly together, aligned them, and
measured the same planar targets at 315 control calibration points on each of 14
targets. The SR-4000 3D ToF camera measured different planar targets, but over a
practically identical range of distances (0.9−1.4 metres from the sensor), at 88
control calibration points on each of 11 calibration targets [12]. The authors
conclude that, in terms of multi-view reconstruction quality, the Kinect achieved
better results than the SwissRanger SR-4000 and was close to the 3.5-megapixel
SLR stereo.
The research by (Smisek, Jancosek, & Pajdla, 2013) gives a detailed account of how
to calibrate a Kinect camera, which can be used for calibrating the Kinect cameras
in the working area. Their geometric models can be used effectively for
calibration, and the results of the comparison between the three types of camera
can be drawn upon when selecting scanning devices. The results can be said to
agree with the earlier study by (Khoshelham et al., 2012).
A study carried out by (Sarbolandi, Lefloch, and Kolb, 2015) presents a detailed
comparison between the two types of range-sensing Kinect camera: structured light
and time-of-flight. To conduct the comparison, they propose a framework of seven
different experimental setups. Their goal was to identify effects in the Kinect
cameras in such a way that the findings transfer to other range-sensing devices.
Their work gives a strong insight into the pros and cons of either device, so that
anyone using Kinect range-sensing cameras in a particular application can directly
assess the advantages and potential issues of each.
The seven setups were used to study the performance of each device under seven
different conditions: ambient background light; dynamic inhomogeneity and dynamic
scenery; semi-transparent media and scattering; the effect of having multiple
cameras; errors due to linearity; errors due to planarity; and device warm-up.
Table 2.1.1 shows the comparison between the performances of the two types of
Kinect camera, where ratios of infinity indicate high failure rates for the
time-of-flight Kinect, ratios close to zero indicate the same for the
structured-light device, and ratios close to one indicate that both devices
perform in much the same manner under the given conditions [16].
Table 2.1.1: Device failure ratios for two application modes for the major error sources discussed [16]
The comparison by (Sarbolandi, Lefloch, and Kolb, 2015) can be crucial in
selecting the type of camera for a given experimental setup, as they consider most
of the parameters that inform the choice of Kinect. Notably, ‘multi-device’
interference is present only in the structured-light Kinect, which suggests that
when multiple cameras are needed, the Kinect working on the time-of-flight
principle should be preferred.
2.2. Post-processing Point Cloud
(Schnabel, Wahl, and Klein, 2007) suggest a method for identifying certain
geometric shapes (planes, spheres, cylinders, cones and tori) using an efficient
RANSAC (Random Sample Consensus) method. The shapes mentioned have between three
and seven parameters, and every 3D point pi fixes one parameter of the shape. An
approximate surface normal ni is also computed, so that two further surface
parameters are obtained per sample, which in turn reduces the required number of
points. The authors note that the extra parameters can also be exploited to
identify the shapes faster. Appropriate methods for the detection of each type of
shape are explained in detail by (Schnabel, Wahl, and Klein, 2007). The number of
candidate shapes to be considered depends on the probability that the correct
shape is actually detected, and the authors derive a probability distribution for
this. They then devise the sampling strategy used to find the sample sets, which
affects the runtime complexity; hence a good sampling strategy is essential to
limit the number of candidate shapes. Each candidate shape is given a score, and
this score is evaluated to select the shape [18].
(Schnabel, Wahl, and Klein, 2007) report that their method effectively predicts
the correct shape. In the case of complex geometries, a basic shape is extracted
and the remaining points represent the residual complexity of the surface. Their
experiments also imply that the algorithm handles noisy data quite efficiently.
(Schnabel, Wahl, and Klein, 2007) state that the speed of their method, the
quality of its results and its modest data requirements make it a practical choice
for shape detection in many cases.
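The RANSAC idea underlying their method (fit a candidate shape to a minimal random sample, score it by inliers, keep the best) can be sketched in a toy setting; a 2D line stands in here for the planes, spheres and other shapes, and all values are illustrative:

```python
# Sketch of RANSAC: repeatedly fit a line to 2 random points and keep
# the fit that explains the most points within a distance tolerance.
import random

def ransac_line(points, iters=200, tol=0.05, seed=1):
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                      # skip degenerate vertical pairs
        m = (y2 - y1) / (x2 - x1)         # slope from the minimal sample
        c = y1 - m * x1                   # intercept
        inliers = sum(abs(y - (m * x + c)) < tol for x, y in points)
        if inliers > best_inliers:
            best, best_inliers = (m, c), inliers
    return best, best_inliers

# Ten points on y = 2x + 1 plus two gross outliers.
pts = [(x / 10, 2 * x / 10 + 1) for x in range(10)] + [(0.3, 9.0), (0.7, -4.0)]
(m, c), n = ransac_line(pts)
```

The outliers never pull the best fit away from the true line, which is the robustness property the paper exploits for shape detection in noisy scans.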
This work was published in 2007, which suggests that newer methods to post-process
point clouds may now be available. Nevertheless, their study and results are
certainly worth considering for understanding shape-detection algorithms, and
shape detection remains an important step when post-processing point clouds into
defined shapes.
(Rusu et al., 2008) discuss the persistent feature histogram algorithm for
arranging point cloud data into a global model. They state that the algorithm is
stable when dealing with noisy point data. The persistence of a given feature is
analysed at different scales in order to optimally classify a point cloud, and the
persistent features provide a fair starting point for the Iterative Closest Point
(ICP) algorithm. They discuss point feature histograms and the algorithm that
computes them. A compact point subset Pf is found which best represents the point
cloud in terms of its features. To select the best feature points for a given
cloud, they analyse the neighbourhood of every point p multiple times, enclosing p
in a sphere of radius ri centred on p. They then vary r over an interval depending
on the point cloud size and density, and compute the neighbourhood point feature
histograms for each point. Next, they take all the points in the cloud and compute
the mean of the feature distribution (the μ-histogram); the feature histogram of
every point is compared against the μ-histogram using a distance metric, building
a distribution of distances. Multiple radii can be considered to calculate the
persistence of a feature statistically [19]. Subsequently, they select the set of
points Pfi whose feature distances lie outside a defined limit as distinctive
features; this is done for each r. Finally, the features of the point subsets
which are persistent in both ri and ri+1 [19] are selected, that is:

Pf = ⋃i ( Pfi ∩ Pfi+1 ) (2.5)
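The persistence selection just described (intersecting the distinctive sets at consecutive radii and taking the union) can be sketched directly; the point indices and sets below are hypothetical stand-ins for the actual point subsets:

```python
# Sketch: keep features that persist across consecutive analysis radii.
def persistent_features(feature_sets):
    """Union over i of (P_f_i intersect P_f_i+1)."""
    out = set()
    for a, b in zip(feature_sets, feature_sets[1:]):
        out |= a & b          # points distinctive at both radii survive
    return out

# Hypothetical distinctive-point index sets at three increasing radii.
Pf = [{1, 2, 3, 7}, {2, 3, 8}, {3, 8, 9}]
P = persistent_features(Pf)
```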
Figure 2.2.1: Feature histograms for corresponding points on different point cloud datasets [19]
(Rusu et al., 2008) also discuss the estimation of good alignments. Their
algorithm proves robust even at high dimensionality (16D). With the assistance of
an initial alignment algorithm based on geometric constraints, partially
overlapping datasets can be successfully brought into the convergence region of
ICP [19]. Their discussion certainly helps in understanding point cloud
registration methods and the precautions to take under certain conditions. The
test results show that the algorithm of (Rusu et al., 2008) performs better than
the Integral Volume Descriptors (IVD) approach or surface curvature estimates, and
that the convergence rate improves when the persistent feature histogram method is
used.
(Holz et al., 2015) explain the tools present in the open-source Point Cloud
Library (PCL) for point cloud registration. PCL consolidates methods for the
initial alignment of point clouds using various local shape feature descriptors,
as well as methods for refining initial alignments using different variants of the
Iterative Closest Point (ICP) algorithm. The authors give a review of the
different algorithms used for registration, illustrated with examples of the PCL
implementations. Three complete examples and their respective registration
pipelines in PCL are considered: dense RGB-D point clouds obtained by consumer
colour and depth cameras, high-resolution laser scans from commercial 3D scanners,
and low-resolution sparse point clouds obtained by a custom lightweight 3D scanner
on a micro aerial vehicle.
According to (Holz et al., 2015), when registering two point clouds the steps that
can be considered are:
 Selection of the sampling points in the input clouds.
 Matching the corresponding data between the subsampled point clouds.
 Filtering and rejecting the outliers.
 Aligning the data to find the optimal model.
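The four steps above can be sketched as a toy pipeline. For simplicity this sketch estimates a translation only (real registration pipelines estimate a full rigid transform, and this code is not PCL's implementation):

```python
# Toy sketch of the registration steps: select, match, reject, align.
def register_translation(source, target, reject_above=1.0):
    # 1. Selection: here we simply use every source point (no subsampling).
    # 2. Matching: brute-force nearest neighbour in the target cloud.
    pairs = []
    for p in source:
        q = min(target, key=lambda t: sum((a - b) ** 2 for a, b in zip(p, t)))
        pairs.append((p, q))
    # 3. Outlier rejection: drop correspondences that are too far apart.
    pairs = [(p, q) for p, q in pairs
             if sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5 < reject_above]
    # 4. Alignment: the optimal translation is the mean correspondence offset.
    n = len(pairs)
    return tuple(sum(q[i] - p[i] for p, q in pairs) / n for i in range(3))

src = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
tgt = [(0.4, 0.0, 0.0), (1.4, 0.0, 0.0), (0.4, 1.0, 0.0)]
shift = register_translation(src, tgt)   # recovers the 0.4 m x-offset
```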
These steps are explained in detail in the context of iterative closest-point
registration, along with the algorithms and models used at each step. Keypoints
are estimated and described by feature descriptors first, followed by
correspondence estimation. For the coarse alignment method via descriptor
matching, filtering needs to be performed before the transformation is estimated.
In the experiment using large-scale 3D scanners, (Holz et al., 2015) compute an
initial alignment and then refine it to obtain a close overlap. PCL provides joint
estimation components that find keypoints and descriptors, which can be used to
obtain the initial alignment; an iterative algorithm is then used to align the
point cloud precisely and obtain the fine registration. For the sparse
low-resolution 3D laser scans, the non-uniform densities degrade the results
obtained by the Generalized ICP algorithm, and (Holz et al., 2015) show how to use
custom covariances with Generalized ICP. When registering RGB-D images,
pre-processing is done in order to filter out noisy data (which is likely to be
present with this type of camera) [20]. A bilateral filter from 2D image
processing is used to perform edge-preserving filtering: it smooths more where
similar pixels are present in the neighbourhood, and less where there are
irregularities. For pre-processing by fast feature estimation, the grid structure
of the RGB-D data may be used to speed up the computation of each normal; the
advantage of fast feature estimation over filtering is that the former needs only
a linear pre-processing stage [20]. A hybrid registration pipeline is used for
aligning the sequence of images. Since these sensors acquire a large number of
measurements, subsampling is performed within the registration pipeline;
normal-space subsampling is considered robust [20] for registering RGB-D point
clouds. (Holz et al., 2015) mention that this method has a high probability of
converging to the global minimum, because it ensures that samples are drawn from
all the differently oriented surfaces in the point cloud. This is followed by
correspondence estimation and filtering, which does not take much time since the
data is comparatively small. Transformation estimation and weighting are done
before the final results are registered.
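The edge-preserving behaviour of the bilateral filter mentioned above can be illustrated with a 1D sketch (the parameter values are arbitrary, and this is not PCL's 2D implementation):

```python
# Sketch of a 1D bilateral filter: each sample becomes a weighted mean
# of its neighbours, with weights decaying in both spatial distance and
# value difference, so sharp edges are preserved while noise is smoothed.
import math

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.2, radius=3):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)      # spatial
                         - ((v - signal[j]) ** 2) / (2 * sigma_r ** 2))  # range
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

step = [0.0] * 8 + [1.0] * 8       # a sharp edge in the signal
smoothed = bilateral_1d(step)      # the edge survives the smoothing
```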
The Point Cloud Library (PCL) is a standalone, large-scale, open project for 2D/3D
image and point cloud processing [17]. For any experiment involving scanning, an
understanding of PCL is essential. (Holz et al., 2015) provide details which are
very useful in understanding the workings of the library, together with listings
of PCL code. Their detailed explanation is certainly useful for developing (or
refining) an object capture pipeline (OCP), and for registration pipelines in
general. The steps involved in the Iterative Closest Point (ICP) method and in
coarse alignment via descriptor matching are described in enough detail to allow
these methods to be understood.
3. Methodology
This section explains the steps taken to work towards the objectives of the
research. It begins with the laboratory upgrade, followed by the parameter testing
carried out in the OCP and the manner in which it was done.
3.1. Laboratory upgrade
Figure 3.1.1: Original laboratory set-up (before upgrade)
The four cameras were originally positioned on tripods at four corners, with the
object to be scanned placed in the centre as shown in Figure 3.1.1. This was
upgraded to a six-camera setup, for which an aluminium frame was ordered from an
external supplier. Special camera fixtures were ordered to clamp the cameras to
the aluminium frame. Four cameras were clamped to the vertical supports of the
frame, and the 5th and 6th cameras were clamped, one each, to the ceiling and to
the base of the frame.
The subject used for scanning is a wooden car model, which was modified to make
the model complete. The original design did not include a base; the model was
hollow underneath, so a wooden base was designed to allow the model to be scanned
from all directions. The design was done in SolidWorks and the wood was cut using
a laser cutter. The base was fixed to the main car body using nails and glue.
3.2. Rough Alignment
Rough alignment, with respect to point clouds, is the process of aligning all the
point clouds captured by two or more cameras such that the combined result
displays them in their approximate relative positions. In other words, it is the
process of aligning the coordinate system of each camera to an imaginary global
coordinate system which is fixed in space. This scene then forms the initial
condition for the more precise alignment performed later.
Since the positioning of the cameras was changed and two extra cameras were
added, the transformation sequence of the camera coordinates to the world system
was updated for the four cameras, and two extra transformations were added to
account for the two new cameras. The x, y and z distances from a fixed point in
space (the origin of the global coordinate system) to each camera were
approximately measured using a measuring tape, and these were used as the
translation values for the rough alignment. The rotation angles were selected by
approximately measuring the ‘pitch’, ‘yaw’ and ‘roll’ angles using the ‘Compass’
application on an iPhone 6S Plus. The pitch angle was taken as the rotation of the
Kinect around the x-axis of the camera, the yaw angle as the rotation around the
y-axis, and the roll angle as the rotation around the z-axis. The camera axes are
shown in Figure 3.2.1.
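A sketch of how one camera-to-world transformation could be composed from the measured angles and distances follows. The rotation order and all numeric values here are illustrative assumptions, not the actual OCP transformation sequence:

```python
# Sketch: transform a point from camera coordinates to the global frame
# using pitch/yaw/roll rotations followed by a measured translation.
import math

def rot_x(a):  # pitch: rotation about the camera x-axis
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # yaw: rotation about the camera y-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # roll: rotation about the camera z-axis
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def to_world(p, pitch, yaw, roll, t):
    """Rotate p (assumed order roll*yaw*pitch), then translate by t."""
    R = matmul(rot_z(roll), matmul(rot_y(yaw), rot_x(pitch)))
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3))

# A point 1 m in front of a camera yawed 90 degrees and offset 2 m in x:
q = to_world((0.0, 0.0, 1.0), 0.0, math.radians(90), 0.0, (2.0, 0.0, 0.0))
```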
Figure 3.2.1: Kinect Camera Local Axes [21]
The OCP code was run using the approximate values, and the rough-aligned point
cloud was inspected in MATLAB; fine adjustments were then made by changing the
values in the input file and running the code again. First, the translation
parameters (see Figure 3.2.2) were changed until the two point clouds
approximately coincided, followed by adjustment of the rotation parameters (see
Figure 3.2.3). The aligned point cloud from cameras one and two was then aligned
with the third point cloud, and the process was repeated until a desirable
alignment was achieved.
(a) : Side View (b) : Top View
Figure 3.2.2: Translation Parameters for Rough Alignment
(a) : Side View (b) : Top View (c) : Front View
Figure 3.2.3: Rotation Parameters for Rough Alignment
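The camera-to-world transformation described above, a pitch, yaw and roll rotation followed by the measured translation, can be sketched as below. The struct and function names are illustrative assumptions, not the OCP's actual code, and the rotation order (x, then y, then z) is one common convention consistent with the angle definitions in the text:

```cpp
#include <cmath>

// 3D point/vector (units follow the measuring-tape values, i.e. millimetres).
struct Vec3 { double x, y, z; };

// Map a point from camera coordinates into the global frame: rotate by pitch
// (about x), then yaw (about y), then roll (about z), and finally translate by
// the measured camera offset. Angles are in radians.
Vec3 cameraToWorld(Vec3 p, double pitch, double yaw, double roll, Vec3 t) {
    // rotation about the camera x-axis (pitch)
    double x1 = p.x;
    double y1 = p.y * std::cos(pitch) - p.z * std::sin(pitch);
    double z1 = p.y * std::sin(pitch) + p.z * std::cos(pitch);
    // rotation about the y-axis (yaw)
    double x2 =  x1 * std::cos(yaw) + z1 * std::sin(yaw);
    double y2 =  y1;
    double z2 = -x1 * std::sin(yaw) + z1 * std::cos(yaw);
    // rotation about the z-axis (roll)
    double x3 = x2 * std::cos(roll) - y2 * std::sin(roll);
    double y3 = x2 * std::sin(roll) + y2 * std::cos(roll);
    // translation to the global origin
    return { x3 + t.x, y3 + t.y, z2 + t.z };
}
```

One such transform per camera, with the per-camera values read from the input files, brings all six clouds into the common global frame.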
3.3. Varying SOR Parameters
The statistical outlier removal (SOR) operation requires two user-defined
parameters: 'neighbours' (the number of surrounding points in the point cloud
considered) and 'standard deviation'. For each point, the algorithm computes the
mean distance to its closest neighbours, where the number of neighbours examined
is set by the 'neighbours' parameter. It then assumes the resulting mean distances
follow a Gaussian distribution and keeps only points whose mean distance lies
within the specified number of standard deviations of the global mean. This
method works well when fairly regular point data is available.
Figure 3.3.1: Schematic of SOR filtering (For neighbours = 10)
The lines used in the code to define these parameters are shown below:
// SOR parameters
#define cNeighbours 0
#define cStdDev 0
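The filtering rule described above can be sketched in plain C++. This is a brute-force stand-in for the filter the OCP actually uses, with illustrative names and an O(n²) neighbour search where a real implementation would use a k-d tree:

```cpp
#include <vector>
#include <cmath>
#include <algorithm>

struct Pt { double x, y, z; };

// Statistical outlier removal: for each point, compute the mean distance to
// its k nearest neighbours, then keep only points whose mean distance lies
// within stddevMult standard deviations of the global mean of those distances.
std::vector<Pt> sorFilter(const std::vector<Pt>& cloud, int k, double stddevMult) {
    size_t n = cloud.size();
    if (n == 0) return {};
    std::vector<double> meanDist(n);
    for (size_t i = 0; i < n; ++i) {
        std::vector<double> d;
        for (size_t j = 0; j < n; ++j) {
            if (j == i) continue;
            double dx = cloud[i].x - cloud[j].x;
            double dy = cloud[i].y - cloud[j].y;
            double dz = cloud[i].z - cloud[j].z;
            d.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        std::sort(d.begin(), d.end());
        size_t kk = std::min(static_cast<size_t>(k), d.size());
        double s = 0;
        for (size_t m = 0; m < kk; ++m) s += d[m];
        meanDist[i] = kk ? s / kk : 0.0;
    }
    // global mean and standard deviation of the per-point mean distances
    double mu = 0, var = 0;
    for (double v : meanDist) mu += v;
    mu /= n;
    for (double v : meanDist) var += (v - mu) * (v - mu);
    double sigma = std::sqrt(var / n);
    std::vector<Pt> kept;  // points passing the mu + stddevMult * sigma test
    for (size_t i = 0; i < n; ++i)
        if (meanDist[i] <= mu + stddevMult * sigma) kept.push_back(cloud[i]);
    return kept;
}
```

Raising k smooths the per-point statistic but costs time, while stddevMult directly sets how aggressive the cut is; this is the trade-off explored quantitatively in Section 4.3.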
Five values were chosen for each parameter: 1, 5, 10, 50 and 100 for neighbours,
and 0.1, 0.5, 1.0, 5.0 and 10.0 for standard deviation, covering a reasonably wide
range. The code was run for each of the 25 combinations and the time required was
noted; the SOR-filtered point clouds were also saved for qualitative comparison in
MATLAB. Note that SOR filtering was performed immediately after rough alignment
and clip-box filtering (see Section 3.5 for the clip box); the clip box was applied
first so that the SOR algorithm would not spend time processing unwanted data.
3.4. Varying Registration Parameters
The registration algorithm requires one physical parameter, the correspondence
distance, and two stopping criteria: the maximum number of iterations and the
registration tolerance. The algorithm searches for corresponding points within the
specified correspondence distance (see Figure 3.4.1); once corresponding surfaces
have been detected, it iteratively shifts (rotates and translates) the target
surface to merge with the source surface until one of the two criteria is met.
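A heavily simplified, translation-only sketch of this loop is shown below. Real registration, such as the PCL iterative-closest-point algorithm used in the OCP, also estimates rotation; this stripped-down version (names assumed) is only meant to illustrate how the three parameters interact:

```cpp
#include <vector>
#include <cmath>

struct P3 { double x, y, z; };

// Translation-only ICP sketch: pair each source point with its nearest target
// point within corrDist, shift the source by the mean residual, and stop when
// either maxItr iterations are reached or the shift falls below tol.
// Returns the number of iterations performed.
int icpTranslate(std::vector<P3>& src, const std::vector<P3>& tgt,
                 double corrDist, int maxItr, double tol) {
    for (int it = 0; it < maxItr; ++it) {
        double sx = 0, sy = 0, sz = 0;
        int npairs = 0;
        for (const P3& s : src) {
            double best = corrDist;           // only accept pairs closer than this
            const P3* match = nullptr;
            for (const P3& t : tgt) {
                double d = std::sqrt((s.x - t.x) * (s.x - t.x) +
                                     (s.y - t.y) * (s.y - t.y) +
                                     (s.z - t.z) * (s.z - t.z));
                if (d < best) { best = d; match = &t; }
            }
            if (match) {
                sx += match->x - s.x; sy += match->y - s.y; sz += match->z - s.z;
                ++npairs;
            }
        }
        if (npairs == 0) return it;           // "not enough correspondences"
        sx /= npairs; sy /= npairs; sz /= npairs;
        for (P3& s : src) { s.x += sx; s.y += sy; s.z += sz; }
        if (std::sqrt(sx * sx + sy * sy + sz * sz) < tol) return it + 1;  // converged
    }
    return maxItr;
}
```

Note how too small a corrDist leaves points unpaired (the "not enough correspondences" case seen later in Section 4.4.1), while too generous a value lets points pair with the wrong surface.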
Figure 3.4.1: Schematic of registration
The relevant lines from the code are shown below:
// Registration parameters
#define cCorrespondDist 0
#define cMaxItr 0
#define cRegistrationTolerance 0
The correspondence distance is measured in metres and was given values of 0.1 m,
1 m and 10 m. The registration tolerance was set to 0.1, 0.01 and 0.001, and two
values, 10 and 20, were chosen for the maximum-iterations criterion. The program
was run for each arrangement, 18 combinations in total, and the time required to
perform the registration was noted. The point clouds were plotted in MATLAB and
selected images were saved for qualitative analysis.
3.5. Setting up Clip Box
The clip box is used to retain only the required subject and eliminate the
unwanted data. The lines used in the code to set the clip box limits are shown below:
// Camera Space bounding box limits (relative to world origin) 'clip box'
#define CSxLow 0
#define CSxHigh 0
#define CSyLow 0
#define CSyHigh 0
#define CSzLow 0
#define CSzHigh 0
These values were modified and the code re-run until a tightly fitting clip box was
obtained. The purpose of this stage is to ensure the OCP processes only the data
required for that part of the pipeline, maintaining efficiency: on average, only
about 20% of the captured data actually represents the car, with the remainder
representing the room.
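The clipping itself is a simple box test; a minimal sketch (names assumed) is:

```cpp
#include <vector>

struct Point { double x, y, z; };

// Keep only points inside the axis-aligned clip box, discarding everything
// else (the room, the frame) before the more expensive SOR and registration
// stages. Limit arguments mirror the #define block above.
std::vector<Point> clipBox(const std::vector<Point>& cloud,
                           double xLo, double xHi,
                           double yLo, double yHi,
                           double zLo, double zHi) {
    std::vector<Point> out;
    for (const Point& p : cloud)
        if (p.x >= xLo && p.x <= xHi &&
            p.y >= yLo && p.y <= yHi &&
            p.z >= zLo && p.z <= zHi)
            out.push_back(p);
    return out;
}
```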
3.6. Modifying the Point Cloud using an Axis-Aligned Bounding Box
(AABB)
This was the final step of the OCP. After all the previous steps were performed,
the point cloud data was moved into the first quadrant such that the flat surfaces
of the model were parallel to the axis planes. This is a necessary condition for
inclusion in the CFD simulation, as it ensures the vehicle is oriented correctly
with respect to the flow axes.
// Axis-Aligned bounding box rotation and limits
#define cAAxAngle 0
#define cAAyAngle 0
#define cAAzAngle 0
#define cAAxLow 0
#define cAAxHigh 0
#define cAAyLow 0
#define cAAyHigh 0
#define cAAzLow 0
#define cAAzHigh 0
The lines used in the code are shown above. The angle values were set relative to
those used in rough alignment, and the limits relative to those used for the clip box.
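The translation part of this step can be sketched as follows, assuming the rotation is already correct from rough alignment. The function name is an assumption; the OCP itself takes explicit rotation angles and limits via the `#define` block above, and in 3D the "first quadrant" is strictly the first octant:

```cpp
#include <vector>
#include <algorithm>
#include <limits>

struct Pnt { double x, y, z; };

// Translate the registered cloud so that every coordinate is non-negative,
// i.e. move it into the first quadrant/octant, leaving the orientation from
// rough alignment untouched.
void moveToFirstQuadrant(std::vector<Pnt>& cloud) {
    if (cloud.empty()) return;
    double mx = std::numeric_limits<double>::max(), my = mx, mz = mx;
    for (const Pnt& p : cloud) {
        mx = std::min(mx, p.x);
        my = std::min(my, p.y);
        mz = std::min(mz, p.z);
    }
    // subtract the minimum corner so it becomes the new origin
    for (Pnt& p : cloud) { p.x -= mx; p.y -= my; p.z -= mz; }
}
```

Applied to a cloud spanning the clip-box limits, this reproduces the behaviour described in Section 4.6, where the AABB limits are simply the differences of the clip-box limits.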
4. Results and Discussion
4.1. Laboratory Upgrade
Figure 3.1.1 shows the initial setup of the laboratory, in which the cameras were
mounted on tripod stands and the object to be scanned was placed in the centre on a
transparent table. It included four cameras at four corners which captured the 3D
data of the object. The camera positioning was easily disturbed if anyone working
in the laboratory accidentally knocked a tripod stand or a camera, so a robust
design was required to avoid repeating the camera calibration every time. Figure
4.1.1 shows the schematic of the proposed laboratory design, which consists of a
frame constructed of aluminium; the specifications of the aluminium frame are
shown in Figure 4.1.2. Special clamping jaws were ordered to fix the cameras to
the frame (see Figure 4.1.3). Four cameras were clamped to the four vertical
supports, as in the previous design, but with a much smaller chance of the camera
positioning being disturbed by accidental collision. Two cameras were added to the
original design to capture the maximum possible information about the object being
analysed; these were clamped to the horizontal supports at the ceiling and the
floor. Figure 4.1.4 shows the upgraded laboratory with the aluminium frame and the
six cameras, with the car model placed in the centre. Note that the table shown
supporting the car model is not part of the proposed design; in reality, a metal
frame was used.
Figure 4.1.1: Schematic of proposed laboratory design
Figure 4.1.2: Schematic of the aluminium frame ordered
Figure 4.1.3: Kinect camera clamped to the aluminium frame
Figure 4.1.4: Laboratory setup after upgrading
The initial car model did not include a base, so one was designed and affixed to
the main car body as shown in Figure 4.1.5. This made it possible to capture the
object from all directions, leading to a better reconstructed 3D image for CFD
analysis.
Figure 4.1.5: Upgraded car model base
4.2. Rough Alignment
Figure 4.2.1 shows the raw 3D point cloud data captured by the Kinect cameras. The
point cloud from each camera can be identified by its colour; the legend in Table
4.2.1 identifies these point clouds and applies to all point cloud figures in
Section 4.2 and Section 4.3. The data in Figure 4.2.1 is expressed in each
camera's own coordinate system and is therefore of little use for understanding
the geometric features of the entire scanned space: the camera coordinate systems
are coincident, so each view overlaps the others. The data must be processed to
align each camera coordinate system to a selected global coordinate system so that
the entire 3D space can be recreated from the point cloud data.
Figure 4.2.1: Point Cloud raw data from Kinect (Before Processing)
Camera 1
Camera 2
Camera 3
Camera 4
Camera 5
Camera 6
Table 4.2.1: Camera colour code legend for SOR point cloud data
The axes of the camera local coordinate system are shown in Figure 3.2.1, and
Figure 4.2.2 shows a schematic of the global coordinate system axes, together with
the reference name used for each camera. Note that the global axes in Figure 4.2.2
depict only the directions of the axes, not the origin, which was taken to be at
the level of the table.
Figure 4.2.2: Laboratory arrangement with global axes
After performing the axes transformation, the resulting point cloud data is
aligned with the directions of the global coordinate system (Figure 4.2.3).
However, further processing is needed to align the data to a common origin, which
was achieved by translating and rotating the data about the origin.
Figure 4.2.3: Point Cloud Data after aligning Camera axes to Global axes
The code reads the translation and rotation parameters from two input files, one
for each, containing tab-separated values. Each camera requires three translation
values X, Y and Z (see Figure 3.2.2) and three rotation angles a, b and c (see
Figure 3.2.3). The values for rough alignment were selected by changing each value
and checking the resulting point cloud plot in MATLAB.
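Reading such a file is straightforward; a minimal sketch of the parser, assuming only the "three tab-separated values per camera per line" layout described above (struct and function names are illustrative), is:

```cpp
#include <sstream>
#include <string>
#include <vector>

// One row of the translation (or rotation) input file: three values per camera.
struct Row { double x, y, z; };

// Parse the whole file contents; operator>> skips tabs and newlines, so the
// same code handles both the translation and the rotation files.
std::vector<Row> parseInput(const std::string& text) {
    std::vector<Row> rows;
    std::istringstream file(text);
    std::string line;
    while (std::getline(file, line)) {
        std::istringstream ls(line);
        Row r{};
        if (ls >> r.x >> r.y >> r.z) rows.push_back(r);
    }
    return rows;
}
```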
Table 4.2.2 and Table 4.2.3 display the values selected for translation and rotation
respectively.
Camera       Distance from the origin (mm)
             X       Y       Z
CAMERA 1     0       900     0
CAMERA 2     -80     1280    350
CAMERA 3     30      1365    420
CAMERA 4     -120    1400    350
CAMERA 5     -130    1320    450
CAMERA 6     30      800     30
Table 4.2.2: Values used for Translation input file for Rough Alignment
Camera       Angle (degrees)
             a       b       c
CAMERA 1     -3      0       0
CAMERA 2     -15     0       0
CAMERA 3     -15     1       0
CAMERA 4     -11     0       0
CAMERA 5     -20     3       0
CAMERA 6     0       -5      0
Table 4.2.3: Values for Rotation input file for Rough Alignment
Figure 4.2.4: Point Cloud Data after Rough Alignment
Figure 4.2.4 shows the final point cloud data, in which the entire 3D space, i.e.
the laboratory, is recreated from the point clouds. The rough alignment of the
surfaces of the car model can be seen in the zoomed-in view in Figure 4.2.5; the
3D image of the car is formed with satisfactory accuracy.
Figure 4.2.5: Zoomed-in view showing the rough alignment of the car model
4.3. SOR Filtering
4.3.1. Time Analysis
As mentioned above, a set of five values was chosen for each of the two
parameters, and the time taken (in milliseconds) to perform filtering for each
combination is shown in Table 4.3.1 (see Appendix A for the specifications of the
system on which the code was run). It can be seen that as the number of neighbours
increases, the time required to perform SOR filtering increases significantly.
This is not the case for the standard deviation: the time taken is largely
unaffected by changes in standard deviation and remains almost the same for a
particular choice of the neighbours parameter.
Time (ms)         Standard Deviation
Neighbours        0.1      0.5      1.0      5.0      10.0
1                 1611     1782     1712     1599     1569
5                 2078     2126     2034     2088     3520
10                2293     2564     2803     2431     2372
50                5294     5620     5338     5454     5326
100               10428    10302    10288    10012    10629
Table 4.3.1: Time Taken for SOR filtering
Figure 4.3.1 shows the variation of time with the number of neighbours for
different values of standard deviation. The time is made non-dimensional by
dividing it by the minimum time for that particular standard deviation (Tmin@SD).
A roughly linear increase can be seen, with the maximum time occurring at the
highest number of neighbours for a given standard deviation: the non-dimensional
time, (T/Tmin@SD) - 1, increased from 0 to about 5 when the 'neighbours' parameter
was increased from 1 to 100.
Figure 4.3.2 shows the variation of time with standard deviation for different
values of neighbours. Here, the time is made non-dimensional by subtracting it
from the average time for a specific number of neighbours (Tavg@N) and dividing by
Tavg@N. As noted from Table 4.3.1, there is little variation in the time taken
across the different standard deviations for a given number of neighbours.
Figure 4.3.1: Variation of non-dimensional time ((T/Tmin@SD) - 1) with 'Neighbours' for various values of 'Standard
Deviation' (curves: Std. Dev. = 0.1, 0.5, 1.0, 5.0, 10.0)
Figure 4.3.2: Variation of non-dimensional time (1 - T/Tavg@N) with 'Standard Deviation' for various values of 'Neighbours'
Hence, it can fairly be said that the time taken for SOR filtering is directly
proportional to the value chosen for the 'neighbours' parameter, while its
dependence on the standard deviation is weak:

    T_SOR ∝ N                  (4.1)

    1 - T/Tavg@N ≈ k σ         (4.2)

where N is the neighbours parameter, σ the standard deviation, and k a constant
approximately equal to -0.07, from the data obtained in the tests.
4.3.2. Qualitative Analysis
4.3.2.1. Varying Neighbours for a fixed Standard Deviation
Figure 4.3.3 shows the images obtained for different numbers of neighbours at a
fixed standard deviation of 1.0. When the neighbours parameter is set to 1, a
significant amount of noise is present; as the number of neighbours increases, the
point cloud becomes cleaner and sharper, with well-defined edges and a clear
object shape. A closer look, however, reveals that as the number of neighbours is
increased, some useful data is lost: the algorithm becomes more aggressive and
tends to filter out necessary information from the point cloud. The images become
sharper, but the lost data means they no longer faithfully represent the object. A
value of neighbours in the vicinity of 5 is therefore a satisfactory choice.
(a) : Neighbours=1 Std.Dev.=1.0 (b) : Neighbours=5 Std.Dev.=1.0
(c) : Neighbours=10 Std.Dev.=1.0 (d) : Neighbours=50 Std.Dev.=1.0
(e) : Neighbours=100 Std.Dev.=1.0
Figure 4.3.3: SOR Filtered – Camera 1 for Std.Dev. = 1.0 and various Neighbours
4.3.2.2. Varying Standard Deviation for fixed Neighbours
Figure 4.3.4 depicts the output for various values of standard deviation with the
'neighbours' parameter fixed at 10. At a standard deviation of 0.1, the least
noise is present; however, a lot of data is lost. As the standard deviation is
increased, the required data is recovered, but the amount of noise increases as
well; a very high standard deviation results in the presence of a significant
amount of noise.
Hence, the following relation summarises the effect of the SOR filtering
parameters on the residual noise:

    Noise ∝ σ / N              (4.2)

with the noise increasing with the standard deviation σ and decreasing with the
number of neighbours N.
(a) : Neighbours = 10 Std.Dev.=0.1 (b) : Neighbours = 10 Std.Dev.=0.5
(c) : Neighbours = 10 Std.Dev.=1.0 (d) : Neighbours = 10 Std.Dev.=5.0
(e) : Neighbours = 10 Std.Dev.=10.0
Figure 4.3.4: SOR Filtered – Camera 1 for Neighbours = 10 and various Standard Deviation
Therefore, selecting intermediate values is a sensible approach for these
parameters; an extreme value of either one leads to poor resulting data. A
standard deviation of 1.0 with 5 or 10 neighbours gives a satisfactory selection,
but since the time taken for SOR depends essentially only on the number of
neighbours, the lower value is preferable. Hence, neighbours = 5 and standard
deviation = 1.0 can be considered the best set of values; the point cloud obtained
with these parameters is shown in Figure 4.3.5.
Figure 4.3.5: SOR Filtered – All Cameras for Neighbours = 5 and Standard Deviation = 1.0
4.4. Registration
4.4.1. Time Analysis
Maximum Iterations = 10        Time (s)
                               Registration Tolerance
Correspondence Distance (m)    0.1        0.01       0.001
0.1                            1769.4*    1663.62*   1993.72*
1                              1629.69    1599.65    1414.83
10                             7267.25    8518.31    8834.44
* With PCL error: Iterative Closest Point Non-linear
Table 4.4.1: Time taken for registration at Maximum Iterations = 10
Maximum Iterations = 20        Time (s)
                               Registration Tolerance
Correspondence Distance (m)    0.1        0.01       0.001
0.1                            1645.34*   1579.56*   1582.27*
1                              1729.4     1959.35    2324.3
10                             13611.5    16863.9    16037.7
* With PCL error: Iterative Closest Point Non-linear
Table 4.4.2: Time taken for registration at Maximum Iterations = 20
The tables above give the variation of the time taken to perform registration with
different values of correspondence distance, registration tolerance and maximum
iterations (see Appendix A for the specifications of the machine on which the code
was run). It is, however, important to note that the code was configured to save a
large amount of data for qualitative analysis while performing registration, which
would not be required in routine use. The times tabulated in Table 4.4.1 and Table
4.4.2 are therefore higher than would normally be required. Nevertheless, the data
is useful for comparison when selecting the input values.
The time required to achieve registration does not increase much when the
correspondence distance is changed from 0.1 m to 1 m, but increases considerably
when it is changed to 10 m with the two criteria fixed. The effect of the
registration tolerance on time is very small when the correspondence distance is
0.1 m or 1 m, although the time increases slightly as the tolerance is decreased
at a correspondence distance of 10 m for a fixed maximum number of iterations.
Similarly, there is little difference in the time required between the two values
of maximum iterations for all registration tolerances at the lower correspondence
distances, but an overall increase from maximum iterations = 10 to maximum
iterations = 20 when the correspondence distance is raised to 10 m.
It is also important to note that the ICP algorithm reports an error during
registration: '[pcl::IterativeClosestPointNonLinear::ComputeTransformation] Not
enough correspondences found. Relax your threshold parameters'. This occurs
because, with a low correspondence distance, the algorithm cannot find enough
point pairs to complete the registration. Figure 4.4.1 shows the effect of
selecting a small value for the correspondence distance parameter.
Figure 4.4.1: Effect of selecting low Correspondence Distance value
Figure 4.4.2 displays the variation of the time required for registration
graphically. Tmin@MI,RT is the minimum time, over the set of correspondence
distances tested, for a particular combination of maximum iterations and
registration tolerance (denoted by the subscript). As discussed above, little
variation is seen at the lower correspondence distances, but the non-dimensional
time increases sharply when the correspondence distance is increased from 1 m to
10 m. It is interesting to note that at 10 m, the non-dimensional time
((T/Tmin@MI,RT) - 1) increases by about 10% when the registration tolerance is
decreased by a factor of 10 at a maximum iterations value of 10, and by about 20%
for the same change in tolerance at a maximum iterations value of 20. For the case
of maximum iterations = 20 and registration tolerance = 0.001, the non-dimensional
time at a correspondence distance of 10 m is almost the same as in the previous
case; this is perhaps because the maximum-iterations criterion is satisfied before
the desired tolerance is achieved, terminating the loop in the registration code.
Also, when the maximum-iterations criterion is increased from 10 to 20, the
non-dimensional time almost doubles for the same registration tolerance at a
correspondence distance of 10 m.
Figure 4.4.2: Variation of Time (T/Tmin@MI,RT) with respect to ‘Correspondence Distance’ for different values of
‘Registration Tolerance’ and ‘Maximum Iterations’
The graph in Figure 4.4.3 shows that there is little variation in the
non-dimensional time (1 - T/Tavg@MI,CD), where Tavg@MI,CD is the average time at
constant maximum iterations and correspondence distance, when the registration
tolerance is changed.
Figure 4.4.3: Variation of non-dimensional time (1 - T/Tavg@MI,CD) with respect to 'Registration Tolerance' for different
'Correspondence Distance' and 'Maximum Iterations' values
Therefore, at low correspondence distances the registration time depends
essentially on the correspondence distance alone,

    T_reg ≈ f(d_corr)                        (4.3)

but for higher values of correspondence distance the two criteria must be
included as well:

    T_reg ≈ f(d_corr, N_maxitr, ε_tol)       (4.4)
4.4.2. Qualitative Analysis
Figure 4.4.5 shows the point cloud data obtained for the various tested cases at a
registration tolerance of 0.01, before SOR filtering. Figure 4.4.4 shows the point
cloud obtained from the CAD model of the car, which can be used as a reference for
comparison.
Figure 4.4.4: Point Cloud obtained from the CAD design of the car model
As noted in the time analysis for registration, little variation is observed in
the point cloud data between correspondence distances of 0.1 m and 1 m (see Figure
4.4.5); these two point clouds show similar registered data with only negligible
differences. However, when the correspondence distance is increased to 10 m,
significant variations appear. The region circled in green indicates good
registration (Figure 4.4.5 (c) and (d)), while the region circled in red in Figure
4.4.5 (e) highlights a raised portion on the bonnet of the car model, which is
clearly an error. This error is present even when the number of iterations is
increased to 20 (see Figure 4.4.5 (f)). Figure 4.4.6 shows the registered point
cloud for a correspondence distance of 10 m with the two criteria varied; the
raised-bonnet error is present here as well, indicating that it is caused by
increasing the correspondence distance to a very high value. Some differences can
also be noted at the base of the car, which again looks slightly raised.
(a) : Cor.Dist.=0.1 Max.Itr.=10 Reg.Tol.=0.01 (b) : Cor.Dist.=0.1 Max.Itr.=20 Reg.Tol.=0.01
(c) : Cor.Dist.=1 Max.Itr.=10 Reg.Tol.=0.01 (d) : Cor.Dist.=1 Max.Itr.=20 Reg.Tol.=0.01
(e) : Cor.Dist.=10 Max.Itr.=10 Reg.Tol.=0.01 (f) : Cor.Dist.=10 Max.Itr.=20 Reg.Tol.=0.01
Figure 4.4.5: Registered Point Cloud for Registration Tolerance = 0.01, Maximum Iterations of 10 and 20 and
Correspondence Distance of 0.1m, 1m and 10m
(a) : Cor.Dist.=10 Max.Itr.=10 Reg.Tol.=0.1 (b) : Cor.Dist.=10 Max.Itr.=20 Reg.Tol.=0.1
(c) : Cor.Dist.=10 Max.Itr.=10 Reg.Tol=0.001 (d) : Cor.Dist.=10 Max.Itr.=20 Reg.Tol.=0.001
Figure 4.4.6: Registered Point Cloud for Correspondence Distance =10m, Registration Tolerance = 0.1 and 0.001 and
Maximum Iterations of 10 and 20
Figure 4.4.7: Effect of selecting a high Correspondence distance value
This error can be explained with the help of Figure 4.4.7: an overly large
correspondence distance may pair points belonging to different surfaces and hence
produce improper registration. Selecting the correct correspondence distance is
therefore a crucial aspect of registration. Based on the time analysis above, a
correspondence distance of the order of 1 m gives good point cloud results in a
relatively moderate time.
4.5. Clip Box
A clip box is set up around the object or scene of interest in order to eliminate
unnecessary data. Setting the clip box as close to the subject as possible is
important so that the maximum amount of noise is removed before SOR is performed.
Figure 4.5.1 shows the effect of using different clip box values. A large clip box
(see Figure 4.5.1 (a)) includes unnecessary data (and noise), which the SOR
filtering must then process, possibly requiring aggressive SOR parameters; this
leads to loss of data, increased time, or both, as discussed in Section 4.3. Clip
box limits that are too small delete parts of the object and result in an
incomplete point cloud. The correct balance is therefore required, such that the
maximum amount of unwanted data is clipped off while all the data required to
construct the subject (here, the car model) is preserved. The clip box limits
selected for this case, which resulted in a 'good' clip box, are tabulated in
Table 4.5.1.
(a) : Large Clip Box
(b) : Good Clip Box
(c) : Small Clip Box
Figure 4.5.1: Effect of using different Clip Box limits
Limits (mm)
x-Low     -290
x-High     240
y-Low     -650
y-High     550
z-Low     -220
z-High     300
Table 4.5.1: Clip box limits selected
4.6. Axis-Aligned Bounding Box (AABB)
The final processing of the registered point cloud is performed here; it is
effectively a combination of rough alignment and clip box. The entire subject is
moved into the first quadrant and aligned with the axes, ensuring that when the
object is read into the CFD simulation it is correctly and automatically aligned
with the flow axis without user intervention.
The values selected for this case are tabulated in Table 4.6.1; they are simply
the differences of the two limits used in the clip box for X, Y and Z. The
rotation performed during rough alignment was sufficiently accurate, so the angles
did not need to be modified here. The point cloud after AABB is shown in Figure 4.6.1.
Angle (degrees)        Limits (mm)
Angle a    0           x-Low     0
Angle b    0           x-High    530
Angle c    0           y-Low     0
                       y-High    1200
                       z-Low     0
                       z-High    520
Table 4.6.1: Parameters used for AABB
Figure 4.6.1: Point cloud after AABB
5. Conclusions
The research completed here has addressed all the objectives specified. The car
model was upgraded by fixing a base to it, which allowed setting up a camera to
capture the bottom of the car. The laboratory was upgraded with the new aluminium
frame, and two extra cameras were added to the setup. The new frame is steady and
robust, so there is no need to re-align the cameras every time the space is
scanned. The additional cameras captured more of the model and hence gave better
reconstruction than previously.
The Object Capture Pipeline (OCP) was updated to handle the new setup. The
orientation and calibration of the cameras were successfully performed to ensure
the resulting alignment was of the desired accuracy. Previously there was no
information on the input parameters used for SOR filtering and registration, and
the input values were selected arbitrarily; the study carried out in this report
gives detailed information on the effect of each input parameter on both the
output and the processing time.
It was observed that for SOR filtering, as the value for ‘neighbours’ was increased,
the time taken to perform filtering increased as well. A graph for a particular non-
dimensional time suggested that there is a linear relationship between the time
taken and the value for neighbours used for a fixed standard deviation. The second
Towards 3D Object Capture for Interactive CFD with Automotive Applications Malcolm O. Dias | 9803763
84
input parameter for SOR filtering is the standard deviation, variation of which
showed little influence on the time taken for a given choice of neighbours.
However, the qualitative analysis suggested that a low standard deviation gave
very effective SOR filtering, but at the expense of data loss. Increasing the
standard deviation recovered data but simultaneously increased the amount of
noise retained. On the other hand, a low value for neighbours left many noisy
points in the output, whereas increasing it produced better-filtered clouds, though
again some important point cloud information was lost. Hence, selecting an
intermediate value for each input parameter is the best compromise. An equation
was also obtained for the time taken for SOR as a function of the input
parameters, and a similar equation was found for the noise.
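The SOR stage described above can be sketched in a few lines. The following pure-Python illustration of the statistical outlier removal idea takes the same two inputs (the number of neighbours and the standard deviation multiplier); it is only a brute-force sketch under those assumptions, not the PCL implementation used in the OCP, which performs the neighbour search with a k-d tree.

```python
import math

def sor_filter(points, neighbours=10, std_dev=1.0):
    """Statistical outlier removal: compute each point's mean distance
    to its k nearest neighbours, then drop points whose mean distance
    exceeds the global mean by more than std_dev standard deviations."""
    mean_dists = []
    for i, p in enumerate(points):
        # Brute-force O(n^2) nearest-neighbour search (PCL uses a k-d tree).
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(ds[:neighbours]) / neighbours)

    mu = sum(mean_dists) / len(mean_dists)
    sigma = math.sqrt(sum((d - mu) ** 2 for d in mean_dists) / len(mean_dists))
    threshold = mu + std_dev * sigma

    # Keep only points whose neighbourhood statistic is within the threshold.
    return [p for p, d in zip(points, mean_dists) if d <= threshold]
```

This makes the observed trade-off concrete: lowering `std_dev` tightens the threshold and discards more points (including valid ones), while raising `neighbours` smooths the per-point statistic at the cost of a longer neighbour search, consistent with the timing trend reported above.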
As with the SOR filtering inputs, the registration parameters had previously been
selected arbitrarily, with no knowledge of either the time taken for registration or
the resulting point cloud. The registration stage takes three inputs: a physical
parameter, the correspondence distance, and two convergence criteria, the
maximum iterations and the registration tolerance. Of these, the physical
parameter had the greatest influence on the time taken. The time was little
affected at low correspondence distance values, but steep gradients were observed
when the value was increased to a high value (10 m). However, a low
correspondence distance (0.1 m)
resulted in an error thrown by the PCL algorithm stating that not enough
correspondences were found. The two criteria had negligible influence on the time
taken at lower correspondence distance values, but their effect grew as the
correspondence distance increased. In particular, at higher correspondence
distance values the non-dimensional time doubled when the maximum iterations
were increased from 10 to 20, and increased in steps of about 5% each time the
registration tolerance was decreased by a factor of 10 (0.1 to 0.01 to 0.001). A
similar trend was seen in the point cloud data: there was little difference in the
registered point cloud at lower correspondence distance values for varying
criteria, but drastic errors appeared when the correspondence distance was
increased to 10 m. The criteria themselves had no real effect on the registered
point cloud. A set of equations was developed for the registration time as a
function of the correspondence distance used (one for lower correspondence
distances and one for higher). From these results it can be concluded that a lower
correspondence distance is a sensible choice, provided it is not so low that the
algorithm fails; choosing any of the values tested in this report would not affect
the results much.
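The interplay of the three registration inputs can be illustrated with a minimal, translation-only variant of iterative closest point (ICP). This pure-Python sketch is a deliberate simplification of the PCL registration used in the OCP (real ICP also estimates rotation via SVD); the parameter names mirror the three inputs discussed above, and the sketch fails in the same way as reported when the correspondence distance is set too low.

```python
import math

def icp_translation(source, target, correspondence_distance=1.0,
                    max_iterations=10, tolerance=0.01):
    """Translation-only ICP sketch. Each iteration pairs every source
    point with its nearest target point (if within the correspondence
    distance), shifts the source by the mean offset, and stops when
    max_iterations is reached or the update drops below the tolerance.
    Returns the accumulated translation (x, y, z)."""
    shift = [0.0, 0.0, 0.0]
    src = [list(p) for p in source]
    for _ in range(max_iterations):
        # Pair each source point with its nearest in-range target point.
        pairs = []
        for p in src:
            q = min(target, key=lambda t: math.dist(p, t))
            if math.dist(p, q) <= correspondence_distance:
                pairs.append((p, q))
        if not pairs:
            raise RuntimeError("not enough correspondences found")
        # The mean offset between matched pairs is the translation update.
        step = [sum(q[i] - p[i] for p, q in pairs) / len(pairs)
                for i in range(3)]
        for p in src:
            for i in range(3):
                p[i] += step[i]
        for i in range(3):
            shift[i] += step[i]
        # Converged: the latest update is smaller than the tolerance.
        if math.dist((0.0, 0.0, 0.0), step) < tolerance:
            break
    return tuple(shift)
```

The sketch shows why the correspondence distance dominates the cost: it controls how many candidate pairs survive each iteration, whereas the two criteria only decide when the loop stops.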
The clip box limits were set up such that only the required point cloud data was
stored and the unwanted points were clipped off. Similarly, a final axis alignment
was performed to shift the point cloud into the first quadrant and correct any
minor rotations.
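These two final stages, clipping and the axis-aligned bounding box (AABB) shift, can be sketched as follows. This is an illustrative pure-Python version of the logic; the function names and limit values are placeholders, not the actual OCP code.

```python
def clip_box(points, box_min, box_max):
    """Keep only the points that lie inside the axis-aligned clip box."""
    return [p for p in points
            if all(box_min[i] <= p[i] <= box_max[i] for i in range(3))]

def shift_to_first_quadrant(points):
    """Translate the cloud so the minimum corner of its axis-aligned
    bounding box sits at the origin, placing all points in the first
    quadrant."""
    mins = [min(p[i] for p in points) for i in range(3)]
    return [tuple(p[i] - mins[i] for i in range(3)) for p in points]
```

A cloud would typically be clipped first, so that stray background points do not drag the bounding-box minimum away from the object itself.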
Taken for SOR filtering........................................................................................................62 Table 4.4.1: Time taken for registration at Maximum Iteration =10...................................................70 Table 4.4.2: Time taken for registration at Maximum Iteration =20...................................................70 Table 4.5.1: Clip Box limits selected....................................................................................................................81 Table 4.6.1: Parameters used for AABB ............................................................................................................82
Abstract

Computational Fluid Dynamics (CFD) is an essential engineering activity, and most engineering universities and industries carry out research to improve it. To this end, research is being carried out by Harwood et al. at The University of Manchester to innovate the way CFD is used. Their research uses depth-sensing cameras to capture the geometry of an object, which can then be post-processed, and a CFD analysis carried out using the lattice-Boltzmann method (LBM). The work done in this report forms a part of the main research and deals with upgrading the scanning laboratory and studying the effect of the input parameters used in the object reconstruction. A wooden base was designed, built and then fixed to the car model to attain completeness. The laboratory was upgraded from four cameras to six, and an aluminium frame was installed to mount the cameras in place of tripods, making the laboratory setup steadier and more robust. The object capture software was updated to incorporate the new laboratory setup in performing rough alignment. The effect of varying the input values for noise filtering was studied, and it was found that the 'neighbours' parameter had the greatest influence on the time taken. However, both neighbours and standard deviation affected the point cloud data obtained after filtering. Varying the registration parameters suggested that the physical parameter (correspondence distance) had the major effect on the time taken for registration and also on the resulting point cloud. The effect of the other two criteria used for registration was negligible. A brief analysis of the rough alignment, clip box and axis-aligned bounding box stages of the capture software was carried out as well.
Declaration

No portion of the work referred to in the dissertation has been submitted in support of an application for another degree or qualification of this or any other university or other institute of learning.

Copyright Statement

i. The author of this dissertation (including any appendices and/or schedules to this dissertation) owns certain copyright or related rights in it (the "Copyright") and s/he has given The University of Manchester certain rights to use such Copyright, including for administrative purposes.

ii. Copies of this dissertation, either in full or in extracts and whether in hard or electronic copy, may be made only in accordance with the Copyright, Designs and Patents Act 1988 (as amended) and regulations issued under it or, where appropriate, in accordance with licensing agreements which the University has entered into. This page must form part of any such copies made.

iii. The ownership of certain Copyright, patents, designs, trademarks and other intellectual property (the "Intellectual Property") and any reproductions of copyright works in the dissertation, for example graphs
and tables ("Reproductions"), which may be described in this dissertation, may not be owned by the author and may be owned by third parties. Such Intellectual Property and Reproductions cannot and must not be made available for use without the prior written permission of the owner(s) of the relevant Intellectual Property and/or Reproductions.

iv. Further information on the conditions under which disclosure, publication and commercialisation of this dissertation, the Copyright and any Intellectual Property and/or Reproductions described in it may take place is available in the University IP Policy, in any relevant Dissertation restriction declarations deposited in the University Library, and The University Library's regulations.
Acknowledgements

I take this opportunity to express my sincere gratitude to my supervisor, Dr Alistair Revell, for his support and encouragement during this dissertation. I would also like to express my gratitude to Dr Adrian Harwood for his cordial support, valuable information and guidance. I would like to thank Mr Thomas Lawton, Mrs Natalie Parish and all the laboratory technical support staff at George Begg for helping me while setting up the laboratory. I would also like to thank my family and friends for their continuous encouragement, without which this dissertation would not have been possible.
1. Introduction

The availability of new devices, faster computers and other resources makes it easier and more attractive to develop new technologies and innovative ideas. Modern tools and devices can be used to develop engineering methods and practices as well. Engineering design, which can be considered one of the first stages of product development, is evolving with technology, with a great deal of research and innovation being contributed by researchers and engineers in order to improve it. The understanding and detailed knowledge of complex fluid flow form an essential aspect of many modern engineering systems and are hence integral to engineering design. The growth of Computational Fluid Dynamics (CFD) has made the study of fluid flow much more accurate. Moreover, CFD simulations can also be used to perform thermal investigations (such as heat transfer). CFD therefore forms a crucial stage of the design phase, but it is far from exact, and accuracy comes at the expense of computational effort.

1.1. Motivation

Sometimes, if not always, aesthetics is given great importance, for example in designing a new car model. Designers may put out possibly the best-'looking' design, but this design may compromise engineering performance. To evaluate it, engineers need to carry out various analyses and tests, including CFD. Mesh construction is a must when it comes to using typical CFD
techniques, and it can be a time-consuming process. Simulations can take days or even weeks to complete. What if an object is modified physically by adding or removing some part, or perhaps by changing the entire shape (e.g. as in clay-constructed models), and the engineers are asked to carry out the analysis at the same time? They may have to redesign the entire object using CAD and then reconstruct the mesh. This can be a tedious and repetitive process, on top of the time the meshing takes. In order to improve this process, it may be beneficial to run faster, low-accuracy simulations to steer design. Research is being carried out at The University of Manchester wherein depth-sensing cameras are used to capture the geometric parameters of a tangible object, which is then post-processed, and real-time interactive CFD analyses are carried out using the lattice-Boltzmann method.

Figure 1.1.1: Outline of the research carried out by Harwood et al. (OCP: Kinect capture, registration, filtering, AABB; LBM: voxeliser, BC config, solver; Vis: particle generator, field mapping)
The research detailed in this report concentrates on improving the software used to reconstruct captured objects. This software is referred to as the Object Capture Pipeline (OCP). The effects of varying different input parameters on the results obtained are studied.

1.2. Computational Fluid Dynamics

Performing experiments and tests can be helpful in gathering important information, but experiments usually turn out to be expensive and time-consuming. Moreover, it may be difficult to scale the parameters correctly. Additionally, the measuring devices or probes inserted may disturb the flow properties, and accessing complex locations in the flow may be difficult. Replicating experiments for explosions or blasts may pose safety issues to the environment and to the individuals performing them. Some simple problems can be tackled with empirical correlations. However, these are not applicable (or not available) for complex flows. In these situations, simulations must be performed to provide insight. The Navier-Stokes (NS) equations (Equations 1.1 and 1.2) may be used to describe the behaviour of a viscous flow which obeys the Newtonian laws.
Continuity:

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{u}) = 0 \tag{1.1} \]

Momentum:

\[ \rho \left( \frac{\partial \vec{u}}{\partial t} + (\vec{u} \cdot \nabla)\vec{u} \right) = -\nabla p + \mu \nabla^{2} \vec{u} + \rho \vec{g} \tag{1.2} \]

The equations above are derived from the fundamental laws governing continuum mechanics, i.e. the conservation of mass and the conservation of momentum, and hence solving these equations allows us to understand the flow physics. Usually, in actual applications, additional terms and equations may be needed to account for heat transfer, turbulence etc. However, the above equations are impossible to solve analytically for realistic boundary conditions, and hence the need for a discrete representation and a computational solver is entirely justified. The test domain is divided into smaller sections (sub-domains) called elements or cells (or volumes in 3D). Usually, these sections are constructed of geometric primitives like triangles, quadrilaterals etc. in 2D and cubes, prisms, tetrahedra etc. in 3D [4]. A group of the cells or elements arranged in a particular test domain is referred to as a 'mesh'. The partial differential equations (PDEs) are then approximated over each element using a discretisation scheme. Common schemes are the Finite Element Method (FEM), the Finite Difference Method (FDM) and the Finite Volume Method (FVM).
In the 1960s, the finite element method was used to solve structural analysis problems, and during the same period the finite difference method was used to some extent to solve the fluid dynamics equations. In the 1980s, the finite volume method was developed and became extensively used to solve the fluid transport equations [2]. In the finite difference method, the derivatives are represented using Taylor series expansions and the dependent values are stored at the nodes; the terms are represented as a fixed set of nodal quantities, which are then solved for. In the finite element method, the function is integrated over each finite element and the dependent variables are stored at the nodes of the element. Here, the solution is represented as values from each element weighted by a shape function, and this requires computing the mass, stiffness and damping coefficients. In the finite volume method, the integrations are carried out over the control volume, and the dependent variables are stored at a node at the centre of the control volume. In FVM, the flux terms on the faces are reconstructed to obtain conservation equations, which are then solved first for the fluxes and then for the remaining variables. The discretised equations are solved iteratively until the convergence criteria are met. Complex geometries and structures increase the difficulty of mesh construction, and hence establishing a mesh with suitable accuracy can be a time-consuming process.
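As an illustration of the nodal, Taylor-series-based discretisation described for the FDM above, the sketch below solves a 1D diffusion problem with an explicit central-difference scheme. It is a minimal example for this report, not code from the research; the parameter values are arbitrary.

```python
import numpy as np

def fdm_diffusion_1d(n=51, alpha=0.01, dx=0.02, dt=0.005, steps=500):
    """Explicit finite-difference solution of du/dt = alpha * d2u/dx2.

    Values are stored at nodes; the second derivative at each interior
    node is approximated by the central difference (u[i+1] - 2u[i] + u[i-1]) / dx^2,
    which follows from Taylor series expansions about the node.
    """
    u = np.zeros(n)
    u[n // 2] = 1.0                 # initial 'hot spot' at the centre node
    r = alpha * dt / dx ** 2        # explicit scheme requires r <= 0.5
    assert r <= 0.5, "time step too large for the explicit scheme"
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u
```

Even this simple example shows the stability constraint (r ≤ 0.5) that couples the time step to the mesh spacing, one reason why meshing choices feed directly into simulation cost.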
(a): FDM (b): FEM (c): FVM
Figure 1.2.1: Nodal points for FDM, FEM and FVM

Figure 1.2.2: Meshing on a complex geometry [7]

Some modern 'meshless' methods, like Smoothed Particle Hydrodynamics (SPH), have been developed and have recently been gaining importance [22]. Another recent method, which works on Cartesian grids (lattices) and is called the lattice-Boltzmann method (LBM), is gaining popularity for its speed of operation and breadth of applications (e.g. multiphase flows). The equations used in the LBM to model fluid flow are local, i.e. they are explicit, which makes the lattice-Boltzmann method suitable for programming on parallel processing computers [1]. Therefore, of all the methods currently available, the LBM is highly appropriate for interactive CFD.
1.2.1. Lattice-Boltzmann method (LBM)

The lattice-Boltzmann method was originally introduced in 1988 by McNamara and Zanetti [2][6] to overcome the weaknesses (e.g. statistical noise [6]) of lattice gas cellular automata¹. But it was only during the late 1990s and early 2000s, after continuous development, that this method came into prominence. The lattice-Boltzmann method is a mesoscale method which lies between molecular dynamics and macroscale dynamics. In molecular dynamics, each collision is considered explicitly, while the macroscale assumes a continuous medium. The mesoscale considers a group of particles and analyses the motion using particle dynamics.

¹ Lattice gas cellular automata is a method used to model fluid flow, assuming the fluid is made up of particles which undergo binary collisions (i.e. only two particles can collide at a time).

Figure 1.2.3: Techniques of simulation [2]

In the lattice-Boltzmann method, the properties of a collection of particles may be statistically represented using a distribution function. The main idea of the
lattice-Boltzmann method is that a fluid can be imagined as comprising a large number of small particles in random motion, with the transfer of momentum and energy achieved through streaming and collision. The Boltzmann equation (Equation 1.3) [6] is used to model this movement.

\[ \frac{\partial f}{\partial t} + \vec{e} \cdot \nabla f = \Omega \tag{1.3} \]

where \( f \) is the particle distribution function², \( \vec{e} \) is the particle velocity and \( \Omega \) is the collision operator (the rate of change due to collisions), which redistributes the particle momenta.

² Particle distribution function – it represents the proportion of particles at a particular lattice site moving in a given lattice direction.

The collision operator can be simplified using the Bhatnagar-Gross-Krook (BGK) approximation, which increases the efficiency of the simulations and makes the transport coefficients more flexible [2]:

\[ \Omega = \frac{1}{\tau} \left( f^{eq} - f \right) \tag{1.4} \]

Therefore, the collision and streaming processes can be discretised as below [6],

Collision:

\[ f_i^{*}(\vec{x}, t) = f_i(\vec{x}, t) - \frac{\Delta t}{\tau} \left( f_i(\vec{x}, t) - f_i^{eq}(\vec{x}, t) \right) \tag{1.5} \]

Streaming:

\[ f_i(\vec{x} + \vec{e}_i \Delta t, t + \Delta t) = f_i^{*}(\vec{x}, t) \tag{1.6} \]

where \( f_i^{*} \) is the distribution function after collision, \( f_i^{eq} \) is the equilibrium distribution function (Maxwell-Boltzmann distribution), \( \tau \) is the relaxation time (the time required for the distribution function to return to its equilibrium position), \( \vec{x} \) is the position of a lattice site, and \( \vec{e}_i \) are the molecular
velocities, in which the subscript i denotes the direction based on the lattice arrangement, t is the time and Δt is the time step. More complex relaxation schemes are also available:

• TRT (Two Relaxation Time): for two-component (multiphase) flow
• MRT (Multiple Relaxation Time): to provide stability at high Reynolds numbers (necessary for turbulent flow)

In the LBM, the solution domain is divided into one or more lattices, and at each lattice node the components of the distribution function f of the particles are stored. The distributions move along specified directions to the neighbouring nodes; the number of directions depends on the lattice arrangement. Usually, lattice arrangements are classified by the DnQm scheme, where 'n' refers to the number of physical dimensions and 'm' refers to the number of discretised velocities, which becomes more relevant as one considers particle energies. For example, in D2Q9 there are nine discretised velocities and three speeds (0, 1, √2). A D3Q27 configuration has 27 velocities and four speeds (0, 1, √2, √3). Hence, using a larger lattice arrangement includes more information and therefore makes the solution more accurate. However, selecting a larger configuration increases the time taken to compute the solution.
(a): For 1D problems, D1Q3 (b): For 2D problems, D2Q9 (c): For 3D problems, D3Q15
Figure 1.2.4: Lattice arrangements [2]

The macroscopic quantities can be related to the above equations with the help of the following statistical moments,

\[ \rho = \int f \, d\vec{e} \tag{1.7} \]

\[ \rho \vec{u} = \int \vec{e} \, f \, d\vec{e} \tag{1.8} \]

where \( \rho \) is the density of the fluid and \( \vec{u} \) is the fluid velocity vector.
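The collide-and-stream update of Equations 1.5-1.6, together with the discrete form of the moments in Equations 1.7-1.8, can be sketched for a D2Q9 lattice as below. This is an illustrative sketch on a periodic domain, not the solver used in the research; the relaxation time `tau` is an arbitrary example value.

```python
import numpy as np

# D2Q9 lattice: 9 discretised velocities (speeds 0, 1 and sqrt(2))
# with the standard lattice weights.
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def macroscopic(f):
    """Discrete analogue of Eqs. (1.7)-(1.8): moments of f[q, x, y]."""
    rho = f.sum(axis=0)                          # density = zeroth moment
    u = np.einsum('qd,qxy->dxy', E, f) / rho     # velocity = first moment / rho
    return rho, u

def equilibrium(rho, u):
    """Second-order (Maxwell-Boltzmann) equilibrium distribution for D2Q9."""
    eu = np.einsum('qd,dxy->qxy', E, u)
    usq = (u ** 2).sum(axis=0)
    return rho * W[:, None, None] * (1 + 3 * eu + 4.5 * eu ** 2 - 1.5 * usq)

def collide_and_stream(f, tau=0.6):
    """One time step: BGK collision (Eq. 1.5) then streaming (Eq. 1.6)."""
    rho, u = macroscopic(f)
    f = f - (f - equilibrium(rho, u)) / tau      # relax towards equilibrium
    for q in range(9):                           # shift each population along e_q
        f[q] = np.roll(np.roll(f[q], E[q, 0], axis=0), E[q, 1], axis=1)
    return f
```

Note that both collision and streaming touch only a node and its immediate neighbours, which is the locality property that makes the LBM attractive for parallel (GPU) implementation.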
It is also interesting to note that the Navier-Stokes equations can be recovered from the lattice-Boltzmann equations with the help of the Chapman-Enskog expansion [2][6]. Like every method, the LBM has its limitations, one of which is that it is valid only at very low Mach numbers, and it is difficult to model high-Reynolds-number flow. The main advantages of the LBM over the NS equations are that in the former the operations are linear rather than nonlinear, there is no pressure-velocity coupling as in the NS equations, and the equations are local rather than elliptic. Additionally, the local nature of the LBM equations makes them efficient to program on parallel processing machines (especially on Graphics Processing Units (GPUs)). In the case of a moving boundary, traditional CFD methods need to trace the boundary, which is not needed in the LBM, making it a suitable technique for multiphase flows such as solidification and melting. The speed of the LBM is what makes it ideal for attempting real-time³ CFD simulations [1].

³ Computational time equal to physical time.

1.3. Geometry Capture

In classic CFD methods, the subject of the study is usually obtained from a 3D CAD design. This design is then fed into mesh-generating software (e.g. ANSYS), and the mesh is constructed. However, the essence of the research carried out by Harwood et al. is to replace this step by establishing an
alternative way of constructing the geometry of the subject. One method which is widely used for 3D object capture generally, but rarely for CFD, is 3D scanning. 3D scanning is the method (or technology) of converting the geometrical parameters (and sometimes the appearance, e.g. colour) of a physical object and/or scene into a set of numerical data which can be interpreted and post-processed using suitable software. The devices which acquire the data are called 3D scanners [9][11]. The data from a 3D scanner is usually called a 'point cloud'. A 3D scanner has many features in common with a camera: both have a field of view, usually in the shape of a cone, and both capture information about the planes and surfaces of a visible object. However, where a camera captures the colour of the surface, a scanner captures the distance of the surface from the sensor and hence captures depth information. Please note that the words camera and scanner are used interchangeably in this report. Different scanners use different scanning technologies, each with its own advantages and drawbacks. These technologies can be broadly divided into two main categories: contact and non-contact. In contact 3D scanning, the depth is scanned (measured) by physically touching the object while it rests on a precision flat surface. When the surface is not flat, suitable fixtures are used to hold the object. One can intuitively see that physically touching the object can damage it, and hence this method is not advisable for sensitive or
precious objects. A coordinate measuring machine (CMM) is a classic example of this technique. The non-contact 3D scanning technique is further divided into active and passive methods. Non-contact passive devices use the natural light or radiation available instead of using an emitter. This method is usually very cheap, as it does not need any expensive hardware and can use a simple camera. A stereoscopic non-contact passive 3D scanner, which uses two cameras to capture the geometry [14], is shown below.

Table 1.3.1: Stereoscopic scanner 'Real-View 3D' [13]

Non-contact active scanners emit light or radiation (ultrasound, X-rays etc.) and detect the reflection (or the radiation transmitted through the object) in order to sense the object and/or the environment. Some of the techniques used in 3D scanners are briefly described below.
(a) Time-of-Flight: Light (usually laser) is emitted by the device in the direction of the object to be probed, and the time taken to receive the reflected light is measured. Since the speed of light is known, the distance of the object from the sensor can easily be computed. The accuracy of the instrument usually relies on the precision of the time measurement, as the speed of light is of high magnitude [9]. Nevertheless, these devices are used to measure buildings and geographical features, as they are capable of working over long distances (of the order of kilometres).

(b) Triangulation: In this technique, an emitter projects laser light onto the object, and a camera is used to find the location of the laser dot. The laser dot appears at different locations in the field of view of the camera depending on how far the object is from the laser. The name triangulation comes from the triangle formed by the laser emitter, the laser dot and the camera. The distance between the emitter and the camera and the angle of the laser emitter are fixed parameters, hence they are already known. The second angle, formed at the camera corner, can be obtained from the location of the laser spot in the camera's field of view. With these three parameters, the distance (depth) of the object or the environment can be measured [9]. Scanners using this technique can be used only for short-distance measurements, but they have high accuracy.
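The two range computations described above can be sketched in a few lines. This is an illustrative geometry exercise, not scanner firmware: time-of-flight halves the round-trip light path, while triangulation fixes the emitter-dot-camera triangle from the known baseline and the two base angles via the sine rule.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight: the light travels to the object and back,
    so the range is half the round-trip path length."""
    return C * round_trip_time_s / 2.0

def triangulation_depth(baseline_m, emitter_angle_rad, camera_angle_rad):
    """Perpendicular depth of the laser dot from the emitter-camera baseline.

    The baseline and the emitter angle are fixed by the device; the camera
    angle is recovered from where the dot appears in the image. The third
    (apex) angle follows from the angle sum, and the sine rule fixes the
    triangle completely.
    """
    apex = math.pi - emitter_angle_rad - camera_angle_rad
    # side from camera to laser dot (opposite the emitter angle), by the sine rule
    camera_to_dot = baseline_m * math.sin(emitter_angle_rad) / math.sin(apex)
    # project onto the direction perpendicular to the baseline
    return camera_to_dot * math.sin(camera_angle_rad)
```

The sensitivity of each method is visible here: time-of-flight accuracy hinges on timing precision (1 ns of error is about 15 cm of range), whereas triangulation accuracy degrades as the apex angle flattens at long range.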
(a): Time-of-flight principle (b): Triangulation principle – single camera (c): Triangulation principle – double cameras
Table 1.3.2: Scanning technology principles [9]

3D scanning devices typically use one of these two techniques to collect object data. Devices fall into one of two broad categories:

Hand-held laser scanners: These scanners work on the triangulation technique and use an internal coordinate reference system. An additional reference system is used to calibrate the data when the scanner is in motion.
Structured-light scanners: A defined pattern of light is projected from a stable light source (e.g. an LCD projector) onto the object to be probed, and a camera is used to analyse the deformation of the pattern. The camera is at an offset from the pattern emitter. In structured-light scanners, since multiple points in the field of view are scanned at once, the time taken for scanning is relatively low. Because the entire field of view is scanned, distortions due to motion are reduced and hence the precision of the scanned image is high.

1.3.1. Microsoft Kinect Camera

The Kinect is a motion-sensing device developed by Microsoft in 2010 for gaming with the Xbox 360, but it has attracted a lot of attention from researchers due to the way the data is captured [12]. In 2013, Microsoft launched its new gaming system, the Xbox One, accompanied by a fresh design of the Kinect camera (Kinect 2.0). The depth sensor on the initial model operated on the principle of triangulation with the structured-light approach. However, the new one works on the time-of-flight technique, using infra-red (IR) blasters and sensors for depth sensing. Moreover, the Kinect 2.0 has a wider field of view and an increased depth capture resolution compared with its predecessor. As an affordable solution, these devices are used in this research project for object capture.
1.4. Object Capture Pipeline (OCP)

The data obtained from a 3D scanner (or any other scanner) needs to be processed so that the information captured can be used for the desired purpose. A sequence of tasks must be carried out on the data before it attains the form and structure required for use in a CFD simulation. A pipeline, with respect to computer programming, refers to a sequence of processing elements (functions, subroutines etc.) organised such that the input to each section is taken from the output of the previous section; it is analogous to a physical pipeline. Our Object Capture Pipeline (OCP), as the name suggests, is a pipeline which captures an image (or point cloud) from a depth-sensing device and then processes it (alignment, filtering etc.) in order to produce the necessary output. Its design depends on the methods used, so OCPs come in a variety of designs, and the processing done varies with the end result needed. In Figure 1.4.1, the red blocks indicate the main routine of the OCP, with green, orange and blue blocks indicating the subroutines and functions used within the main code. The current OCP has a number of free parameters which need to be selected optimally to maximise performance.
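The pipeline structure described above, where each stage consumes the previous stage's output, can be sketched as below. The stage names loosely follow the main routine of Figure 1.4.1, but the function bodies here are hypothetical placeholders, not the actual OCP implementation.

```python
# Hypothetical OCP-style pipeline: each stage takes the previous stage's
# output. The 'cloud' dict stands in for real point cloud data.
def kinect_capture():
    return {"points": [(0.1, 0.2, 1.5)], "stages": ["capture"]}

def rough_align(cloud):
    cloud["stages"].append("rough_align")   # camera axes -> global axes
    return cloud

def sor_filter(cloud):
    cloud["stages"].append("sor_filter")    # statistical outlier removal
    return cloud

def register(cloud):
    cloud["stages"].append("register")      # fine alignment of camera clouds
    return cloud

def clip_and_aabb(cloud):
    cloud["stages"].append("clip_and_aabb") # clip box + axis-aligned bounding box
    return cloud

def run_ocp():
    cloud = kinect_capture()
    for stage in (rough_align, sor_filter, register, clip_and_aabb):
        cloud = stage(cloud)
    return cloud
```

The value of the structure is that each stage can be tuned or swapped independently, which is exactly what the parameter studies in the later chapters exploit.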
Figure 1.4.1: General flow chart of the OCP used for the research [Harwood A., 2016]

1.5. Objectives

The objectives of this dissertation are:

• To upgrade the scanning laboratory from four cameras to six and make the arrangement more stable.
• To upgrade the object to be scanned by completing its surface construction.
• To optimise, and where possible remove, the empiricism in the Object Capture Pipeline.
• To study the effects of the input parameters used for aligning and clipping the point cloud.
• To study the effects of the input parameters used for noise filtering and registration (point cloud alignment) on the time taken, and to compare the quality of the resulting point clouds.
2. Literature Review

This section discusses prior work done by other researchers in similar fields. Attention is focused on different scanning technologies, with prime importance given to the Kinect camera, followed by the different methods and approaches used for registration and for processing point cloud data. The scope is limited to relatively recent approaches; however, where necessary, older findings are considered.

2.1. 3D Scanning

Boehler and Marbs (2008) discuss the principle of operation and accuracy considerations of different close-range 3D scanning instruments. Their study focuses mainly on the purpose of maintaining records of cultural heritage. The authors state that, in the case of ranging scanners, the scanner measures the horizontal and vertical angles and then computes the distance either by the phase comparison method or by the time-of-flight method. In the phase comparison method, the light is sent out in the form of an organised harmonic wave, and the phase difference between the transmitted light and the light obtained by the receiver is used to compute the distance. However, this method may tend to produce some errors in the results, as a well-defined returning signal is required [9]. On the other hand, the
  • 29. Towards 3D Object Capture for Interactive CFD with Automotive Applications Malcolm O. Dias | 9803763 29 ranging scanners working on the time-of –flight principle compute the distance by calibrating it to the time required for the laser beam to travel from the transmitter to the receiver. These type of scanners may also lead to poor results since they use simpler algorithms, and also the angular pointing of the beam can affect the 3D accuracy of the results obtained [9]. Some 3D scanners work on Triangulation principle, and to identify the location of the light spot on the object one or two cameras are used (see Figure 1.3.1). The triangle formed is then used to obtain the 3D position of the object. Figure 2.1.1: Scanner accuracy (Small parabola: triangulation scanner with short base. Large parabola: triangulation scanner with long base. Straight line: Time of flight) [9] While discussing accuracy considerations, (Boehler and Marbs, 2008) mention that if the surfaces obtained are of irregular nature then modelling them particularly by mesh may be cumbersome since a smoothing operation cannot be applied (due to the presence of noisy points). Hence they suggest using an accurate scanner is desirable. The authors very briefly explain the importance of speed, resolution of
  • 30. Towards 3D Object Capture for Interactive CFD with Automotive Applications Malcolm O. Dias | 9803763 30 spot size, range limits and influence of radiation, field of view, registration devices, imaging cameras, ease of transportation power supply and scanning software that may be considered while selecting a 3D scanner although the main focus is kept on the accuracy and principles of operation. The discussion by (Boehler and Marbs, 2008) is very brief in terms of the matter discussed. However the parameters considered including the ones briefly discussed by them, can provide a guideline while selecting and comparing different 3D scanning devices. (Khoshelham et al., 2012) give a detailed study of the Kinect working on structured light triangulation principle. They provide a mathematical model to obtain the depth of point with respect to the sensor; it is given as follows, (2.1) Where Z0 is the distance of the reference plane from the sensor, Zk denotes the distance (depth) of the point k in object space, b is the base length, f is the focal length of the infrared camera, D is the displacement of the point k in object space, and d is the observed disparity in image space [15].
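The disparity-to-depth relation above can be sketched numerically. The sketch below assumes the commonly cited form of Equation (2.1); the parameter values (reference distance, focal length, base length) are illustrative assumptions, not calibration results from this work.

```cpp
#include <cassert>
#include <cmath>

// Disparity-to-depth model in the form reported by Khoshelham et al.:
// Zk = Zo / (1 + (Zo / (f*b)) * d).  With d = 0 the function returns the
// reference-plane distance Zo itself; a positive disparity gives a point
// closer to the sensor than the reference plane.
double depthFromDisparity(double Zo, double f, double b, double d) {
    return Zo / (1.0 + (Zo / (f * b)) * d);
}
```

For instance, with an assumed reference-plane distance of 2 m, focal length of 580 pixels and base length of 0.075 m, a disparity of zero returns exactly 2 m, while a positive disparity returns a smaller depth, consistent with the quadratic loss of depth resolution at range discussed by the authors.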
The object coordinates in the plane are given by (Khoshelham et al., 2012) as:

    X_k = -(Z_k / f) (x_k - x_o + δx)    (2.2)
    Y_k = -(Z_k / f) (y_k - y_o + δy)    (2.3)

where x_k and y_k are the image coordinates of the point, x_o and y_o are the coordinates of the principal point, and δx and δy are corrections for lens distortion [15]. A relation between the normalised disparity d' and the inverse depth of a point is shown in Equation 2.4 [15], where m and n are the parameters of the linear disparity normalisation d = m d' + n:

    Z_k^{-1} = (m / (f b)) d' + (Z_o^{-1} + n / (f b))    (2.4)

(Khoshelham et al., 2012) suggest that the errors in a Kinect camera may originate in the sensor, the measurement setup and/or the properties of the object surface. The depth resolution decreases quadratically with increasing distance from the sensor: the point spacing in the depth direction (along the optical axis of the sensor) is as large as 7 cm at the maximum range of 5 metres [15]. The random error of the depth measurements also increases quadratically with distance from the sensor and reaches 4 cm at the maximum range of 5 metres [15]. To remove distortions in the point cloud and misalignments between the colour and depth data, an accurate stereo calibration of the IR camera and the RGB camera is required [15]. The analysis carried out by (Khoshelham et al., 2012) is crucial when setting up the cameras in the workspace, and their results help in understanding the potential errors that can creep into the data.

(Smisek, Jancosek, & Pajdla, 2013) give a geometrical investigation of the Kinect camera, outline its geometrical model, suggest a calibration technique and demonstrate its performance by comparing the results with a SwissRanger SR-4000 and a 3.5-megapixel SLR stereo pair. As the authors describe, the Kinect functions as a depth camera and a colour (RGB) camera, which can be used to perceive image content and surface 3D points. They calibrated the Kinect cameras by showing the same calibration target to the IR and RGB cameras, so that both cameras are calibrated with respect to the same 3D points. The results demonstrate that a much better image for calibration is obtained by blocking the IR projector and illuminating the target with an incandescent (halogen) lamp [12]. They also studied the complex residual errors.
(a): IR image of a calibration checkerboard illuminated by the IR pattern
(b): IR image of the calibration checkerboard illuminated by a halogen lamp with the IR projection blocked

Figure 2.1.2: The calibration board in the IR

They mounted the Kinect and the SLR stereo rig rigidly, calibrated them together, and measured the same planar targets in 315 control calibration points on each of 14 targets. The SR-4000 time-of-flight camera measured different planar targets, but over a comparable range of distances of 0.9–1.4 metres from the sensor, in 88 control calibration points on each of 11 calibration targets [12]. The authors conclude that, in terms of the quality of the multi-view reconstruction, the Kinect achieved better results than the SwissRanger SR-4000 and was close to the 3.5-megapixel SLR stereo. The research by (Smisek, Jancosek, & Pajdla, 2013) gives a detailed account of how to calibrate a Kinect camera, which can be applied to the Kinect cameras in the working area. The geometric models for the Kinect can be used effectively for calibration, and the results obtained by comparing the three types of cameras can be drawn on when selecting scanning devices. The results can be said to agree with the previous study by (Khoshelham et al., 2012).
A study carried out by (Sarbolandi, Lefloch, and Kolb, 2015) presents a detailed comparison between the two types of range-sensing Kinect cameras: structured light and time-of-flight. To conduct the comparison, they propose a framework of seven different experimental setups. Their objective was to characterise effects of the Kinect cameras in a way that transfers to other range-sensing devices, and the overall outcome of their work gives a solid understanding of the advantages and disadvantages of either device; users of Kinect range-sensing cameras in particular applications can thus directly assess the benefits and potential issues of either device. The seven setups were used to study the performance of each device under seven different conditions: ambient background light, dynamic inhomogeneity and dynamic scenery, semi-transparent media and scattering, the effect of having multiple cameras, linearity error, planarity error and, finally, device heat-up. Table 2.1.1 shows the comparison of the performance of the two types of Kinect camera, where ratios of infinity indicate high failure rates of the time-of-flight Kinect, ratios close to zero indicate the same for the structured-light Kinect, and ratios close to one indicate that both devices perform in much the same manner under the given condition [16].
Table 2.1.1: Device failure ratios for two application modes for the major error sources discussed [16]

The comparison by (Sarbolandi, Lefloch, and Kolb, 2015) can be crucial in selecting the type of camera for a given experimental setup, since they consider most of the parameters that can inform the choice of Kinect. It can also be seen that 'multi-device' interference is present only in the structured-light Kinect, which suggests that if multiple cameras are needed, the Kinect working on the time-of-flight principle is the better choice.

2.2. Post-processing Point Cloud

(Schnabel, Wahl, and Klein, 2007) suggest a method for identifying certain geometric shapes (planes, spheres, cylinders, cones and tori) using an efficient RANSAC (Random Sample Consensus) method. The shapes mentioned have between three and seven parameters, and every 3D point p_i fixes one parameter of the shape. An approximate surface normal n_i is computed so that two surface parameters are obtained per sample, which in turn reduces the required number of points; the authors note that these extra parameters can be used to identify the shapes faster. Appropriate methods for the detection of each type of shape are explained in detail by (Schnabel, Wahl, and Klein, 2007). The number of candidate shapes to be considered depends on the probability that the correct shape is actually detected, and the authors devise a probability distribution for this. They then devise the sampling strategy used to find the sample sets, which affects the runtime complexity; hence, a good sampling strategy is essential to limit the number of candidate shapes. Each candidate shape is given a score, and this score is evaluated to select the shape [18]. (Schnabel, Wahl, and Klein, 2007) report that their method reliably detects the correct shape. In the case of complex geometries, the shapes are separated into a basic shape, with the remaining points capturing the complexity of the remaining surface. Their experiments also imply that the algorithm handles noisy data efficiently. (Schnabel, Wahl, and Klein, 2007) state that the speed of their method, the quality of its results and its modest data requirements make it a practical choice for shape detection in many cases.
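As a concrete illustration of the RANSAC idea underlying the Schnabel et al. detector, the sketch below fits the simplest of their shapes, a plane, by repeatedly sampling three points and scoring each candidate by its inlier count. It deliberately omits the surface normals and the octree-based scoring of the published method, and the function names are hypothetical.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

struct P3 { double x, y, z; };
struct Plane { double nx, ny, nz, d; };  // nx*x + ny*y + nz*z + d = 0

// Fit a plane through three points via the cross product of two edges.
// Degenerate (collinear or repeated) samples return a plane matching nothing.
static Plane planeFrom3(const P3& a, const P3& b, const P3& c) {
    double ux = b.x - a.x, uy = b.y - a.y, uz = b.z - a.z;
    double vx = c.x - a.x, vy = c.y - a.y, vz = c.z - a.z;
    double nx = uy * vz - uz * vy, ny = uz * vx - ux * vz, nz = ux * vy - uy * vx;
    double len = std::sqrt(nx * nx + ny * ny + nz * nz);
    if (len < 1e-9) return {0.0, 0.0, 0.0, 1e9};
    nx /= len; ny /= len; nz /= len;
    return {nx, ny, nz, -(nx * a.x + ny * a.y + nz * a.z)};
}

// Number of points within distance eps of the plane (the RANSAC score).
int countInliers(const std::vector<P3>& pts, const Plane& pl, double eps) {
    int n = 0;
    for (const P3& p : pts)
        if (std::fabs(pl.nx * p.x + pl.ny * p.y + pl.nz * p.z + pl.d) < eps) ++n;
    return n;
}

// Keep the highest-scoring candidate over a fixed number of random samples.
Plane ransacPlane(const std::vector<P3>& pts, int iters, double eps) {
    Plane best{0, 0, 1, 0};
    int bestScore = -1;
    for (int i = 0; i < iters; ++i) {
        const P3& a = pts[std::rand() % pts.size()];
        const P3& b = pts[std::rand() % pts.size()];
        const P3& c = pts[std::rand() % pts.size()];
        Plane cand = planeFrom3(a, b, c);
        int score = countInliers(pts, cand, eps);
        if (score > bestScore) { bestScore = score; best = cand; }
    }
    return best;
}
```

On a cloud dominated by a flat patch plus a few stray points, the returned plane should contain the patch as inliers, mirroring the paper's separation of a basic shape from a residual set of points.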
This work was published in 2007, which suggests that newer methods may now be available for post-processing point clouds. Nevertheless, their study and results are certainly worth considering for understanding shape-detection algorithms, and shape detection remains an important element of post-processing point clouds into particular shapes.

(Rusu et al., 2008) discuss the persistent feature histogram algorithm for arranging point cloud data into a consistent global model. They state that the algorithm is robust when dealing with noisy point data. The persistence of a given feature is analysed at several scales in order to optimally classify a point cloud, and the persistent features provide a good starting point for the Iterative Closest Point (ICP) algorithm. They discuss the point feature histograms and the algorithm that computes them. A compact point subset P_f is found which best represents the point cloud. To select the best feature points for a given cloud, they analyse the neighbourhood of every point p multiple times, by enclosing p in a sphere of radius r_i centred on p. The radius r is varied over an interval depending on the point cloud size and density, and the local point feature histogram is computed for each point. They then take all the points in the cloud and compute the mean of the feature distribution (the μ-histogram), comparing the feature histogram of every point against the μ-histogram using a distance metric and building a distribution of distances. Multiple radii can be used to assess the persistence of a feature statistically [19]. Subsequently, they select the set of points (P_fi) whose feature distances lie outside a defined limit as distinctive features. This is done for each r. Finally, the points that are persistent at both r_i and r_i+1 are selected [19], that is:

    P_f = ∪_i ( P_fi ∩ P_fi+1 )    (2.5)

Figure 2.2.1: Feature histograms for corresponding points on different point cloud datasets [19]
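The idea of feature persistence across radii can be illustrated with a much-simplified scalar feature, the mean neighbour distance, standing in for Rusu et al.'s 16-bin feature histograms; this is a hedged sketch of the selection principle only, not the published algorithm. A point is treated as distinctive at a radius if its feature deviates from the cloud mean by more than a chosen number of standard deviations, and persistent if it is distinctive at two consecutive radii.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt { double x, y, z; };

// Scalar stand-in for a point feature: mean distance to neighbours inside
// radius r (the published method uses 16-bin histograms and a histogram
// distance metric; a single scalar keeps the sketch short).
double feature(const std::vector<Pt>& c, size_t i, double r) {
    double sum = 0; int n = 0;
    for (size_t j = 0; j < c.size(); ++j) {
        if (j == i) continue;
        double dx = c[j].x - c[i].x, dy = c[j].y - c[i].y, dz = c[j].z - c[i].z;
        double d = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (d <= r) { sum += d; ++n; }
    }
    return n ? sum / n : 0.0;
}

// Indices of points distinctive (beyond alpha standard deviations of the
// cloud-mean feature) at BOTH radii, mirroring the intersection over
// consecutive radii in the persistence selection.
std::vector<size_t> persistent(const std::vector<Pt>& c,
                               double r1, double r2, double alpha) {
    auto distinctive = [&](double r) {
        std::vector<double> f(c.size());
        double mu = 0;
        for (size_t i = 0; i < c.size(); ++i) { f[i] = feature(c, i, r); mu += f[i]; }
        mu /= c.size();
        double var = 0;
        for (double v : f) var += (v - mu) * (v - mu);
        double sd = std::sqrt(var / c.size());
        std::vector<bool> mark(c.size());
        for (size_t i = 0; i < c.size(); ++i) mark[i] = std::fabs(f[i] - mu) > alpha * sd;
        return mark;
    };
    std::vector<bool> m1 = distinctive(r1), m2 = distinctive(r2);
    std::vector<size_t> out;
    for (size_t i = 0; i < c.size(); ++i)
        if (m1[i] && m2[i]) out.push_back(i);
    return out;
}
```

On a cloud of regularly spaced points plus one geometrically anomalous point, only the anomalous point should survive the intersection of the two radii.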
(Rusu et al., 2008) also discuss the estimation of good alignments. Their algorithm proves robust even at high dimensionality (16D). With the assistance of an initial alignment algorithm based on geometric constraints, partially overlapping datasets can be brought successfully into the convergence region of the ICP [19]. Their discussion certainly helps in understanding point cloud registration methods and the precautions to take under particular conditions. The test results show that the algorithm by (Rusu et al., 2008) performs better than the Integral Volume Descriptors (IVD) approach or surface curvature estimates, and the convergence rate improves when the persistent feature histogram method is used.

(Holz et al., 2015) explain the tools present in the open-source Point Cloud Library (PCL) for point cloud registration. PCL provides methods for the initial alignment of point clouds using different local shape feature descriptors, as well as for refining initial alignments using different variants of the Iterative Closest Point (ICP) algorithm. The authors review the different algorithms used for registration and give illustrative examples of the PCL implementations. Three complete examples and their respective registration pipelines in PCL are considered; these cover dense RGB-D point clouds obtained by consumer colour and depth cameras, high-resolution laser scans from commercial 3D scanners, and low-resolution sparse point clouds obtained by a custom lightweight 3D scanner on a micro aerial vehicle.
According to (Holz et al., 2015), when registering two point clouds the steps to consider are:

• Selection of the sampling points in the input clouds.
• Matching the corresponding data between the subsampled point clouds.
• Filtering and rejecting the outliers.
• Aligning the data to find the optimal model.

These steps are explained in detail for iterative closest point registration, along with the algorithms and models involved in each step. Key points are estimated and described by feature descriptors first, followed by correspondence estimation; for the coarse alignment method via descriptor matching, filtering is performed before the transformation is estimated. In their experiment with large-scale 3D scanners, (Holz et al., 2015) compute an initial alignment and then refine this estimate to obtain a precise overlap. PCL provides joint estimation components that find key points and descriptors, which can be used to obtain the initial alignment; an iterative algorithm is then used to align the point clouds precisely and obtain the fine registration. For sparse low-resolution 3D laser scans, non-uniform point densities degrade the results obtained by the Generalized ICP algorithm, and (Holz et al., 2015) show how to use custom covariances with Generalized ICP. When registering RGB-D images, pre-processing is done in order to filter out noisy data, which is very likely to be present with this type of camera [20]. A bilateral filter from 2D image processing is used to perform edge-preserving filtering: it smooths more where similar pixels are present in the neighbourhood and less where there are irregularities. For pre-processing by fast feature estimation, the grid structure of the RGB-D data may be used to speed up the computation of each normal; the advantage of fast feature estimation over filtering is that it needs only a linear pre-processing stage [20]. A hybrid registration pipeline is used for aligning the sequence of images. These sensors acquire a large number of measurements, so subsampling is performed in the registration pipeline; normal-space subsampling is considered robust [20] for registering RGB-D point clouds. (Holz et al., 2015) note that this method has a high probability of converging to the global minimum, as it ensures the point cloud is sampled from all the differently oriented surfaces. This is followed by correspondence estimation and filtering, which do not take much time since the subsampled data is relatively small, and transformation estimation and weighting are carried out before the final results are registered.

The Point Cloud Library (PCL) is a standalone, large-scale, open project for 2D/3D image and point cloud processing [17]. For any experiment involving scanning, an understanding of PCL is essential. (Holz et al., 2015) provide details which are very useful for understanding the workings of the library, together with listings of PCL code. Their detailed explanation is certainly useful for developing (or refining) an object capture pipeline (OCP) as well as general registration pipelines, and the steps of the Iterative Closest Point (ICP) method and of coarse alignment via descriptor matching are described in enough detail to allow both methods to be understood.
3. Methodology

This section explains the steps taken to work towards the objectives of the research. It begins with the laboratory upgrade, followed by the parameter testing in the OCP and the manner in which it was carried out.

3.1. Laboratory upgrade

Figure 3.1.1: Original laboratory set-up (before upgrade)

Originally, the four cameras were positioned on tripods at four corners and the object to be scanned was placed in the centre, as shown in Figure 3.1.1. This was upgraded to a six-camera setup, for which an aluminium frame was ordered from an external source. Special camera fixtures were ordered to clamp the cameras to the aluminium frame. Four cameras were clamped to the vertical supports of the frame, and the 5th and 6th cameras were clamped, one each, to the ceiling and to the base of the frame.

The object used as the subject for scanning is a wooden car model, which was modified in order to make the model complete. The original design did not include a base; the model was hollow underneath, so a wooden base was designed such that the model could be scanned from all directions. The design was done in SolidWorks and the wood was cut using a laser cutter. The base was fixed to the main car body using nails and glue.

3.2. Rough Alignment

Rough alignment, with respect to point clouds, is the process of aligning all the point clouds captured by two or more cameras such that the final result displays the fields in their approximate relative positions. In other words, it is the process of aligning the coordinate system of each camera to an imaginary global coordinate system which is fixed in space. This scene then forms the initial condition for more precise alignment later.

Since the positioning of the cameras was changed and two extra cameras were added, the transformation sequence from the camera coordinates to the world system was updated for the original four cameras and two extra transformations were added to account for the two new cameras. The x, y and z distances from a fixed point in space (the origin of the global coordinate system) to each camera were measured approximately with a measuring tape and used as the translation values for the rough alignment. The rotation angles were selected by approximately measuring the 'pitch', 'yaw' and 'roll' angles using the 'Compass' application on an iPhone 6S Plus. The pitch angle was taken as the rotation of the Kinect around the x-axis of the camera, the yaw angle as the rotation around the y-axis, and the roll angle as the rotation around the z-axis. The camera axes are shown in Figure 3.2.1.

Figure 3.2.1: Kinect Camera Local Axes [21]

The OCP code was run using the approximated values, the roughly aligned space (point cloud) was inspected in MATLAB, and fine adjustments were then made by changing the values in the input file and running the code again. First, the translation parameters (see Figure 3.2.2) were changed until the displacement of the two point clouds was aligned, followed by adjustment of the rotation parameters (see Figure 3.2.3). The aligned point cloud of cameras one and two was then aligned with the third point cloud, and this was repeated until a satisfactory alignment was achieved.

(a): Side View  (b): Top View
Figure 3.2.2: Translation Parameters for Rough Alignment

(a): Side View  (b): Top View  (c): Front View
Figure 3.2.3: Rotation Parameters for Rough Alignment
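The per-camera rough-alignment transform described above reduces to three axis rotations followed by a translation. The sketch below applies them to a single point; the rotation order (pitch about x, then yaw about y, then roll about z) and the use of degrees are assumptions for illustration, and the actual OCP convention may differ.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Apply one camera's rough-alignment transform to a point: rotate by pitch
// (about x), yaw (about y) and roll (about z), then translate into the
// global frame.  Angles are in degrees, matching the values measured with
// the compass application.
Vec3 roughAlign(Vec3 p, double pitchDeg, double yawDeg, double rollDeg, Vec3 t) {
    const double k = 3.14159265358979323846 / 180.0;
    double a = pitchDeg * k, b = yawDeg * k, c = rollDeg * k;
    // rotation about x (pitch)
    Vec3 q{p.x,
           p.y * std::cos(a) - p.z * std::sin(a),
           p.y * std::sin(a) + p.z * std::cos(a)};
    // rotation about y (yaw)
    Vec3 r{q.x * std::cos(b) + q.z * std::sin(b),
           q.y,
           -q.x * std::sin(b) + q.z * std::cos(b)};
    // rotation about z (roll)
    Vec3 s{r.x * std::cos(c) - r.y * std::sin(c),
           r.x * std::sin(c) + r.y * std::cos(c),
           r.z};
    // translation to the global origin
    return {s.x + t.x, s.y + t.y, s.z + t.z};
}
```

Applying this function to every point of one camera's cloud, with that camera's six measured values, produces its roughly aligned contribution to the global scene.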
3.3. Varying SOR parameters

The statistical outlier removal (SOR) operation requires two user-defined parameters: 'neighbours' (the number of surrounding points in the point cloud to consider) and 'standard deviation'. The SOR algorithm computes, for each point, the mean distance to its closest neighbours, where the number of neighbours checked is the value keyed in for the 'neighbours' parameter. It then assumes that these mean distances follow a Gaussian distribution, and only points whose mean distance to their neighbours lies within the specified number of standard deviations of the global mean are kept. This method works well when the point data is fairly regular.

Figure 3.3.1: Schematic of SOR filtering (for neighbours = 10)

The lines used in the code to define these parameters are shown below:

// SOR parameters
#define cNeighbours 0
#define cStdDev 0
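The SOR procedure just described can be sketched as follows. This is a brute-force illustration of the logic; the PCL implementation that the OCP is assumed to build on accelerates the neighbour search with a k-d tree.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

struct Pnt { double x, y, z; };

// Statistical outlier removal: for each point, take the mean distance to
// its k nearest neighbours (brute force here), then keep only the points
// whose mean distance lies within mean + stdDevMul * sigma of the global
// distribution of mean distances.
std::vector<Pnt> sorFilter(const std::vector<Pnt>& in, int k, double stdDevMul) {
    const int n = (int)in.size();
    std::vector<double> meanDist(n, 0.0);
    for (int i = 0; i < n; ++i) {
        std::vector<double> d;
        for (int j = 0; j < n; ++j) {
            if (j == i) continue;
            double dx = in[j].x - in[i].x, dy = in[j].y - in[i].y, dz = in[j].z - in[i].z;
            d.push_back(std::sqrt(dx * dx + dy * dy + dz * dz));
        }
        int kk = std::min(k, (int)d.size());
        std::partial_sort(d.begin(), d.begin() + kk, d.end());
        double s = 0;
        for (int m = 0; m < kk; ++m) s += d[m];
        meanDist[i] = kk ? s / kk : 0.0;
    }
    // global mean and standard deviation of the mean-distance distribution
    double mu = 0;
    for (double v : meanDist) mu += v;
    mu /= n;
    double var = 0;
    for (double v : meanDist) var += (v - mu) * (v - mu);
    double thresh = mu + stdDevMul * std::sqrt(var / n);
    std::vector<Pnt> out;
    for (int i = 0; i < n; ++i)
        if (meanDist[i] <= thresh) out.push_back(in[i]);
    return out;
}
```

The quadratic cost of the neighbour search is the reason the measured filtering time in Section 4.3 grows with the 'neighbours' parameter while being largely insensitive to the 'standard deviation' parameter, which only moves the acceptance threshold.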
A set of 5 values was chosen for each parameter. The values for neighbours were 1, 5, 10, 50 and 100, and the values for standard deviation were 0.1, 0.5, 1.0, 5.0 and 10.0, so that a reasonable range was covered. For each combination (25 in total) the code was run and the time required was noted; the SOR-filtered point clouds were also saved for qualitative comparison using MATLAB. Note that SOR filtering was performed immediately after rough alignment and the clip box (see Section 3.5 for the clip box). The clip box was applied before SOR filtering so that the SOR algorithm would not have to deal with unwanted data, which would slow the process down.

3.4. Varying Registration Parameters

The registration algorithm requires one physical parameter, the correspondence distance, and two stopping criteria: the maximum number of iterations and the registration tolerance. The algorithm fixes the value of the correspondence distance (see Figure 3.4.1) and then searches for corresponding points. Once the corresponding surface has been detected, the algorithm shifts (rotates and translates) the target surface to merge with the source surface until one of the two criteria is met.
Figure 3.4.1: Schematic of registration

The lines from the code are shown below:

// Registration parameters
#define cCorrespondDist 0
#define cMaxItr 0
#define cRegistrationTolerance 0

The correspondence distance is measured in metres and was given the values 0.1 m, 1 m and 10 m. The values selected for the registration tolerance were 0.1, 0.01 and 0.001, and two values were chosen for the maximum-iterations criterion: 10 and 20. The program was run for each arrangement, giving 18 combinations in total. The time required to perform the registration was noted, the point clouds were obtained using MATLAB, and selected images were saved for qualitative analysis.
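The interaction of the three registration inputs can be illustrated with a deliberately reduced, translation-only ICP loop: the correspondence distance gates which matches are accepted, and the iteration stops on whichever of the two criteria is met first. This is only a sketch; the actual OCP/PCL registration also estimates rotation (typically via an SVD-based solve) rather than a pure shift.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

// Translation-only ICP sketch.  correspondDist rejects distant matches,
// maxItr bounds the loop, and the loop also exits once the update step
// falls below tolerance.
V3 icpTranslation(const std::vector<V3>& src, const std::vector<V3>& tgt,
                  double correspondDist, int maxItr, double tolerance) {
    V3 shift{0, 0, 0};
    for (int it = 0; it < maxItr; ++it) {           // criterion 1: iterations
        double sx = 0, sy = 0, sz = 0; int n = 0;
        for (const V3& p : src) {
            V3 q{p.x + shift.x, p.y + shift.y, p.z + shift.z};
            // nearest neighbour in the target, gated by correspondDist
            double best = correspondDist; const V3* match = nullptr;
            for (const V3& t : tgt) {
                double d = std::sqrt((t.x - q.x) * (t.x - q.x) +
                                     (t.y - q.y) * (t.y - q.y) +
                                     (t.z - q.z) * (t.z - q.z));
                if (d < best) { best = d; match = &t; }
            }
            if (!match) continue;                   // rejected correspondence
            sx += match->x - q.x; sy += match->y - q.y; sz += match->z - q.z;
            ++n;
        }
        if (n == 0) break;                          // no correspondences left
        V3 step{sx / n, sy / n, sz / n};            // mean residual = update
        shift.x += step.x; shift.y += step.y; shift.z += step.z;
        double mag = std::sqrt(step.x * step.x + step.y * step.y + step.z * step.z);
        if (mag < tolerance) break;                 // criterion 2: tolerance
    }
    return shift;
}
```

Even this reduced form shows why the parameter sweep matters: a correspondence distance that is too small rejects every match and the loop exits immediately, while a loose tolerance or a low iteration cap stops the refinement early.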
3.5. Setting up Clip Box

The clip box is used to retain only the required subject and eliminate the unwanted data. The lines used in the code to input the clip box limits are shown below:

// Camera Space bounding box limits (relative to world origin) 'clip box'
#define CSxLow 0
#define CSxHigh 0
#define CSyLow 0
#define CSyHigh 0
#define CSzLow 0
#define CSzHigh 0

These values were modified and the code was run repeatedly until a very compact clip box was built. The purpose of this stage is to allow the OCP to process only the data required for that part of the pipeline, to maintain efficiency. On average, only about 20% of the captured data actually represents the car, with the rest representing the rest of the room.

3.6. Modifying Point Cloud using Axis-Aligned Bounding Box (AABB)

This was the final step of the OCP. After all the previous steps were performed, the point cloud data was moved to the first quadrant such that the flat surfaces of the model were parallel to the axis planes. This is a necessary condition for inclusion in the CFD simulation, as it ensures the vehicle is oriented correctly with respect to the flow axes.
// Axis-Aligned bounding box rotation and limits
#define cAAxAngle 0
#define cAAyAngle 0
#define cAAzAngle 0
#define cAAxLow 0
#define cAAxHigh 0
#define cAAyLow 0
#define cAAyHigh 0
#define cAAzLow 0
#define cAAzHigh 0

The lines used in the code are shown above. These values were set relative to the values used in the rough alignment for the angles and in the clip box for the translations.
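Both the clip-box stage (Section 3.5) and the AABB stage reduce to testing points against box limits and applying translations. The sketch below shows these two pieces in isolation; the rotation that makes the car's flat faces axis-parallel is assumed to have been applied beforehand, and the function names are illustrative rather than taken from the OCP.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt3 { double x, y, z; };

// Keep only the points inside the box limits, as the clip box does with
// CSxLow..CSzHigh.
std::vector<Pt3> clipBox(const std::vector<Pt3>& in,
                         double xlo, double xhi, double ylo, double yhi,
                         double zlo, double zhi) {
    std::vector<Pt3> out;
    for (const Pt3& p : in)
        if (p.x >= xlo && p.x <= xhi &&
            p.y >= ylo && p.y <= yhi &&
            p.z >= zlo && p.z <= zhi)
            out.push_back(p);
    return out;
}

// Shift the cloud so its minimum corner sits at the origin, placing the
// whole model in the first octant for the CFD domain.
void moveToFirstOctant(std::vector<Pt3>& pts) {
    if (pts.empty()) return;
    Pt3 mn = pts[0];
    for (const Pt3& p : pts) {
        if (p.x < mn.x) mn.x = p.x;
        if (p.y < mn.y) mn.y = p.y;
        if (p.z < mn.z) mn.z = p.z;
    }
    for (Pt3& p : pts) { p.x -= mn.x; p.y -= mn.y; p.z -= mn.z; }
}
```

Running the clip before the statistical filter, as in Section 3.3, means the expensive neighbour searches only ever see the roughly 20% of points that belong to the car.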
4. Results and Discussion

4.1. Laboratory Upgrade

Figure 3.1.1 shows the initial setup of the laboratory, in which the cameras were mounted on tripod stands and the object to be scanned was placed in the centre on a transparent table. It included four cameras at four corners to capture the 3D data of the object. The camera positioning was easily disturbed if anyone working in the laboratory accidentally knocked a tripod stand or a camera; hence, a more robust design was required so that the camera calibration would not need to be repeated every time. Figure 4.1.1 shows the schematic of the proposed laboratory design, which consists of a frame constructed of aluminium; the specifications of the aluminium frame are shown in Figure 4.1.2. Special clamping jaws were ordered to fix the cameras to the frame (see Figure 4.1.3). Four cameras were clamped to the four vertical supports, similar to the previous design, but with a much smaller chance of the camera positioning being disturbed by accidental collision. Two cameras were added to the original design to capture the maximum possible information about the object to be analysed; these were clamped to the horizontal supports at the ceiling and at the floor. Figure 4.1.4 shows the upgraded laboratory design with the aluminium frame and the six cameras, with the car model placed in the centre. Note that the table used to hold the car model is not from the proposed design; in reality, a metal frame was used.
Figure 4.1.1: Schematic of proposed laboratory design

Figure 4.1.2: Schematic of the aluminium frame ordered
Figure 4.1.3: Kinect camera clamped to the aluminium frame

Figure 4.1.4: Laboratory setup after upgrading
The initial car model did not include a base, so a base was designed and affixed to the main car body as shown in Figure 4.1.5. This made it possible to capture the object from all directions, leading to a better 3D reconstruction for CFD analysis.

Figure 4.1.5: Upgraded car model base

4.2. Rough Alignment

Figure 4.2.1 shows the raw 3D point cloud data captured by the Kinect cameras. The point cloud from each camera can be identified by its colour; the legend provided in Table 4.2.1 can be used to identify the different point clouds and is valid for all the point cloud figures in Section 4.2 and Section 4.3. The point cloud data shown in Figure 4.2.1 is aligned according to the camera coordinate systems and is therefore of little use in understanding the geometric features of the entire scanned space: the camera coordinate systems are coincident and each view overlaps the others. This data needs to be processed to align each camera coordinate system to a selected global coordinate system, so that the entire 3D space can be recreated from the point cloud data.

Figure 4.2.1: Point cloud raw data from the Kinects (before processing)

Camera 1 | Camera 2 | Camera 3 | Camera 4 | Camera 5 | Camera 6
Table 4.2.1: Camera colour code legend for the point cloud data
The axes of the camera local coordinate system are shown in Figure 3.2.1, and Figure 4.2.2 shows the axes of the global coordinate system, drawn schematically; the reference names used for each camera are also shown in Figure 4.2.2. Note that the global axes shown in Figure 4.2.2 are meant to depict only the directions of the axes, not the origin, which was taken to be at the level of the table.

Figure 4.2.2: Laboratory arrangement with global axes

After performing the axis transformation, the resulting point cloud data is aligned with respect to the directions of the global coordinate system (Figure 4.2.3). However, further processing is needed to align the data with respect to a common origin, and this was achieved by 'translating' and 'rotating' the data about the origin.

Figure 4.2.3: Point cloud data after aligning camera axes to global axes

The code reads the translation and rotation parameters from two input files, one for each, containing tab-separated values. Each camera requires three values X, Y and Z for translation (see Figure 3.2.2) and, similarly, three angles a, b and c for rotation (see Figure 3.2.3). The values chosen for rough alignment were selected by changing each value and checking the resulting point cloud plot in MATLAB.
Table 4.2.2 and Table 4.2.3 list the values selected for translation and rotation respectively.

Camera    | X (mm) | Y (mm) | Z (mm)
CAMERA 1  |      0 |    900 |      0
CAMERA 2  |    -80 |   1280 |    350
CAMERA 3  |     30 |   1365 |    420
CAMERA 4  |   -120 |   1400 |    350
CAMERA 5  |   -130 |   1320 |    450
CAMERA 6  |     30 |    800 |     30

Table 4.2.2: Values used for Translation input file for Rough Alignment (distance from the origin, mm)

Camera    | a (deg) | b (deg) | c (deg)
CAMERA 1  |      -3 |       0 |       0
CAMERA 2  |     -15 |       0 |       0
CAMERA 3  |     -15 |       1 |       0
CAMERA 4  |     -11 |       0 |       0
CAMERA 5  |     -20 |       3 |       0
CAMERA 6  |       0 |      -5 |       0

Table 4.2.3: Values for Rotation input file for Rough Alignment (degrees)
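The rough-alignment step (rotate each camera's points by the angles a, b, c, then translate by X, Y, Z) can be sketched in a few lines. This is an illustrative pure-Python version, not the project's PCL/C++ pipeline; the rotation-axis convention and the application order (Rz·Ry·Rx, rotate then translate) are assumptions made for the example.

```python
import math

def rotation_matrix(a_deg, b_deg, c_deg):
    """Rotations about the X (a), Y (b) and Z (c) axes, combined as Rz.Ry.Rx.
    The axis convention and multiplication order are assumptions for this
    sketch; the thesis pipeline defines its own convention."""
    a, b, c = (math.radians(v) for v in (a_deg, b_deg, c_deg))
    rx = [[1, 0, 0], [0, math.cos(a), -math.sin(a)], [0, math.sin(a), math.cos(a)]]
    ry = [[math.cos(b), 0, math.sin(b)], [0, 1, 0], [-math.sin(b), 0, math.cos(b)]]
    rz = [[math.cos(c), -math.sin(c), 0], [math.sin(c), math.cos(c), 0], [0, 0, 1]]
    def matmul(m, n):
        return [[sum(m[i][k] * n[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]
    return matmul(rz, matmul(ry, rx))

def align_points(points, translation, angles):
    """Rotate each point about the camera origin, then translate it into the
    global frame, mirroring the rough-alignment input files."""
    r = rotation_matrix(*angles)
    out = []
    for p in points:
        rp = [sum(r[i][k] * p[k] for k in range(3)) for i in range(3)]
        out.append(tuple(rp[i] + translation[i] for i in range(3)))
    return out

# Camera 1 values from Tables 4.2.2 and 4.2.3 (mm and degrees); the sample
# point is invented for illustration.
aligned = align_points([(0.0, 0.0, 1000.0)], (0, 900, 0), (-3, 0, 0))
```

Changing a value in either input file and re-plotting, as described above, corresponds to re-running `align_points` with the updated translation or angle set.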
Figure 4.2.4: Point Cloud Data after Rough Alignment

Figure 4.2.4 shows the final point cloud data: the entire 3D space, i.e. the laboratory, has been recreated from the point clouds. The rough alignment of the surfaces of the car model can be seen in the zoomed-in view of the point cloud in Figure 4.2.5. The 3D image of the car is formed with satisfactory accuracy.
Figure 4.2.5: Zoomed-in view showing the rough alignment of the car model

4.3. SOR Filtering

4.3.1. Time Analysis

As mentioned before, a set of five values was chosen for each of the two parameters, and the time taken [4] (in milliseconds) to perform filtering for each combination is shown in Table 4.3.1. It can be seen that, with an increase in the number of neighbours, the time required to perform SOR filtering increases

[4] Please refer to Appendix A for the specifications of the system on which the code was run.
significantly. This is not the case for the standard deviation: the time taken is largely unaffected by changes in standard deviation and remains almost the same for a given choice of the neighbours parameter.

Time (ms)      | Std.Dev. 0.1 | 0.5   | 1     | 5     | 10
Neighbours 1   | 1611         | 1782  | 1712  | 1599  | 1569
Neighbours 5   | 2078         | 2126  | 2034  | 2088  | 3520
Neighbours 10  | 2293         | 2564  | 2803  | 2431  | 2372
Neighbours 50  | 5294         | 5620  | 5338  | 5454  | 5326
Neighbours 100 | 10428        | 10302 | 10288 | 10012 | 10629

Table 4.3.1: Time taken for SOR filtering

Figure 4.3.1 shows the variation of time with the number of neighbours, for different values of standard deviation. The time is made non-dimensional by dividing it by the minimum time for that standard deviation (Tmin@SD). A linear increase can be seen in the graph, with the maximum time occurring at the highest neighbours value for a given standard deviation. The non-dimensional time, ((T/Tmin@SD) - 1), for SOR filtering rose from 0 to roughly 5 as the 'neighbours' parameter was increased from 1 to 100. Figure 4.3.2 gives the variation of time with standard deviation, for different values of neighbours. Here, the time is made non-dimensional by subtracting it from the average time (Tavg@N) for a specific neighbours value and then
dividing by Tavg@N. As noted from Table 4.3.1, there is little variation in the time taken across standard deviations for a given neighbours value.

Figure 4.3.1: Variation of non-dimensional time ((T/Tmin@SD) - 1) with 'Neighbours' for various values of 'Standard Deviation'
Figure 4.3.2: Variation of non-dimensional time (1 - T/Tavg@N) with 'Standard Deviation' for various values of 'Neighbours'

Hence, it can fairly be said that the time taken for SOR filtering is directly proportional to the value chosen for the 'neighbours' parameter,

    T_SOR ∝ Neighbours    (4.1)

while the non-dimensional time is essentially independent of the standard deviation,

    1 - T/Tavg@N ≈ c    (4.2)

where c is a constant approximately equal to -0.07, from the data obtained in the tests.
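To make the roles of the two SOR parameters concrete, the filter can be sketched in a few lines of Python. This is a brute-force illustration of the statistical-outlier-removal idea only (PCL's implementation uses a k-d tree for the neighbour search, which is why its run time still grows with the 'neighbours' parameter); the function and variable names are invented for the example.

```python
import math
from statistics import mean, stdev

def sor_filter(points, neighbours=5, std_mult=1.0):
    """Statistical outlier removal: keep a point only if the mean distance to
    its k nearest neighbours lies within (global mean + std_mult * global std)
    of the distribution of that quantity over all points."""
    mean_dists = []
    for p in points:
        ds = sorted(math.dist(p, q) for q in points if q is not p)
        mean_dists.append(mean(ds[:neighbours]))
    mu, sigma = mean(mean_dists), stdev(mean_dists)
    threshold = mu + std_mult * sigma
    return [p for p, d in zip(points, mean_dists) if d <= threshold]

# A tight cluster of surface points plus one far-away noise point.
cloud = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (100, 100, 100)]
clean = sor_filter(cloud, neighbours=3, std_mult=1.0)
```

A larger `neighbours` value means more distances computed and sorted per point (hence the time growth observed above), while `std_mult` only changes the acceptance threshold, which is why it barely affects the run time.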
4.3.2. Qualitative Analysis

4.3.2.1. Varying Neighbours for a fixed Standard Deviation

Figure 4.3.3 shows the images obtained for different values of neighbours at a fixed standard deviation of 1.0. When the neighbours parameter is set to 1, a significant amount of noise is present. As the number of neighbours increases, the point cloud data becomes cleaner and sharper: the edges are well defined and the object takes on a clear shape. A closer look, however, reveals that as the number of neighbours is increased, some useful data is lost. The algorithm becomes more aggressive and tends to filter out necessary information from the point cloud. The images become sharper, but the lost data means they no longer give true information about the object. A value of neighbours in the vicinity of 5 would be a satisfactory choice.
(a) Neighbours = 1, Std.Dev. = 1.0
(b) Neighbours = 5, Std.Dev. = 1.0
(c) Neighbours = 10, Std.Dev. = 1.0
(d) Neighbours = 50, Std.Dev. = 1.0
(e) Neighbours = 100, Std.Dev. = 1.0

Figure 4.3.3: SOR Filtered – Camera 1 for Std.Dev. = 1.0 and various Neighbours
4.3.2.2. Varying Standard Deviation for fixed Neighbours

Figure 4.3.4 depicts the output for various values of standard deviation with the 'neighbours' parameter fixed at 10. At a standard deviation of 0.1, the least amount of noise is present; however, a lot of data is lost. The amount of noise increases as the standard deviation is increased, but the required data is recovered as well. A very high standard deviation, though, results in a significant amount of noise. The effect of selecting the parameters for SOR filtration can therefore be summarised as

    Noise ∝ Standard Deviation / Neighbours    (4.2)
(a) Neighbours = 10, Std.Dev. = 0.1
(b) Neighbours = 10, Std.Dev. = 0.5
(c) Neighbours = 10, Std.Dev. = 1.0
(d) Neighbours = 10, Std.Dev. = 5.0
(e) Neighbours = 10, Std.Dev. = 10.0

Figure 4.3.4: SOR Filtered – Camera 1 for Neighbours = 10 and various Standard Deviation values
Therefore, selecting intermediate values is a sensible approach for these parameters; an extreme value of either one leads to poor results. A standard deviation of 1.0 with neighbours of 10 or 5 would be a satisfactory selection. However, considering that the time taken for SOR depends only on the neighbours parameter, the lower neighbours value is preferable. Hence, Neighbours = 5 and Standard Deviation = 1.0 can be considered the best set of values. The point cloud obtained with these parameters is shown in the figure below.

Figure 4.3.5: SOR Filtered – All Cameras for Neighbours = 5 and Standard Deviation = 1.0
4.4. Registration

4.4.1. Time Analysis

Maximum Iterations = 10
Time (s)           | Reg. Tolerance 0.1 | 0.01     | 0.001
Corr. Distance 0.1 | 1769.4*            | 1663.62* | 1993.72*
Corr. Distance 1   | 1629.69            | 1599.65  | 1414.83
Corr. Distance 10  | 7267.25            | 8518.31  | 8834.44
* With PCL error: Iterative Closest Point Non-linear

Table 4.4.1: Time taken for registration at Maximum Iterations = 10

Maximum Iterations = 20
Time (s)           | Reg. Tolerance 0.1 | 0.01     | 0.001
Corr. Distance 0.1 | 1645.34*           | 1579.56* | 1582.27*
Corr. Distance 1   | 1729.4             | 1959.35  | 2324.3
Corr. Distance 10  | 13611.5            | 16863.9  | 16037.7
* With PCL error: Iterative Closest Point Non-linear

Table 4.4.2: Time taken for registration at Maximum Iterations = 20

The tables above give the variation of the time taken [5] to perform registration for different values of correspondence distance, registration tolerance and maximum iterations. It is, however, important to note that the code was designed to save a large amount of data for qualitative analysis while performing registration, which would not be required in regular use. The times tabulated in Table 4.4.1 and Table 4.4.2 are therefore higher than those a normal run would require. Nevertheless, the data is useful for comparing and selecting the input values.

[5] Please refer to Appendix A for the specifications of the machine on which the code was run.
The time required to achieve registration does not increase much when the correspondence distance is changed from 0.1 to 1, but increases considerably when it is changed to 10, with the other two criteria fixed. The effect of the registration tolerance on time is very small when the correspondence distance is 0.1 or 1; however, the time increases slightly when the tolerance is decreased at a correspondence distance of 10, for a fixed value of maximum iterations. Similarly, there are only minor differences in the time required across all registration tolerances at the lower correspondence distances for the two values of maximum iterations chosen, but an overall increase from maximum iterations = 10 to maximum iterations = 20 when the correspondence distance is 10. It is also important to note that an error message is produced by the ICP algorithm while performing registration, stating '[pcl::IterativeClosestPointNonLinear::ComputeTransformation] Not enough correspondences found. Relax your threshold parameters'. This error occurs because a low value of correspondence distance is used and the algorithm cannot find enough points to complete the registration. The figure below shows the effect of selecting a small value for the correspondence distance parameter.
Figure 4.4.1: Effect of selecting a low Correspondence Distance value

Figure 4.4.2 displays the variation of the registration time graphically. Tmin@MI,RT is the minimum time for a particular combination of maximum iterations and registration tolerance (indicated by the subscript), i.e. at constant maximum iterations and registration tolerance, over the set of correspondence distances tested. As discussed above, little variation is seen at lower values of correspondence distance, but the non-dimensional time rises steeply when the correspondence distance is increased from 1 m to 10 m. It is interesting to note that at 10 m there is an increase of about 10% in the non-dimensional time ((T/Tmin@MI,RT) - 1) when the registration tolerance is decreased by a factor of 10 at a maximum iterations value of 10, and an increase of 20% at a maximum iterations value of 20 for the same change in registration tolerance. For the case of maximum iterations = 20 and registration tolerance = 0.001, the non-dimensional time ((T/Tmin@MI,RT) - 1) is almost the same as the previous case at
a correspondence distance of 10 m; this is perhaps because the maximum iterations criterion is satisfied before the desired tolerance is achieved, so the loop in the registration code is terminated. Also, when the maximum iterations criterion is increased from 10 to 20, the non-dimensional time almost doubles, for the same registration tolerance and a 10 m correspondence distance.

Figure 4.4.2: Variation of non-dimensional time ((T/Tmin@MI,RT) - 1) with 'Correspondence Distance' for different values of 'Registration Tolerance' and 'Maximum Iterations'

The graph in Figure 4.4.3 shows that there is little variation in the non-dimensional time (1 - T/Tavg@MI,CD), where Tavg@MI,CD is the average time at a constant value of maximum iterations and correspondence distance, when the registration tolerance is changed.
Figure 4.4.3: Variation of non-dimensional time (1 - T/Tavg@MI,CD) with 'Registration Tolerance' for different 'Correspondence Distance' and 'Maximum Iterations' values

Therefore, for low correspondence distances the registration time can be written simply as

    T_reg ≈ constant    (4.3)

but for higher values of correspondence distance it becomes

    T_reg ∝ Maximum Iterations    (4.4)
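The interaction of the three inputs can be illustrated with a deliberately simplified, translation-only ICP loop. This is a sketch only; the real pipeline uses pcl::IterativeClosestPointNonLinear, which also estimates rotation, and all names below are invented for the example. Points farther apart than the correspondence distance are never paired (the situation behind the 'Not enough correspondences found' error), and the loop stops on whichever of the two criteria, maximum iterations or tolerance, is met first.

```python
import math

def icp_translation(source, target, max_corr_dist, max_iterations, tolerance):
    """Toy translation-only ICP: pair each source point with its nearest
    target point within max_corr_dist, shift the source by the mean residual,
    and stop once the shift falls below tolerance or iterations run out."""
    src = [list(p) for p in source]
    offset = [0.0, 0.0, 0.0]
    for _ in range(max_iterations):
        pairs = []
        for p in src:
            q = min(target, key=lambda t: math.dist(p, t))
            if math.dist(p, q) <= max_corr_dist:
                pairs.append((p, q))
        if not pairs:
            # Mirrors the PCL failure mode when the threshold is too tight.
            raise RuntimeError("Not enough correspondences found. "
                               "Relax your threshold parameters.")
        step = [sum(q[i] - p[i] for p, q in pairs) / len(pairs) for i in range(3)]
        for p in src:
            for i in range(3):
                p[i] += step[i]
        for i in range(3):
            offset[i] += step[i]
        if math.dist(step, (0, 0, 0)) < tolerance:
            break
    return offset

target = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
source = [(0.3, 0, 0), (1.3, 0, 0), (0.3, 1, 0)]
shift = icp_translation(source, target,
                        max_corr_dist=1.0, max_iterations=10, tolerance=1e-6)
```

With `max_corr_dist` smaller than the initial misalignment (e.g. 0.1 here), no pairs are found and the function raises, just as the PCL algorithm reports its error; with a very large value, a point may be paired with the wrong surface, which is the failure mode discussed in the qualitative analysis below.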
4.4.2. Qualitative Analysis

Figure 4.4.5 shows the point cloud data obtained for the various tested cases at a registration tolerance of 0.01, before SOR filtering. Figure 4.4.4 shows the point cloud obtained from the CAD model of the car, which can be used as a reference for comparing the data.

Figure 4.4.4: Point Cloud obtained from the CAD design of the car model

As noted in the time analysis for registration in the previous section, little variation is observed in the point cloud data between correspondence distances of 0.1 m and 1 m (see Figure 4.4.5): these two point cloud maps show similar registered data, with only negligible differences. However, when the correspondence distance is increased to 10 m, significant variations can be observed. The portion circled in green indicates good registration (Figure 4.4.5 (c) and (d)) and the
region circled in red in Figure 4.4.5 (e) highlights the raised portion on the bonnet of the car model, which is clearly an error. This error is present even when the number of iterations is increased to 20 (see Figure 4.4.5 (f)). Figure 4.4.6 shows the registered point cloud for a correspondence distance of 10 m with the two criteria varied; the raised-bonnet error is present here as well, indicating that it is caused by increasing the correspondence distance to a very high value. Some differences can also be noted in the base of the car, which again looks slightly raised.
(a) Cor.Dist. = 0.1, Max.Itr. = 10, Reg.Tol. = 0.01
(b) Cor.Dist. = 0.1, Max.Itr. = 20, Reg.Tol. = 0.01
(c) Cor.Dist. = 1, Max.Itr. = 10, Reg.Tol. = 0.01
(d) Cor.Dist. = 1, Max.Itr. = 20, Reg.Tol. = 0.01
(e) Cor.Dist. = 10, Max.Itr. = 10, Reg.Tol. = 0.01
(f) Cor.Dist. = 10, Max.Itr. = 20, Reg.Tol. = 0.01

Figure 4.4.5: Registered Point Cloud for Registration Tolerance = 0.01, Maximum Iterations of 10 and 20, and Correspondence Distance of 0.1 m, 1 m and 10 m
(a) Cor.Dist. = 10, Max.Itr. = 10, Reg.Tol. = 0.1
(b) Cor.Dist. = 10, Max.Itr. = 20, Reg.Tol. = 0.1
(c) Cor.Dist. = 10, Max.Itr. = 10, Reg.Tol. = 0.001
(d) Cor.Dist. = 10, Max.Itr. = 20, Reg.Tol. = 0.001

Figure 4.4.6: Registered Point Cloud for Correspondence Distance = 10 m, Registration Tolerance = 0.1 and 0.001, and Maximum Iterations of 10 and 20

Figure 4.4.7: Effect of selecting a high Correspondence Distance value
This error can be explained with the help of Figure 4.4.7: an excessively large correspondence distance may pair points belonging to a different surface and hence result in improper registration. Selecting the correct correspondence distance is therefore a crucial aspect of performing registration. Based on the time analysis for registration, a correspondence distance of the order of 1 m gives good point cloud results in a relatively moderate time.

4.5. Clip Box

A clip box is set up around the object or scene of interest in order to eliminate unnecessary data. Setting the clip box as close to the subject as possible is important so that the maximum amount of noise is removed before performing SOR. Figure 4.5.1 shows the effect of using different clip box limits. A large clip box (see Figure 4.5.1 (a)) includes unnecessary data (and noise); the SOR filtering then has to process this data, for which aggressive SOR parameters may be needed. This subsequently leads to loss of data, increased time, or both, as discussed in section 4.3. Clip box limits that are too small cut away parts of the object and result in an incomplete point cloud. The correct balance is therefore required, such that the maximum amount of unwanted data is clipped off while all the data required for constructing the
subject (the car model in this case) is preserved. The clip box limits selected for this case, which resulted in a 'good' clip box, are tabulated in Table 4.5.1.

(a) Large Clip Box
(b) Good Clip Box
(c) Small Clip Box

Figure 4.5.1: Effect of using different Clip Box limits
Limit  | Value (mm)
x-Low  | -290
x-High | 240
y-Low  | -650
y-High | 550
z-Low  | -220
z-High | 300

Table 4.5.1: Clip Box limits selected

4.6. Axis Alignment – Bounding Box (AABB)

The final processing of the registered point cloud is performed here; it is a combination of Rough Alignment and Clip Box. The entire subject is moved into the first quadrant and aligned to the axes. This ensures that when the object is read into the CFD simulation it is correctly and automatically aligned with the flow axis, without user intervention. The values selected for this particular case are tabulated in Table 4.6.1; it can be seen that they are simply the differences of the two clip box limits for X, Y and Z. The rotation done during Rough Alignment was good enough, so there was no need to modify the angles here. The point cloud after AABB is shown in Figure 4.6.1.
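The clip box itself is just an axis-aligned pass-through filter: a point is kept only if all three coordinates fall inside the chosen limits. A minimal illustrative version (PCL provides this functionality as a crop/pass-through filter; the names below are invented for the sketch):

```python
def clip_box(points, limits):
    """Keep only points inside an axis-aligned box.
    'limits' follows Table 4.5.1: (x_low, x_high, y_low, y_high, z_low, z_high)."""
    xl, xh, yl, yh, zl, zh = limits
    return [(x, y, z) for (x, y, z) in points
            if xl <= x <= xh and yl <= y <= yh and zl <= z <= zh]

# Limits taken from Table 4.5.1; the sample points are invented.
LIMITS = (-290, 240, -650, 550, -220, 300)
kept = clip_box([(0, 0, 0), (500, 0, 0), (0, -700, 0)], LIMITS)
```

Tightening the limits discards more noise before SOR runs, at the risk of clipping the subject itself, which is exactly the trade-off shown in Figure 4.5.1.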
Angle   | Value (deg)
Angle a | 0
Angle b | 0
Angle c | 0

Limit  | Value (mm)
x-Low  | 0
x-High | 530
y-Low  | 0
y-High | 1200
z-Low  | 0
z-High | 520

Table 4.6.1: Parameters used for AABB

Figure 4.6.1: Point cloud after AABB
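With the angles left at zero, the translation part of the AABB step amounts to subtracting the minimum coordinate on each axis, so the subject's bounding box starts at the origin. A quick illustrative sketch (not the project's code): applied to the clip box corner points of Table 4.5.1, it reproduces the x-High, y-High and z-High extents of Table 4.6.1.

```python
def move_to_first_quadrant(points):
    """Translate the cloud so its axis-aligned bounding box starts at the
    origin: each coordinate becomes (value - axis minimum), placing the
    subject in the first quadrant as section 4.6 describes."""
    mins = [min(p[i] for p in points) for i in range(3)]
    return [tuple(p[i] - mins[i] for i in range(3)) for p in points]

# Opposite corners of the 'good' clip box from Table 4.5.1.
shifted = move_to_first_quadrant([(-290, -650, -220), (240, 550, 300)])
```

After this shift the far corner sits at (530, 1200, 520), matching the differences of the clip box limits noted above.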
5. Conclusions

The research completed here has addressed all the objectives specified. The car model was upgraded by fixing a base to it, which allowed a camera to be set up to capture the underside of the car. The laboratory was upgraded with the new aluminium frame, and two extra cameras were added in the new setup. The new frame is steady and robust, so there is no need to realign the cameras every time the space is scanned. The additional cameras provided more model information and hence better reconstruction than previously. The Object Capture Pipeline (OCP) was updated to handle the new setup. The orientation and calibration of the cameras was performed successfully, ensuring the resulting alignment was of the desired accuracy. There was no prior information on the input parameters used for SOR filtering and registration; the input values had previously been selected arbitrarily. However, the study carried out in this report gives detailed information about the effect of each input parameter on the output and on the time taken to process it. It was observed that for SOR filtering, as the value for 'neighbours' was increased, the time taken to perform filtering increased as well. A graph of a particular non-dimensional time suggested a linear relationship between the time taken and the neighbours value used, for a fixed standard deviation. The second
input parameter for SOR filtering is the standard deviation, variation of which showed little influence on the time taken for a particular choice of neighbours. However, the qualitative analysis showed that a low value of standard deviation gave very good SOR filtering, but at the expense of data loss. Increasing the standard deviation allowed data recovery but simultaneously increased the amount of noise present. On the other hand, a low value for neighbours gave an output with many noisy points, while increasing it produced better-filtered images at the cost of losing some important point cloud information. Hence, it can reasonably be said that selecting intermediate values for the input parameters is the best possible choice. An equation was also obtained from the results for the time taken for SOR as a function of the input parameters, and a similar equation was found for the noise. As was the case for the SOR filtering inputs, the parameters previously used for registration had been selected arbitrarily, with knowledge of neither the time taken for registration nor the resulting point cloud. Of the three inputs used for registration (one physical parameter, the correspondence distance, and two criteria, maximum iterations and registration tolerance), the physical parameter played the major part in influencing the registration time. There was little influence on the time taken at low correspondence distance values, but steep gradients were observed when the value was increased to a high value (10 m). However, a low value of correspondence distance (0.1 m)
resulted in an error message from the PCL algorithm stating that not enough correspondences were found. The two criteria had negligible influence on the time taken at lower correspondence distance values, but increasing the correspondence distance increased their effect. In particular, the non-dimensional time doubled when the maximum iterations was increased from 10 to 20 at higher correspondence distance values, and increased in steps of about 5% when the registration tolerance was decreased by a factor of 10 (0.1 to 0.01 to 0.001). A similar trend was noticed for the point cloud data: there was little difference in the registered point cloud at lower values of correspondence distance for varying criteria, but drastic errors appeared when the correspondence distance was increased to 10 m. The criteria had no real effect on the registered point cloud. A set of equations was developed for the registration time based on the correspondence distance used (one for lower correspondence distances and one for higher). Based on the results, it can be concluded that choosing a lower correspondence distance value is reasonable; however, it should not be so low that the algorithm fails. Choosing any of the values tested in this report would not greatly affect the results. The clip box limits were set up such that only the required point cloud data was stored and the unwanted points were clipped off. Similarly, a final axis alignment was performed to shift the point cloud into the first quadrant and any minor rotations