4. Augmented Reality (AR) is a variation of Virtual Environments (VE), or Virtual
Reality as it is more commonly called. VE technologies completely immerse a
user inside a synthetic environment.
While immersed, the user cannot see the real world around him. In contrast, AR
allows the user to see the real world, with virtual objects superimposed upon or
composited with the real world.
5.
6.
7.
• Free kick radius / offside
• Advertising spots
• Weather forecast
• Stock
• Already common in TV shows
8. Prof. Tom Caudell
• Coined the term "Augmented Reality"
• Developed software at Boeing, using head-mounted displays, to help technicians assemble cables into aircraft, a first step toward making AR a practical possibility
9. Hirokazu Kato
• In 1999, Hirokazu Kato of the Nara Institute of Science and Technology released the ARToolKit to the open-source community.
• Although the smartphone had yet to be invented, ARToolKit is what would later allow a simple handheld device with a camera and an internet connection to bring AR to the masses.
10. • GPS + Compass + Gyro + Accelerometer
• Marker (fiducial, frame, etc.)
• NFT (2D images)
• 3D (pre-trained point cloud)
• Live 3D (SLAM)
• Face, Fingers, Body
11. • Marker-based AR uses a camera and a visual marker to determine the center, orientation, and range of its spherical coordinate system.
• ARToolKit was the first fully featured toolkit for marker-based AR.
• Markers work by having software recognise a particular pattern, such as a barcode or symbol, when a camera points at it, and overlay a digital image at that point on the screen.
12. •
As the name implies, image targets are images that the AR
SDK can detect and track. Unlike traditional markers, data
matrix codes and QR codes, image targets do not need
special black and white regions or codes to be recognized.
The AR SDK uses sophisticated algorithms to detect and track
the features that are naturally found in the image itself.
13. • GPS + Compass + Gyro + Accelerometer
• Location-based applications use the ability of a particular device to record its position in the world and then offer data that's relevant to that location: finding your way around a city, remembering where you parked the car, naming the mountains around you or the stars in the sky.
14.
15. • Computer scientists take output images from computed tomography (CT) to produce a virtual image of the inner body. A modern spiral CT makes several X-ray photographs from diverse perspectives and then reconstructs their 3-dimensional perspective.
A computer-aided tomogram is clearer than a normal X-ray photograph because it enables differentiation of the body's various types of tissue. The computer scientist then superimposes the saved CT scans on a real photo of the patient on the operating table. For surgeons, the impression produced is that of looking through the skin and through the various layers of the body in 3 dimensions and in color.
16. • The "virtual watch" is created by real-time light-reflecting technology that allows the consumer to interact with the design by twisting their wrist for a 360-degree view. Shoppers will be able to "try on" 28 different watches from the Touch collection by the Swiss watchmaker Tissot, and can also experiment with different dials and straps.
17. • Fitting Reality is based on Augmented Reality, be it style or comfort. This is the virtual shopping mall of the future: you can sit at home, try clothing on in the virtual shop, and shop interactively. It is designed for both at-home and in-store experiences.
18. •
The military has been using displays in cockpits
that present information to the pilot on the
windshield of the cockpit or the visor of the flight
helmet. This is a form of augmented reality display.
20. • Augmented reality also provides the ability to recreate the sights and sounds of the ancient world, allowing a tourist to experience a place in time as if he or she were actually present when a given event in history occurred. By viewing a physical environment whose elements are augmented by computer-generated images, the viewer can actually experience a historic place or event as if he or she has traveled back in time.
21. • AR can aid in visualizing building projects. Computer-generated images of a structure can be superimposed onto a real-life local view of a property before the physical building is constructed there. AR can also be employed within an architect's workspace, rendering animated 3D visualizations of their 2D drawings into their view. Architectural sight-seeing can be enhanced with AR applications, allowing users viewing a building's exterior to virtually see through its walls, viewing its interior objects and layout.
22. • AR technology has been successfully used in various educational institutes as an add-on to textbook material or as a virtual 3D textbook in itself. Normally done with head mounts, the AR experience allows students to "relive" events as they are known to have happened, without ever leaving their class. These apps can be implemented on the Android platform, but you need the backing of a course-material provider. Apps like these also have the potential to push AR to the forefront because they have a very large potential user base.
23. •
Word Lens has its limits. The
translation will have mistakes,
and may be hard to understand,
but it usually gets the point
across. If a translation fails, there
is a way to manually look up
words by typing them in. Word
Lens does not read very stylized
fonts, handwriting, or cursive.
24. • There are many, many more uses of AR that cannot be categorized so easily. Most are still in the design and planning stages, but they have the potential to bring AR technology to the forefront of daily gadgets.
28. • Vuforia is an Augmented Reality framework developed by Qualcomm.
•
The Vuforia platform uses superior, stable, and technically
efficient computer vision-based image recognition and offers the
widest set of features and capabilities, giving developers the
freedom to extend their visions without technical limitations. With
support for iOS, Android, and Unity 3D, the Vuforia platform
allows you to write a single native app that can reach the most
users across the widest range of smartphones and tablets.
32. • Cygwin is a Unix-like environment and command-line interface for Microsoft Windows.
• Cygwin provides native integration of Windows-based applications, data, and other system resources with applications, software tools, and data of the Unix-like environment.
33. • Android apps are typically written in Java, with its elegant object-oriented design. At times, however, you need to overcome the limitations of Java, such as memory management and performance, by programming directly against the Android native interface. Android provides the Native Development Kit (NDK) to support native development in C/C++, alongside the Android Software Development Kit (Android SDK), which supports Java.
34. It provides a set of system headers for stable native APIs that are guaranteed to be supported in all later releases of the platform:
• libc (C library) headers
• libm (math library) headers
• JNI interface headers
• libz (Zlib compression) headers
• liblog (Android logging) header
• OpenGL ES 1.1 and OpenGL ES 2.0 (3D graphics library) headers
• A minimal set of headers for C++ support
• OpenSL ES native audio libraries
35. • Download the Vuforia SDK (you need to accept the license agreement before the download can start)
• Extract the contents of the ZIP package and put it into <DEVELOPMENT_ROOT>
• Adjust the Vuforia environment settings in Eclipse
• <DEVELOPMENT_ROOT>
• android-ndk-r8
• android-sdk-windows
• vuforia-sdk-android-xx-yy-zz
36. • Type of augmented reality: image-based
• SDK of demo: Vuforia
• Mobile platform: Android (NDK)
• 3D content rendering with OpenGL ES 1.1
• 3D model in .obj file format
3D Model
Marker
37. • Android NDK applications include Java code and resource files as well as C/C++ source code and sometimes assembly code. All native code is compiled into a dynamically linked library (.so file) and then called from Java in the main program using the JNI mechanism. NDK application development can be divided into five steps:
38. • Creating a sub-directory called "jni" and placing all the native sources there.
• Creating an "Android.mk" to describe our native sources to the NDK build system.
• By default, the NDK build doesn't automatically build for the x86 ABI; we need to create a build file "Application.mk" to explicitly specify our build targets.
Android.mk
Application.mk
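As a minimal sketch, the two build files just described might look like this (the module name DevFestArDemo is taken from a later slide; the source file list and linked libraries here are assumptions):

```makefile
# jni/Android.mk: describes our native sources to the NDK build system
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE    := DevFestArDemo        # produces libDevFestArDemo.so
LOCAL_SRC_FILES := ImageTargets.cpp     # native sources in the jni/ directory
LOCAL_LDLIBS    := -llog -lGLESv1_CM    # Android logging + OpenGL ES 1.1
include $(BUILD_SHARED_LIBRARY)
```

```makefile
# jni/Application.mk: explicitly specify the ABIs to build for
APP_ABI := armeabi armeabi-v7a x86
```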
39. • Building our native code by running the "ndk-build" script (in the NDK install directory) from our project's directory.
• Note that the build system will automatically add the proper lib prefix and .so suffix to the generated file. In other words, a shared library module named 'DevFestArDemo' will generate 'libDevFestArDemo.so'.
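This lib/.so naming rule can also be seen from plain Java, since the JDK exposes the platform's mapping through System.mapLibraryName; a small sketch:

```java
public class LibNameDemo {
    public static void main(String[] args) {
        // On Linux/Android this maps "DevFestArDemo" -> "libDevFestArDemo.so",
        // mirroring the name ndk-build gives the shared library it produces.
        String mapped = System.mapLibraryName("DevFestArDemo");
        System.out.println(mapped);
    }
}
```

This is also the name System.loadLibrary("DevFestArDemo") will look for at runtime.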
40. • Loading the native libs.
• Making a few JNI calls out of the box. In the Java class, look for method declarations starting with "public native".
...
ImageTargets.java
ImageTargets.cpp
41. • We create an ImageTargets class to use and manage the Augmented Reality SDK.
• Initialize application GUI elements that are not related to AR.
42. • InitQCARTask: an async task to initialize QCAR asynchronously.
• Once QCAR is initialized, initialize the image tracker.
44. • This is the texture for our 3D model. We simply load the texture from the assets folder.
texture.png
ImageTargets.cpp
45. • Do application initialization in native code (e.g. registering callbacks, etc.)
• Create a texture for the 3D content, loading it from Texture.java.
ImageTargets.cpp
46. • An async task to load the tracker data asynchronously.
ImageTargets.java
47. • In this step we define our marker in the ImageTargets.cpp file. But first, let me explain the general structure and working principle of a marker.
48. • Image targets can be created with the online Target Manager tool from JPG or PNG input images (only RGB or grayscale images are supported) of 2 MB or less in size. Features extracted from these images are stored in a database, which can then be downloaded and packaged together with your application. The database can then be used by Vuforia for runtime comparisons.
51. •
A feature is a sharp, spiked, chiseled detail in the image, such as the ones present in textured objects. The image analyzer
represents features as small yellow crosses. Increase the number of these details in your image, and verify that the details create a
non-repeating pattern.
Adding a Target
52. • Not enough features. More visual details are required to increase the total number of features.
• Poor feature distribution. Features are present in some areas of this image but not in others. Features need to be distributed uniformly across the image.
• Poor local contrast. The objects in this image need sharper edges or clearly defined shapes in order to provide better local contrast.
53. • This image is not suitable for detection and tracking. We should consider an alternative image or significantly modify this one.
• Although this image may contain enough features and good contrast, repetitive patterns hinder detection performance. For best results, choose an image without repeated motifs (even if rotated and scaled) or strong rotational symmetry.
54.
55. • Loading our data sets into the image tracker.
DevFestTest.xml
ImageTargets.cpp
58. • OpenGL for Embedded Systems (OpenGL ES) is a subset of the OpenGL computer graphics rendering application programming interface (API) for rendering 2D and 3D computer graphics such as those used by video games, typically hardware-accelerated using a graphics processing unit (GPU).
• android.opengl.GLSurfaceView
• android.opengl.GLSurfaceView.Renderer
onDrawFrame(GL10 gl): called to draw the current frame.
onSurfaceChanged(GL10 gl, int width, int height): called when the surface changes size.
onSurfaceCreated(GL10 gl, EGLConfig config): called when the surface is created or recreated.
ImageTargetRenderer.java
59. • First, for each active (visible) trackable we create a modelview matrix from its pose. Then we apply transforms to this matrix in order to scale and position our model. Finally we multiply it by the projection matrix to create the MVP (model view projection) matrix that brings the 3D content to the screen. Later in the code, we bind this MVP matrix to the uniform variable in our shader. Each vertex of our 3D model will be multiplied by this matrix, effectively bringing that vertex from world space to screen space (the transforms are actually object > world > eye > window).
• Next, we need to feed the model arrays (vertices, normals, and texture coordinates) to our shader. We start by binding our shader, then assigning our model arrays to the attribute fields in our shader.
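The matrix chain described above can be sketched in plain Java using column-major 4x4 float arrays, the layout OpenGL ES expects (a hypothetical helper, not the SDK's own code):

```java
public class MvpDemo {
    // Multiply two 4x4 column-major matrices: out = a * b
    static float[] multiply(float[] a, float[] b) {
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++)
                    sum += a[k * 4 + row] * b[col * 4 + k];
                out[col * 4 + row] = sum;
            }
        return out;
    }

    // Apply a 4x4 column-major matrix to a homogeneous vertex (x, y, z, w)
    static float[] transform(float[] m, float[] v) {
        float[] out = new float[4];
        for (int row = 0; row < 4; row++)
            out[row] = m[row] * v[0] + m[4 + row] * v[1]
                     + m[8 + row] * v[2] + m[12 + row] * v[3];
        return out;
    }

    public static void main(String[] args) {
        // Identity "projection" and a modelview translating by (1, 2, 3),
        // standing in for the pose matrix of a tracked target:
        float[] projection = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1};
        float[] modelview  = {1,0,0,0, 0,1,0,0, 0,0,1,0, 1,2,3,1};
        float[] mvp = multiply(projection, modelview);
        // The model's origin ends up at the target's position:
        float[] v = transform(mvp, new float[]{0, 0, 0, 1});
        System.out.println(v[0] + " " + v[1] + " " + v[2]); // 1.0 2.0 3.0
    }
}
```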
60. • I am using the obj2opengl tool for this.
• obj2opengl acts as a converter from model files to C/C++ headers that describe the vertices of the faces, the normals, and the texture coordinates as simple arrays of floats. obj2opengl is a Perl script that reads a Wavefront OBJ file describing a 3D object and writes a C/C++ include file describing the object in a form suitable for use with OpenGL ES. It is compatible with Java and the libraries of the Android SDK.
Heiko Behrens
61. • In this step we create a folder named "Devfest" on the desktop and put our model and the obj2opengl.pl file in the "Devfest" folder. We then need to install a Perl interpreter on our computer in order to run the obj2opengl.pl script. Now we open the Windows command prompt and type the commands shown in this figure.
62.
63. • Now we have a "helicopter.h" file, containing the OpenGL ES vertex arrays, to use in our project.
• Add the helicopter.h file to the jni folder.
helicopter.h
64. • In this step we set up the vertex arrays in our ImageTargets.cpp file to use our 3D model.
• Include the generated arrays in ImageTargets.cpp.
• Set the input data arrays to draw:
glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*) helicopterTexCoords);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*) helicopterVerts);
glNormalPointer(GL_FLOAT, 0, (const GLvoid*) helicopterNormals);
glDrawArrays(GL_TRIANGLES, 0, helicopterNumVerts);
ImageTargets.cpp
65. • Now we add our activities and add some permissions to our project's AndroidManifest.xml file.
AndroidManifest.xml
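A Vuforia camera app needs at least the camera permission, plus the internet permission for any network features; a minimal manifest fragment might look like this (the package and activity names are assumptions):

```xml
<!-- AndroidManifest.xml fragment; package and activity names are hypothetical -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
          package="com.example.devfestardemo">
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.INTERNET" />
    <application android:label="DevFestArDemo">
        <activity android:name=".ImageTargets" />
    </application>
</manifest>
```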