Baselios Mathews II College of Engineering, Sasthamcotta, Kollam
Dept of Electronics & Communication Engineering
CHAPTER 1
INTRODUCTION
Computers that can be mounted on our heads seem like a concept straight out of
science fiction movies. Steve Mann, a professor at the University of Toronto, is a
leader in the field of wearable computing. A company called Meta, in collaboration
with Mann, is creating glasses that aim to loosen the grip of Google Glass on the
emerging market through their ability to merge the real and the virtual. Meta is
building headwear that can superimpose 3D content on the real world. Mann works as
the chief scientist in the company founded by Ben Sand and Meron Gribetz. The first
product, the SpaceGlasses, comes with a see-through projectable LCD for each eye, an
infrared depth camera, and a standard colour camera, in addition to a gyroscope,
accelerometer, and compass. The SpaceGlasses build a three-dimensional model of the
world as you walk around, with the help of an algorithm that tracks flat surfaces in
real time, unlike augmented reality systems that need special markers. The
coordinates obtained from the tracking are sent to the computer, which sends back a
3D model of your surroundings. You can use this to project a movie onto a piece of
paper; several people can view the same object from different angles, or you could
have a 3D model follow you around. The company envisions a future where its
technology replaces the regular computer and becomes something people use to work
together, from architects bent over a table with their teams to design buildings, to
people playing games with friends.
CHAPTER 2
HISTORY OF WEARABLE DIGITAL EYEGLASSES &
TECHNOLOGY
Wearable technology, wearables, fashionable technology, wearable
devices, tech togs, or fashion electronics are clothing and accessories incorporating
computer and advanced electronic technologies. The designs often incorporate
practical functions and features, but may also have a purely critical or aesthetic
agenda. Wearable devices such as activity trackers are a good example of the Internet
of Things, since they are part of the network of physical objects or "things" embedded
with electronics, software, sensors and connectivity that enables objects to exchange
data with a manufacturer, operator and/or other connected devices, without requiring
human intervention. Wearable technology is related to both ubiquitous computing and
the history and development of wearable computers. Wearables make technology
pervasive by interweaving it into daily life. Through the history and development of
wearable computing, pioneers have attempted to enhance or extend the functionality
of clothing, or to create wearables as accessories able to provide users
with sousveillance: the recording of an activity, typically by way of small wearable or
portable personal technologies. Tracking information like movement, steps and heart
rate is part of the quantified-self movement. The origins of wearable technology
are influenced by both of these responses to the vision of ubiquitous computing. One
early piece of widely adopted wearable technology was the calculator watch,
introduced in the 1980s. In 2008, a hidden Bluetooth microphone was incorporated
into a pair of earrings. Around the same time, the Spy Tie appeared, a "stylish neck
tie with a hidden color camera".
Smart glasses, also called Digital Eye Glass or a Personal Imaging
System, are a wearable computer that adds information to what the wearer
sees. Typically this is achieved through an optical head-mounted display (OHMD),
or computerized internet-connected glasses with a transparent heads-up display (HUD)
or augmented reality (AR) overlay, which can reflect projected digital images while
allowing the user to see through it, or to see better with it. While early models could
perform only basic tasks, such as serving as a front-end display for a remote system,
as in the case of smart glasses utilizing cellular technology or Wi-Fi, modern smart
glasses are effectively wearable computers which can run self-contained mobile
applications. Some are hands-free and can communicate with the Internet via
natural-language voice commands, while others use touch buttons. Like
other computers, smart glasses may collect information from internal or external
sensors. They may control, or retrieve data from, other instruments or computers, and
may support wireless technologies like Bluetooth, Wi-Fi, and GPS. A smaller
number of models run a mobile operating system and function as portable media
players, sending audio and video files to the user via a Bluetooth or Wi-Fi
headset. Some smart-glasses models also feature full lifelogging and activity-tracker
capability.
CHAPTER 3
COMPUTER MEDIATED REALITY
Computer-mediated reality refers to the ability to add information to, subtract
information from, or otherwise manipulate one's perception of reality through the use
of a wearable computer or hand-held device such as a smartphone. Typically, it is the
user's visual perception of the environment that is mediated. This is done through the
use of some kind of electronic device, such as an EyeTap device or smartphone,
which can act as a visual filter between the real world and what the user perceives.
Computer-mediated reality has been used to enhance visual perception as an aid to
the visually impaired. This example achieves a mediated reality by capturing, as a
video stream, the light that would normally have reached the user's eyes, and
computationally altering it to filter it into a more useful form. It has also been used
for interactive computer interfaces. The use of computer-mediated reality
to diminish perception, by the removal or masking of visual data, has been used for
architectural applications, and is an area of ongoing research.
Fig 3.1: Mediated reality
3.1 AUGMENTED REALITY
Fig 3.2: Augmediated Reality
Simply put, AR is a field of computer research concerned with combining the
real world and computer-generated data; it has the ability to overlay computer
graphics onto the real world. Unlike in virtual reality, users maintain a sense of
presence in the real world. Broadly, augmented reality is the (most frequently visual)
superposition of real and virtual objects or information in one environment. As a
research area, augmented reality has been pursued for many years with a number of
wide-ranging applications. Many of these systems never left the laboratory due
to cost or other constraints rendering them impractical. However, due to the adoption
of mobile devices with powerful processors, built-in cameras, and fast internet
connections, augmented reality is beginning to infiltrate the average individual's life.
A number of augmented reality applications have appeared in the Apple and Google
application stores. These applications range from spur-of-the-moment information
overlays, like location guides, reviews and ratings, to games that observe the user's
motions to create virtual effects. One good example is Google Goggles, an
application that accepts photos of landmarks, books, artwork, and many other object
types and then returns a Google visual search on the object.
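The overlay step described above can be illustrated with a minimal pinhole-camera projection: the calculation an AR renderer performs to decide where on screen a virtual 3D object should be drawn. The focal length and image size below are illustrative assumptions, not the parameters of any actual device.

```python
# Minimal pinhole-camera projection: the core step an AR system uses to
# decide where on screen a virtual 3D object should be drawn.
# Focal length and image size are illustrative assumptions.

def project_point(point_3d, focal=800.0, cx=640.0, cy=360.0):
    """Project a 3D point in camera coordinates onto the image plane."""
    x, y, z = point_3d
    if z <= 0:
        return None  # behind the camera, nothing to draw
    u = focal * x / z + cx  # horizontal pixel coordinate
    v = focal * y / z + cy  # vertical pixel coordinate
    return (u, v)

# A virtual label 2 m in front of the camera, 0.5 m to the right:
print(project_point((0.5, 0.0, 2.0)))  # (840.0, 360.0)
```

Objects further away project closer to the image centre, which is why overlaid graphics appear anchored to the scene as the camera moves.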
3.2 VIRTUAL REALITY
Fig 3.3: Virtual reality
Virtual reality is a term that applies to computer-simulated environments that
can simulate physical presence in places in the real world, as well as in
imaginary worlds. It covers remote communication environments which provide the
virtual presence of users, with the concepts of telepresence and tele-existence or
a virtual artifact (VA). The simulated environment can be similar to the real
world in order to create a lifelike experience. Virtual reality is often used to
describe a wide variety of applications commonly associated with immersive,
highly visual, 3D environments. The development of CAD software, graphics
hardware acceleration, head-mounted displays, data gloves, and miniaturization
have helped popularize the concept.
CHAPTER 4
META PRO SPACEGLASSES
The Meta Pro SpaceGlasses are among the most innovative glasses in the
wearable-technology world at the moment. They aim to be the future of having a
laptop computer, phone, and tablet all in one: augmented reality specs that layer
digital content, which users can manipulate, over their field of vision. Meta Pro
brings about a mind-boggling world where free-flowing virtual shapes can be molded
and actually produced with 3D printers. The Pro is the first pair of smart glasses to
pack technology capable of delivering stunning holographic interfaces into a single
pair of eyewear.
Fig 4.1: Meta Pro SpaceGlasses
The SpaceGlasses boast a giant 3D holographic HD screen with a 40-degree field of
view (15 times that of Google Glass), an ultra-high-end, sleek, lightweight design,
and independent pocket computing power. You can change your screen size to suit
you on the fly, and have access to all your apps. Link with your laptop, which is now
a hologram, and you can place it anywhere in the world around you.
FATHER OF META & AUGMENTED REALITY
Fig 4.2: Meron Gribetz & Prof. Steve Mann
Meta is a 40-person startup company led by Founder & CEO Meron
Gribetz (Forbes 30 under 30) and Chief Scientist Prof. Steve Mann (inventor of
wearable computing and pioneer of augmented reality).
4.1 SPECIFICATIONS OF META
HARDWARE                                SOFTWARE
Projection: two individual screens      Supported platform: currently Windows 32/64-bit
Weight: 0.3 kg                          Intel Core i5 CPU
Operating temperature: -30 to 300 deg   4 GB of RAM
Powered by a 32 WHr battery             802.11n Wi-Fi, Bluetooth 4.0
Resolution: 1280x720 for each eye       Meta app store

Table 4.1: Specifications
CHAPTER 5
OVERVIEW
Fig 5.1: Overview of Meta Pro
The glasses feature two 1280×720-pixel LCD displays, each with a 40-degree
field of view and aligned for stereoscopic 3D; twin RGB cameras; 3D surround
sound; a 3D time-of-flight depth camera; and a 9-axis integrated motion unit with
accelerometer, gyroscope and compass. If you order a pair of Pros, not only do you
get this innovative technology, but you also get a wearable computer powerful
enough to run them: an Intel Core i5 CPU, 4 GB of RAM, 128 GB of storage, and
802.11n Wi-Fi and Bluetooth 4.0, powered by a 32 WHr battery. The glasses can
create a 3D image of your smartphone, tablet and laptop, allowing you to control
your devices through a pair of glasses.
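The 9-axis motion unit mentioned above is typically fused into a stable head orientation. A minimal sketch of one common approach is a complementary filter blending gyroscope and accelerometer readings; the blend factor and sample period are illustrative assumptions, not Meta's implementation.

```python
import math

# Minimal complementary filter fusing gyroscope and accelerometer readings
# into a pitch angle, as a 9-axis motion unit like the one in the glasses
# might. The 0.98/0.02 blend and the sample period are illustrative.

def update_pitch(pitch, gyro_rate, accel_y, accel_z, dt=0.01, alpha=0.98):
    """One filter step: integrate the gyro, correct drift with the accelerometer."""
    gyro_pitch = pitch + gyro_rate * dt            # fast but drifts over time
    accel_pitch = math.atan2(accel_y, accel_z)     # noisy but drift-free
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

pitch = 0.0
for _ in range(100):  # stationary device: accelerometer sees gravity on z
    pitch = update_pitch(pitch, gyro_rate=0.0, accel_y=0.0, accel_z=9.81)
print(round(pitch, 6))  # 0.0 when there is no motion
```

The gyroscope gives smooth short-term tracking while the accelerometer's gravity reading slowly pulls any accumulated drift back to zero.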
5.1 OPTICAL HEAD-MOUNTED DISPLAY
An optical head-mounted display (OHMD) is a wearable device that has the
capability of reflecting projected images as well as allowing the user to see through
it, which is augmented reality. Head-mounted displays are not designed to be
workstations, and traditional input devices such as keyboards do not suit the concept
of smart glasses. Input devices that lend themselves to mobility and/or hands-free
use are good candidates.
5.1.1 VIRTUAL RETINAL DISPLAY
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal
projector (RP), is a display technology that draws a raster display (like a television)
directly onto the retina of the eye. The user sees what appears to be a conventional
display floating in space in front of them.
Fig 5.1.1 VRD
In a conventional display a real image is produced. The real image is either
viewed directly or, as is the case with most head-mounted displays, projected through
an optical system and the resulting virtual image is viewed. The projection moves the
virtual image to a distance that allows the eye to focus comfortably. In a VRD no real
image is ever produced. Rather, an image is formed directly on the retina of the user's
eye. A block diagram of the VRD is shown in the figure above. To create an image
with the VRD, a photon source is used to generate a coherent beam of light. The use of
a coherent source allows the system to draw a diffraction-limited spot on the retina.
The light beam is intensity-modulated to match the intensity of the image being
rendered. The modulation can be accomplished after the beam is generated; if the
source has enough modulation bandwidth, as in the case of a laser diode, the source
can be modulated directly. The resulting modulated beam is then scanned to place
each image point, or pixel, at the proper position on the retina. A variety of scan
patterns are possible. The scanner could be used in a calligraphic (vector) mode, in
which the lines that form the image are drawn directly, or in a raster mode, much like
standard computer monitors or television. Use of the raster method of image scanning
allows the VRD to be driven by standard video sources. To draw the raster, a
horizontal scanner moves the beam to draw a row of pixels; the vertical scanner then
moves the beam to the next line, where another row of pixels is drawn. After scanning,
the optical beam must be properly projected into the eye. The goal is for the exit pupil
of the VRD to be coplanar with the entrance pupil of the eye. The lens and cornea of
the eye will then focus the beam on the retina, forming a spot. The position on the
retina where the eye focuses the spot is determined by the angle at which light enters
the eye. This angle is determined by the scanners and is continually varying in a raster
pattern. The brightness of the focused spot is determined by the intensity modulation
of the light beam. The intensity-modulated moving spot, focused through the eye,
draws an image on the retina. The eye's persistence allows the image to appear
continuous and stable. Finally, the drive electronics synchronize the scanners and
intensity modulator with the incoming video signal in such a manner that a stable
image is formed.
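The raster geometry described above can be sketched as a simple mapping from pixel index to beam deflection angle. The 40-degree horizontal field of view matches the Meta specification; the linear mapping and the absence of an optics model are simplifying assumptions.

```python
# Sketch of the raster geometry described above: each pixel's position on
# the retina is set by the angle at which the scanned beam enters the eye.
# A linear pixel-to-angle mapping is a simplifying assumption.

def pixel_to_angles(col, row, width=1280, height=720, h_fov=40.0):
    """Map a pixel index to horizontal/vertical beam deflection in degrees."""
    v_fov = h_fov * height / width          # keep pixels square
    h_angle = (col / (width - 1) - 0.5) * h_fov
    v_angle = (row / (height - 1) - 0.5) * v_fov
    return h_angle, v_angle

print(pixel_to_angles(0, 0))        # top-left corner: (-20.0, -11.25)
print(pixel_to_angles(1279, 719))   # bottom-right corner: (20.0, 11.25)
```

The horizontal scanner sweeps through one row of such angles per line while the vertical scanner steps between rows, exactly as the paragraph above describes.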
5.1.2 HOLOGRAPHIC INTERFACE
A holographic interface is a computer input method that utilizes a projected image
instead of a physical device. Holographic interfaces are popular on computer systems
that see a variety of uses, as well as those that require extremely complex inputs.
Because they are more expensive than physical interfaces, they tend to be restricted to
wealthy areas. Holographic interfaces utilize holoprojectors to create a 3D image in
whatever configuration is most appropriate to the program used. Because the image is
constructed only of light, the configuration can be as simple or complex as needed,
and easily scaled to a size that the user finds comfortable. Additionally, it can be
dynamically reconfigured, so a user can quickly switch from a simplified interface to
a more involved one as needed. Interaction with the image is recorded via sensors in
the projector, not through direct manipulation of the hologram. These sensors can
record the precise location of the user's fingers (or other appendages, for interfaces
that utilize them), enabling pinpoint control. A holographic interface is thus a way to
interact with electronics without coming into physical contact with the machine.
Though the holographic interface was only developed in the 2010s, it is often
compatible with contemporary computer systems and programs. The creation of the
hologram is a relatively complex component of the interface, whereas the ability of
the interface to recognize the commands of the user is achieved through the use of
motion detectors, a technology that has been in use for decades. In order to create a
holographic interface, a special holographic projector is needed that can display a
three-dimensional image in space.
CHAPTER 6
WORKING OF META PRO SPACEGLASS
The glasses run a Unity-based desktop, and through that desktop the user can open
certain files and programs. For access to all Windows-based programs, the team
suggests using Unity's own Virtual Desktop, though Meta does not officially support
this. What is displayed by the glasses does not necessarily move with the movement
of the head. For example, say you are looking at some virtual picture, and you want
to fix that picture at some point in space. You can do that and look away; when you
look back, it is exactly where you hung it. Or, if you want that picture to be fixed in
your field of view, you can do that, too. The device uses the SoftKinetic DS325 for
monitoring movements, and the iisu middleware for programming needs. The
SoftKinetic camera does not do body tracking, only short-range tracking and hand
tracking. The iisu Pro will ship free with the Meta SDK. The team knows about the
problems Oculus Rift had with customs, and they are going to try to avoid them.
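The world-fixed picture described above can be sketched as a coordinate transform: the renderer applies the inverse head rotation, so the picture keeps its place in the world while its position in the user's view changes. A yaw-only rotation is a simplifying assumption.

```python
import math

# Sketch of "world-locked" rendering: a virtual picture keeps its world
# position, so when the head turns, the renderer applies the inverse head
# rotation to find where the picture falls in the user's view.
# Yaw-only rotation is a simplifying assumption.

def world_to_view(point, head_yaw):
    """Rotate a world-space point (x, z) into head-relative coordinates."""
    x, z = point
    c, s = math.cos(-head_yaw), math.sin(-head_yaw)
    return (c * x - s * z, s * x + c * z)

picture = (0.0, 2.0)  # hung 2 m straight ahead in world space
# Head turned 90 degrees: the picture is now off to one side of the view.
x, z = world_to_view(picture, math.pi / 2)
print(round(x, 6), round(z, 6))  # 2.0 0.0
```

A view-fixed ("head-locked") picture, by contrast, simply skips this transform and is drawn at constant view coordinates.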
6.1 SPEECH RECOGNITION
In computer science and electrical engineering, speech recognition (SR) is the
translation of spoken words into text. It is also known as "automatic speech
recognition" (ASR), "computer speech recognition", or just "speech to text" (STT).
Some SR systems use "training" ( "enrolment") where an individual speaker reads
text or isolated vocabulary into the system. The system analyzes the person's specific
voice and uses it to fine-tune the recognition of that person's speech, resulting in
increased accuracy. Systems that do not use training are called "speaker
independent" systems.
The advances are evidenced not only by the surge of academic papers published in the
field, but more importantly by the world-wide industry adoption of a variety of deep
learning methods in designing and deploying speech recognition systems.
Both acoustic modeling and language modeling are important parts of modern
statistically-based speech recognition algorithms. Hidden Markov models (HMMs)
are widely used in many systems. Language modeling is also used in many other
natural language processing applications such as document classification or statistical
machine translation.
6.1.1 Hidden Markov models
Modern general-purpose speech recognition systems are based on Hidden Markov
Models. These are statistical models that output a sequence of symbols or quantities.
HMMs are used in speech recognition because a speech signal can be viewed as a
piecewise stationary signal or a short-time stationary signal. In a short time-scale
(e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech
can be thought of as a Markov model for many stochastic purposes. Another reason
why HMMs are popular is because they can be trained automatically and are simple
and computationally feasible to use. In speech recognition, the hidden Markov model
would output a sequence of n-dimensional real-valued vectors (with n being a small
integer, such as 10), outputting one of these every 10 milliseconds. The vectors would
consist of cepstral coefficients, which are obtained by taking a Fourier transform of a
short time window of speech, decorrelating the spectrum using a cosine transform,
and then taking the first (most significant) coefficients. The hidden Markov model
will tend to have in each state a statistical distribution that is a mixture of diagonal-
covariance Gaussians, which gives a likelihood for each observed vector. Each
word, or (for more general speech recognition systems) each phoneme, will have a
different output distribution; a hidden Markov model for a sequence of words or
phonemes is made by concatenating the individually trained hidden Markov models
for the separate words and phonemes.
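The decoding step, finding the most likely hidden state sequence for a series of observations, is usually done with the Viterbi algorithm. A toy sketch with a two-state model follows; all probabilities and symbols are invented for illustration.

```python
# Toy Viterbi decoding over a two-state HMM, illustrating how a recognizer
# picks the most likely hidden state (e.g. phoneme) sequence for a series
# of observed acoustic symbols. All probabilities here are invented.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely hidden state path for the observations."""
    # Best path and its probability for each possible current state.
    path = {s: ([s], start_p[s] * emit_p[s][obs[0]]) for s in states}
    for o in obs[1:]:
        new_path = {}
        for s in states:
            # Pick the predecessor state that maximizes the path probability.
            prev, p = max(
                ((ps, pp * trans_p[ps][s]) for ps, (_, pp) in path.items()),
                key=lambda t: t[1],
            )
            new_path[s] = (path[prev][0] + [s], p * emit_p[s][o])
        path = new_path
    return max(path.values(), key=lambda t: t[1])[0]

states = ("A", "B")
start = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(viterbi(["x", "x", "y"], states, start, trans, emit))  # ['A', 'A', 'B']
```

Concatenating per-word or per-phoneme models, as described above, simply enlarges the state set that this same search runs over.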
6.2 GESTURE RECOGNITION
Gesture recognition is a topic in computer science and language technology with the
goal of interpreting human gestures via mathematical algorithms. Gestures can
originate from any bodily motion or state but commonly originate from
the face or hand. Current focuses in the field include emotion recognition from face
and hand gesture recognition. Many approaches have been made using cameras
and computer vision algorithms to interpret sign language. However, the
identification and recognition of posture, gait, proxemics, and human behaviors is
also the subject of gesture recognition techniques. Gesture recognition can be seen as
a way for computers to begin to understand human body language, thus building a
richer bridge between machines and humans than primitive text user interfaces or
even GUIs (graphical user interfaces), which still limit the majority of input to
keyboard and mouse. Gesture recognition enables humans to communicate with the
machine (HMI) and interact naturally without any mechanical devices. Using the
concept of gesture recognition, it is possible to point a finger at the computer
screen so that the cursor will move accordingly. This could potentially make
conventional input devices such as mice, keyboards and even touch-screens
redundant. Gesture recognition can be conducted with techniques from computer
vision and image processing. The ability to track a person's movements and
determine what gestures they may be performing can be achieved through various
tools. Although there is a large amount of research done in image/video-based
gesture recognition, there is some variation within the tools and environments used
between implementations.
Wired gloves: These can provide input to the computer about the position and
rotation of the hands using magnetic or inertial tracking devices. Furthermore, some
gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or
even provide haptic feedback to the user, which is a simulation of the sense of touch.
Depth-aware cameras: Using specialized cameras such as structured-light or time-
of-flight cameras, one can generate a depth map of what is being seen through the
camera at short range, and use this data to approximate a 3D representation of what
is being seen. These can be effective for detection of hand gestures due to their
short-range capabilities.
Stereo cameras: Using two cameras whose relations to one another are known, a 3D
representation can be approximated by the output of the cameras. To get the cameras'
relations, one can use a positioning reference such as a lexian-stripe or infrared
emitters. In combination with direct motion measurement (6D-Vision), gestures can
be detected directly.
Controller-based gestures: These controllers act as an extension of the body so that
when gestures are performed, some of their motion can be conveniently captured by
software. Mouse gestures are one such example, where the motion of the mouse is
correlated to a symbol being drawn by a person's hand.
Single camera: A standard 2D camera can be used for gesture recognition where the
resources/environment would not be convenient for other forms of image-based
recognition. Earlier it was thought that a single camera may not be as effective as
stereo or depth-aware cameras, but some companies are challenging this theory.
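As a sketch of how a depth-aware camera's output feeds gesture recognition, the following back-projects a depth-map pixel into a 3D point. The camera intrinsics are illustrative assumptions, not the parameters of any particular sensor.

```python
# Sketch of turning a depth map from a time-of-flight camera into 3D points
# for hand tracking. The focal length and principal point (cx, cy) are
# illustrative assumptions, not real sensor parameters.

def backproject(u, v, depth, focal=240.0, cx=160.0, cy=120.0):
    """Convert a depth-map pixel (u, v) with depth in metres to a 3D point."""
    x = (u - cx) * depth / focal  # sideways offset grows with depth
    y = (v - cy) * depth / focal
    return (x, y, depth)

# A pixel 80 px right of the image centre, 0.5 m from the camera:
print(backproject(240, 120, 0.5))  # x about 0.167 m, y = 0.0, z = 0.5
```

Applying this to every pixel of the depth map yields the approximate 3D representation of the hand that the passage above describes.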
6.2.1 Algorithms
Fig 6.1 Algorithm
Different ways of tracking and analyzing gestures exist, and a basic layout is
given in the diagram above. For example, volumetric models convey the necessary
information required for an elaborate analysis; however, they prove to be very
intensive in terms of computational power and require further technological
developments in order to be implemented for real-time analysis. On the other hand,
appearance-based models are easier to process but usually lack the generality
required for human-computer interaction.
Depending on the type of the input data, the interpretation of a gesture can be done
in different ways. However, most of the techniques rely on key pointers represented
in a 3D coordinate system. Based on the relative motion of these, the gesture can be
detected with high accuracy, depending on the quality of the input and the
algorithm's approach. In order to interpret movements of the body, one has to
classify them according to common properties and the message the movements may
express. For example, in sign language each gesture represents a word or phrase. A
taxonomy that seems very appropriate for human-computer interaction has been
proposed by Quek in "Toward a Vision-Based Hand Gesture Interface". He presents
several interactive gesture systems in order to capture the whole space of gestures:
1. manipulative; 2. semaphoric; 3. conversational. Some literature differentiates two
different approaches in gesture recognition: 3D-model-based and appearance-
based.[21] The former method makes use of 3D information about key elements of
the body parts in order to obtain several important parameters, like palm position or
joint angles. On the other hand, appearance-based systems use images or videos for
direct interpretation.
Fig 6.2: A real hand (left) is interpreted as a collection of vertices and lines in the 3D
mesh version (right), and the software uses their relative position and interaction in
order to infer the gesture
a) 3D model-based algorithms
The 3D model approach can use volumetric or skeletal models, or even a combination
of the two. Volumetric approaches have been heavily used in the computer animation
industry and for computer vision purposes. The models are generally built from
complicated 3D surfaces, like NURBS or polygon meshes.
Fig 6.3: 3D model-based algorithms
The skeletal version (right) effectively models the hand (left). It has fewer
parameters than the volumetric version and is easier to compute, making it suitable
for real-time gesture-analysis systems.
b) Skeletal-based algorithms
Instead of using intensive processing of the 3D models and dealing with a lot of
parameters, one can just use a simplified set of joint-angle parameters along with
segment lengths. This is known as a skeletal representation of the body, where a
virtual skeleton of the person is computed and parts of the body are mapped to
certain segments.
6.3 EYE TRACKING
Eye tracking is the process of measuring either the point of gaze (where one is
looking) or the motion of an eye relative to the head. An eye tracker is a device for
measuring eye positions and eye movement. Eye trackers are used in research on
the visual system, in psychology, in psycholinguistics, in marketing, as an input
device for human-computer interaction, and in product design. There are a number
of methods for measuring eye movement; the most popular variant uses video
images from which the eye position is extracted.
6.3.1 OPTICAL TRACKING
Light, typically infrared, is reflected from the eye and sensed by a video camera or
some other specially designed optical sensor. The information is then analyzed to
extract eye rotation from changes in reflections. Video-based eye trackers typically
use the corneal reflection and the center of the pupil as features to track over time.
A still more sensitive method of tracking is to image features from inside the eye,
such as the retinal blood vessels, and to follow these features as the eye rotates. Optical
methods, particularly those based on video recording, are widely used for gaze
tracking and are favored for being non-invasive and inexpensive.
Fig 6.3.1 An eye-tracking head-mounted display
Each eye has an LED light source (gold-colored metal) on the side of the display
lens, and a camera under the display lens.
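The pupil-and-glint tracking described above is typically turned into a gaze point by a calibrated mapping. A sketch using a simple linear calibration follows; the gain and screen geometry are illustrative assumptions, and real trackers fit richer polynomial models during a calibration routine.

```python
# Sketch of pupil-to-gaze mapping in a video-based eye tracker: the vector
# between the corneal reflection (glint) and the pupil centre is mapped to
# a screen coordinate. The linear gain and screen centre are illustrative
# assumptions standing in for a per-user calibration.

def gaze_point(pupil, glint, gain=(12.0, 12.0), screen_centre=(640, 360)):
    """Map the pupil-glint offset (in camera pixels) to a screen position."""
    dx = pupil[0] - glint[0]
    dy = pupil[1] - glint[1]
    return (screen_centre[0] + gain[0] * dx, screen_centre[1] + gain[1] * dy)

# Pupil 10 px right of the glint: gaze lands right of screen centre.
print(gaze_point(pupil=(110, 50), glint=(100, 50)))  # (760.0, 360.0)
```

Using the glint as the reference point is what makes the method tolerant of small headset shifts: both features move together when the camera moves.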
CHAPTER 7
TECHNOLOGY USED
7.1 SDK (SOFTWARE DEVELOPMENT KIT)
The Meta Software Developer Kit (SDK) is a bundled package for developers to
start building Meta applications with. It provides access to prefabs and classes you
can use to manipulate the 3D holographic content you view through the Meta glasses.
The Meta SDK is designed to let you develop your concepts and ideas into working
applications as quickly and efficiently as possible. Built on top of the Unity engine,
the SDK has full support for hand gestures, marker-based and markerless surface
tracking, imported graphics, and 3D models.
Fig 7.1 SDK
There are any number of uses for the Meta Pro, as the company has an SDK which
allows developers to create programs to use with the glasses. In our visit, Meta’s CEO
and founder Meron Gribetz showed us how the glasses can be used in place of
traditional CAD software to design a 3D printed object using only your hands.
7.2 BRAIN–COMPUTER INTERFACE (BCI)
A brain–computer interface (BCI), sometimes called a mind-machine
interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a
direct communication pathway between the brain and an external device. BCIs are
often directed at assisting, augmenting, or repairing human cognitive or sensory-
motor functions. Research on BCIs began in the 1970s at the University of California,
Los Angeles (UCLA) under a grant from the National Science Foundation, followed
by a contract from DARPA. The papers published after this research also mark the
first appearance of the expression brain-computer interface in scientific literature.
The field of BCI research and development has since focused primarily on
neuroprosthetics applications that aim at restoring damaged hearing, sight and
movement. Thanks to the remarkable cortical plasticity of the brain, signals from
implanted prostheses can, after adaptation, be handled by the brain like natural sensor
or effector channels.[3] Following years of animal experimentation, the first
neuroprosthetic devices implanted in humans appeared in the mid-1990s.
7.3 3-D STEREOSCOPIC DISPLAY
Stereoscopy (also called stereoscopics) is a technique for creating or enhancing
the illusion of depth in an image by means of stereopsis for binocular vision. Any
stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of
stereo images which could be viewed using a stereoscope. Most stereoscopic methods
present two offset images separately to the left and right eye of the viewer.
These two-dimensional images are then combined in the brain to give the perception
of 3D depth. This technique is distinguished from 3D displays that display an image
in three full dimensions, allowing the observer to gain additional information about the
3-dimensional objects being displayed through head and eye movements.
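The horizontal offset between the two images follows directly from viewing geometry: by similar triangles, a point at depth z from a viewer whose eyes (separation e) are at distance D from the screen needs an on-screen parallax of e·(z − D)/z. A short worked example, assuming a typical 63 mm eye separation and a screen 1 m away:

```python
# Screen parallax for a stereoscopic display: the horizontal offset between
# the left- and right-eye images needed to make a point appear at depth z.
# Geometry: eyes at distance D from the screen, separated by e; similar
# triangles give p = e * (z - D) / z.

def screen_parallax(z, eye_sep=0.063, screen_dist=1.0):
    """Return the on-screen offset (metres) for a point at viewer depth z."""
    return eye_sep * (z - screen_dist) / z

print(screen_parallax(1.0))  # 0.0     -> point appears on the screen plane
print(screen_parallax(2.0))  # 0.0315  -> behind the screen (uncrossed offset)
print(screen_parallax(0.5))  # -0.063  -> in front of the screen (crossed offset)
```

Note that as z grows without bound the parallax approaches the eye separation itself, which is why distant scenery in stereo content is offset by roughly the interocular distance.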
Multiview and head-tracked autostereoscopic displays combine the effects of both
stereo parallax and movement parallax to give 3D without glasses. The best
implementations produce a perceived effect similar to a white-light hologram.
• TrueScaleStereo™ mode
• Dimensions, depth, and 3D position of graphics match real life
• Real occlusion
• 1280×720 pixels (Meta Pro)
7.4 Tegra K1 - NEXT AR ENABLER CHIP
Fig 7.2 Tegra K1
The Tegra K1 features a GPU built on the Kepler architecture, which has
spent two years powering desktop computers, notebooks, and the world's fastest
supercomputer, TITAN. Featuring up to 192 Kepler cores, the Tegra K1 is a
next-generation mobile super chip aimed at high-performance mobile devices.
NVIDIA has also announced its highly anticipated Denver CPU, which utilizes
specialized 64-bit ARM cores fused alongside the GPU die. The Tegra K1 chip
is available in two variants: the dual-core variant with the 7-way superscalar
Denver CPU features 64-bit ARMv8 cores and 192 Kepler cores clocked at
2.5 GHz, while the 32-bit quad-core ARM variant is clocked at 2.3 GHz. The
64-bit Denver model comes with a 128 KB + 64 KB L1 cache, while the 32-bit
variant comes with a 32 KB + 32 KB L1 cache.
7.5 ZERO UI TECHNOLOGY
It seems impossible for the behaviors, habits, devices, and interactions we
have so rapidly absorbed into the fabric of our lives not to continue indefinitely from
here on. This is probably what society thought about the steam engine in the 18th
century as well. We are entering a new era of Zero UI: not only is the technology
already being invented, but emergent products and services are already on the market
that will usher in this shift. Zero UI refers to a paradigm where our movements, voice,
glances, and even thoughts can all cause systems to respond to us through our
environment. At its extreme, it implies a screen-less, invisible user interface where
natural gestures trigger interactions, as if the user were communicating with another
person. It would require many technologies to converge and become significantly
more sophisticated, particularly voice recognition and motion sensing, but these
technologies are evolving rapidly. The word "screen" itself carries a tension: it
simultaneously means an object we look at and something we hide behind. Even
the small hand-sized device becomes a barrier in social situations, absorbing our gaze
and taking us elsewhere. The replacement of these monolithic screen-based
devices by ambient technology that surrounds and immerses us may in the end be a
very good thing; social interactions could become more natural again and not as
obviously mediated by devices. Our attention could again return to the people sitting
across the dining table, instead of those half a continent away. Zero UI raises new
questions about what it means to design and build these interfaces, and what it will
mean to live alongside or even "inside" them. Zero UI will not be limited to personal
devices but will extend to homes, entire cities, even environments and ecosystems,
and as a result will have a massive impact on society as a whole.
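The essence of a Zero UI system is a mapping from recognized natural inputs to actions, rather than from on-screen controls. A minimal sketch of such an event dispatcher is shown below; a real system would sit on top of speech and motion recognizers, so here the recognized events are simply represented as strings, and all handler names are illustrative:

```python
# Minimal sketch of a Zero UI-style event dispatcher: natural inputs
# (gestures, voice phrases) are mapped directly to system actions.

actions = {}

def on(modality, event):
    """Register a handler for a recognized (modality, event) input."""
    def register(handler):
        actions[(modality, event)] = handler
        return handler
    return register

@on("gesture", "swipe_left")
def next_screen():
    return "switched to next virtual screen"

@on("voice", "lights on")
def lights_on():
    return "living-room lights turned on"

def dispatch(modality, event):
    """Route a recognized input to its action, if one is registered."""
    handler = actions.get((modality, event))
    return handler() if handler else "unrecognized input"

print(dispatch("gesture", "swipe_left"))  # switched to next virtual screen
print(dispatch("voice", "lights on"))     # living-room lights turned on
```

The dictionary-of-handlers design keeps the interface invisible: new gestures or voice commands are added by registering handlers, with no screen layout to change.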
CHAPTER 8
DATA PROCESSING
8.1 META COMPUTER VISION PROCESSORS
Fig 8.1 Meta computer vision processors
8.2 HAND SURFACE INTERACTION
Several studies have been carried out on augmented reality (AR)-based
environments that deal with user interfaces for manipulating and interacting with
virtual objects, aimed at improving the immersive feeling and natural interaction.
Most of these studies have utilized AR paddles or AR cubes for interaction.
However, these interactions overly constrain users in their ability to directly
manipulate AR objects and are limited in providing a natural feel in the user
interface. A more recent approach enables natural and intuitive interaction through a
directly hand-touchable interface in various AR-based user experiences. It combines
markerless augmented reality with a depth camera to effectively detect multiple hand
touches in an AR space. Furthermore, to simplify hand-touch recognition, the point
cloud generated by the depth sensor (e.g., Kinect) is analyzed and filtered. This
approach can easily trigger AR interactions, allows users to experience more intuitive
and natural sensations, and provides greater control efficiency in diverse AR
environments. It can also solve the occlusion problem of the hand and arm region
inherent in conventional AR approaches through analysis of the extracted point
cloud. The effectiveness and advantages of the approach have been demonstrated in
implementations such as interactive AR car design and a touchable AR pamphlet, as
well as in a usability study comparing it with other well-known AR interactions.
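The filtering step described above can be sketched very simply: once the point cloud is expressed in surface-aligned coordinates, touch detection reduces to keeping points whose height above the surface falls below a small threshold. The coordinates, threshold, and point values below are all hypothetical:

```python
# Sketch of hand-touch detection by filtering a depth-camera point cloud.
# Assumption: the tracked surface is the plane z = 0 in surface-aligned
# coordinates, so z is simply each point's height above the surface.

TOUCH_THRESHOLD = 0.015  # points within 15 mm of the surface count as touches

def touch_points(cloud, threshold=TOUCH_THRESHOLD):
    """Keep only points close enough to the surface to be touch contacts."""
    return [(x, y, z) for (x, y, z) in cloud if 0.0 <= z <= threshold]

# Hypothetical hand point cloud: (x, y, height-above-surface) in metres.
hand_cloud = [
    (0.10, 0.20, 0.010),  # fingertip pressed near the surface -> touch
    (0.11, 0.21, 0.012),  # fingertip -> touch
    (0.12, 0.25, 0.080),  # knuckle, well above the surface
    (0.15, 0.30, 0.150),  # wrist
]

contacts = touch_points(hand_cloud)
print(len(contacts))  # 2
```

A real pipeline would first segment the hand from the rest of the cloud and fit the surface plane; the retained near-surface points can also be used to mask the hand region and address the occlusion problem mentioned above.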
CHAPTER 9
ADVANTAGES OF META PRO SPACEGLASSES
1. 3D visualization
2. No need for PCs, laptops, etc.
3. Supports a large number of apps
4. Multiple virtual screens can be extracted
5. Claimed to be one of the fastest wearable devices available
CHAPTER 10
APPLICATIONS
Meta’s Augmented Reality platform has attracted over a thousand development
groups that are building applications in the areas of:
• Productivity
• Architecture
It is also applicable in:
• Industrial design
• Data visualization
• Medical, simulation, and training
• Communications
• Gaming
CHAPTER 11
CONCLUSION
Augmented Reality is just around the corner for the excited consumer market.
MetaPro Spaceglasses are one of the most highly anticipated smartglasses brands for
many reasons. Not only have they been hailed as the best-looking smart glasses by
Forbes Magazine, but they also claim to offer the most advanced AR experience in
the world. MetaPro offers a true 3D holographic augmented-reality experience to its
users. One of the features is ZeroUI, a virtual 3D modelling technology that enables
you to create holographic models by essentially shaping the ether in front of the
glasses. If you have access to a 3D printer, you can then transfer the data from
MetaPro and bring your hologram to life in the physical world. These super
smartglasses also replicate your hardware devices and display them as holograms;
for instance, MetaPro would create a hologram of your laptop which you can then
interact with via gesture recognition. In terms of specs, MetaPro Spaceglasses win
against any other smartglasses. One reason is that Meta hasn't crammed all the
technology into the frames of the glasses; instead, they come with a high-powered
external pocket computer with 4 GB of RAM, a 128 GB SSD, and an Intel Core i5
processor.
REFERENCES
[1] www.wikipedia.com
[2] www.spceglass.com
[3] www.metapro.com
[4] Meta 3D glasses – Business Insider
[5] www.techcrunch/meta.com
[6] www.kickstarter.com//meta
APPENDIX

Contenu connexe

Tendances

Virtual reality (vr) presentation
Virtual reality (vr) presentation Virtual reality (vr) presentation
Virtual reality (vr) presentation Ranjeet Kumar
 
Marker Based Augmented Reality
Marker Based Augmented RealityMarker Based Augmented Reality
Marker Based Augmented RealityArshiya Sayyed
 
Sixth sense technology ppt
Sixth sense technology pptSixth sense technology ppt
Sixth sense technology pptMohammad Adil
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseMark Billinghurst
 
20 Latest Computer Science Seminar Topics on Emerging Technologies
20 Latest Computer Science Seminar Topics on Emerging Technologies20 Latest Computer Science Seminar Topics on Emerging Technologies
20 Latest Computer Science Seminar Topics on Emerging TechnologiesSeminar Links
 
Introduction to Extended Reality - XR
Introduction to Extended Reality - XRIntroduction to Extended Reality - XR
Introduction to Extended Reality - XRKumar Ahir
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsMark Billinghurst
 
Virtual reality ppt
Virtual reality pptVirtual reality ppt
Virtual reality pptdiksha gaur
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsMark Billinghurst
 
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...Simplilearn
 
Comp4010 2021 Lecture2-Perception
Comp4010 2021 Lecture2-PerceptionComp4010 2021 Lecture2-Perception
Comp4010 2021 Lecture2-PerceptionMark Billinghurst
 
The sixth sense technology complete ppt
The sixth sense technology complete pptThe sixth sense technology complete ppt
The sixth sense technology complete pptatinav242
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: PerceptionMark Billinghurst
 

Tendances (20)

Virtual reality (vr) presentation
Virtual reality (vr) presentation Virtual reality (vr) presentation
Virtual reality (vr) presentation
 
Marker Based Augmented Reality
Marker Based Augmented RealityMarker Based Augmented Reality
Marker Based Augmented Reality
 
Sixth sense technology ppt
Sixth sense technology pptSixth sense technology ppt
Sixth sense technology ppt
 
Empathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader MetaverseEmpathic Computing: Designing for the Broader Metaverse
Empathic Computing: Designing for the Broader Metaverse
 
20 Latest Computer Science Seminar Topics on Emerging Technologies
20 Latest Computer Science Seminar Topics on Emerging Technologies20 Latest Computer Science Seminar Topics on Emerging Technologies
20 Latest Computer Science Seminar Topics on Emerging Technologies
 
Augmented Reality
Augmented RealityAugmented Reality
Augmented Reality
 
Introduction to Extended Reality - XR
Introduction to Extended Reality - XRIntroduction to Extended Reality - XR
Introduction to Extended Reality - XR
 
Comp4010 lecture11 VR Applications
Comp4010 lecture11 VR ApplicationsComp4010 lecture11 VR Applications
Comp4010 lecture11 VR Applications
 
Virtual reality ppt
Virtual reality pptVirtual reality ppt
Virtual reality ppt
 
Comp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and SystemsComp4010 Lecture9 VR Input and Systems
Comp4010 Lecture9 VR Input and Systems
 
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...
Blockchain Explained | How Does A Blockchain Work | Blockchain Explained Simp...
 
Augmented reality..
Augmented reality..Augmented reality..
Augmented reality..
 
Introduction to IoT (Internet of Things)
Introduction to IoT (Internet of Things)Introduction to IoT (Internet of Things)
Introduction to IoT (Internet of Things)
 
Augmented reality
Augmented realityAugmented reality
Augmented reality
 
Blockchain
BlockchainBlockchain
Blockchain
 
Virtual Reality
Virtual RealityVirtual Reality
Virtual Reality
 
Comp4010 2021 Lecture2-Perception
Comp4010 2021 Lecture2-PerceptionComp4010 2021 Lecture2-Perception
Comp4010 2021 Lecture2-Perception
 
The sixth sense technology complete ppt
The sixth sense technology complete pptThe sixth sense technology complete ppt
The sixth sense technology complete ppt
 
2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception2022 COMP4010 Lecture2: Perception
2022 COMP4010 Lecture2: Perception
 
Introduction to Internet of Things (IoT)
Introduction to Internet of Things (IoT)Introduction to Internet of Things (IoT)
Introduction to Internet of Things (IoT)
 

En vedette

Resume desktop support 2016
Resume desktop support 2016Resume desktop support 2016
Resume desktop support 2016Henry Sunarto
 
Business ethics
Business ethics Business ethics
Business ethics Niki2993
 
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...CentreOf Excellence
 
المحاضرة الثالثة / حق الحياة
المحاضرة الثالثة / حق الحياة المحاضرة الثالثة / حق الحياة
المحاضرة الثالثة / حق الحياة khadijaelboaishi
 
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...OECD Governance
 
Vertical Farming Solutions
Vertical Farming SolutionsVertical Farming Solutions
Vertical Farming Solutionsaks85
 

En vedette (14)

Resume desktop support 2016
Resume desktop support 2016Resume desktop support 2016
Resume desktop support 2016
 
The Monarch
The MonarchThe Monarch
The Monarch
 
Example infographics
Example infographicsExample infographics
Example infographics
 
Ccilc Introducion EN
Ccilc Introducion ENCcilc Introducion EN
Ccilc Introducion EN
 
products
productsproducts
products
 
Business ethics
Business ethics Business ethics
Business ethics
 
my cv. PH add
my cv. PH addmy cv. PH add
my cv. PH add
 
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...
Multi agency working in Leicestershire - Liz Clark, Leicestershire County Cou...
 
Food Innovation and Renovation-SG Health Promotion Board 2015
Food Innovation and Renovation-SG Health Promotion Board 2015Food Innovation and Renovation-SG Health Promotion Board 2015
Food Innovation and Renovation-SG Health Promotion Board 2015
 
Italy vs Uk
Italy vs UkItaly vs Uk
Italy vs Uk
 
المحاضرة الثالثة / حق الحياة
المحاضرة الثالثة / حق الحياة المحاضرة الثالثة / حق الحياة
المحاضرة الثالثة / حق الحياة
 
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...
Recent budgeting developments in the MENA region - Waeel AL MUTAWA, Kuwait (A...
 
Intro to Vertical Farming in China 2015
Intro to Vertical Farming in China 2015Intro to Vertical Farming in China 2015
Intro to Vertical Farming in China 2015
 
Vertical Farming Solutions
Vertical Farming SolutionsVertical Farming Solutions
Vertical Farming Solutions
 

Similaire à Seminar report meta

Project glass ieee document
Project glass ieee documentProject glass ieee document
Project glass ieee documentbhavyakishore
 
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassRaspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassIRJET Journal
 
google glases document
google glases documentgoogle glases document
google glases documentmahesh b
 
Google Glass seminar complete
Google Glass seminar completeGoogle Glass seminar complete
Google Glass seminar completeRaju kumar
 
Google Glass: A Futuristic Fashion Failure Gadget
Google Glass: A Futuristic Fashion Failure  GadgetGoogle Glass: A Futuristic Fashion Failure  Gadget
Google Glass: A Futuristic Fashion Failure GadgetMd. Salim Reza Jony
 
Glimpses into the future of mobile devices, the internet, and more - updated ...
Glimpses into the future of mobile devices, the internet, and more - updated ...Glimpses into the future of mobile devices, the internet, and more - updated ...
Glimpses into the future of mobile devices, the internet, and more - updated ...Michael Harries
 
Computer generation presentation by zohaib akram
Computer generation presentation by zohaib akramComputer generation presentation by zohaib akram
Computer generation presentation by zohaib akramhassanIrshad20
 
[GE207] Session03: Digital Technology Trends
[GE207] Session03: Digital Technology Trends[GE207] Session03: Digital Technology Trends
[GE207] Session03: Digital Technology TrendsSukanya Ben
 
augmented reality paper presentation
augmented reality paper presentationaugmented reality paper presentation
augmented reality paper presentationVaibhav Mehta
 
ILTA Future Horizons Technology Timeline 2014 - 2030
ILTA Future Horizons Technology Timeline 2014 - 2030ILTA Future Horizons Technology Timeline 2014 - 2030
ILTA Future Horizons Technology Timeline 2014 - 2030Rohit Talwar
 
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroup
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroupBetaGroup - Tech Trends in 2017, a snap shot by BetaGroup
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroupMohammed Cherif
 
ARTIFICIAL INTELLIGENCE IN METAVERSE
ARTIFICIAL INTELLIGENCE IN METAVERSEARTIFICIAL INTELLIGENCE IN METAVERSE
ARTIFICIAL INTELLIGENCE IN METAVERSEIRJET Journal
 
Seminar report on Google Glass, Blu-ray & Green IT
Seminar report on Google Glass, Blu-ray & Green ITSeminar report on Google Glass, Blu-ray & Green IT
Seminar report on Google Glass, Blu-ray & Green ITAnjali Agrawal
 

Similaire à Seminar report meta (20)

Project glass ieee document
Project glass ieee documentProject glass ieee document
Project glass ieee document
 
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google GlassRaspberry Pi Augmentation: A Cost Effective Solution To Google Glass
Raspberry Pi Augmentation: A Cost Effective Solution To Google Glass
 
google glases document
google glases documentgoogle glases document
google glases document
 
Google Glass seminar complete
Google Glass seminar completeGoogle Glass seminar complete
Google Glass seminar complete
 
finalgoogle
finalgooglefinalgoogle
finalgoogle
 
Google Glass: A Futuristic Fashion Failure Gadget
Google Glass: A Futuristic Fashion Failure  GadgetGoogle Glass: A Futuristic Fashion Failure  Gadget
Google Glass: A Futuristic Fashion Failure Gadget
 
Virtual reality
Virtual realityVirtual reality
Virtual reality
 
Glimpses into the future of mobile devices, the internet, and more - updated ...
Glimpses into the future of mobile devices, the internet, and more - updated ...Glimpses into the future of mobile devices, the internet, and more - updated ...
Glimpses into the future of mobile devices, the internet, and more - updated ...
 
Computer generation presentation by zohaib akram
Computer generation presentation by zohaib akramComputer generation presentation by zohaib akram
Computer generation presentation by zohaib akram
 
Google glass
Google glassGoogle glass
Google glass
 
[GE207] Session03: Digital Technology Trends
[GE207] Session03: Digital Technology Trends[GE207] Session03: Digital Technology Trends
[GE207] Session03: Digital Technology Trends
 
augmented reality paper presentation
augmented reality paper presentationaugmented reality paper presentation
augmented reality paper presentation
 
CMPE- 280-Research_paper
CMPE- 280-Research_paperCMPE- 280-Research_paper
CMPE- 280-Research_paper
 
Art intelligence
Art intelligenceArt intelligence
Art intelligence
 
ILTA Future Horizons Technology Timeline 2014 - 2030
ILTA Future Horizons Technology Timeline 2014 - 2030ILTA Future Horizons Technology Timeline 2014 - 2030
ILTA Future Horizons Technology Timeline 2014 - 2030
 
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroup
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroupBetaGroup - Tech Trends in 2017, a snap shot by BetaGroup
BetaGroup - Tech Trends in 2017, a snap shot by BetaGroup
 
Google glasses
Google glassesGoogle glasses
Google glasses
 
Google glasses
Google glassesGoogle glasses
Google glasses
 
ARTIFICIAL INTELLIGENCE IN METAVERSE
ARTIFICIAL INTELLIGENCE IN METAVERSEARTIFICIAL INTELLIGENCE IN METAVERSE
ARTIFICIAL INTELLIGENCE IN METAVERSE
 
Seminar report on Google Glass, Blu-ray & Green IT
Seminar report on Google Glass, Blu-ray & Green ITSeminar report on Google Glass, Blu-ray & Green IT
Seminar report on Google Glass, Blu-ray & Green IT
 

Dernier

Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdfKamal Acharya
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Call Girls Mumbai
 
Hostel management system project report..pdf
Hostel management system project report..pdfHostel management system project report..pdf
Hostel management system project report..pdfKamal Acharya
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdfAldoGarca30
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptxJIT KUMAR GUPTA
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...Amil baba
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTbhaskargani46
 
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...HenryBriggs2
 
Introduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdfIntroduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdfsumitt6_25730773
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network DevicesChandrakantDivate1
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxpritamlangde
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxmaisarahman1
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayEpec Engineered Technologies
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.Kamal Acharya
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaOmar Fathy
 
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...vershagrag
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VDineshKumar4165
 
School management system project Report.pdf
School management system project Report.pdfSchool management system project Report.pdf
School management system project Report.pdfKamal Acharya
 

Dernier (20)

Online electricity billing project report..pdf
Online electricity billing project report..pdfOnline electricity billing project report..pdf
Online electricity billing project report..pdf
 
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
Call Girls in South Ex (delhi) call me [🔝9953056974🔝] escort service 24X7
 
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
Bhubaneswar🌹Call Girls Bhubaneswar ❤Komal 9777949614 💟 Full Trusted CALL GIRL...
 
Hostel management system project report..pdf
Hostel management system project report..pdfHostel management system project report..pdf
Hostel management system project report..pdf
 
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
1_Introduction + EAM Vocabulary + how to navigate in EAM.pdf
 
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
COST-EFFETIVE  and Energy Efficient BUILDINGS ptxCOST-EFFETIVE  and Energy Efficient BUILDINGS ptx
COST-EFFETIVE and Energy Efficient BUILDINGS ptx
 
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
NO1 Top No1 Amil Baba In Azad Kashmir, Kashmir Black Magic Specialist Expert ...
 
Generative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPTGenerative AI or GenAI technology based PPT
Generative AI or GenAI technology based PPT
 
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
scipt v1.pptxcxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
 
Introduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdfIntroduction to Data Visualization,Matplotlib.pdf
Introduction to Data Visualization,Matplotlib.pdf
 
Computer Networks Basics of Network Devices
Computer Networks  Basics of Network DevicesComputer Networks  Basics of Network Devices
Computer Networks Basics of Network Devices
 
Digital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptxDigital Communication Essentials: DPCM, DM, and ADM .pptx
Digital Communication Essentials: DPCM, DM, and ADM .pptx
 
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced LoadsFEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
FEA Based Level 3 Assessment of Deformed Tanks with Fluid Induced Loads
 
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptxA CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
A CASE STUDY ON CERAMIC INDUSTRY OF BANGLADESH.pptx
 
Standard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power PlayStandard vs Custom Battery Packs - Decoding the Power Play
Standard vs Custom Battery Packs - Decoding the Power Play
 
Employee leave management system project.
Employee leave management system project.Employee leave management system project.
Employee leave management system project.
 
Introduction to Serverless with AWS Lambda
Introduction to Serverless with AWS LambdaIntroduction to Serverless with AWS Lambda
Introduction to Serverless with AWS Lambda
 
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...
💚Trustworthy Call Girls Pune Call Girls Service Just Call 🍑👄6378878445 🍑👄 Top...
 
Thermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - VThermal Engineering-R & A / C - unit - V
Thermal Engineering-R & A / C - unit - V
 
School management system project Report.pdf
School management system project Report.pdfSchool management system project Report.pdf
School management system project Report.pdf
 

Seminar report meta

  • 1. Baselios Mathews II College of Engineering, Sasthamcotta, Kollam Dept of Electronics & Communication Engineering 1 CHAPTER 1 INTRODUCTION Space Glasses build a three dimensional model of the world, as you walk around, with the help of an algorithm that tracks flat surface in real time. Computers that can be mounted on our heads seem like a concept straight out of science fiction movies. Steve Mann, a professor at University of Toronto, and a leader in the field of wearable computing. A company, calledMeta, in collaborationwithMann, is creating glasses that can loosen the grip of Google Glass on the emerging market with its ability to merge the real and the virtual. Meta is building headwear that can superimpose 3D content on the real world. Mann works as the chief scientist in the company founded by Ben Sand and Meron Gribetz. The first product, the Space Glasses comes with a projectable LCD that you can see through for each eye, an infrared depth camera, and a standard colour camera, in addition to a gyroscope, accelerometer, and compass. Space Glasses build a three dimensional model of the world, as you walk around, with the help of an algorithm that tracks flat surface in real time. It is not like augmented reality systems which needs special markers. The coordinates obtained from the tracking are sent to the computer which sends over a 3D model of your surroundings. You can use this to project a movie on a piece of paper. Various people can come at the same object from various angles, or you could have a 3D model to follow you around. The company envisions a future where its technologywill replace the regular computer and as something that people can use to work together – from architectsbent over a table with their teams to design buildings, to people playing around with friends.
  • 2. CHAPTER 2 HISTORY OF WEARABLE DIGITAL EYEGLASSES & TECHNOLOGY Wearable technology, wearables, fashionable technology, wearable devices, tech togs, or fashion electronics are clothing and accessories incorporating computer and advanced electronic technologies. The designs often incorporate practical functions and features, but may also have a purely critical or aesthetic agenda. Wearable devices such as activity trackers are a good example of the Internet of Things, since they are part of the network of physical objects or "things" embedded with electronics, software, sensors and connectivity that enables objects to exchange data with a manufacturer, operator and/or other connected devices, without requiring human intervention. Wearable technology is related to both ubiquitous computing and the history and development of wearable computers. Wearables make technology pervasive by interweaving it into daily life. Through the history and development of wearable computing, pioneers have attempted to enhance or extend the functionality of clothing, or to create wearables as accessories able to provide users with sousveillance – the recording of an activity, typically by way of small wearable or portable personal technologies. Tracking information like movement, steps and heart rate is all part of the quantified-self movement. The origins of wearable technology are influenced by both of these responses to the vision of ubiquitous computing. One early piece of widely adopted wearable technology was the calculator watch, introduced in the 1980s. In 2008, a hidden Bluetooth microphone was incorporated into a pair of earrings. Around the same time, the Spy Tie appeared, a "stylish neck tie with a hidden color camera".
  • 3. Smart glasses, also called Digital Eye Glass or a Personal Imaging System, are wearable computers that add information to what the wearer sees. Typically this is achieved through an optical head-mounted display (OHMD), or computerized internet-connected glasses with a transparent heads-up display (HUD) or augmented reality (AR) overlay, that has the capability of reflecting projected digital images as well as allowing the user to see through it, or see better with it. While early models could perform only basic tasks, such as serving as a front-end display for a remote system (as in the case of smart glasses utilizing cellular technology or Wi-Fi), modern smart glasses are effectively wearable computers which can run self-contained mobile applications. Some are hands-free and can communicate with the Internet via natural-language voice commands, while others use touch buttons. Like other computers, smart glasses may collect information from internal or external sensors. They may control, or retrieve data from, other instruments or computers, and may support wireless technologies like Bluetooth, Wi-Fi, and GPS. A smaller number of models run a mobile operating system and function as portable media players, sending audio and video files to the user via a Bluetooth or Wi-Fi headset. Some smart glasses models also feature full lifelogging and activity-tracker capability.
  • 4. CHAPTER 3 COMPUTER MEDIATED REALITY Computer-mediated reality refers to the ability to add information to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device such as a smartphone. Typically, it is the user's visual perception of the environment that is mediated. This is done through the use of some kind of electronic device, such as an EyeTap device or smartphone, which can act as a visual filter between the real world and what the user perceives. Computer-mediated reality has been used to enhance visual perception as an aid to the visually impaired. This example achieves a mediated reality by taking the video input stream of light that would normally have reached the user's eyes and computationally altering it to filter it into a more useful form. It has also been used for interactive computer interfaces. The use of computer-mediated reality to diminish perception, by the removal or masking of visual data, has been used for architectural applications, and is an area of ongoing research. Fig 3.1: Mediated reality
  • 5. 3.1 AUGMENTED REALITY Fig 3.2: Augmediated Reality Simply put, AR is a field of computer research concerned with the combination of real-world and computer-generated data, and it has the ability to overlay computer graphics onto the real world. Unlike virtual reality, users maintain a sense of presence in the real world. Broadly, augmented reality is the (most frequently visual) superposition of real and virtual objects or information in one environment. As a research area, augmented reality has been pursued for many years with a number of wide-ranging applications. Many of these systems never left the laboratory due to cost or other constraints rendering them impractical. However, due to the adoption of mobile devices with powerful processors, built-in cameras, and fast internet connections, augmented reality is beginning to infiltrate the average individual's life. A number of augmented reality applications have appeared in the Apple and Google application stores. These applications range from spur-of-the-moment information overlays, like location guides, reviews and ratings, to games that observe the user's motions to create virtual effects. One good example is Google's Goggles program, an application that accepts photos of landmarks, books, artwork, and many other object types and then returns a Google visual search on the object.
  • 6. 3.2 VIRTUAL REALITY Fig 3.3: Virtual reality Virtual reality is a term that applies to computer-simulated environments that can simulate physical presence in places in the real world, as well as in imaginary worlds. It covers remote communication environments which provide virtual presence of users with the concepts of telepresence and teleexistence or a virtual artifact (VA). The simulated environment can be similar to the real world in order to create a lifelike experience. Virtual reality is often used to describe a wide variety of applications commonly associated with immersive, highly visual, 3D environments. The development of CAD software, graphics hardware acceleration, head-mounted displays, data gloves, and miniaturization have all helped popularize the notion.
  • 7. CHAPTER 4 META PRO SPACEGLASSES The Meta Pro SpaceGlasses are the most innovative glasses in the wearable technology world at the moment. They are the future of having a laptop computer, phone, and tablet all in one. They are augmented reality specs that layer digital reality, and content that users can manipulate, over their field of vision. Meta Pro brings about a mind-boggling world where free-flowing virtual shapes can be molded and actually produced with 3D printers. The Pro is the first pair of smart glasses to pack this technology into a wearable form, capable of delivering stunning holographic interfaces. Fig 4.1: Meta Pro SpaceGlasses The SpaceGlasses boast a giant 3D holographic HD screen with a 40-degree field of view (15x that of Google Glass), an ultra-high-end sleek lightweight design, and independent pocket computing power. You can change your screen size to suit you on the fly, and have access to all your apps. Link with your laptop, which is now a hologram, and you can place it anywhere in the world around you.
  • 8. FATHER OF META & AUGMENTED REALITY Fig 4.2: Meron Gribetz & Prof. Steve Mann Meta is a 40-person startup company led by founder & CEO Meron Gribetz (Forbes 30 Under 30) and Chief Scientist Prof. Steve Mann (inventor of wearable computing & pioneer of augmented reality). 4.1 SPECIFICATIONS OF META
HARDWARE: Projection: two individual screens; Resolution: 1280x720 for each eye; Weight: 0.3 kg; Operating temperature: -30 to 300 deg; Powered by a 32 WHr battery.
SOFTWARE: Supported platform: currently Windows 32/64-bit; Intel Core i5 CPU; 4 GB of RAM; 802.11n Wi-Fi, Bluetooth 4.0; Meta app store.
Table 4.1: Specifications
  • 9. CHAPTER 5 OVERVIEW Fig 5.1: Overview of Meta Pro The glasses feature two 1280×720-pixel LCD displays, each with a 40-degree field of view and aligned for stereoscopic 3D; twin RGB cameras; 3D surround sound; a 3D time-of-flight depth camera; and a 9-axis integrated motion unit with accelerometer, gyroscope and compass. If you order a pair of Pros, not only do you get this innovative technology, but you also get a wearable computer powerful enough to run it – an Intel Core i5 CPU, 4 GB of RAM, 128 GB of storage, and 802.11n Wi-Fi and Bluetooth 4.0, powered by a 32 WHr battery. It has the ability to create a 3D image of your smartphone, tablet and laptop, allowing you to control your devices through a pair of glasses.
  • 10. 5.1 OPTICAL HEAD-MOUNTED DISPLAY An optical head-mounted display (OHMD) is a wearable device that has the capability of reflecting projected images as well as allowing the user to see through them – that is, augmented reality. Head-mounted displays are not designed to be workstations, and traditional input devices such as keyboards do not suit the concept of smart glasses. Input devices that lend themselves to mobility and/or hands-free use are good candidates. 5.1.1 VIRTUAL RETINAL DISPLAY A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye. The user sees what appears to be a conventional display floating in space in front of them. Fig 5.1.1: VRD In a conventional display a real image is produced. The real image is either viewed directly or, as is the case with most head-mounted displays, projected through an optical system, and the resulting virtual image is viewed. The projection moves the virtual image to a distance that allows the eye to focus comfortably. In a VRD no real image is ever produced. Rather, an image is formed directly on the retina of the user's eye. A block diagram of the VRD is shown in the figure above. To create an image with the VRD, a photon source is used to generate a coherent beam of light. The use of a coherent source allows the system to draw a diffraction-limited spot on the retina.
  • 11. The light beam is intensity-modulated to match the intensity of the image being rendered. The modulation can be accomplished after the beam is generated. If the source has enough modulation bandwidth, as in the case of a laser diode, the source can be modulated directly. The resulting modulated beam is then scanned to place each image point, or pixel, at the proper position on the retina. A variety of scan patterns are possible. The scanner could be used in a calligraphic (vector) mode, in which the lines that form the image are drawn directly, or in a raster mode, much like standard computer monitors or televisions. Use of the raster method of image scanning allows the VRD to be driven by standard video sources. To draw the raster, a horizontal scanner moves the beam to draw a row of pixels. The vertical scanner then moves the beam to the next line, where another row of pixels is drawn. After scanning, the optical beam must be properly projected into the eye. The goal is for the exit pupil of the VRD to be coplanar with the entrance pupil of the eye. The lens and cornea of the eye will then focus the beam on the retina, forming a spot. The position on the retina where the eye focuses the spot is determined by the angle at which light enters the eye. This angle is determined by the scanners and is continually varying in a raster pattern. The brightness of the focused spot is determined by the intensity modulation of the light beam. The intensity-modulated moving spot, focused through the eye, draws an image on the retina. The eye's persistence allows the image to appear continuous and stable. Finally, the drive electronics synchronize the scanners and intensity modulator with the incoming video signal in such a manner that a stable image is formed.
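As a rough illustration of the raster-scanning geometry just described, the sketch below maps a pixel coordinate to the pair of scanner deflection angles that would place its spot on the retina, and computes the per-pixel dwell time implied by the video timing. All parameter values (resolution, fields of view, frame rate) are illustrative assumptions, not actual VRD specifications.

```python
import math

def pixel_to_scan_angles(px, py, width=1280, height=720,
                         h_fov_deg=40.0, v_fov_deg=22.5):
    """Map a pixel (px, py) to horizontal and vertical scanner deflection
    angles (radians), with the raster centred on the optical axis.
    The angle at which light enters the eye determines where the
    focused spot lands on the retina."""
    h = (px / (width - 1) - 0.5) * math.radians(h_fov_deg)
    v = (py / (height - 1) - 0.5) * math.radians(v_fov_deg)
    return h, v

def pixel_dwell_time(width=1280, height=720, fps=60):
    """Time the beam spends on one pixel: the intensity modulator must
    switch at least this fast to render distinct pixels."""
    return 1.0 / (fps * width * height)
```

At 1280x720 and 60 frames per second the dwell time is under 20 ns, which is why a directly modulated laser diode is an attractive source.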
  • 12. 5.1.2 HOLOGRAPHIC INTERFACE A holographic interface is a computer input method that utilizes a projected image instead of a physical device. Holographic interfaces are popular on computer systems that see a variety of uses, as well as those that require extremely complex inputs. Because they are more expensive than physical interfaces, they tend to be restricted to wealthy areas. Holographic interfaces utilize holoprojectors to create a 3D image in whatever configuration is most appropriate to the program used. Because the image is constructed only of light, the configuration can be as simple or complex as needed, and easily scaled to a size that the user finds comfortable. Additionally, it can be dynamically reconfigured, so a user can quickly switch from a simplified interface to a more involved one as needed. Interaction with the image is recorded via sensors in the projector, not through direct manipulation of the hologram. These sensors can record the precise location of the user's fingers (or other appendages for interfaces that utilize them), enabling pinpoint control. A holographic interface is thus a way to interact with electronics without coming into physical contact with the machine. Though the holographic interface was only developed in the 2010s, it is often compatible with contemporary computer systems and programs. The creation of the hologram is a relatively complex component of the interface, whereas the ability of the interface to recognize the commands of the user is achieved through the use of motion detectors, a technology that has been in use for decades. In order to create a holographic interface, a special holographic projector is needed that can display a three-dimensional image in space.
  • 13. CHAPTER 6 WORKING OF META PRO SPACEGLASS The glasses run a Unity-based desktop, and through that desktop the user is able to open certain files and programs. For access to all Windows-based programs, the team suggests using Unity's own Virtual Desktop (though a Virtual Desktop that actually runs in Unity has proven hard to find); Meta doesn't officially support this. What is displayed by the glasses doesn't necessarily move with the movement of the head. For example, say you are looking at some virtual picture and you want to fix that picture at some point in space. You can do that, and look away; when you look back, it's exactly where you hung it. Or, if you want that picture to be fixed in your field of view, you can do that too. The device uses the SoftKinetic DS325 for monitoring movements, and the iisu middleware for programming needs. The SoftKinetic does not do body tracking, only short-range tracking and hand tracking. The iisu Pro will ship free with the Meta SDK. The team knows about the problems Oculus Rift had with customs, and they are going to try to avoid them. 6.1 SPEECH RECOGNITION In computer science and electrical engineering, speech recognition (SR) is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", or just "speech to text" (STT). Some SR systems use "training" ("enrolment"), where an individual speaker reads text or isolated vocabulary into the system. The system analyzes the person's specific voice and uses it to fine-tune the recognition of that person's speech, resulting in increased accuracy. Systems that do not use training are called "speaker independent" systems.
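The behaviour described above – a picture pinned to a point in the room versus pinned to the field of view – comes down to which coordinate frame an object's pose is stored in. A minimal 2D sketch (yaw-only head rotation; the function name and frame conventions are assumptions for illustration):

```python
import math

def world_to_head(obj_world, head_pos, head_yaw_deg):
    """Position of a world-anchored object in the wearer's head frame.
    As the head moves, the object's head-frame coordinates change, so it
    appears to stay fixed in the room. A view-locked (HUD) object, by
    contrast, simply keeps constant head-frame coordinates regardless
    of head pose, so no transform is needed."""
    dx = obj_world[0] - head_pos[0]
    dz = obj_world[1] - head_pos[1]
    yaw = math.radians(head_yaw_deg)
    # Rotate the world-frame offset by the inverse of the head yaw
    x = dx * math.cos(-yaw) - dz * math.sin(-yaw)
    z = dx * math.sin(-yaw) + dz * math.cos(-yaw)
    return x, z
```

With the head at the origin facing an object 2 m ahead, the object sits straight in front; after a 90-degree head turn, the same world point re-projects to the wearer's side, exactly the "look away, look back" behaviour.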
  • 14. The advances are evidenced not only by the surge of academic papers published in the field, but more importantly by the worldwide industry adoption of a variety of deep learning methods in designing and deploying speech recognition systems. Both acoustic modeling and language modeling are important parts of modern statistically-based speech recognition algorithms. Hidden Markov models (HMMs) are widely used in many systems. Language modeling is also used in many other natural language processing applications such as document classification or statistical machine translation. 6.1.1 Hidden Markov models Modern general-purpose speech recognition systems are based on hidden Markov models. These are statistical models that output a sequence of symbols or quantities. HMMs are used in speech recognition because a speech signal can be viewed as a piecewise stationary signal or a short-time stationary signal: in a short time-scale (e.g., 10 milliseconds), speech can be approximated as a stationary process. Speech can thus be thought of as a Markov model for many stochastic purposes. Another reason HMMs are popular is that they can be trained automatically and are simple and computationally feasible to use. In speech recognition, the hidden Markov model would output a sequence of n-dimensional real-valued vectors (with n being a small integer, such as 10), outputting one of these every 10 milliseconds. The vectors would consist of cepstral coefficients, which are obtained by taking a Fourier transform of a short time window of speech, decorrelating the spectrum using a cosine transform, and then taking the first (most significant) coefficients. The hidden Markov model will tend to have in each state a statistical distribution that is a mixture of diagonal-covariance Gaussians, which gives a likelihood for each observed vector.
Each word, or (for more general speech recognition systems) each phoneme, will have a different output distribution; a hidden Markov model for a sequence of words or phonemes is made by concatenating the individually trained hidden Markov models for the separate words and phonemes.
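Decoding such a concatenated HMM – recovering the most likely hidden state (e.g., phoneme) sequence from observed feature symbols – is done with the standard Viterbi algorithm. Below is a textbook sketch with made-up toy probabilities, not a production recognizer; real systems work with Gaussian emission densities and log-probabilities rather than discrete symbols.

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence for an observation sequence.
    V[t][s] = (best probability of any path ending in state s at time t,
               predecessor state on that path)."""
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        col = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][ps][0] * trans_p[ps][s] * emit_p[s][obs[t]], ps)
                for ps in states)
            col[s] = (prob, prev)
        V.append(col)
    # Backtrack from the most probable final state
    state = max(states, key=lambda s: V[-1][s][0])
    path = [state]
    for t in range(len(obs) - 1, 0, -1):
        state = V[t][state][1]
        path.append(state)
    return list(reversed(path))
```

With two toy states whose emissions favour the symbols 'x' and 'y' respectively, the observation sequence ['x', 'x', 'y', 'y'] decodes to the state sequence A, A, B, B, as expected.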
  • 15. 6.2 GESTURE RECOGNITION Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse. Gesture recognition enables humans to communicate with the machine (HMI) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could potentially make conventional input devices such as mice, keyboards and even touch-screens redundant. Gesture recognition can be conducted with techniques from computer vision and image processing. The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Although there is a large amount of research done in image/video-based gesture recognition, there is some variation within the tools and environments used between implementations. Wired gloves: These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices.
Furthermore, some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch.
  • 16. Depth-aware cameras: Using specialized cameras such as structured-light or time-of-flight cameras, one can generate a depth map of what is being seen through the camera at short range, and use this data to approximate a 3D representation of what is being seen. These can be effective for detection of hand gestures due to their short-range capabilities.
Stereo cameras: Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. To get the cameras' relations, one can use a positioning reference such as a lexian-stripe or infrared emitters. In combination with direct motion measurement (6D-Vision), gestures can be detected directly.
Controller-based gestures: These controllers act as an extension of the body, so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person's hand.
Single camera: A standard 2D camera can be used for gesture recognition where the resources/environment would not be convenient for other forms of image-based recognition. Earlier it was thought that a single camera may not be as effective as stereo or depth-aware cameras, but some companies are challenging this theory. 6.2.1 Algorithms Fig 6.1: Algorithm
  • 17. Different ways of tracking and analyzing gestures exist, and a basic layout is given in the diagram above. For example, volumetric models convey the information required for an elaborate analysis; however, they prove to be very intensive in terms of computational power and require further technological developments in order to be implemented for real-time analysis. On the other hand, appearance-based models are easier to process but usually lack the generality required for human-computer interaction. Depending on the type of input data, the approach for interpreting a gesture can be done in different ways. However, most of the techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach. In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language each gesture represents a word or phrase. A taxonomy that seems very appropriate for human-computer interaction has been proposed by Quek in "Toward a Vision-Based Hand Gesture Interface". He presents several interactive gesture systems in order to capture the whole space of the gestures: 1. Manipulative; 2. Semaphoric; 3. Conversational. Some literature differentiates two different approaches in gesture recognition: a 3D-model-based and an appearance-based.[21] The former method makes use of 3D information of key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. On the other hand, appearance-based systems use images or videos for direct interpretation.
  • 18. Fig 6.2: A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative position and interaction in order to infer the gesture. a) 3D model-based algorithms The 3D model approach can use volumetric or skeletal models, or even a combination of the two. Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes. Fig 6.3: 3D model-based algorithms. The skeletal version (right) effectively models the hand (left); it has fewer parameters than the volumetric version and is easier to compute, making it suitable for real-time gesture analysis systems. b) Skeletal-based algorithms Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint-angle parameters along with segment lengths. This is known as a skeletal representation of the body, where a virtual skeleton of the person is computed and parts of the body are mapped to certain segments.
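The skeletal representation reduces gesture recognition to comparing joint angles computed from tracked keypoints. A minimal 2D sketch (the keypoint names and the 160-degree threshold are illustrative assumptions) that classifies a single finger as extended or bent from three joints:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos = dot / (math.hypot(*v1) * math.hypot(*v2))
    cos = max(-1.0, min(1.0, cos))   # guard against rounding drift
    return math.degrees(math.acos(cos))

def finger_state(knuckle, middle, tip, threshold=160.0):
    """Classify a finger as 'extended' (nearly straight at the middle
    joint) or 'bent', from three tracked joint positions."""
    angle = joint_angle(knuckle, middle, tip)
    return 'extended' if angle >= threshold else 'bent'
```

A full hand-gesture classifier would evaluate all finger joints per frame and match the resulting angle vector against gesture templates, but the per-joint computation is exactly this.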
  • 19. 6.3 EYE TRACKING Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in psycholinguistics, in marketing, as an input device for human-computer interaction, and in product design. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. 6.3.1 OPTICAL TRACKING Light, typically infrared, is reflected from the eye and sensed by a video camera or some other specially designed optical sensor. The information is then analyzed to extract eye rotation from changes in the reflections. Video-based eye trackers typically use the corneal reflection and the center of the pupil as features to track over time. A still more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates. Optical methods, particularly those based on video recording, are widely used for gaze tracking and are favored for being non-invasive and inexpensive. Fig 6.3.1: An eye-tracking head-mounted display. Each eye has an LED light source (gold-colored metal) on the side of the display lens, and a camera under the display lens.
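The pupil-centre/corneal-reflection technique above is typically turned into a screen coordinate via a per-user calibration: the offset vector between pupil centre and glint is mapped to gaze position. The sketch below is deliberately simplified (an independent linear fit per axis; real trackers usually fit higher-order polynomial models), and all names are hypothetical.

```python
def calibrate(samples):
    """Fit a gain/offset per axis from (pupil-glint offset, screen point)
    pairs. Two well-separated calibration targets are enough for a
    linear fit; real systems use 5-9 targets and a richer model."""
    (d0, s0), (d1, s1) = samples
    cal = []
    for i in range(2):                      # x axis, then y axis
        gain = (s1[i] - s0[i]) / (d1[i] - d0[i])
        offset = s0[i] - gain * d0[i]
        cal.append((gain, offset))
    return cal

def gaze_point(pupil, glint, cal):
    """Map the pupil-to-glint offset (in image pixels) to a screen
    coordinate using the per-user calibration."""
    dx = pupil[0] - glint[0]
    dy = pupil[1] - glint[1]
    (gx, ox), (gy, oy) = cal
    return gx * dx + ox, gy * dy + oy
```

Because the glint stays nearly fixed while the pupil moves with eye rotation, using their difference rather than the raw pupil position makes the estimate tolerant of small headset slippage.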
  • 20. CHAPTER 7 TECHNOLOGY USED 7.1 SDK (SOFTWARE DEVELOPMENT KIT) The Meta Software Developer Kit (SDK) is a bundled package for developers to start building Meta applications with. It contains access to prefabs and classes you can use to manipulate the 3D holographic content you view through the Meta glasses. The Meta SDK is designed to let you develop your concepts and ideas into working applications as quickly and efficiently as possible. Built on top of the Unity engine, the SDK has full support for hand gestures, marker-based and markerless surface tracking, imported graphics, and 3D models. Fig 7.1: SDK There are any number of uses for the Meta Pro, as the company has an SDK which allows developers to create programs to use with the glasses. In our visit, Meta's CEO and founder Meron Gribetz showed us how the glasses can be used in place of traditional CAD software to design a 3D-printed object using only your hands.
  • 21. 7.2 BRAIN–COMPUTER INTERFACE (BCI) A brain–computer interface (BCI), sometimes called a mind-machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between the brain and an external device. BCIs are often directed at assisting, augmenting, or repairing human cognitive or sensory-motor functions. Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature. The field of BCI research and development has since focused primarily on neuroprosthetics applications that aim at restoring damaged hearing, sight and movement. Thanks to the remarkable cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels.[3] Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s. 7.3 3-D STEREOSCOPIC DISPLAY Stereoscopy (also called stereoscopics) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. Any stereoscopic image is called a stereogram. Originally, stereogram referred to a pair of stereo images which could be viewed using a stereoscope. Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth.
This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to gain information about the three-dimensional objects being displayed through head and eye movements.
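The horizontal offset (disparity) between the two images is what sets the perceived depth of a virtual object. A small sketch of the similar-triangles relation, with assumed values for inter-pupillary distance, screen-plane distance and pixel pitch (illustrative numbers, not Meta Pro's optics):

```python
def stereo_disparity(depth_m, ipd_m=0.063, screen_dist_m=1.0,
                     px_per_m=2000.0):
    """Horizontal pixel offset between the left- and right-eye images
    needed to place a virtual object at depth_m, given a virtual screen
    plane at screen_dist_m. By similar triangles, the offset measured on
    the screen plane is ipd * (depth - screen_dist) / depth:
    zero at the screen plane, positive (uncrossed) behind it, and
    negative (crossed) in front of it."""
    offset_m = ipd_m * (depth_m - screen_dist_m) / depth_m
    return offset_m * px_per_m
```

For example, an object at the screen-plane distance needs no offset at all, one twice as far away needs a positive offset, and one closer than the screen plane needs a negative (crossed) one.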
  • 22. Autostereoscopic display principles: multiview and head-tracked autostereoscopic displays combine the effects of both stereo parallax and movement parallax to give 3D without glasses. The best implementations produce a perceived effect similar to a white-light hologram. • TrueScaleStereo™ mode • Dimensions, depth and 3D position of graphics match real life • Real occlusion • 1280x720 pixels (Meta Pro) 7.4 Tegra K1 – NEXT AR ENABLER CHIP Fig 7.2: Tegra K1 The Tegra K1 features a heart built out of the Kepler architecture, which has spanned two years inside desktop computers, notebooks and the world's fastest TITAN supercomputer. Featuring up to 192 Kepler cores, the Tegra K1 is indeed a next-generation mobile super chip aimed at high-performance mobile devices. NVIDIA has also announced its highly anticipated Denver CPU, which utilizes specialized 64-bit ARM cores fused alongside the GPU die. The Tegra K1 chip will be available in two variants: the dual-core Denver variant, with a 7-way superscalar design, features 64-bit ARMv8 cores and 192 Kepler cores with a clock frequency of 2.5 GHz, while the 32-bit quad-core ARM variant is clocked at 2.3 GHz. The 64-bit Denver model comes with 128 KB + 64 KB of L1 cache, while the 32-bit variant comes with 32 KB + 32 KB.
7.5 ZERO UI TECHNOLOGY
It seems impossible for the behaviors, habits, devices, and interactions we have so rapidly absorbed into the fabric of our lives not to continue indefinitely from here on. This is probably what society thought about the steam engine in the 18th century as well. We are entering a new era of Zero UI: not only is the technology already being invented, but emergent products and services are already in the market that will usher in this shift. Zero UI refers to a paradigm where our movements, voice, glances, and even thoughts can all cause systems to respond to us through our environment. At its extreme, it implies a screen-less, invisible user interface where natural gestures trigger interactions, as if the user were communicating with another person. It would require many technologies to converge and become significantly more sophisticated, particularly voice recognition and motion sensing, but these technologies are evolving rapidly. The word "screen" itself carries tension: it simultaneously means an object we look at and something that we hide behind. Even a small hand-sized device becomes a barrier in social situations, absorbing our gaze and taking us elsewhere. The replacement of these monolithic screen-based devices by ambient technology that surrounds and immerses us may in the end be a very good thing; social interactions could become more natural again and not so obviously mediated by devices. Our attention could again return to the people sitting across the dining table, instead of those half a continent away. This chapter explores the technologies, theories, and possible futures of Zero UI: what it means to design and build these interfaces, and what it will mean to live alongside or even "inside" them.
Zero UI will not be limited to personal devices but will extend to homes, entire cities, even environments and ecosystems, and as a result will have a massive impact on society as a whole.
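The core of the Zero UI paradigm described above is routing natural inputs (a spoken phrase, a gesture, a glance) directly to system actions with no on-screen interface in between. The minimal sketch below illustrates that routing idea only; the event names, modalities, and actions are hypothetical examples, not part of any real Zero UI product.

```python
# Minimal sketch of a Zero UI event router: natural input events are
# bound directly to actions, with no visible interface in between.
# All modality/pattern names below are hypothetical illustrations.

class ZeroUIDispatcher:
    def __init__(self):
        # Map (modality, pattern) pairs to callables.
        self._bindings = {}

    def bind(self, modality, pattern, action):
        """Register an action for an input event, e.g. a spoken phrase
        ("voice") or a recognized hand movement ("gesture")."""
        self._bindings[(modality, pattern)] = action

    def handle(self, modality, pattern):
        """Invoke the bound action for an incoming event, if any."""
        action = self._bindings.get((modality, pattern))
        return action() if action else None

ui = ZeroUIDispatcher()
ui.bind("voice", "lights on", lambda: "lights: on")
ui.bind("gesture", "swipe_left", lambda: "screen: previous")

result = ui.handle("voice", "lights on")  # the environment responds
```

In a real system the `handle` calls would be driven by voice-recognition and motion-sensing pipelines rather than literal strings, but the shape of the mapping is the same.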
CHAPTER 8
DATA PROCESSING
8.1 META COMPUTER VISION PROCESSORS
Fig 8.1 Meta computer vision processors
8.2 HAND SURFACE INTERACTION
Several studies have been carried out on augmented reality (AR)-based environments that deal with user interfaces for manipulating and interacting with virtual objects, aimed at improving the immersive feeling and natural interaction. Most of these studies have utilized AR paddles or AR cubes for interaction. However, these interactions overly constrain users in their ability to directly manipulate AR objects and are limited in providing a natural feel in the user interface. A novel approach achieves natural and intuitive interaction through a directly hand-touchable interface in various AR-based user experiences. It combines markerless augmented reality with a depth camera to effectively detect multiple hand touches in an AR space. Furthermore, to simplify hand-touch recognition, the point cloud generated by the Kinect sensor is analyzed and filtered. This approach can easily trigger AR interactions, allows users to experience more intuitive and natural sensations, and provides greater control efficiency in diverse AR environments. It can also solve the occlusion problem of the hand and arm region inherent in conventional AR approaches through analysis of the extracted point cloud. Its effectiveness and advantages have been demonstrated through several implementations, such as interactive AR car design and a touchable AR pamphlet, and through a usability study comparing the approach with other well-known AR interactions.
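The point-cloud filtering step described above can be sketched very simply: keep only depth samples that lie within a thin band above the tracked flat surface, since that is where fingertip contacts appear. This is a minimal illustration under assumed parameters (plane at z = 0, 1 cm touch band), not the actual algorithm or thresholds of the cited work.

```python
# Hedged sketch of depth-based touch detection: filter a point cloud
# down to samples within a thin band above a known flat surface.
# The plane model (z = 0) and the 1 cm band are assumed values.

TOUCH_BAND_M = 0.01  # points within 1 cm of the surface count as touches

def touch_points(cloud, band=TOUCH_BAND_M):
    """Filter (x, y, z) samples down to those touching the surface.

    cloud : iterable of (x, y, z) tuples in metres, with z measured as
            height above the tracked plane
    """
    return [(x, y, z) for (x, y, z) in cloud if 0.0 <= z <= band]

cloud = [
    (0.10, 0.20, 0.005),  # fingertip resting on the surface
    (0.12, 0.21, 0.080),  # hovering hand, filtered out
    (0.30, 0.05, 0.002),  # a second simultaneous touch
]
contacts = touch_points(cloud)  # only the two contact points survive
```

Because every surviving point already carries its (x, y) position on the plane, the same filter that rejects the hovering hand also localizes each touch, which is what lets multiple simultaneous touches trigger AR interactions.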
CHAPTER 9
ADVANTAGES OF META PRO SPACEGLASSES
1. 3D visualization
2. No need for PCs, laptops, etc.
3. Supports a large number of apps
4. Multiple virtual screens can be extracted
5. Claimed by Meta to be the fastest wearable device available
CHAPTER 10
APPLICATIONS
Meta's augmented reality platform has attracted over a thousand development groups that are building applications in the areas of:
 Productivity
 Architecture
It is also applicable in:
 Industrial design
 Data visualization
 Medical, simulation and training
 Communications
 Gaming
CHAPTER 11
CONCLUSION
Augmented reality is just around the corner for the excited consumer market. MetaPro Spaceglasses are one of the most highly anticipated smartglasses brands for many reasons. Not only have they been hailed as the best-looking smart glasses by Forbes Magazine, but they also claim to offer the most advanced AR experience in the world. MetaPro offers a true 3D holographic augmented reality experience to its users. One of the features is Zero UI, a virtual 3D modelling technology that enables you to create holographic models by essentially shaping the ether in front of the glasses. If you have access to a 3D printer, you can then transfer the data from MetaPro and bring your hologram to life in the physical world. These super smartglasses also replicate your hardware devices and display them as holograms; for instance, MetaPro would create a hologram of your laptop which you can then interact with via gesture recognition. In terms of specs, MetaPro Spaceglasses win against any other smartglasses. One reason is that Meta has not crammed all the technology into the frames of the glasses; instead, they come with a high-power external pocket computer with 4 GB of RAM, a 128 GB SSD and an Intel i5 processor.
REFERENCES
[1] www.wikipedia.com
[2] www.spceglass.com
[3] www.metapro.com
[4] Meta 3D glasses - Business Insider
[5] www.techcrunch/meta.com
[6] www.kickstarter.com//meta
APPENDIX