Smart Glasses and the Evolution of Human-Computing Interfaces
Within the emerging category of wearable computing, arguably the most characteristic product to
emerge is "smart glasses," which mesh the communications capabilities of smartphones with
additional visual and other sensory enhancements, including augmented reality. The primary selling
feature of smart glasses is their ability to display video, navigation, messaging, augmented reality
(AR) applications, and games on a large virtual screen, all completely hands-free. The current
poster child for smart glasses is Google’s "Glass" product, but there are more than 20 firms offering
smart glasses or planning to do so.
The hands-free nature of smart glasses opens up new possibilities for human-computer interfaces
(HCI), drawing from smartphones as well as interfaces developed in other contexts (e.g., virtual
reality). Early smart glasses models lean on mature, low-cost technologies with notable influence
from smartphones; however, we see a gradual trend for smart glasses (and other wearable
computing devices) to be driven by more natural interface controls once those technologies have
had time to mature as well -- and they are getting remarkably close.
Touch: A Logical First Step
The most prominent smart glasses user interface in these earliest iterations is touch sensitivity,
either through a touchpad built into the glasses form factor itself or through a separate
pocket-sized tethered accessory device that contains most of the actual componentry, including a
touchpad. Most of the prominent smart glasses offerings utilize a touch interface, notably Google
Glass, so far the mind- and market-share leader. This is influencing other OEMs and suppliers to
design their first smart glasses products around some kind of touch interface. Leveraging
mobile-device touch interfaces also captures the technology's low cost: adding touch functionality
to low-cost, high-volume consumer products has become remarkably inexpensive.
NanoMarkets believes that touch will likely continue to play a role in smart glasses user
interfaces throughout the next several years -- but it will be a diminishing role, both as the
technology moves further away from its smartphone inspiration and establishes itself firmly, and
as other supporting interface capabilities improve.
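Glass-style touchpads are driven largely by swipes along the temple of the glasses. As a minimal
illustration of how such an interface works, the sketch below classifies a touch stroke into a tap
or a swipe direction from its start and end coordinates; the gesture names and the distance
threshold are hypothetical, not drawn from any vendor's SDK.

```python
# Illustrative sketch: classify a stroke on a Glass-style side touchpad
# into a tap or swipe from its start/end coordinates (arbitrary units).
# Gesture names and the min_dist threshold are hypothetical.

def classify_swipe(x0, y0, x1, y1, min_dist=20):
    """Return 'tap', 'swipe_forward', 'swipe_back', 'swipe_up', or 'swipe_down'."""
    dx, dy = x1 - x0, y1 - y0
    if (dx * dx + dy * dy) ** 0.5 < min_dist:
        return "tap"                      # too short to count as a swipe
    if abs(dx) >= abs(dy):                # mostly horizontal motion
        return "swipe_forward" if dx > 0 else "swipe_back"
    return "swipe_down" if dy > 0 else "swipe_up"
```

A real touchpad driver would also consider stroke velocity and multi-finger input, but the core
decision -- dominant axis plus sign of displacement -- is this simple.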
Voice Recognition: Seeking a Balance
The next step for smart-glasses HCI is voice command recognition, arguably a better fit for such a
hands-free wearable application. Google Glass already uses voice recognition; other smart glasses
developers incorporating or considering it include Brilliant Labs (Mirama One), Kopin (Golden-i
HMD), Vuzix (M-100), and Epson (BT-200). Kopin's Pupil design eschews touch entirely in favor of
voice recognition. Samsung is reportedly among several companies in talks with Nuance, the
speech-technology company behind Apple's Siri, with an eye toward its own future smart
glasses.
This technology is fairly mature. Voice recognition applications such as Siri are proliferating
in the smartphone market. Single chips that provide speech recognition, synthesis, and system
control can be procured for $1-$5 (depending on volumes, packaging type, and memory size),
though at the component level another $10 or so is needed for the audio system required in any
voice recognition system. Speech-to-text may have a future role to play in smart glasses as
well, though it does not yet appear to be widely used. Researchers at Oxford University have
already developed a smart glasses prototype with speech-to-text capability targeting the market
for the visually impaired. In the commercial arena, two applications (WatchMeTalk and SubtleGlass)
can convert speech to text, and OrCam Technologies has developed a camera that attaches to any
glasses (not just "smart glasses") to "see" text on almost any surface and read it aloud, with
future plans to include facial recognition and the ability to recognize places.
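Once a recognizer yields a transcript, a smart-glasses shell still has to map phrases to device
actions. The sketch below shows one minimal way to do that; the wake word and command set are
hypothetical, loosely modeled on "ok glass"-style phrases rather than any vendor's actual grammar.

```python
# Illustrative sketch: dispatch a recognized utterance to a device action.
# The wake word, phrases, and action names are hypothetical.

COMMANDS = {
    "take a picture": "camera.capture",
    "record a video": "camera.record",
    "get directions": "nav.start",
    "send a message": "messaging.compose",
}

def dispatch(transcript, wake_word="ok glass"):
    """Return the action string for a recognized utterance, or None."""
    text = transcript.lower().strip()
    if not text.startswith(wake_word):
        return None                       # ignore speech without the wake word
    phrase = text[len(wake_word):].strip(" ,")
    return COMMANDS.get(phrase)           # None for unknown commands
```

Production systems fuzzy-match phrases rather than requiring exact strings, but the wake-word
gate is what keeps a head-worn, always-listening microphone from triggering on ambient speech.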
While voice is possibly the best current balance between "naturalness" and technological maturity,
it is not really a selling feature per se, and our impression is that smart glasses OEMs are not
making much of a fuss about it yet. We expect more smart glasses OEMs to quietly adopt voice
recognition, and it will likely become a ubiquitous interface over the next several years, until
the next HCI technology matures and overtakes it: gesture control.
Gestural Control: The Future of Natural HCI
Neither touch nor voice, in our opinion, is the eventual winning user interface for smart glasses.
Touching one's glasses is not a commonly performed gesture, and a tactile interface action does
not align with the hands-free paradigm. Meanwhile, talking out loud can be challenging in some
environments (e.g., a busy warehouse or metropolitan area), incompatible with privacy or security
needs (e.g., a hospital), and perhaps awkward in general consumer use. Instead, we expect the
eventual ascension of a truly natural HCI for a wearable electronic device: gesture recognition.
Several different technologies are currently available for tracking motions and gestures, but
these technologies are intended to support gaming, medical, and other systems that are not smart
glasses in any sense.
Inertial Sensors -- Inertial measurement unit (IMU) with 3-axis accelerometers and gyroscopes to
detect precise 3D motion; a 9-axis option adds a magnetometer (compass). Companies involved:
Freescale (U.S.), Invensense (U.S.). Commercial prospects: likely solution for smart phones and
wearables, in combination with optical or other approaches.

Ultrasonic -- Uses ultrasonic speakers and microphones to detect disturbances in sound waves.
Companies involved: Elliptic Labs (Norway), Chirp Microsystems (U.S.). Commercial prospects:
reasonable chance of increased commercialization in the long term, especially if it can be
expanded into more applications beyond laptops.

Electrical Field -- Detects disturbances in an electrical field caused by hand gestures.
Companies involved: Microchip Technologies (U.S.). Commercial prospects: facing an uphill battle
against other technologies in consumer electronics, but compelling for smart home and other
applications that are a few years out.

Magnetic Field -- Detects disturbances in a magnetic field caused by hand gestures. Companies
involved: Telekom Innovation Laboratories (Germany). Commercial prospects: not likely to make it
commercially in high-volume products; possible for VR applications.

Muscle Movement -- Detects electrical signals generated by muscle movement. Companies involved:
Thalmic Labs (Canada). Commercial prospects: requires an armband, but compelling in medical
applications; market demand remains to be seen.

Eye Tracking -- Tracks eye movements to note where the user is looking. Companies involved:
EyeSight (Israel), Tobii (Sweden), Quantum Interface (U.S.). Commercial prospects: likely to be
combined with other types of gesture control.
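To make the inertial-sensor entry above concrete, the sketch below is a naive "shake" detector
over 3-axis accelerometer samples: at rest the magnitude of acceleration sits near 1 g (gravity),
and a shake shows up as several readings that deviate strongly from it. The thresholds are
hypothetical; real IMU gesture pipelines also filter noise and fuse gyroscope data.

```python
# Illustrative sketch: naive shake detection from 3-axis accelerometer
# samples in units of g.  Thresholds are hypothetical.

def is_shake(samples, threshold=1.5, min_hits=3):
    """samples: iterable of (ax, ay, az) tuples in g.  Returns True if
    at least min_hits readings deviate from 1 g by more than threshold."""
    hits = 0
    for ax, ay, az in samples:
        mag = (ax * ax + ay * ay + az * az) ** 0.5
        if abs(mag - 1.0) > threshold:    # far from resting gravity
            hits += 1
    return hits >= min_hits
```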
Among these gestural options, eye tracking takes on special importance in the context of a
smart-glasses interface, for obvious reasons. Yet this technology has some way to progress before
it's truly ready. In some of today's commercial smart glasses technologies, head tracking is
often used instead. APX Labs, for example, claims to have provisional patents on gesture and
motion-based input based on onboard sensors, tracking a user's head rather than eyes to interact
with content displayed on the screen. Improved accuracy and reliability are needed if eye
tracking is to do more than tell whether the user is looking at one of several specific locations.
We would note that eye tracking subsystems cost several thousand dollars not long ago, but
low-cost (and low-performance) subsystems can now be obtained. For example, The Eye Tribe offers
a subsystem for around $100, and U.K. researchers claim a design that could deliver eye tracking
for around $30 per unit.
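"Looking at one of several specific locations" is itself a usable interface even at today's
accuracy. The sketch below maps a normalized gaze coordinate to a named screen region; the region
layout and names are hypothetical, purely for illustration.

```python
# Illustrative sketch: coarse gaze-to-region mapping, the kind of task
# low-accuracy eye trackers can already handle.  Regions are given as
# (x0, y0, x1, y1) boxes in normalized [0, 1) screen coordinates and
# are hypothetical.

REGIONS = {
    "menu":          (0.0, 0.0, 0.2, 1.0),
    "content":       (0.2, 0.0, 0.8, 1.0),
    "notifications": (0.8, 0.0, 1.0, 1.0),
}

def region_of_gaze(x, y):
    """Map a normalized gaze point to a named region, or None if off-screen."""
    for name, (x0, y0, x1, y1) in REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None
```

Fine-grained pointing (e.g., selecting individual words) demands far better accuracy than this
coarse region test, which is why dwell-on-region designs dominate current eye-tracking UIs.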
Making Gesture Work for Smart Glasses
At the moment, gestural recognition suffers from a lack of technological maturity. In commercial
applications it does not quite deliver the needed functionality, nor is it sufficiently reliable
or robust to be extended into a consumer electronics product. NanoMarkets believes a multitude of
improvements will be needed if gestural recognition is ever to become common, especially for
consumer-oriented smart glasses:
Low power. Sensors in gestural recognition have tended to be power hogs, but sensor
makers are coming up with new IMU sensors that consume almost no power in standby mode
and reasonably little when fully active. Higher-capacity batteries and other energy sources
could be appropriate in this context, though there would be a trade-off with other system
requirements, ultimately with cost as the bottom line.
Better data capture. Subsystems must support the sophisticated movements that are a
normal part of the human gestural repertoire. For example, Atheer Labs is developing a
smart-glasses product that can track in all hand orientations, both hands, and up to 10-finger
identification (multiple fingers tracked separately). As part of this trend, NanoMarkets expects
more use of time-of-flight (ToF) sensors. Brilliant Labs is integrating ToF for gesture
recognition, since it does not depend on surrounding light (it has its own light source),
provides clear images, can cancel out noise easily, provides good depth measurements, and
can be used in most environments. This is seen as better than RGB cameras, which have difficulty
tracking gestures and canceling background noise.
Improved cameras. To the extent that cameras are used for gestural recognition in smart
glasses, one can assume that they will grow beyond off-the-shelf 2D cameras to stereo
cameras and then to 3D cameras with image sensors that can detect image and depth
information at the same time. To this end, ToF cameras could very well be the next big thing
in optical gesture recognition. Instead of scanning a scene, ToF cameras emit IR light and
measure how long it takes the light to travel to the subject, be reflected, and travel back to
the image sensor. An array of pixels images every point at once. While it is possible to obtain
full 3D imaging by other methods, ToF promises very fast response times because it collects data
from an entire scene at once rather than scanning the field of view. ToF systems are not cheap,
however, which may limit their use in consumer devices.
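The time-of-flight principle above reduces to simple per-pixel arithmetic: the emitted IR pulse
covers twice the distance to the subject, so depth = c * round-trip time / 2. Working the numbers
also shows why ToF hardware is demanding -- one meter of depth corresponds to only about 6.7
nanoseconds of round trip, so the sensor needs sub-nanosecond timing resolution.

```python
# Time-of-flight depth arithmetic: light travels to the subject and back,
# so the measured round-trip time corresponds to twice the depth.

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_depth_m(round_trip_time_s):
    """Depth in meters from a measured round-trip time in seconds."""
    return C * round_trip_time_s / 2.0

def round_trip_time_s(depth_m):
    """Round-trip time for a given depth; ~6.7 ns per meter of depth."""
    return 2.0 * depth_m / C
```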
Most smart glasses OEMs appear to view gesture recognition capability as still some ways off, but
the majority of OEMs are working on this issue. In some cases this work is pure R&D, but some
gestural recognition is already embodied in commercial smart glasses. Numerous efforts, from lab
projects (SixthSense) to large-company patents (Microsoft, Google) to product development (Epson,
Pivothead, Sony, Technical Illusions, Thalmic Labs, Vuzix), are exploring and incorporating
gesture control in smart glasses products, from hand gestures to head tracking to eye tracking
interfaces.
We believe that smart glasses will follow a trajectory towards more natural HCIs, so that the smart
glasses gradually "merge" with the body. The problem is that gestural recognition is still not quite
reliable and does not yet offer the cost/performance ratio necessary for smart glasses to transition
into a profitable consumer electronics item. While gestural recognition is not quite ready for prime
time, it is getting close.
Into the Future: The Brain-Computer Interface
Ultimately, the next step in the HCI paradigm for wearable electronics -- establishing the most
natural interface -- is to remove the proverbial middleman and develop a direct communication
pathway between the brain and an external device. The brain-computer interface (BCI) has long
been a sci-fi staple, and the first neuroprosthetic devices implanted in humans appeared in the
mid-1990s, but much R&D activity is still happening in this area. Most of these efforts have
focused on medical applications such as restoring damaged hearing, sight, and movement.
Recently, however, lower-cost BCIs have begun to emerge, aimed at R&D or gaming applications.
These could be transferred to smart glasses at some point, and we would not be surprised to see
such brain-computer interfaces appear in future generations of smart glasses in a few years' time
-- however, most smart glasses OEMs do not yet appear to be thinking about this.
The information contained in this article was drawn from the NanoMarkets report,
Smart Glasses: Component and Technology Markets: 2014
See more at: http://nanomarkets.net/market_reports/report/smart-glasses-component-and-technology-markets-2014