As gestural interfaces become mainstream, they will change the way we work and play. Usability professionals will need new skills, new methods, and new ways of communicating ideas.
Presentation for Perth (Australia) UPA Chapter in Formation, September 2009.
Usability of Gestural Interfaces
1. Usability of Gestural Interfaces. UPA Perth Chapter in Formation, September 2009 meeting. Barbara Thomas, Clarity Web Planning, barbara@claritywebplanning.com.au
98. “One of the things our grandchildren will find quaintest about us is that we distinguish the digital from the real” – William Gibson in a Rolling Stone interview, 2007, quoted by Dan Saffer (2008)
100. REFERENCES

Video examples of gestural interfaces
Jeff Han TED Demo 2007: http://www.youtube.com/watch?v=zwGAKUForhM
Minority Report: http://www.youtube.com/watch?v=NwVBzx0LMNQ&NR=1
Perceptive Pixel multitouch: http://link.brightcove.com/services/player/bcpid769469373?bctid=769654555
Sixth Sense: www.pranavmistry.com/projects/sixthsense
Windows7 Touch: http://video.msn.com/video.aspx?mkt=en-us&vid=891c68b3-a534-4159-b6b2-8e4ac56b6890
Xbox Project Natal: http://www.youtube.com/watch?v=g_txF7iETX0

Book
Saffer, D. (2008). Designing Gestural Interfaces. O'Reilly Media.

Website
Gartner. Hype Cycle for Human-Computer Interaction. www.gartner.com/DisplayDocument?doc_cd=169747

Articles and research papers
Akers, D. (2006). Wizard of Oz for participatory design: inventing a gestural interface for 3D selection of neural pathway estimates. Conference on Human Factors in Computing Systems (pp. 454-459). Montreal.
Bellotti, V. (2002). Making sense of sensing systems: five questions for designers and researchers. CHI '02: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 415-422.
Hummels, C., & Stappers, P. J. (1998). Meaningful gestures for human computer interaction: beyond hand postures. Third IEEE International Conference on Automatic Face and Gesture Recognition (FG '98), p. 591.
Loke, L., Larssen, A., & Robertson, T. (2005). Labanotation for design of movement-based interaction. Proceedings of the Second Australasian Conference on Interactive Entertainment. Sydney.
McGookin, D., Brewster, S., & Jiang, W. (2008). Investigating touchscreen accessibility for people with visual impairments. Proceedings of NordiCHI 2008 (pp. 298-307). Glasgow.
Nielsen, M., Störring, M., Moeslund, T., & Granum, E. (2004). A procedure for developing intuitive and ergonomic gesture interfaces for HCI. Lecture Notes in Computer Science, 2915/2004. Springer, Berlin.
Oviatt, S. (1999). Ten myths of multimodal interaction. Communications of the ACM, 42(11).
Reeves, L. M., Lai, J., Larson, J. A., Oviatt, S., Balaji, T. S., Buisine, S., et al. (2004). Guidelines for multimodal user interface design. Communications of the ACM, 47(1), 57-59.
Sinha, A. K., & Landay, J. A. (2002). Embarking on multimodal interface design. Fourth IEEE International Conference on Multimodal Interfaces.
W3C. (2002, December 4). Multimodal interaction use cases. Retrieved August 16, 2009, from W3C NOTE: www.w3.org/TR/mmi-use-cases
Waibel, A. (2005). CHIL computing to overcome techno-clutter. ACM International Conference Proceedings Series: Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence, 121. Grenoble.
Editor's Notes
Gestural interfaces are certainly not new. But gestural interfaces have only recently begun to emerge from the research labs and gamester dens into public awareness and the domain of mainstream applications. Over the next few years, we can expect to see gestural interfaces play an increasing role in the way we design and use systems.
This presentation is a broad introduction to gestural interface usability. We'll look at:
- what are we talking about?
- what will GIs mean for users?
- what will GIs mean for developers and usability practitioners?
- conclusions
The thing I like about this definition is that it captures the idea that we are part of a gestural interface without being aware of it. The promise of GIs is that we will simply interact with our environment in ways that seem natural to us.
Here’s a more formal definition that speaks of the mechanics of GIs.
The other useful thing to realise about GIs is that we can usually interact with them in multiple ways.
This is a gestural interface. You walk towards a door and it opens. But when the early Star Trek episodes were filmed back in the 1960s, self-opening doors were just a twinkle in a sci-fi enthusiast's eye. To make the Star Trek doors seem so cool, they were pulled and pushed manually behind the scenes.
Today, of course, we routinely hurtle towards plate glass doors in shopping centres and offices. We simply expect them to open. We don't even think about it.
Here, then, are two common characteristics of gestural interfaces:
1. GIs usually begin as an attractive but impossible idea – in fiction, in games and entertainment, or in research labs. Eventually, they change the way we interact with our world.
2. Good gestural interfaces become almost "invisible". They just become part of the environment in which we live.
This is what my kids wanted for Christmas about 2 years ago. It’s a software version of the garage band. One of the cool new features at the time was the tilt shift. You tilt the guitar in a rakish, rockstar gesture and the point scoring doubles – double the rewards and double the penalties.
A year later, this is what my kids wanted for Christmas. Now we have a smorgasbord of gestural commands to play with. We’re all tilting and shaking, pinching and flicking as if we’ve done it all our lives.
This is what my kids want for Christmas next year – this year, if they could get it! This is the promo for Microsoft's Project Natal for Xbox. This example illustrates how gestural interfaces become intuitive and compelling when they're based on gestures we already use to do the same thing in real life. I'm sure there are much more innovative interfaces out in the gaming and research worlds, but I chose these examples because they illustrate how quickly GIs are becoming commercialised and part of the mainstream.
The commercialisation of games and family entertainment has become a strong driver of innovation. It's a lucrative business. Universities now offer degrees in games technology, and we see the widespread application of gaming software in industry, medicine, education and defence.
As the latest cool thing becomes part of everyday mainstream experience, we typically see user expectations skyrocket – for all kinds of devices and software. Whether we approve or not, fun at home motivates (and indirectly funds) innovation at work. We can expect GIs to follow this pattern.
Back in 2002, we thought this preppie boy was cool. We now know it wasn't Tom Cruise we salivated over . . . it was just his gestural interface.
In addition to the drive to entertain us, a lot of good work is going on in research labs. Often, it's easier to imagine immediate applications for GIs driven by research – the need is built in. So, here's an example of rapid progression from fiction to research labs to reality.
Just 4 years later, in 2006, Jeff Han caused a stir when he demonstrated how a multitouch interface would change the way we manipulate maps, large data sets and photos. Jeff Han was added to Time Magazine’s 2008 list of the ‘100 most influential people in the world’ and his company, Perceptive Pixel, was snapped up for US Military research projects (going over to the dark side, according to critics).This kind of interface was commercialised in Microsoft Surface and was used by CNN during TV coverage of the 2008 US elections.
And here's the cheesy mainstream commercial version. Not unexpectedly, Windows7 has attracted a lot of criticism, and Mac users claim they have had these capabilities for ages with a touchpad.
However, the point's not so much whether it's cool or new. It's more a reminder that the marketing promos are already out. It won't be long before this is the de facto standard for interacting with most of the applications we use every day. Are we ready to design for and evaluate the usability of these interfaces?
To conclude these examples, let’s look at a fairly recent demonstration of the next GIs to come out of the labs. This is a demo of a wearable interface that combines touch screen and free form GIs with portability in mind. I think this gives a good sense of the potential ubiquity of gestural interfaces in our everyday environment.
It's always a good idea to ground any discussion about the implications of a new technology by being aware of the hype surrounding it. Gartner produces hype cycle diagrams to help us recognise the wave of over-enthusiasm that accompanies innovation. If the technology in question is at the peak of the hype cycle, then beware of overly optimistic predictions about how it will transform our lives.
We can see in Gartner's 2008 diagram that gestural interfaces are currently riding the hype peak. This means "expectations may outweigh benefits realised" – or, more bluntly, your mileage may vary. (This is just a rough representation of the detailed Gartner analysis – it's well worth checking out the original if you haven't seen it.)
Here are some ways in which widespread use of GIs could affect the way we work and play. This isn't a complete list, or even a coherent one. It's just a hint of how broad the implications might be.
Along with a shift in the way we work will come a shift in the interface metaphors that mean something to us. Gestural interfaces are likely to replace the WIMP interface (windows, icons, menus, pointing device), and along with it the desktop metaphor (documents, folders, notepad, desktop calculator). New UIs will be 3D rather than 2D and will be based on metaphors that represent complete environments. Everything in the vicinity will become part of the system.
Source: Saffer, D. (2008)
Convergence is a well-established trend with ICT technology generally. GIs are highly likely to converge with each other and with existing technologies. For example, we're likely to see convergence in technologies for voice recognition, virtual reality, ubiquitous computing, RFID and near field communication. What will this mean for us as users?
- Gestural interfaces will become complex as more and more devices are added.
- Interfaces will become more nuanced about interpreting our gestures. For example, a touchscreen may be able to discern when you are annoyed and adjust accordingly.
- We will gesture with smart objects. Developments in haptics will provide more feedback and embodied functionality in objects.
- Robots and cobots will augment our physical capabilities, and we'll increasingly use gestures as a design tool. Imagine designing with 3D hologram imaging and gestural input, for example.
- With converging technologies, specialised applications and products will emerge rapidly.
Source: Saffer, D. (2008)
Here are some random examples of gestural interfaces that are already in use. Imagine the possibilities for convergence and widespread use of these ideas.
Let's start with the CNN Wonder Wall, which captured the public imagination in 2008 when millions of TV viewers tuned in to the US elections. This is just an application of the multitouch interface Jeff Han demonstrated two years earlier.
In the public consciousness, this technology went from attractive but impossible fiction (a la Minority Report) to the new way we do things in just 5-6 years. Of course, that's not to deny the years of research that went before. However, it's another indication that once we ordinary folk begin to see new interfaces emerge from the labs, rapid adoption and new applications are sure to follow.
Here's an application for free-form gestural interfaces. Surgeons need imagery and information during surgery but don't want to risk contamination by touching equipment unnecessarily. Gestural input is ideal. In this case, the system was used during "in vivo" neurosurgical brain biopsy.
Hyperinstruments map expressive gestures in real time to augment a virtuoso performer's sonic range. For example, a player can affect other sounds (like human voices or electronics) by varying bow speed and pressure.
In 2006, Tod Machover composed the first orchestral music for hyperstrings. Virtuosos like Yo-Yo Ma and Matt Haimovitz regularly perform with hypercellos.
Inspired by scuba diving, Osmose is a full body immersion 360 degree virtual reality interface using a head-mounted display. The primary forms of input are breathing and balance. Breathe in to float upwards. Breathe out to fall. Change direction by changing your centre of balance.On the left is a tree and a pond experienced during a live Osmose fly-through. On the right is the grid through which a player experiences moving through a forest.It’s not new. Osmose has been used in the virtual reality community since 1995. However, this is the sort of interface that GIs may bring out of the gaming world into mainstream applications.
Paradigm shifts in technology almost always raise new concerns about privacy, security, ethics, human dignity, and human identity.
Is it reasonable for a stranger I meet in the course of my job to see everything that's ever been posted online about me? What about the compromising party snaps my best friend's brother's girlfriend's cousin posts?
The physicality of the interface raises new threats to accessibility and equity. Would working all day with gestural interfaces just be too tiring for older workers or wheelchair-bound workers, for example?
Then there are societal, cultural and even metaphysical questions. Are our bodies our own or are we just part of the machine? Can using a poorly designed GI erode our dignity if we have a less than optimal age, shape, or level of physical ability? How well will GIs cater for diverse cultural and religious beliefs about acceptable body movement and dress?
This is a huge topic in its own right. In our roles as user advocates, the usability community should speak up on these issues. Just because we can do something with technology doesn't mean we should.
Multi-touch and gestural interfaces are now offered on a large scale to millions of consumers. That seems like good news for us. But there is always a downside to allowing commercial profit-making to drive innovation. Quite apart from the question of what gets built and what doesn't, we can expect that companies will attempt to protect their market share in ways that might not be in our best interest.
Standards: As GI designers are just beginning to build the foundations of a gestural language for products, individual companies (Apple, Perceptive Pixel) are rapidly creating their own gestural standards and promoting them with their products. As in previous technology waves, users might find themselves confused and the losers in the fallout from a standards war.
Patents: Gestures themselves cannot be patented. But can a gesture tied to a system action (an interactive gesture) be patented? Apple has patented over 200 items on the iPhone alone, and is trying to patent "pinch to shrink on a touchscreen mobile device". If it wins, all manufacturers will be forced to develop different gestures and to patent them. Again, this will be bad news for usability.
The answer is probably both.
For many people, a gestural interface can be a physical challenge. For example, a simple hand movement requires joint mobility, muscle power, muscle tone and no involuntary movements.
ATMs, mobile phones, ticket machines and information kiosks increasingly use touchscreen interfaces. But touchscreen accessibility is currently poor for people with visual impairment:
- controls can't be easily identified by touch
- the position of controls on a touchscreen is dynamic
Age may also be a limitation for an increasing number of workers. A German study found that older users experience more difficulties with touchscreens because the natural, age-related decline in motor skills reduces precision and slows down movements.
GI accessibility guidelines are essential. For example, for older users:
- make targets large
- keep gestures simple
- limit the number of gestures
- do not use short impact gestures, such as tap
There are many more examples of accessibility considerations for gestural interfaces. We will probably also need a new vocabulary for describing gesture-related accessibility issues.
Gestural interfaces have the potential to improve accessibility, but it's not as easy as it sounds.
GI enthusiasts see the potential to create a kind of universal interface that could work for everyone. Multimodal input could create an "everyperson information kiosk" using tailorable input and output modes. People with impairments could choose the kind of interface that they're best able to use. It would be possible, for example, to build an interface that requires only gross motor skills or facial recognition.
Critics of this utopian view point out that it's not that simple. In a conventional interface, a sight-impaired user can substitute voice for text and alt-text (images) without losing information. However, with multimodal interfaces, each mode is qualitatively different. Modes are not simple analogues of each other. Also, in practice, designers need to find a balance between generalising for all users and tailoring for a specific user.
Are adaptive interfaces the answer? Adaptive interfaces change as you use them, working with feedback from the user about the recognition result. On the downside, adaptive interfaces have a reputation for becoming inconsistent and less predictable over time. Users need full control over how the software adapts.
This next section plunders liberally from Dan Saffer's book, Designing Gestural Interfaces (as do previous sections). Four stars from me for this book – a must-have for GI designers and developers and a worthwhile read for anyone working in usability.
Let's start with some broad characteristics of good gestural interfaces – interfaces in general, really.
Discoverable: The interface must indicate how to interact with it through its properties. For example, a button pushes. These principles were first expounded by Don Norman in "The Design of Everyday Things".
Trustworthy: The interface needs positive attraction to overcome user suspicions.
Responsive: Feedback is essential.
Appropriate: The interface must fit the user's culture, situation, and context. It must never offend.
Meaningful: It must meet the real needs of users.
Source: Saffer, D. (2008)
Smart: The interface should remember things we don't, detect complicated patterns, perform rapid computations, etc.
Clever: The interface should predict user needs, match actions the user is trying to perform, and adapt.
Playful: A great interface makes users relax, engages users, and reduces errors.
Pleasurable: The interface should be agreeable to the senses.
Good: It must preserve human dignity. Use must not be restricted to young and healthy users only. It should be good for culture and the environment.
Source: Saffer, D. (2008)
Natural: The most natural designs match system behaviour to gestures humans might already make. But how natural does that need to be? For example, before the iPhone, when did we ever "pinch" to scale images? Yet the pinch seems to be a successful interactive gesture.
Intuitive: Intuitive means easy to use without learning. To design systems that can process multimodal input reliably, we need to understand how people naturally integrate and synchronise interaction modes. We need more cognitive psychology research!
Enhancing: A gestural interface should provide better tools for controlling complex information than just a mouse and keyboard. Multimodal input should combine the strengths and overcome the weaknesses of individual modes. It should reduce errors and support entirely new capabilities.
Expressive: Recognition of emotions such as uncertainty is essential for creative interpretation.
Adaptive: The system collaborates visually with the user rather than assigning a predetermined meaning to a gesture. (Although, as discussed previously, this often isn't as simple in practice as we might wish.)
Portable: All applications are becoming more portable. The "Apple effect" demonstrates how powerful and intuitive a multitouch trackpad interface can be in a mobile device.
Source: Saffer, D. (2008)
Synchronise multiple modalities:
- e.g. point and talk
- redundant confirmation
- parallel communication – complementary use of modalities
Degrade gracefully:
- supplementary modalities degrade gracefully
- complementary modalities need attention
- deal with changing capabilities – e.g. a change to a mobile device with different bandwidth
Share a common interaction state:
- switching modalities needs a shared interaction state
- history helps rapid task completion
- multi-device interaction
- distributed multimodality
Be predictable:
- eliciting correct input
- the primary user question in a GI is "what can I do?"
Adapt to the user environment:
- needs and abilities of users
- abilities of the connecting device
- bandwidth
- constraints from the environment
Source: Saffer, D. (2008)
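The "share a common interaction state" guideline can be illustrated with a tiny sketch: both a touch handler and a speech handler read and write one state object, so the user can switch modality mid-task without losing context. All names here are illustrative assumptions, not an API from the source.

```python
class InteractionState:
    """One state object shared by every input modality, so switching
    from touch to speech (or back) never loses the user's context."""
    def __init__(self):
        self.selection = None
        self.history = []  # past actions, supporting rapid task completion

    def select(self, item, modality):
        """Record a selection regardless of which modality produced it."""
        self.selection = item
        self.history.append((modality, "select", item))

# Either modality updates the same state:
state = InteractionState()
state.select("photo_42", modality="touch")   # user taps a photo . . .
state.select("photo_42", modality="speech")  # . . . or says "select that photo"
```

The history list is what lets the system support rapid task completion and multi-device interaction: any handler can inspect what was done, and in which mode, before deciding how to interpret the next input.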
Many traditional interface conventions apply but some work differently. For example:
Cursors: Often unnecessary because a cursor isn't consistently pointing at something. Users don't lose track of their fingers. Cursors are essential for gaming in free-form gestural interfaces, however.
Hovers and mouseovers: Seldom used. Some systems can detect a hand hover.
Double click: Harder to use. Need to set a threshold to recognise a double click as something distinct from two touches with a rest.
Right click: Most gestural interfaces don't offer an alternative menu. They could, but it doesn't fit the philosophy of direct manipulation.
Drop-down menus: Don't work well, for the same reasons as right click.
Source: Saffer, D. (2008)
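The double-click threshold is easy to make concrete. This is a minimal sketch, not code from the source: the class name and the 0.3-second window are assumptions for illustration. A production recogniser would also check that the two taps land close together spatially.

```python
# Minimal sketch of telling a double tap apart from two separate taps.
# The 0.3 s window is an illustrative assumption; tune per device and user group.
DOUBLE_TAP_WINDOW = 0.3  # maximum seconds between the two taps

class TapClassifier:
    def __init__(self, window=DOUBLE_TAP_WINDOW):
        self.window = window
        self.last_tap_time = None  # timestamp of the previous unpaired tap

    def on_tap(self, timestamp):
        """Return 'double' if this tap completes a double tap, else 'single'."""
        if (self.last_tap_time is not None
                and timestamp - self.last_tap_time <= self.window):
            self.last_tap_time = None  # consume the pair
            return "double"
        self.last_tap_time = timestamp
        return "single"
```

Note that a real implementation would report "single" only after the window closes; otherwise the first tap of every double tap also fires the single-tap action.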
Cut and paste: Not common on current gestural interfaces but may be more so in future.
Multiselect: Limits due to human anatomy, but there are ways around it.
Selected default buttons: Can only highlight predicted behaviours. Humans still need to make a gesture.
Undo: Hard to undo a gesture. Better to provide a reversing action than to rely on undo.
Source: Saffer, D. (2008)
A lot of design patterns are already established for GIs, particularly for touchscreen interfaces. Here are two sets of patterns, for example:
GI patterns for touchscreens:
- tap to open/activate
- tap to select
- drag to move object
- slide to scroll
- spin to scroll
- slide and hold for continuous scroll
- flick to nudge
- fling to scroll
- tap to stop
- pinch to shrink / spread to enlarge
- two fingers to scroll
- ghost fingers
Patterns for free-form gestural interfaces (these gestures are typically performed in space, not on a touchscreen or solid surface):
- proximity activates/deactivates
- move body to activate
- point to select/activate
- wave to activate
- place hands inside/beneath to activate
- rotate to change state
- step to activate
- shake to change
- tilt to move in a direction
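To make the "pinch to shrink / spread to enlarge" pattern concrete, here is a hedged sketch of how a pinch gesture commonly maps to a zoom factor. The function name and coordinate convention are illustrative assumptions, not from the source.

```python
import math

def pinch_scale(p1_start, p2_start, p1_end, p2_end):
    """Zoom factor implied by a two-finger pinch/spread: the ratio of the
    final to the initial distance between the two touch points.
    Values below 1 shrink the content (pinch); above 1 enlarge it (spread).
    Each point is an (x, y) tuple in screen pixels."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    return dist(p1_end, p2_end) / dist(p1_start, p2_start)
```

For example, fingers moving apart from 100 px to 200 px double the content size; moving from 100 px apart to 50 px apart halves it. Real recognisers also track the midpoint between the fingers so the content zooms around the gesture, not around the screen origin.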
The W3C Multimodal Interaction Activity working group is developing specifications for web applications that use multiple modes of interaction. In particular, the group is looking at:
- multimodal architecture and interfaces (MMI)
- the Extensible MultiModal Annotation markup language specification (EMMA)
- an XML language for digital ink traces (InkML)
In its May 2009 statement, the MMI group indicated that it may also incorporate the Emotion Markup Language Incubator group.
For more, see www.w3.org/2002/mmi/Activity.html
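As an illustration of what an EMMA annotation looks like, here is a small hand-written fragment. The namespace and the emma:interpretation element follow the W3C EMMA drafts of the time, but the attribute values and the payload inside the interpretation (the gesture element) are application-defined and purely hypothetical.

```xml
<emma:emma version="1.0"
           xmlns:emma="http://www.w3.org/2003/04/emma">
  <!-- One interpretation of a user's touch input, with a recogniser
       confidence score; the <gesture> payload is application-defined. -->
  <emma:interpretation id="interp1"
                       emma:medium="tactile"
                       emma:mode="touch"
                       emma:confidence="0.87">
    <gesture type="flick" direction="left"/>
  </emma:interpretation>
</emma:emma>
```

The point of EMMA is exactly this separation: the recogniser describes what it thinks the user did (with confidence and modality metadata), and the application decides what to do with it.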
GI designers need to know what research is telling us about how people use gestural interfaces. For example:
- Users demonstrate a strong preference to interact multimodally, especially for spatial information. However, this doesn't mean that users always use multimodal commands. Research indicates that people use a single interaction mode about 20% of the time. Natural communication patterns mix multimodal and single mode, and preferences can be predicted from the type of action performed.
- Speaking and pointing are not the most dominant modes for selecting objects. Research shows that when people talk to each other, pointing gestures make up only about 20% of all gestures people use.
- Speech and gesture are highly inter-related and sometimes synchronised during multimodal interaction. This doesn't mean they are simultaneous. This also varies widely between cultures.
- Speech may not be the primary input – it may not be the first input, and it may not be the most important.
- Multimodal speech is different from unimodal speech – shorter and simpler.
- Multimodal input is not always redundant – it is usually complementary.
- Multimodal communication helps users to recognise and recover from errors better than unimodal communication (mutual disambiguation).
- Different users integrate multimodal communication differently. We need to learn from each user's pattern of interaction.
Source: Oviatt, S. (1999)
Kinesiology, the science of human movement, is not a typical software developer skill set. However, we can at least make ourselves aware of the possibilities and limits of the human body. For example, here are just some of the things we should know if we want to design gestural interfaces for input from fingers and hands:
- people have different shapes and sizes of fingers – e.g. children, older adults, obese people
- long fingernails and gloves can make it harder to perform some gestures
- 7-10% of people are left-handed, so provide a means to flip controls and adjust layout
- fingers are less accurate than cursors. An accidental drag-and-drop gesture on a California election touchscreen ballot caused the whole system to crash
- size restrictions on mobile touchscreens make accuracy even more of a problem – use iceberg tips or adaptive targets
- fingers don't float in space – hands, wrists and arms can also get into the interface, so place essential features carefully
- important controls need to be large and close to the user
- fingers have natural oils that make touchscreens slippery and smudged
Source: Nielsen, M. (2004)
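The "important controls need to be large" advice can be quantified. As a rough sketch (the 9 mm figure is a commonly cited finger-target guideline, not a number from Nielsen's paper, and the function name is an assumption), converting a physical target size into pixels for a given screen density looks like this:

```python
MM_PER_INCH = 25.4

def min_target_px(target_mm, screen_ppi):
    """Pixels needed so a touch target is at least target_mm wide
    on a screen of the given pixel density (pixels per inch)."""
    return round(target_mm * screen_ppi / MM_PER_INCH)

# A 9 mm target on a 254 ppi phone screen needs about 90 px;
# the same physical target on a 160 ppi screen needs only about 57 px.
```

This is why specifying target sizes in raw pixels fails across devices: the physical size of a finger is constant, so the pixel count has to scale with screen density.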
User-centred, participatory methods will become critical for understanding how people gesture and how they combine different modalities. The question is how to adapt our current prototyping and inquiry methods to achieve this. One research group looked into adapting the classic Wizard of Oz prototyping method to enable users to invent and test their own gestural input (Akers, D., 2006).
They started with a contextual enquiry – observing neuroscientists using medical imaging techniques to locate bundles of nerves in the brains of live patients. They noted that neuroscientists need to browse and analyse complex pathways. The researchers then conducted a brainstorming session with users, asking them to imagine realistic scenarios for selecting pathways.
They then used a prototype (with a Wizard of Oz-style hidden control panel) to respond to gestural input from users. Users designed two tools – shape matching and touch – combined with four selection modes: new selection, add to selection, remove from selection, and intersect with selection. Participants explained the meaning of a gesture before the wizard implemented it.
Users invented gestures for the obvious things and came up with new ones – e.g. drawing a shape in a plane to include an anatomical feature, an easy way to shrink a selection, a context slider. None of these new requirements had been discovered before using the prototype.
Given that user participation in GI design is critical, how can we best communicate requirements and design decisions during development?
For example, according to Victoria Bellotti, the things we need to be clear about from a human-computer interaction point of view are:
Address: How do users direct communication to a system?
Attention: How does the system establish that it is ready for input?
Action: What can be done with the system?
Alignment: How does the system respond?
Accident: How does the system avoid or recover from errors or misunderstandings?
Source: Bellotti, V. (2002)
Agile development approaches often use prototypes or the latest release of the final product as requirements documentation. But with GIs you might need to build very hi-fi prototypes to achieve a common vision. This could be prohibitively time-consuming and expensive.
Perhaps modelling and diagramming techniques will return to favour because they can be a cheaper and quicker way to explore early design ideas. One study of multimodal input designers found that they typically use familiar sketching techniques in the early stages of design – especially storyboards and scenarios. As we saw in the previous slide, Wizard of Oz simulations can be adapted successfully to designing GIs.
As designers, we need to talk to each other and to clients about how a gestural interface will work while it’s still at the concept stage. We need a vocabulary for that. We could perhaps key into the existing extensive vocabulary of technical terms to describe movements. For example: flexion and extension, rotation, abduction and adduction, internal and external rotation, elevation and depression, protraction and retraction, supination and pronation, plantarflexion and dorsiflexion, eversion and inversion, opposition and reposition.But talking about gestures goes beyond jargon. To design natural gestural interactions, we need to understand and be able to talk about the roles gestures play in communication.
For example, this is an existing scheme for classifying the role of a gesture in communication:
Space: These gestures include using, grasping, describing shape, pointing, manipulating, describing function, metaphor.
Pathic: Gestures that support process but have no intrinsic meaning. Examples are jabbing the air for emphasis or nodding to maintain discourse.
Symbols: Symbolic gestures convey culturally significant concepts, such as V for victory, or rubbing thumb and fingers together for money.
Affect: The emotional payload of gestures.
Deixis: Iconic gestures to convey the semantic meaning of accompanying speech. An example may be hands spread palms upwards to denote "I don't understand" (or "What the . . . ?" in popular parlance).
Source: Hummels, C., & Stappers, P. J. (1998)
We need to agree on a means to talk about the purpose or intention of gestures that is meaningful and workable for designing and evaluating gestural interfaces.
UI designers and developers have traditionally drawn diagrams – wireframes, site maps, UML diagrams and others – to work through ideas conceptually in the early stages of design. We will probably want to extend these traditional techniques for use with GIs.
But diagramming movement is not as simple as it first seems. On a simple sketch, the movement of lightly touching someone lovingly on the cheek is almost the same movement as slapping someone on the face. Dance and choreography use a number of movement notations, but they're all difficult to write and to read.
Designers of gestural games sometimes use Labanotation diagrams, for example, to describe the physical movement of the body and the characteristics of motions along four axes:
- space: direct or indirect, touching something or not
- weight: strong or light
- time: sudden or sustained
- flow: bound or free, self-contained or ongoing
We should draw on the accumulated wisdom of other disciplines (as we have always done), but we will probably need lighter-weight methods of sketching gestures for gestural interfaces.
Source: Loke, L. (2005)
Well, what does this all mean? Without overlooking the hype cycle caveat, I think it means:
- GIs are coming . . . soon
- GIs have the potential to make the way we interact with systems easier, more natural, and more user-centred
- they will also bring with them a raft of new usability, accessibility and social concerns
- developers will need new interface design skills and new standards
- GIs will underscore the importance of usability principles and will change the way we do usability – usability professionals will need new skills, new methods, a new vocabulary and new heuristics . . . at the very least
So, no change there, then.