Presented 12 December 2013 at MoDevEast13
We are finally starting to think about how touchscreen devices really work: to design properly sized targets, to treat touch as different from mouse selection, and to create common gesture libraries.
But despite this we still forget the user. Fingers and thumbs take up space, and cover the screen. Corners of screens have different accuracy than the center. It's time to re-evaluate what we think we know.
Steven will review the current state of research on how people actually interact with mobile devices, present some new alternative ideas on how we can design to avoid errors and take advantage of this knowledge, and review work you bring so we can all come up with ways to improve real world sites and apps today.
33. Contact me for consulting, design, to follow up on this deck, or just to talk:
Steven Hoober
steven@4ourth.com
+1 816 210 0455
@shoobe01
shoobe01 on: www.4ourth.com
Editor's notes
Let me ruin everything you have been taught. >>>>>>> Phones/Tablets… Wooden Phone… Example Worksheets… Magazine or Book… <<<<<<<<<<<
There are no pages. When I hold up a piece of paper or open a book, that’s a page because that’s the full scope.
When we hold a digital device, these days, we see the viewport. The user has to scroll, so sees only part of the page you designed at once.
BREAK We’ve been living with fragmentation since at least the dawn of consumer computing.
Designing for the next platform should not be a burden. It shouldn’t even be your process. Design SERVICES that every platform can use, to future-proof your product and content.
BREAK Straight line, predictable processes are so rare you can start by assuming they do not exist.
People arrive at any page, and resume app use and processes at any time. This is real data from a site I work on. It’s pretty typical. No one starts at the home page. CONTINUES
You can’t design a simple flow, or predict the choices people will make. You have to embrace complexity and plan for failure.
Your users will change, and figure out ways to solve their problems you cannot predict.
People use different devices in different ways. Just one example is distance, and ways of holding. Tablets, for example, are held further from the eye than handset-sized devices, so you need to make text different sizes; and they are held differently, so touch targets need to be in different places.
BREAK This phone, and all smartphones, are not just about touch. They have huge numbers of sensors that make them aware of the user, and the world they live in.
Not to mention the radios: Bluetooth, WiFi, Mobile cellular,… and they are pretty much always online… Networks are just ways of making our computers bigger. Of making the world be one, large, distributed computing and storage system. You don’t use the cloud, you are part of it.
BREAK Not everyone grabs their phone like this.
But many, many different ways.
Who doesn’t know what a phablet is? ANSWER. They are not a nerdy niche item. This is my cab driver last night, with a Galaxy Note 3; they are close to 10% of the world smartphone & tablet market, and 70% in Korea! Use of phablets means comfortable one-handed reach is a lie, and probably an Apple fanboi lie specifically. Smart watches (point at wrist) are going okay (the Galaxy Gear has been a flop) but point to a relatively extreme form factor we may need to keep in mind. Don’t assume your favorite device is the only device that exists.
BREAK Looking closely at a bunch of serious academic research reveals that the way people touch devices is a bit more complex. But in ways that correspond neatly to some of the work we already do.
The accuracy people have corresponds to the position on the screen. Not the page. As you may recall from the first slide, consider the viewport instead. This appears to be about not just reach, but the cognitive psychology and physiology of your users. They simply prefer to read and touch things in the middle of the screen, and are actually (if subconsciously) aware that they are worse at touching the edges.
We can turn this data into usable charts, with larger interference zones at the top and edges, which neatly correspond to structural zones that already exist in much of our design. These are the rows for mastheads, tabs, the big content area of course, and the chyron at the bottom. If you aren’t getting the rows I refer to, I mean this.
(Point to Masthead, Tabs, Content, Chyron) And, if you look at the few squares I overlaid here, you can see how they correspond to the diagram of where people touch screens accurately. Or not. You can also see the red square where things are a bit too close together.
BREAK Everything I have said so far is subject to a little mythbusting. We should never confuse best practice with common practice. Best practices are based not on tradition, or what is easiest, but on users. We can optimize our designs to take advantage of what we know about people.
That users… Become familiar with OS conventions, so breaking them confuses people. Solutions are about convenience: if they can solve it a quicker, easier way, they will. While talking about context is no longer trendy, people do become distracted. You must plan for tasks to be interrupted and unattended, so don’t pop up critical notices that fade away after a few seconds. Users must trust the systems, networks and content. They don’t do so easily, not just because of fear in the news but because you so often erase their data and provide old or bad data. User choices are, likewise, important above all. Do what users say, not what we surmise. Even when I say to use sensors to detect, say, location: if the user overrides that, then keep it. Do not revert to what you think is best!
Today, let’s try out something quite tactical. Let’s do some interface design based on turning those key understandings of how people view and touch the screen to design…
And again I have a bullet list, but this one is easy. In order, when designing what fits in the viewport: Put things that people want to read, or the primary interaction, in the center. Provide room to scroll, so pages longer than the viewport can scroll that content to the center of the page. Make rows selectable, without requiring small buttons at the left and right sides. Limit the number of common controls in the masthead and chyron… because everything has to have plenty of space. I’ll provide specific guidelines, but “plenty” is easy to remember. For tabs, don’t hide content or require gestures to use them.
Now, take a key interface for your product, or a favorite (or least favorite) app or website you use a lot. We’re going to redesign it by the numbers. There’s paper around here somewhere…
Work as teams. If you don’t feel comfortable drawing, you can help contribute to the design anyway. Note that the handouts I gave you are at full scale. Always draw at full scale. This makes it easier to measure, and even to try out your design. Cut or fold to wrap it around a phone. Put it into a fake phone (example). Check: Can you read it? Can you tap it? Here, it also makes it easy to transfer designs from reality to the paper. Just stick your phone next to the drawing.
So you can follow along…
I’ll wander and see what you are doing, and we’ll talk through the details, but for anyone who has another question, or really refuses to do the exercise: What did I say that’s baffling you? What other, but related, topics are confusing you?
END OF EXERCISE. GATHER A FEW UP, OR SOMETHING. You aren’t done when you get to step 6. IT’S A CYCLE, an endless loop of actions you perform, as you design and as you revise. You can use the checklists and charts (and do bring the paper home so you can put it on your cube wall) but it’s best if it’s second nature: when a design revision comes in, you automatically evaluate to see if it fits and works…
…Examples to use to make it easy to show on screen…
Questions. >> GIVEAWAY, 2 books to first two questions???… SAVE ONE FOR PETE <<
If you miss these addresses, just Google my name and you’ll find me.
Visual targets are important. As much as no-affordance interfaces and secret gestures are a trendy way to insist you are making delightfully surprising experiences, making simple click targets makes everything just work. Visual targets must: Attract the user’s eye. Be drawn so that the user understands that they are actionable elements. Be readable, so the user understands what action they will perform. Be large and clear enough that the user is confident they can easily tap them.
Angular resolution matters, and that’s calculated based on the distance between the screen and the viewer’s eyeballs. Get your cameras out…
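To make the angular-resolution point concrete, here is a minimal sketch of the arithmetic: the physical size an element needs in order to subtend a given visual angle grows with viewing distance. The 0.3° angle and the 300 mm handset / 400 mm tablet distances below are illustrative assumptions, not figures from the talk.

```python
import math

def physical_size_mm(angle_deg: float, distance_mm: float) -> float:
    """Physical size that subtends a given visual angle at a viewing distance."""
    return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

# Hypothetical figures: a handset read at ~300 mm, a tablet at ~400 mm.
handset = physical_size_mm(0.3, 300)  # ~1.57 mm
tablet = physical_size_mm(0.3, 400)   # ~2.09 mm
print(round(handset, 2), round(tablet, 2))
```

The ratio is just the ratio of distances: text on the tablet must be about a third larger to look the same size to the eye.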
These are increasingly going to be important. Maybe next year I’ll bring some info on designing for kinesthetic gesture. But today we’re going to talk about the touchscreen itself. And since it’s the most common thing, we’re talking about capacitive touch. Resistive is the one where you simply apply pressure, and a grid of conductive leads makes contact, so the device knows which point you touched. These are still being built, even for consumer devices, like tablets or seatback entertainment systems and so on. Capacitive touch uses the electrical properties of your body. Your finger acts as a capacitor whose presence in the system can easily be measured by little nodes, in a grid, on several layers between the display screen and the protective plastic or glass. But it is not perfect. There is math, and interference, and tradeoffs in thickness, weight, cost, and optical clarity that get in the way of increased precision.
A year or two ago, Motorola put a handful of devices in a little jig so they could precisely, robotically control the pressure, angle and speed of touch sensing. This is some of them. Even the much-loved iPhone is imperfect, with notable distortion at the edges, and actually a total inability to get to the edge on some sides. Look at the stairstep pattern on the Droid. That’s a problem with the calculations or something that predicts the precise position between the sensors. The pitch of the steps is clearly the grid size.
As it turns out, it’s not really important how big our fingers are, except insofar as they obscure part of the screen, which is something else. Our finger squishes against the screen and only the part that gets flattened is detected. My research indicates this is pretty much the same for everyone, with some exceptions: children press really hard, for example, so have a larger relative contact patch. There is also some variability based on task, so people can use fingertips and press lightly.
The centroid is just the geometric center of an area. The way the electrical conductivity of the capacitive touch screen works, the part that is always sensed is the centroid of that contact patch. What matters is the Circular Error of Probability or the pointing accuracy of people with their fingers. There’s a bit of a range here, depending on the user’s attention, care and the environment in which they operate. Not to mention the ability of the user themselves.
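As a minimal sketch of what "centroid" means here: it is simply the mean position of the sensed points. The `patch` coordinates below are hypothetical grid readings for illustration, not real sensor data.

```python
def centroid(points):
    """Geometric center (mean x, mean y) of the sensed contact-patch points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# A hypothetical flattened contact patch, as (x, y) grid readings:
patch = [(10, 10), (12, 10), (11, 12), (11, 11)]
print(centroid(patch))  # the single point the screen reports for the whole patch
```

However large or oddly shaped the contact patch, the screen reduces it to this one reported point.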
What really matters is interference. Why are you worried about touch targets and taking notes right now? To avoid accidental clicks. Interference is what happens when two or more targets are close enough together that they all fall into a single CEP. The user might hit any of them with a single selection. If you can only remember one number to check to assure your design is touch-friendly, make it this one. 8 mm if you have to, 10 mm if you possibly can. More is generally better if you have the room.
Defining spacing between buttons won’t do it. Your links and buttons are so variable that what you need is a guideline for interference alone. As you see, you can check for it digitally, if you set Photoshop or Fireworks to the right pixel density. Hint: it’s NEVER 72 dpi. It’s different for every device.
…or put your designs on the actual device screen and measure it directly to make sure. You can use the 8 and 10 mm circles from any old circle template you get at the art supply store (or these days, Amazon), but I made up my own little tool I keep in my pocket, because this is so important.
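The digital check can be sketched like this: convert the 8 mm (minimum) / 10 mm (preferred) interference guideline into pixels at the device's real pixel density, then compare center-to-center distances. The 326 ppi figure and the target coordinates are illustrative assumptions, not measurements from the deck.

```python
import math

def mm_to_px(mm: float, ppi: float) -> float:
    """Convert millimetres to pixels at a given pixel density (pixels per inch)."""
    return mm / 25.4 * ppi

def interferes(center_a, center_b, ppi: float, min_gap_mm: float = 8.0) -> bool:
    """True when two target centers are closer than the minimum interference gap."""
    return math.dist(center_a, center_b) < mm_to_px(min_gap_mm, ppi)

# Hypothetical 326 ppi screen: 8 mm works out to ~103 px, so two buttons
# only 90 px apart center-to-center fall inside one Circular Error of Probability.
print(interferes((100, 40), (190, 40), ppi=326))  # True: too close
```

The same check at 10 mm is stricter, which is why "10 mm if you possibly can" is the safer rule of thumb.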
Since everyone loves actionable information, what’s better than numbers? Here are all the numbers I had in the deck in one place, and it might even be up for a minute so you can all take photos, if I am not out of time yet. I will be posting this to Slideshare in the next few minutes as well, and all the info I presented is published with sources in a couple of articles, so look it up or ask me for the links.
For touch, technology and human factors, you’ll see I already gave you the numbers. They are basically the same. For grasping? The best data I can get are from surveys, which of course are filled with bias and the few I have seen have tiny (or unspecified) numbers of responses. I am not even sure how to get better data as these are used in the home more than anywhere. If designing for an existing user base, I’d observe use in the classroom or office or wherever they are used and work off that data. If it’s solid, try to share it with us all.
But how about using the information we do have to help design? We can make a lot of general decisions, to settle on adaptive techniques to make sure scale and orientation are well accounted for across the range of devices. But more tactically, how do we decide where to put items on the page? Let’s take something ubiquitous and heavily used. The Back button. On iOS it’s way up in the corner…
… where the touch charts indicate that it cannot be reached for your one-handed user, with their thumb.
Maybe this. Maybe they shift their grip so they can reach anyway. Maybe they use both hands, or switch to their left hand for a moment. OR maybe this video
Or maybe they cradle with the other hand so they can reach with the thumb. Luke Wroblewski shared this video with me, as he’s less scared of taking video in public.