Autocomplete is becoming more and more assistive, to the point that it could be viewed as a recommender system for writing, and indeed, for social relationships. Enabled by complex algorithms, this transformation raises many problems: language regularization, the "filter bubble" effect, privacy issues. How could design bring solutions to these problems? That is the question I am asking in my master's thesis.
Presentation of the current state of my master's thesis (Master Media Design at HEAD Geneva) at the Junior Research Conference, Zürich, Switzerland.
The font is Sporting Grotesque, created by Lucas Le Bihan for Velvetyne, a free and open source type foundry.
Hello everyone. I am Mathilde, from the Master Media Design program at HEAD Geneva. My thesis is about conversing in the era of autocomplete. I am currently at the beginning of my field research.
So first, what is autocomplete? It's the "predictive text" feature integrated into all the text messaging apps such as Messenger, WhatsApp or even Gmail. But where does autocomplete in text messaging come from? Maybe you remember: fifteen years ago, most mobile phones had 12 physical buttons with numbers, so text messaging came with a big problem: how do you map the 26 letters of the alphabet onto those 12 buttons? Systems like T9 were developed, and that was the first goal of autocomplete in text messaging: to help with typing. It completed our words based on the first letters we typed.
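The core idea behind T9 can be sketched in a few lines: each letter maps to a digit key, a dictionary is indexed by digit sequence, and one sequence of key presses can match several words. This is a minimal illustration, not the actual T9 algorithm, and the tiny word list is invented for the example.

```python
# Minimal sketch of T9-style predictive text: map each letter to its
# digit key, index a small dictionary by digit sequence, then look up
# candidate words for a sequence of key presses.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYPAD.items()
                   for letter in letters}

def to_digits(word):
    """Encode a word as the digit sequence typed on a 12-key keypad."""
    return "".join(LETTER_TO_DIGIT[c] for c in word.lower())

def build_index(words):
    """Group dictionary words by their digit encoding."""
    index = {}
    for w in words:
        index.setdefault(to_digits(w), []).append(w)
    return index

index = build_index(["home", "good", "gone", "hood", "hello"])
# "home", "good", "gone" and "hood" all share the keys 4-6-6-3,
# which is why T9 let the user cycle through candidate words.
print(index["4663"])  # ['home', 'good', 'gone', 'hood']
```

The ambiguity shown here (four words behind one key sequence) is exactly why early predictive text already had to guess what we meant.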
But what is autocomplete now? It's changing. Thanks to complex algorithms, it is no longer only a help for typing but a help for writing. The features now include suggesting whole words and sentences, and suggesting automatic answers based on an understanding of what we wrote; apps like Google Allo even integrate electronic assistants into our conversations.
So we can conclude that it is becoming closer to a kind of recommender system for writing: something between a writing advisor and a coach for social relationships.
So we can apply the issues of recommender systems to autocomplete. First, language regularization: autocomplete narrows the writing options by encouraging users to use the words that are statistically the most frequent. It's not a whole new way of talking, of course, but it is definitely a limited facet of language. And it lets companies establish a form of control over what we write. Second, the "filter bubble" effect: if you suggest writing possibilities according to someone's past writings, you put them in a kind of loop in which they will only see information that matches their profile and interests. This process confines users in their own cultural and ideological bubble. And finally, all this is enabled by the mass collection of personal data. These data are collected, stored, and analyzed. They represent us, but only from a certain angle. So there are strong privacy issues.
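The language-regularization problem can be made concrete with a toy example: if completions are ranked purely by corpus frequency, the statistically common word always wins and rarer synonyms are never surfaced. The corpus below is invented for the illustration; real systems are far more sophisticated, but the narrowing effect is the same in principle.

```python
from collections import Counter

# Toy illustration of language regularization: completions ranked
# purely by how often each word appears in the corpus.
corpus = ("good good good good great grand good great "
          "gorgeous good great grandiose").split()

freq = Counter(corpus)

def complete(prefix, k=3):
    """Return the k most frequent words starting with the prefix."""
    candidates = [w for w in freq if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: -freq[w])[:k]

print(complete("g"))  # ['good', 'great', 'grand'] -- 'gorgeous' and
                      # 'grandiose' never make it into the suggestions
```

A user who always accepts the top suggestion drifts toward the most statistically frequent vocabulary, which is exactly the narrowing described above.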
So what interests me as a designer is thinking about how to design autocomplete systems for text messaging while taking into account the issues I have just shown you. It raises many questions: How can we take advantage of the characteristics of algorithms in order to propose a different, more stimulating conversation? How should the user interface change in order to make users more aware of how the system works? Which design principles can take into account the ethical and social issues that are an integral part of autocomplete? The question can seem very big, but maybe that is because we should define more clearly what I mean when I say "design".
Let's first define a bit more precisely what a recommender system for writing is, exactly. It's a system that tries to predict the preference of a user for a word, a sentence, an emoji or an idea. Technically it's based on machine learning algorithms: algorithms that predict the future based on a statistical analysis of past writings. This creates a system that learns from user behavior, a system that evolves depending on user habits. So there is a much closer link between humans and machines. And because of that, designing a recommender system is not just about user interface or user experience. If we talk about human-computer interaction, we deal with designing user experience. But when we start thinking of it as a human-computer relationship, we deal with designing system behavior.
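The "learns from past writings" idea can be sketched with the simplest possible model: a bigram counter over a user's previous messages that suggests the word they most often typed next. Real systems use much richer models (neural language models, personalization signals); this only shows the core mechanism, and the sample messages are invented for the example.

```python
from collections import Counter, defaultdict

# Minimal sketch of a recommender system for writing: count which
# word the user typed after each word in their past messages, then
# suggest the most frequent follower.
past_messages = [
    "see you tonight",
    "see you tomorrow",
    "see you soon",
    "are you free tonight",
]

bigrams = defaultdict(Counter)
for message in past_messages:
    words = message.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1

def suggest(word, k=2):
    """Suggest the k words the user most often wrote after `word`."""
    return [w for w, _ in bigrams[word].most_common(k)]

print(suggest("see"))  # ['you'] -- the model mirrors the user's habit
```

Note how the model can only ever echo the statistics of what was already written: this is the filter-bubble and regularization mechanism in miniature.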
So my aim as a designer is to find principles for designing the system behavior of an autocomplete system.
To do so, I would like to run experiments on conversing through automated systems. I first wanted to interview people about the feelings and frustrations they have while using autocomplete. But there is a problem: it is difficult to talk directly about it with people, because it is almost invisible to many of them. Moreover, as I want to focus on the behavior of the system and on the experience of conversing, I thought: why not take a step back and look at how people really interact, in real life? That's why I finally want to answer my question by running experiments with low-tech prototypes: card games that decide the talking point for you, or that impose words on you, for example. If you want to see what it looks like, come to the workshop in Room 5.G02.
These experiments are meant to observe what happens when we bring some automation into our social interactions. What is frustrating? What is enjoyable? What expands the possibilities of communicating? Then I can apply those principles to the design of autocomplete systems. The underlying hypothesis is that unexpectedness is enjoyable, and that algorithms can bring us that random, unexpected view of the world. They can open up new perspectives and make us think outside the box. Algorithms carry a certain form of imagination that we can enjoy and take advantage of.
To finish, I would just like to open onto a notion I am interested in for my master's project: chili technology. Autocomplete is a powerful example of a technology that we use on a daily basis and that touches a very intimate aspect of our lives: writing and talking. At the same time, it is also a strong example of a technology that stays in the background. It's like a magic feature that we can see is sometimes right and sometimes wrong, but that we don't really understand. It's invisible. For me, chili design is about these questions: How can we give users a better comprehension of how these systems work? How can we think about forms of technology that are not just assistive, but also a bit challenging, a bit exciting? And, to relate more to my subject, how can we rethink autocomplete tools in order to spice up our social relationships?