dan nessler
digital experience designer

6. Designing with the human in mind

Three principles for designing conversations that put the human first.


The previous chapter deals with the current state of the industry, where it is heading, related technologies and terminologies. Although this knowledge is essential to understanding the topic, it leaves the user and the question of how to design for the user out of the equation.

To do so, this chapter explores various design principles and guidelines, specific subcategories related to the initial research question, and the current state of technology covered in the previous chapter. The overall goal is to establish a set of guidelines for designing conversational UIs. The following process is applied to achieve this aim: First, general design guidelines, UX design guidelines and bot design guidelines are examined (Figure 12). Secondly, aspects of communication theory are taken into consideration. Thirdly, insights from expert interviews and two industry collaborations are added. Finally, in a qualitative approach, the findings are analysed, structured, consolidated and summarised to turn them into a set of guidelines.

Figure 12: Outtakes from process / clustering of design guidelines

With every new trend, we tend to focus on technology rather than the user and the problem we want to solve. As Beer (2016) puts it: “It’s about being problem oriented rather than being technology oriented”. This is a core aspiration of Norman’s (2013) HCD philosophy that starts with “a good understanding of people and the needs that the design is intended to meet”.

Accenture (2016) warns that the winners in the digital age do more than complete technology checklists. They understand that success relies on people and on understanding changing customer needs and behaviours. “You can’t solve this challenge just by consuming more and more technology… enterprises must focus on enabling people – consumers, employees, and ecosystem partners – to do more with technology” (Accenture, 2016).

There are a vast number of guidelines, principles and patterns when it comes to designing interfaces. Although professionals and non-professionals are familiar with using conversational UIs in the shape of messaging services, building them is another story. Thus, this chapter looks at various aspects of general design principles, usability and UX standards and specific guidelines for conversational UIs. The collection of principles and guidelines presented in this essay is by no means comprehensive.

This essay and the resulting findings ought to provide value beyond the current hype around chatbots. Therefore, this chapter takes approaches such as Dieter Rams’ “10 principles for good design” into consideration. Although his principles are general and date back to the 1970s, they have proven long-lasting and are still referenced today.

User-centred design guidelines are covered for the following reasons: First of all, this essay is written within the scope of a Digital Experience Master’s programme that is built around this topic. Secondly, these guidelines and principles are of core interest in a world where we are constantly presented with new technologies and interfaces. Huge Inc. and van Hoof (2016) state: “The best AI applications will push the principles of user-centric design much further, using intuitiveness and usability not just to serve a user’s functional needs, but also their emotional needs”. According to them, conversational UIs require more than design and creative skills. Especially when AI comes into play, emotional awareness, statistics, psychology, linguistics and other fields of expertise seem relevant. Although these fields are not directly related to visual design, they are related to UX, which looks at a broader picture than the mere visual appearance of things. As Huge Inc. and van Hoof (2016) state, “The principles of user-centric design lay the groundwork for building a great AI system”.

6.1. General Design

Messaging as a form of conversation has been around for years. It therefore makes sense to look back into modern design history when examining principles and guidelines. In the late 1970s, Dieter Rams challenged himself by questioning his design: “Is my design good design? (Vitsœ, no date)”. This question seems more relevant than ever, as we are confronted with an overload of “designed” products, goods and services today. Rams’ “10 principles for good design” are referenced and taught to this day. Vitsœ (no date) states that good design cannot be measured in a finite way, but Rams set out to define the ten most important principles describing what he considers good design (Figure 13).

Figure 13: Rams 10 principles for good design, (Vitsœ, no date)

6.2. UX and Usability principles and guidelines

Over the last decades, UCD and UX design have matured into distinct fields in the design landscape. Whereas design is still often considered purely visual, UCD and UX put the focus on the user and the utility a product or service generates. Relating to such guidelines (Figure 14) seems sensible based on the references made earlier, and it is also backed up by Mauray (2016), who suggests that good bot design is achieved through fundamental principles of good UX design. A set of guidelines often referred to in UX design is the “10 Usability Heuristics for User Interface Design” by the Nielsen Norman Group and Nielsen (1998). References are also often made to Norman’s (2013) “seven fundamental principles for designing good user experience”. Johnson (2014) refers to the general guidelines by Shneiderman (1987) and Shneiderman and Plaisant (2009) as some of the most important.

Figure 14: Overview of UX guidelines

UX advocates often highlight usability as a core element of good design and a good UX. The issue with these approaches is that they mainly focus on usability. The problem, however, lies not necessarily in the guidelines themselves but in their rigid or isolated application, which ignores other aspects. Good usability may serve users with a rational, specific and task-oriented goal, yet totally neglect the overall experience of a product. As stated earlier, some products might exclusively aim at filling our lives with fun, enjoyment, entertainment or purely aesthetic pleasure rather than a functional benefit.

Rams (no date) takes a more holistic approach, making usability only one aspect of his ten principles for good design. Colborne (2010) aims for simplicity: “To be simple, you have to aim for something tougher than the regular goals for usability”. Norman (2005) challenges this concept, advocating a more emotional approach to design: “Everything we do, everything we think is tinged with emotion, much of it subconscious”. He proposes three levels of design that cater for different aspects and serve an overall experience that goes beyond usability. In this approach he brings the visual appeal of design – often neglected in usability guidelines – back into play, and he adds a “reflective” layer (Figure 15):

Figure 15: Norman’s (2005) layers of design visualised

Guidelines seem to depend on the perspective of their creator, their particular purpose, and the context in which they have been set up. They appear to question one another regarding focus and relevance based on a particular era of time and the state of technology and products. Nevertheless, there are also similarities, and there are guidelines that have withstood the test of time.

How might we relate general principles to designing for new and no interfaces, especially conversational ones? In traditional interaction design, Kuang (2016) states as an example, systems should present all their functions clearly to the user, in a way that is always the same. This is a challenge in a conversational interface. First of all, the interface may be non-visual. Secondly, even if the interface is visual, how does it communicate its “knowledge”, “capabilities” or functionalities? Users are likely to understand the general interaction of messaging or giving voice commands. Nonetheless, it remains unclear to a user what its counterpart – a bot or smart assistant – is capable of.

Thus, the following subchapter looks at certain guidelines, principles and recommendations for such interfaces.

6.3. Design Principles for Bots and Conversational UIs

There are numerous documentations, recommendations and guidelines on how to build bots. This subchapter covers some of these approaches. To build conversational interfaces and bots, designers and developers ought to leave current practices behind and adopt a new mindset, as Mielke and Smashing Magazine (2016) state, because familiar design patterns used in GUIs do not work in conversation-driven interfaces. Before getting deeper into the conversational side, one of the core questions to be asked when evaluating the use of bots is their purpose and their benefit for the user (Connolly and Intercom, 2016).

Jellyvision Inc.’s (2002) guidelines for creating interactive programs, the Jack Principles, were named after the interactive game “You Don’t Know Jack” (Figure 16). These principles serve to develop and design interactive conversation interfaces (ICIs), as they were called in that context. Similar to today’s conversational UIs, ICIs imitate a human-human conversation between a machine and a human under one constraint: “...the conversation cannot be about anything the way a real human-human conversation can. The topic is constrained by the goals and design of the program’s creators (Jellyvision Inc., 2002)”. This constraint is much in line with sources stating that a bot ought to aim for one specific goal and serve one specific purpose.

Figure 16: Jellyvision Inc.’s (2002) Jack-Principles visualised

Although these principles date back to 2002, there are similarities when looking at the state of technology today and the challenges that are present. "Currently, there is no reliable way to do this with what has been called ‘artificial intelligence’. So, for the time being, to create the illusion of human awareness, we must use actual human beings to do it (Jellyvision Inc., 2002)".

Today, AI is evolving and making its way into the consumer market. Nevertheless, recommendations provided in “The Jack Principles” may still be found in more recent guidelines (Figure 17). Humphry-Baker and bemo’s (2015) suggestion to be “magical” can be related to the “illusion” aspect by Jellyvision Inc. (2002). When designing a bot-to-human conversation, Toscano (2016) stresses that every interaction should have meaning and provide value, and offers his best-case recommendations as a set of guidelines. Similar approaches are brought forward by Connolly and Intercom (2016).

Figure 17: Overview of bot guidelines

6.4. Excursion – Case study: uxchat.me

To gain experience in designing and running a chatbot, I collaborated with and contributed to uxchat.me – a cooperation between Adrian Zumbrunnen – designer at Google, responsible for Google’s chat application Allo – and uxdesign.cc – a popular UX publication. Uxchat.me is a standalone chat website on which users chat with a bot – the UX bear (Figure 18). The site’s purpose is to offer a curated stream of articles and provide an alternative, more engaging way for people to learn about the field of UX and share their thoughts.

Figure 18: Screenshot of uxchat.me

As part of the collaboration, I have been a content contributor and editor, researching, creating and editing the conversations presented on the site. Conversations are scripted; there is no AI involved, and the flow of a conversation is handwritten. Conversations are set up around individual topics. When visiting the site, the “UX bear” initiates a conversation. Users may steer conversations with predefined, structured answers. Depending on a user’s choice, the conversation takes a certain direction. A free-text functionality allows users to respond to questions or leave feedback. Although the bot is not context-sensitive or fuelled with the intelligence necessary to respond, Zumbrunnen (2016) points out the importance of feedback in a conversation: “Every open form text input by the user needs to be answered properly to create a positive reinforcement (Zumbrunnen, 2016)”. The bot possesses a memory that allows it to remember names and topics already covered in conversations to avoid redundancies.
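A fully scripted, branching flow of the kind described above can be sketched as a simple lookup structure. This is a minimal illustration, not the actual implementation of uxchat.me; all node names and texts are invented:

```python
# Minimal sketch of a scripted, branching conversation flow.
# Each node holds the bot's messages and the predefined user choices
# that lead to the next node; no AI or NLP is involved.
FLOW = {
    "start": {
        "messages": ["Hi, I'm the UX bear!", "Want to talk about usability?"],
        "choices": {"Sure!": "usability", "Maybe later": "goodbye"},
    },
    "usability": {
        "messages": ["Great, here is a curated article on usability."],
        "choices": {"Thanks, bye": "goodbye"},
    },
    "goodbye": {
        "messages": ["See you next time!"],
        "choices": {},
    },
}

def step(node_id):
    """Return the bot messages for a node and the choices offered next."""
    node = FLOW[node_id]
    return node["messages"], list(node["choices"])

def next_node(node_id, choice):
    """Follow a predefined user choice to the next node in the script."""
    return FLOW[node_id]["choices"][choice]
```

Because every path is hand-authored, adding a topic means writing a new subtree by hand, which is exactly the maintenance cost discussed later in this case study.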

Working on this project has provided valuable insights. As Zumbrunnen (2016) states, good interaction design is about writing. He considers interaction design as partially a writing discipline. He also stresses the importance of visual components and animations. Unlike other bots that live in a messaging ecosystem, Zumbrunnen’s bot is a standalone application. Thus, it incorporates a unique visual UI.

Recommendations based on Zumbrunnen (2016) and our collaboration:

  • A user’s reading speed is lower than a bot’s potential writing speed, which the bot should account for by adjusting its pace.

    • Long messages get a longer delay before the next text bubble so that the reader can follow.

    • Text bubble messages should not be too long (max. 120 characters), and the number of bubbles should remain between two and four before the next user interaction.

  • Button labels should be kept short. The action a user may take can be implied in the bot’s messages rather than in the button label.

  • Button labels ought to be positive.

  • The bot should add value to a conversation from the beginning and inform the user about the topic as soon as possible.

  • The user should be able to leave the conversation at any point.
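The pacing rules above could be approximated in code as follows. This is a sketch under stated assumptions: the reading-speed constant is an invented illustrative value, not a figure from the project.

```python
MAX_BUBBLE_CHARS = 120   # recommended upper bound per text bubble
MAX_BUBBLES = 4          # two to four bubbles before the next user input
CHARS_PER_SECOND = 15    # assumed reading speed (illustrative constant)

def typing_delay(message):
    """Delay in seconds before the next bubble, proportional to length,
    so longer messages give the reader more time to follow."""
    return len(message) / CHARS_PER_SECOND

def validate_turn(bubbles):
    """Check one bot turn against the length and count recommendations."""
    if not 1 <= len(bubbles) <= MAX_BUBBLES:
        return False
    return all(len(b) <= MAX_BUBBLE_CHARS for b in bubbles)
```

For example, a 30-character bubble would be held for two seconds before the next one appears, while a 200-character bubble would fail validation and should be split.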

These rules may be broken to maintain a certain element of unpredictability, as Zumbrunnen (2016) points out. Additionally, we ran a user survey that generated 64 direct responses. Users also responded with feedback through social channels and email. Although responses were positive, especially highlighting the visual way conversations are presented and the quality of the content, some similarities in feedback regarding potential improvements could be deduced.

  • Users expect the bot to remember information such as their name or topics already covered in conversations.

  • Users expect more flexibility in terms of interaction – multiple feedback options and more free speech.

  • Users expect the bot to be more intelligent in general.

The issues mentioned mainly relate to the nature of the bot and the fact that it is scripted rather than fuelled by some form of intelligence. Reappearing topics and the bot’s forgetting of names stem from the login-free nature of the website. Users do not need to create an account or log in, which lowers the barrier to using the bot. At the same time, the bot relies on cookies to store the history of a conversation and user data. Once a user changes devices or browsers, or deletes the cookies, the information is lost.

From a content and maintenance perspective, the main challenge with this bot lies in its static nature. Each conversation needs to be manually written, edited and published. The process involves evaluating and curating relevant articles, summarising them, adding them to a collective spreadsheet, obtaining image material and turning all of this into conversations. Producing one story may easily take up to an hour of manual labour.

This project is available online at: http://www.uxchat.me

6.5. Designing conversations

Working on uxchat.me made it apparent that a core part of a conversational UI lies in the actual conversation and in the ability of a bot to communicate individually and contextually with a human being. "As the visual design is demoted for words, what you say and how you say it becomes more crucial than ever (Mielke and Smashing Magazine, 2016)". A conversational UI reproduces a one-to-one communication between two parties. It therefore seems relevant to look into the fields of communication theory and psychology, as they offer several models that map communication, conversations and the way they work.

One approach is Schulz von Thun’s (2000) four-layer model (Röhner and Schütz, 2012). Communication and conversations require a sender and a receiver who exchange messages. Each message contains elements on four layers (Figure 19):

Figure 19: Schulz von Thun’s (2000) four-layer model visualised

In human-computer interaction, either party can be sender and receiver and communicate on all four levels. Thus, it may be beneficial to take these layers into account when designing a conversational UI that aims to resemble human-human conversation. Röhner and Schütz (2012) refer to Grice (1975), who demands cooperation and the common interest and willingness of sender and receiver to establish a conversation. This is necessary for communication in which both parties understand one another. Four maxims need to be followed to achieve efficiency and avoid misunderstandings (Figure 20):

Figure 20: Grice’s (1975) maxims for good communication

To deduce how these general principles relate to specific bot guidelines, the following recommendations by Mielke and Smashing Magazine (2016) have been labelled with Grice’s (1975) maxims (Figure 21).

Figure 21: Comparison of bot guidelines and Grice’s (1975) maxims

Complying with Grice’s (1975) maxims may help to assess conversations and make them natural and human. This seems to be a major aspiration when designing conversational UIs. As an example, Zumbrunnen (2016) stresses the importance of avoiding repetition (maxim of quantity) and isolated, off-topic messages (maxim of relevance), as they do not feel human. "In a real conversation, you would move on to a new topic after exchanging some small talk about the weather. You would not return to the weather topic (Mielke and Smashing Magazine, 2016)".
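As a toy illustration, two of these maxims can even be checked mechanically in a scripted conversation: repetition for the maxim of quantity and a simple topic whitelist for the maxim of relevance. Both heuristics are invented here for the sake of the example and are far cruder than real conversation analysis:

```python
def maxim_violations(bot_messages, allowed_topics):
    """Flag naive violations of two Gricean maxims in one bot turn:
    repeated messages (quantity) and off-topic messages (relevance)."""
    violations = []
    seen = set()
    for msg in bot_messages:
        normalized = msg.lower().strip()
        if normalized in seen:
            violations.append(("quantity", msg))  # repetition
        seen.add(normalized)
        if not any(topic in normalized for topic in allowed_topics):
            violations.append(("relevance", msg))  # off topic
    return violations
```

A script that mentions the weather twice would be flagged for quantity, mirroring the small-talk example above; a message containing none of the conversation’s topics would be flagged for relevance.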

Another core quality and aspiration in communication is building trust. Beer (2015) calls it the main goal in a conversation between a chatbot and its users. Robots need emotions to be successful, and even though these emotions do not need to be human, robots need to be able to sense human emotions and respond accordingly, as Norman (2005) states. Trust has different implications depending on the goal and context of a solution. Especially when building bots for sensitive contexts such as healthcare, building trust is essential (Stan, 2016).

Levy (2016) refers to a statement by Apple’s Tom Gruber about the design decision for Siri’s voice: though it seems like a small detail, a more natural voice for Siri can actually trigger big differences – “People feel more trusting if the voice is a bit more high-quality”. Eyal (2015) goes one step further and describes that working with an assistant through a conversational interface should feel like interacting with a friend. One approach to building trust was used by the community-based traffic and navigation app Waze: to create more compelling and effective road safety messages, Waze used children’s voices (Trendwatching, 2015). Kaliouby (2015) sees an opportunity to reimagine how humans connect with machines and thus connect with each other.

6.6. Reflection and learnings turned into principles and guidelines

Analysing secondary research in the fields of design and communication and collaborating on real-world projects with Zumbrunnen (2016) and Hinderling Volkart (2016) helped to build a better understanding of design guidelines and principles in the context of conversational UIs. This chapter aims at making sense of these learnings and providing a set of guidelines and principles for designing such UIs.

Attempting to do so has revealed certain challenges. What are the differences between guidelines, principles, rules, recommendations, etc.? This question remains unanswered in this study, and the terms are used interchangeably. When taking apart and clustering different approaches from various sources, it has become apparent that there are similarities and redundancies.

While some writers propose specific actions to take, others describe a desired best case in their guidelines, leaving questions open when it comes to execution. So there is often a question of “what to achieve” and “why to achieve it” as the desired state, and a question of “how to achieve it” as a recommended way. In my opinion, all of these questions are relevant. Why we should do something in a certain way provides a reason, and the reason is needed to build understanding and awareness. The question of what to achieve describes the desired outcome of a design to best fulfil a certain need. Questioning the “how” relates to specific tools, techniques or patterns that may be applied to achieve this desired state.

Norman’s (2005) approach of clustering design on a visceral, behavioural and reflective level provides a good foundation for understanding the different aspects of design and their corresponding guidelines. Especially when looking at specific usability or UX guidelines, clusters and purposes emerge. Such guidelines usually, and almost exclusively, focus on the behavioural layer of design. Based on my industry experience, I have often noticed that usability companies and experts argue purely from a usability perspective, using quantitative data, e.g. from testing, to support specific solutions. It remains questionable to me whether this is the way to go and whether it matters if a user spends 4.23 or 4.25 seconds to achieve a user goal or conversion. Working in advertising and communication, I was exposed to the other extreme: decisions are often made exclusively on the visceral level. Visual appeal, creativity and the resulting industry awards are often the values measured, while usability and the creation of actual user value are often neglected.

When talking about the reflective level and the long-term emotional impact of design, both aspects mentioned before come into play, but they do not necessarily have to be weighted to the same extent. In the end, it comes down to the specific goal of a product or the need a human being has.

Interestingly, looking at guidelines in communication theory and linguistics reveals similarities with design principles. Grice’s (1975) maxims for achieving efficiency and avoiding misunderstandings are as much about usability and UX as the design guidelines labelled as such; these maxims could easily be translated and relabelled as maxims for usability in design. This is where a core aspect of UX design comes into play. It is not so much about one specific discipline but rather about awareness and comprehension of different disciplines. This is especially valuable in the context of conversational UIs, where the visual side is only one aspect, and communication and conversations are central.

The following guidelines (Figure 22) aim to cover all three levels of Norman’s (2005) definition, provide a reason why to apply something and a goal to achieve, and illustrate how to do so. They remain general yet applicable to designing conversational UIs for visual and non-visual interfaces.

The guidelines have been deduced and developed through the following process: First, ten different design guidelines and sources in communication theory were compared and clustered. Following this, learnings from two industry collaborations were reflected upon and blended into the clusters from secondary research. Once categorised, the findings were consolidated, rephrased and regrouped. This laid the foundation for the following guidelines, whose main categorisation relates to Norman’s (2005) three layers of design.

Figure 22: Guidelines: How to design for New and No User Interfaces in the form of Conversational UIs

