Dan Nessler
Digital Experience Designer

7. Case study: Building bots with Hinderling Volkart

Existing UX knowledge can be adapted and applied; new, bot-specific learnings need to be made.


In the scope of this industry research project, a collaboration with Hinderling Volkart – one of Switzerland’s leading digital agencies – was set up. The goal was to find out what it feels like to build a bot, to apply a human-centered design approach to a specific and real industry project, and to put the previous research into practice.

At the point I started the collaboration, Hinderling Volkart’s (2016) project – redesigning their agency website – was already in the works. According to their plans, chatbots were to become part of the new website. Each employee would get a unique profile page and an individualised bot that would initiate conversations with visitors. In the context of this idea, Hinderling Volkart (2016) had already designed new business cards. Unlike conventional business cards, their draft showed a short text covering the person’s hobbies or passions and a URL rather than an email address or phone number. The URL would lead users to an isolated page where they would be presented with the employee’s bot, which would pick up on the content of the business card and initiate a chat (Figure 23).

Figure 23: Visualisation of initial Hinderling Volkart bot idea

Designs and a first coded prototype were already at an advanced stage, and a rough timeline with a go-live deadline had been set. The idea to integrate bots on the company’s website had not been based on specific user research or user needs. This stood in contrast to the user-centred approach taught at Hyper Island, in which one starts with user needs first, followed by research, assumptions, prototyping, testing and corresponding iterations.

7.1. The design process and project management

To start off, I presented a sprint plan (Figure 24) to introduce a more user-centred approach based on Jongerius and Berghuis (2013) combined with the Revamped Double Diamond design process (Nessler, 2016). In scrum-like processes, sprint 0 serves as the point of departure. Research, setting up assumptions and hypotheses, and creating a corresponding plan and backlog are the core activities in this phase. Due to the advanced state of the project, sprint 0 activities were merged into sprint 1.

Figure 24: Sprint planning proposal for Hinderling Volkart

Part of sprint planning is to constantly measure progress and adjust the plan based on the current status and the achievements of a sprint. Therefore, the process and the corresponding goals and tasks of a sprint were re-evaluated and redefined when necessary.

7.2. Techniques, tools and artifacts

This chapter documents the various artifacts and deliverables produced during the process. The techniques, tools and artifacts applied and delivered were chosen based on various sources, such as my experience in digital projects, recommendations on the bot design process by BAM (2016), general UX design principles and Garrett’s (2010) UX guidelines.

7.2.1. Goals, requirements and needs – People, Business and Technology

Based on IDEO’s (2016) DFV model, which suggests evaluating desirability (people), feasibility (technology) and viability (business), I held a workshop (Figure 25) with Hinderling Volkart (2016) to define the overarching goals of the new website and the employees’ profile pages. The workshop resulted in a vision consisting of three core statements and specific goals the new website, the employee profile pages and the bot ought to serve (Figure 26):

Figure 25: Kick Off with Hinderling Volkart management (Photo: Michael Volkart)

Figure 26: Outtakes from Status Update presentation for Hinderling Volkart – Purpose & Goals

Furthermore, we defined a technical scope in which we could operate based on resources Hinderling Volkart had at their disposal. Target groups were mapped out, and priorities set (Figure 29). This served as the groundwork for the evaluation and recruitment of interviewees. Nine interviews with users were conducted (Figure 28) to gain insights regarding their needs. Google Analytics was analysed (Figure 27), and previous research and learnings were taken into consideration.

Figure 27: Outtake from hinderlingvolkart.ch Google Analytics & Analysis

Figure 28: Outtake from user interview and testing presentation

Figure 29: User group and user flow analysis

7.2.2. Use cases, scenarios and personas

Since business goals, a technological frame and user needs had been identified, general scenarios, specific use cases and related personas could be defined (Figure 30). These activities were carried out as team activities to elaborate the scope, which was then narrowed down in individual sessions to produce the relevant definitions. Use cases, scenarios and personas laid the foundation for understanding who would use our bots, what these people's needs and scenarios would be and what tasks they would perform.

Figure 30: Use cases & persona workshop outtake

7.2.3. Functionalities and conversational areas

With user and business goals and technical constraints set, specific functionalities and conversational areas could be defined. Both were brainstormed (Figure 31) in a collaborative team activity, mapped on matrices, voted upon and then discussed, prioritised and approved by the management (Figure 32). This activity helped to narrow down and specify the technical scope and the relevant conversation topics.

Figure 31: Conversational areas & functionalities workshop outtakes

Figure 32: Conversational areas & functionalities review & approval workshop outtakes

7.2.4. Bot persona

The information collected up to this point was made more tangible by setting up a bot persona (Figure 33). The bot persona encompasses traits of a regular user persona. In our case, we set up the bot persona by defining its overall purpose based on the company’s goals, the specific business goals it would have to achieve, the user needs it would have to fulfil, the way, style and tone of voice it would talk in, and potential topics it would talk about. To a large extent, the bot persona could be set up based on the findings from the research that had been done up to this point. To define the bot’s personality and tone of voice, a workshop with the management was held, and corresponding attributes were defined.

Figure 33: Bot persona workshop outtake

7.2.5. Employee interviews and survey

The bot persona served as a framework to make the bot, its purpose, goals and personality tangible. Nevertheless, we had to account for the fact that every single employee would get their own bot. Therefore, each bot would have to be unique and, at the same time, communicate and represent the company’s values. To learn more about the employees, qualitative interviews with eight employees were carried out. Based on the knowledge gained and the requirements agreed upon by the management, a survey was launched. This survey was designed to have employees respond to questions about their role in the company, their job, things they like and dislike, and to give insights into their personality, hobbies and passions. Before the survey was sent out to the entire team, it was tested and iterated upon to verify whether people would understand the questions and react to them as intended (Figure 34). The goal was not only to learn about the employees and get samples of the way they write, but also to involve them in the process so that they could relate to their individual bots and feel appropriately represented.

Figure 34: Employee survey screenshot & user testing

7.2.6. Conversation flows and database structure

With the definition of conversational areas, source content available and knowledge of the technical boundaries, conversational flows and a database structure could be set up. In collaborative workshops, the flows of conversations and the relations between single topics, and thus databases, were defined (Figure 35). The way they were elaborated and the amount of complexity that could be applied partially depended on the capabilities of the database and the functionalities that could be implemented within the technical boundaries. Provided with the conversational flow and database structure, content production could be initiated.

Figure 35: Source data: Conversational flow and data base structure workshop outtakes and conversation .js script snippet
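A conversational flow of this kind can be thought of as linked blocks: each block holds a bot message, possible user replies, and references to follow-up blocks. The sketch below is a simplified assumption of such a structure, not the actual script used in the project; the names and messages are invented.

```javascript
// Minimal conversation flow as linked blocks. Each node has a bot
// message and replies that point to the next node by id.
const flow = {
  start: {
    bot: "Hi! I'm a bot. Want to hear what my human is working on?",
    replies: [
      { user: "Sure!", next: "projects" },
      { user: "Who are you?", next: "about" },
    ],
  },
  projects: { bot: "Right now: redesigning our agency website.", replies: [] },
  about: { bot: "I represent one of the people at Hinderling Volkart.", replies: [] },
};

// Walk the flow for a given sequence of user choices and collect
// the bot messages that would be shown along the way.
function runConversation(flow, choices) {
  const messages = [];
  let node = flow.start;
  messages.push(node.bot);
  for (const choice of choices) {
    const reply = node.replies.find((r) => r.user === choice);
    if (!reply) break; // unknown input: stop the walk
    node = flow[reply.next];
    messages.push(node.bot);
  }
  return messages;
}
```

Representing the flow as data rather than code is what later allows the conversations to be authored in spreadsheets and fed into the interface.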

7.2.7. Information architecture, UI and interaction design

The general structure of the employee profile page and its design had been drafted before my collaboration with Hinderling Volkart started. An early user test therefore put emphasis on validating the decisions that had been made. Based on feedback from the testings, design decisions were reconsidered and learnings were implemented in later design iterations (Figure 36).

Figure 36: Source data: Visual of rough IA/wireframe after first user testing and resulting 3rd prototype layout

7.2.8. Writing conversations

Early conversations were written in Google Docs, consisting of tables with different columns for the bot messages and the user messages, with corresponding references added to link the blocks together. In another approach, such conversations were visualised using a flow-chart tool. This helped to visualise the interconnections between blocks and the flow of the conversation. However, processing the data and integrating such tools into a more structured data format proved difficult. In later versions, dialogues were written in spreadsheets, as they allowed easier processing into the structured code and JSON objects. In the final MVP version, multiple Google Sheets are used to write and edit conversations (Figure 37). They then get fed into the code structure and visualised in the final product.

Figure 37: Source data: Conversations visualised as flow chart, in final form as spreadsheet database and in simplified .yml
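The spreadsheet-to-JSON step described above can be sketched in a few lines. The row layout below ([blockId, botMessage, userReply, nextBlockId]) is an assumption for illustration only; the actual sheet format used in the project is not reproduced here.

```javascript
// Illustrative spreadsheet rows: one row per bot-message/user-reply
// pair, with an id linking rows of the same block and a pointer to
// the follow-up block. Content is invented sample data.
const rows = [
  ["start", "Hi, I'm a bot. Curious about my human?", "Yes", "hobbies"],
  ["start", "Hi, I'm a bot. Curious about my human?", "Not really", "bye"],
  ["hobbies", "He spends his weekends climbing.", "", ""],
  ["bye", "No worries, have a great day!", "", ""],
];

// Group rows by block id so that multiple user replies per block
// collapse into a single conversation node, ready to serialise as JSON.
function rowsToFlow(rows) {
  const flow = {};
  for (const [id, bot, user, next] of rows) {
    if (!flow[id]) flow[id] = { bot, replies: [] };
    if (user) flow[id].replies.push({ user, next });
  }
  return flow;
}
```

A transformation like this is what makes spreadsheets viable as an authoring tool despite being uncomfortable to write in: editors work in rows, while the prototype consumes the resulting structured objects.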

7.2.9. User testings

Involving the user in every stage of the process is a core principle of a user-centred design process. The original sprint plan accounted for a user test in every sprint. Due to limited resources, this could not be done, as the required prototype iterations could not be finished in time. Nevertheless, at the point of this writing, two user testing sessions (Figure 38) with a total of 13 users had been conducted. Personas, scenarios and use cases served as the foundation for these testings, each of which had different goals. Testers were selected based on the defined target groups and personas. Testings were conducted in person or via video chat. They generated feedback that was applied to the process and product along the way.

Figure 38: Source data: 2nd user testing outtakes

7.3. Project related learnings

First of all, it is crucial to get all internal stakeholders on board and involve them throughout the process. To create a product that represents them, their opinions, goals, requirements and expectations are essential to success. Elaborating these aspects early helped to create a shared understanding. Although the project was already in progress at the time I joined, going back one step and interviewing users contributed to generating important insights.

Creating use cases, scenarios and personas was beneficial for understanding what we would build, who we would build it for and what the expected outcome would be from a user perspective. These artifacts did not get a polished execution and maintained a low-fi character, be it post-its on walls or hand-written and drawn visualisations. In the context of this project setup, there was no need for these process artifacts to be high-fi and polished. If a client had contracted the project, these artifacts would potentially have been refined to a level at which they could have been used as deliverables.

Collaboratively brainstorming potential functionalities and content areas helped to gauge the scope of the project and the workflow, and to plan resources, sprint scopes and feasibility.

Creating a bot persona was useful to define the purpose of the bot, its goals and its capabilities. At the same time, defining its personality was challenging. Although we established a tone of voice, adapting it when producing content remained tricky. A more precise writing style guide providing specific examples of how the bot would talk could have been beneficial. At the same time, setting one up would have been difficult, as our bot persona represented Hinderling Volkart as a whole. We had to account for the fact that every employee would get a personal bot with an individual “sub-personality” embedded in the “corporate” personality. A way to address this could have been to develop individual bot personas for each person, which would have come at the cost of a lot of labour without a clear benefit.

Prototypes with sample conversations were built and tested early in the process. Sample conversations helped to gain and provide an idea of how a bot would communicate and sound. Testers liked the style of the test conversations and the tone of voice. Nevertheless, some testers felt the discussions went too deep. They also often mentioned that they appreciated it when the bot would remember and refer to their name. As the prototypes were limited in functionality, more complex conversations and transitions between different topics could only be partially tested. These learnings were incorporated during the iterations.

One primary challenge of this project was to write the actual conversations and make them tangible for collaboration, review and testing. A thoroughly satisfying solution could not be found. From an editor's perspective, writing in Google Docs was the most comfortable option. Referencing and connecting different parts of dialogues was difficult, though, as conversations with various alternate directions are hard to set up and comprehend. Visualising such conversations in a flow chart or mind-map-like diagram helped. At the same time, writing in that tool and processing the data was inefficient. The final solution of using spreadsheets works well regarding data processing and the structure necessary to implement the content into code. From an editor’s view, however, writing in a spreadsheet is not the most comfortable thing to do. Reviewing these conversations and later testing them relied on having a working chat interface where conversations could be visualised. Continuously validating written conversations in prototypes and refining them in the spreadsheets was the only useful way of working. Nevertheless, this process was not ideal, as there were multiple dependencies on having working software, and editing the content was not editor-friendly and required a lot of time.

Testings helped to generate valuable feedback. We determined early on that the bot could not be a standalone feature separated from the employee’s main profile page, which provides hard facts such as the name, role and contact details of an employee. Various testers stated that they had a clear need for these pieces of information and would want to retrieve them at first glance. This led to design changes and later iterations. At the same time, testers argued they would be willing to engage in conversations with a bot to obtain soft facts and learn about the person behind the bot. The playful nature of the interaction was appreciated. Another issue that popped up was trust. Although most testers anticipated they were dealing with a bot, some seemed slightly confused and expressed this confusion verbally. To build trust, they would need the bot to be honest early in the conversation. The testings also resulted in various findings regarding the page architecture, the UI and the interaction design. Various pieces of feedback were implemented during the iterations. Ideally, there would have been more testings to further validate the findings and properly evaluate the quality of the conversations. Due to a lack of time and resources, more testings could not be carried out.

Finally, and perhaps most importantly, interdisciplinary collaboration is an essential aspect of working on such projects.


Video 3: Screen Recording of Prototype bot interface and sample conversation

Figure 39: 8 HV employee screens with chat initiations (prototype screenshots – state November 2016)


Disclaimer:

The images shown above (Figure 39) are screenshots taken from a prototype (designs and execution are not final). The final version will be published (presumably in early 2017) at:

http://www.hinderlingvolkart.com

