Case study / SaaS / 2022 / Research / Prototyping

Improving the quality of chats at Helpdesk.media

Helpdesk Media Foundation is a European non-profit organisation that functions as an emergency hotline for victims of the war in Ukraine. The Foundation also runs a media outlet that tells the stories of people in the context of the Russo-Ukrainian war. The hotline has been operated by 3,000+ volunteers and has helped 35,000+ people over the course of 2022–23. To operate at this scale, Helpdesk Media Foundation developed its own small but mighty and secure support system: a hotline app for people and a conversational support system (similar to Zendesk or Intercom).
Foundation's website (RU+EN)
CEO’s article in the New York Times (EN)
About the Foundation’s helpline service in the Nieman Lab (EN)

P.S. All the designs shown here were translated from Russian to English for better comprehension




Introduction

Launched in early 2022, the conversational support system at Helpdesk Media Foundation lacked many features that similar tools offer. One of those features is tracking and improving the quality of chats. My role as the team’s new Product Designer was to develop a system that would help assess and improve chat quality, so the Foundation could offer better help to people affected by the war.

Users: 1–2 regular
Stakeholders: 1 (Head of Support)
Team: Designer (me), CTO

In the beginning, the quality of chats was assessed by a chat quality manager. They used several instruments that helped them do the job correctly, but those required a lot of manual work first. Bringing all those tasks into one place, paired with analytics, would significantly reduce the manual work, increase the number of chats the manager could assess, and reveal the pain points of people writing in chats, resulting in overall better chat quality.






Defining problems

To get a better understanding of the problem and the whole process of assessing chat quality, I conducted an interview with the chat quality manager and talked to our stakeholder. The results of the interview were transformed into a CJM, which helped me define pain points and think about solutions later.


The key aspects of the interview were defined:
  • Either a dedicated space for assessing chats is needed, or the conversational support chat space should be made friendlier for working with chat quality
  • Closed but available chats should be visible for assessment
  • More analytical data is needed to assess chat quality better
  • Individual messages need to be taggable to write more accurate reports


Design

I discussed potential features and improvements with the CTO, and we made a list of features ranked by importance and development difficulty. I drew quick wireframes to facilitate the conversation with the CTO, and once we had decided on particular solutions, he started working on the code while I finalised the designs.

The first version of the conversational support system was built with custom components that didn’t allow us to ship features quickly or scale the product in the long run, so we decided to adopt an existing component library and gradually move to the new components as we developed new features.

Another interesting finding from the talk with the stakeholder was that she and the chat quality manager needed a dashboard with data, so they could observe it in real time and filter corner cases in customer behaviour to improve chat quality in the long run. Based on that, we decided to build a dedicated space for them where they could filter closed chats by different criteria and, at the same time, see the chats available for quality assessment.


We also built a new filtering system based on one I had designed at a previous job. We would later reuse it across different directories of the conversational support system. The new filtering system allowed our stakeholder to see the average chat quality and CSAT in real time and dive deep into corner cases when needed.
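To make the idea concrete, here is a minimal sketch of how such filtering and real-time aggregation could work. It is an illustration under assumptions: the `ClosedChat` and `ChatFilter` types, the field names, and the helpers are hypothetical, not the Foundation’s actual schema.

```typescript
// Hypothetical data model for a closed chat, for illustration only.
interface ClosedChat {
  id: string;
  closedAt: Date;
  agentId: string;
  qualityScore?: number; // 1–5, set by the chat quality manager
  csatRating?: "positive" | "negative"; // set by the person in the chat
}

interface ChatFilter {
  agentId?: string;
  closedAfter?: Date;
  onlyUnassessed?: boolean;
}

// Keep only the chats matching every criterion that is set.
function filterChats(chats: ClosedChat[], f: ChatFilter): ClosedChat[] {
  return chats.filter(
    (c) =>
      (f.agentId === undefined || c.agentId === f.agentId) &&
      (f.closedAfter === undefined ||
        c.closedAt.getTime() >= f.closedAfter.getTime()) &&
      (!f.onlyUnassessed || c.qualityScore === undefined)
  );
}

// CSAT as the share of positive ratings among all rated chats, in percent.
function csat(chats: ClosedChat[]): number {
  const rated = chats.filter((c) => c.csatRating !== undefined);
  if (rated.length === 0) return 0;
  const positive = rated.filter((c) => c.csatRating === "positive").length;
  return (positive / rated.length) * 100;
}
```

Recomputing the aggregate over the currently filtered set is what lets the same control show both the overall picture and a specific corner case.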


The experience of a person chatting with our agents was also improved to help us with chat assessment. First, I designed message reactions (like the reactions in popular messaging apps that appear on a long press); ours had to be clearly visible and easy to use. Second, when a conversation was finished, we would ask the person to rate their experience.

Through quick hallway testing, the first option was chosen
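For context, a minimal sketch of how such a reaction could be stored and toggled; the `Reaction` type and `toggleReaction` helper are hypothetical illustrations, not the system’s actual code.

```typescript
// Hypothetical model of a reaction left on a chat message.
interface Reaction {
  messageId: string;
  userId: string;
  emoji: string; // e.g. "👍"
}

// Matches a reaction by message, user, and emoji.
const sameReaction = (a: Reaction, b: Reaction): boolean =>
  a.messageId === b.messageId && a.userId === b.userId && a.emoji === b.emoji;

// Long-pressing the same emoji again removes it (toggle behaviour).
function toggleReaction(reactions: Reaction[], r: Reaction): Reaction[] {
  return reactions.some((x) => sameReaction(x, r))
    ? reactions.filter((x) => !sameReaction(x, r))
    : [...reactions, r];
}
```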


Technical scheme of how the end of a conversation was transformed with rating actions
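A rough sketch of that flow as a small state machine follows; the state names and the `RatingRequest` shape are assumptions made for illustration, not the actual implementation.

```typescript
// Illustrative states a chat moves through when it is being closed.
type ChatState = "open" | "awaiting_rating" | "closed";

interface RatingRequest {
  chatId: string;
  rating: "positive" | "negative";
  comment?: string; // optional free-text feedback
}

// Closing a chat first asks the person to rate their experience.
function closeChat(state: ChatState): ChatState {
  return state === "open" ? "awaiting_rating" : state;
}

// A submitted rating (or a skip/timeout) finally closes the chat.
function submitRating(state: ChatState, _req: RatingRequest): ChatState {
  return state === "awaiting_rating" ? "closed" : state;
}
```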


To help the chat quality manager connect a person’s rating of their chat with their comments, we also added message tagging.
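A minimal sketch of what a message tag could look like, assuming a simple flat structure (the names and tag labels here are illustrative):

```typescript
// Hypothetical shape of a tag attached to a single message.
interface MessageTag {
  messageId: string;
  tag: string; // e.g. "wrong_info", "rude_tone", "great_empathy"
  taggedBy: string; // the quality manager who added it
  note?: string;
}

// Counting tags by label surfaces recurring issues for reports.
function countByTag(tags: MessageTag[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const t of tags) {
    counts.set(t.tag, (counts.get(t.tag) ?? 0) + 1);
  }
  return counts;
}
```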

Implementation

Once the user flows were decided for both the web chats and the chat quality assessment feature, I made prototypes and tested the latter with the user. Structured usability testing showed that the necessary demands were met and the pain points were addressed. With that in mind, the feature was set for launch and was warmly received by the chat quality manager and the stakeholder.

Results

The chat feature allowed us to start measuring the customer satisfaction rate and, thanks to detailed chat assessments, later helped us find pain points in the customer experience flow.
  • Agents’ responses improved
  • Chat quality assessment became faster
  • CSAT reached 95%+ after the 1st week of measuring
  • In-depth analytics of chat quality became possible