Improving quality of chats at Helpdesk.media
Case study / SaaS / 2022 / Research / Prototyping
Helpdesk Media Foundation is a European non-profit organisation that runs an emergency hotline for victims of the war in Ukraine. The Foundation also runs a media outlet that tells the stories of people affected by the Russo-Ukrainian war. The hotline has been operated by 3k+ volunteers and has helped 35k+ people over 2022–23. To operate at this scale, Helpdesk Media Foundation built its own small but mighty and secure support system: a hotline app for the people it helps and a conversational support system (comparable to Zendesk or Intercom).
Foundation's website (RU+EN)
CEO’s article in the New York Times (EN)
About the Foundation’s helpline service in the Nieman Lab (EN)
P.S. All the designs here were translated from Russian to English for better comprehension
Introduction
Defining problems
To get a better understanding of the problem and of the whole process of assessing chat quality, I interviewed the chat quality manager and talked to our stakeholder. The results of the interview were turned into a CJM, which later helped me define pain points and think about solutions. The key aspects of the interview were:
- Either a dedicated space for assessing chats is needed, or the conversational support chat space should be friendlier for working with chat quality
- Closed but available chats should be visible for assessment
- Need more analytical data to be able to assess chat quality better
- Need to tag individual messages to be able to write more accurate reports
Design
I discussed potential features and improvements with the CTO, and we made a list of features ranked by importance and development difficulty.
I drew quick wireframes to facilitate the conversation with the CTO, and once we had decided on particular solutions, he started working on the code while I finalised the designs.
The first version of the conversational support system was built with custom components that didn't allow us to ship features quickly or scale the product in the long run, so we decided to adopt an existing component library and gradually migrate to its components as we developed new features.
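To give a sense of how such a gradual migration can work, here is a minimal sketch assuming a React + TypeScript stack; neither the actual stack nor the library we adopted is named in this case study, and 'ui-kit' below is a placeholder. New features import a thin in-house wrapper, so legacy screens can keep their old custom components until they are rebuilt.

```tsx
// Gradual migration sketch: new features use a thin wrapper around the
// third-party library component; old screens keep the legacy custom
// components until they are rebuilt.
// 'ui-kit' and its Button props are placeholders, not the library actually used.
import { Button as LibraryButton } from 'ui-kit';

interface AppButtonProps {
  label: string;
  onClick: () => void;
  variant?: 'primary' | 'secondary';
}

// All new code imports AppButton; swapping the underlying library later
// only means changing this one wrapper.
export function AppButton({ label, onClick, variant = 'primary' }: AppButtonProps) {
  return (
    <LibraryButton kind={variant} onClick={onClick}>
      {label}
    </LibraryButton>
  );
}
```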
Another interesting finding from the conversation with the stakeholder was that she and the chat quality manager needed a data dashboard to observe metrics in real time and filter corner cases in customer behaviour, so they could improve chat quality in the long run. Based on that, we decided to build a dedicated space where they could filter closed chats by different criteria and, at the same time, see the chats available for quality assessment.
We also built a new filtering system based on one I had designed at a previous job, planning to reuse it later in different directories of the conversational support system. The new filtering system allowed our stakeholder to see average chat quality and CSAT in real time and to dive deep into corner cases when needed.
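As an illustration of what these dashboard metrics boil down to, here is a rough TypeScript sketch of filtering closed chats and computing average chat quality and CSAT. The field names, the rating scale, and the CSAT threshold (counting 4–5 ratings as satisfied) are assumptions for the example, not the Foundation's actual data model.

```typescript
// Hypothetical shapes — the real filter fields and metric definitions are
// not documented in this case study.
interface ClosedChatFilter {
  dateFrom?: Date;
  dateTo?: Date;
  agentId?: string;
  tags?: string[];
  maxRating?: number; // e.g. surface only low-rated chats as corner cases
}

interface ClosedChat {
  id: string;
  agentId: string;
  closedAt: Date;
  rating?: number;       // 1–5 rating left by the person after the chat
  qualityScore?: number; // score given by the chat quality manager
  tags: string[];
}

// Average chat quality over the chats that have already been assessed.
function averageQuality(chats: ClosedChat[]): number {
  const assessed = chats.filter((c) => c.qualityScore !== undefined);
  if (assessed.length === 0) return 0;
  return assessed.reduce((sum, c) => sum + (c.qualityScore ?? 0), 0) / assessed.length;
}

// CSAT as the share of satisfied responses (rating 4 or 5) among all rated chats.
function csat(chats: ClosedChat[]): number {
  const rated = chats.filter((c) => c.rating !== undefined);
  if (rated.length === 0) return 0;
  const satisfied = rated.filter((c) => (c.rating ?? 0) >= 4).length;
  return (satisfied / rated.length) * 100;
}
```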
The experience of a person chatting with our agents was also improved to help us with chat assessment. Firstly, I designed reactions to a message (similar to the long-press reactions in popular messaging apps); ours had to be clearly visible and easy to use. Secondly, when a conversation ended, we would ask the person to rate their experience.
To help the chat quality manager connect a person's rating of a chat with specific comments, we also added message tagging.
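The tag taxonomy and the underlying data model are not described here, so the sketch below only illustrates the idea: per-message tags sit next to the end-of-chat rating, letting the chat quality manager cross-reference a low rating with the specific messages that may explain it. All names and tag values are hypothetical.

```typescript
// Hypothetical data model — actual field names and tag values are not
// documented in this case study.
interface MessageTag {
  messageId: string;
  tag: 'tone' | 'accuracy' | 'missing-info' | 'policy';
  comment?: string; // note the chat quality manager attaches to the message
}

interface ChatAssessment {
  chatId: string;
  customerRating?: number; // 1–5 rating the person leaves when the chat ends
  messageTags: MessageTag[];
}

// Collect chats where a low rating coincides with tagged messages, so the
// quality manager can explain the rating in a report.
function lowRatedWithTags(assessments: ChatAssessment[]): ChatAssessment[] {
  return assessments.filter(
    (a) => (a.customerRating ?? 5) <= 2 && a.messageTags.length > 0
  );
}
```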
Implementation
Once the user flows were decided for both the web chats and the chat quality assessment feature, I built prototypes and tested the latter with its intended user. Structured usability testing showed that the necessary demands were met and the pain points were closed. With that in mind, the feature was set for launch and was warmly received by the chat quality manager and the stakeholder.
Results
The chat feature allowed us to start measuring the customer satisfaction rate, and the detailed chat assessments later helped us find pain points in the customer experience flow. Systematic assessment of chat quality became possible.