Quality AI uses an AI model to automatically analyze customer service conversations, or interactions between contact center agents and users. The AI model analyzes chat or voice transcripts.
Conversation details
Conversations contain the following details, which include the identifiers and metrics used for analysis.
- Agent ID: A unique number assigned to each agent that identifies the conversations they have handled.
- Agent total score: The average score of an agent's performance across that agent's conversations.
- AHT: Average Handling Time, the average duration of an agent's conversations in a specified timeframe.
- Average agent score: The average across all your agents' total scores. (See Agent total score.)
- Average agent quality score: Average of the quality scores produced by a single agent's conversations over a specified period of time. (See Quality score.)
- Average conversation score: Average score across all conversations.
- Average quality score: Average of the quality score over a specified period of time. (See Quality score.)
- Channel: The medium of conversation between a customer and an agent. Channel has one of two values: voice or chat.
- Conversation ID: A unique number assigned to identify each customer service conversation.
- Conversation total score: Sum of question scores in a single conversation.
- CSAT: Customer satisfaction rating, generally ranging from 1-5.
- Duration: Time the conversation spans, beginning to end.
- Primary topic: The concern discussed during a conversation, determined by topic modeling. Quality AI only displays a primary topic if you've used topic modeling on that conversation.
- Quality score: The overall score assigned for a scorecard.
- Question: Used to evaluate an agent's performance in a conversation. You enter your questions into Quality AI, and the agent is then rated on whether they satisfied the criteria for each question.
- Sentiment: The main emotional state conveyed by the conversation. Sentiment has one of three values: positive, neutral, or negative. Quality AI displays a sentiment only if you've used sentiment analysis on the conversation.
- Silence: Time during which neither the customer nor the agent spoke or typed.
- Start date: The date on which the conversation began.
- Start time: The time at which the conversation began.
- Total volume: The total number of conversations that a single agent handled in a specified timeframe.
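To show how these details fit together, the following is a minimal sketch of a single conversation record. The field names and values are hypothetical; they don't represent the exact data format Quality AI uses.

```python
# Hypothetical conversation record; field names and values are illustrative
# and don't represent the exact Quality AI data format.
conversation = {
    "conversation_id": "100045",
    "agent_id": "42",
    "channel": "chat",              # voice or chat
    "start_date": "2024-06-01",
    "start_time": "14:32:05",
    "duration_seconds": 420,        # time the conversation spans, beginning to end
    "silence_seconds": 35,          # time with no speech or typing from either party
    "csat": 4,                      # customer satisfaction rating, generally 1-5
    "sentiment": "positive",        # present only if sentiment analysis was used
    "primary_topic": "billing",     # present only if topic modeling was used
    "quality_score": 0.85,          # overall score assigned for a scorecard
}
```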
Scorecards
The scorecard is a structured framework used to assess conversation quality and the performance of contact center agents during conversations. Each contact center has its own scorecards. Within the scorecard console, navigate to the Scorecard page and add the following information:
- Question (Example: Did the agent provide an appropriate product compliment?).
- Optional: Tag to group the questions into categories.
- Instructions for interpreting the question and defining each answer choice.
- Answer type (can be text, numbers, or yes/no).
- Answer choices that define the possible answers based on answer type (For example, yes and no, a list of numbers, or some text responses).
- Score to set the points earned for each answer choice. The maximum score for a single question is determined by the highest score among all the answer choices.
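As an illustration, a single scorecard question could be represented along the following lines. The structure, field names, and instruction text are hypothetical; the sketch only shows how the question, tag, instructions, answer type, answer choices, and scores relate to one another.

```python
# Hypothetical representation of one scorecard question; the structure is
# illustrative, not the console's actual format.
question = {
    "question": "Did the agent provide an appropriate product compliment?",
    "tag": "Soft skills",                   # optional grouping category
    "instructions": "Answer Yes if the agent offered a relevant compliment; "
                    "otherwise answer No.",
    "answer_type": "yes/no",                # text, numbers, or yes/no
    "answer_choices": {"Yes": 1, "No": 0},  # answer choice -> points earned
}

# The maximum score for a question is the highest score among its answer choices.
max_question_score = max(question["answer_choices"].values())  # 1
```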
Conversation scores
Quality AI automatically evaluates conversations against the scorecards you supply. For each question, you do the following:
- Define the answer type
- List the possible answer choices
- Set the score for each answer choice
A conversation score consists of the total received score divided by the maximum possible score for that conversation. The total received score is the sum of all the points obtained from the assigned answer choices for each question. The maximum possible score is the sum of the maximum scores for each question. Any question assigned an N/A response is removed from this conversation score calculation. The conversation score is displayed as a percentage.
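The following is a minimal sketch of that arithmetic. It assumes each answered question is represented by the points it received, the maximum points it could have received, and the answer itself; the function name and data shape are illustrative, not part of Quality AI.

```python
# Sketch of the conversation score arithmetic described above. Questions
# answered "N/A" are excluded from both the received and maximum totals.
def conversation_score(question_results):
    scored = [q for q in question_results if q["answer"] != "N/A"]
    if not scored:
        return None  # nothing to score
    total_received = sum(q["score"] for q in scored)
    total_possible = sum(q["max_score"] for q in scored)
    return round(100 * total_received / total_possible)  # displayed as a percentage
```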
Manual updates
After analyzing a conversation, you can manually update the answer to any question. When you manually update an answer, Quality AI automatically adjusts the score for that question and the corresponding conversation score. In addition, the Quality AI console marks that question and conversation with a visual icon to indicate that the answer was manually updated. Lastly, Quality AI automatically adds any manually updated answer as an example conversation to improve the AI model.
Examples
The following examples illustrate how a conversation score is calculated.
Example 1
If the following is true:
- A scorecard has 10 questions
- Each question is a yes or no question
- Yes receives a score of 1 and No gets 0
- A conversation has received all "Yes" answers
Then the conversation score is 100%.
Example 2
If the following is true:
- A scorecard has 10 questions
- Each question is a yes or no question
- Yes receives a score of 1 and No gets 0
- A conversation receives 7 "Yes" responses, 2 "No", and 1 "N/A"
Then the "N/A" question is removed so there are 9 total possible points. The conversation received 7 out of 9 possible points. The conversation score is rounded and displayed as 78%.