Real-time topic inference

Creating a topic model is a long-running operation, so you can't create one during a conversation with an end user at runtime. With real-time topic inference, however, you can use a previously created topic model to infer topics in real time during a conversation.

Create a model

You need more than 1,000 conversations in a project to create a V2 topic model. After model training completes, training statistics (based on matching the generated topics to the training conversations) are also returned.

To create a new topic model, call the create method of the issueModel resource.

Model training is a long-running operation. You can poll the status of this operation to see whether it has completed. For more information, see the long-running operations documentation.
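
For example, a minimal sketch of this call with the Python client library for the Insights API (google-cloud-contact-center-insights) might look as follows. The project ID, location, and display name are placeholders, and the exact client surface can vary by library version.

    # Sketch: create a topic (issue) model. PROJECT_ID, the location, and the
    # display name are placeholders.
    from google.cloud import contact_center_insights_v1

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    parent = "projects/PROJECT_ID/locations/us-central1"
    issue_model = contact_center_insights_v1.IssueModel(display_name="my-topic-model")

    # create_issue_model starts a long-running operation.
    operation = client.create_issue_model(parent=parent, issue_model=issue_model)

    # Block until training finishes; alternatively, poll operation.done().
    created_model = operation.result(timeout=3600)
    print(created_model.name)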

Curate topics

The description of each topic is automatically inferred from the conversation transcripts.

You also have the following options for using the issue resource to improve topic assignments:

  • Manually update the description. To update the description of a topic, call the patch method, as shown in the sketch after this list.
  • Add a topic. To add a new topic, call the post method.
  • Delete a topic. To remove an existing topic, call the delete method.
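
For example, a minimal sketch of updating a topic with the Python client library (google-cloud-contact-center-insights) might look as follows. It patches the display name with a field mask; the same pattern applies to the description field. The resource name and field values are placeholders.

    # Sketch: rename an existing topic (issue). The resource name is a placeholder.
    from google.cloud import contact_center_insights_v1
    from google.protobuf import field_mask_pb2

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    issue = contact_center_insights_v1.Issue(
        name=(
            "projects/PROJECT_ID/locations/us-central1/"
            "issueModels/ISSUE_MODEL_ID/issues/ISSUE_ID"
        ),
        display_name="Billing questions",
    )

    # Patch only the fields listed in the mask.
    update_mask = field_mask_pb2.FieldMask(paths=["display_name"])
    updated_issue = client.update_issue(issue=issue, update_mask=update_mask)
    print(updated_issue.display_name)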

To apply a new change to an existing analysis, follow these steps to re-analyze the conversation:

  1. In the Insights console, choose your project.
  2. Click Conversation Hub.
  3. Choose one option:
    • To re-analyze a single conversation, select a conversation from the list and click Re-analyze.
    • For bulk analysis, navigate to Conversation History, set a conversation filter to Analysis Status = Has been analyzed, then click Analyze.

Deploy a model

You need to deploy a model before you can use it.

To deploy a previously created model, call the deploy method of the issueModel resource.

Deploying a model is a long-running operation. You can poll the status of this operation to see if it has completed. For more information, see the long-running operations documentation.
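
A minimal sketch of the deploy call with the Python client library (google-cloud-contact-center-insights), using a placeholder model resource name:

    # Sketch: deploy a previously created topic (issue) model.
    from google.cloud import contact_center_insights_v1

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    model_name = "projects/PROJECT_ID/locations/us-central1/issueModels/ISSUE_MODEL_ID"

    # deploy_issue_model starts a long-running operation; wait for it to finish.
    client.deploy_issue_model(name=model_name).result(timeout=600)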

Infer topics

Next, you can infer a topic for an end-user utterance at runtime. To infer a topic, call the create method of the analyses resource. If you call the create method without specifying an annotator selector, all annotators run. The topic inference result is available in the analysisResult resource.
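
A minimal sketch of runtime inference with the Python client library (google-cloud-contact-center-insights) might look as follows. The conversation resource name is a placeholder, and the field path to the inferred topics (issue_model_result) is an assumption; verify it against the analysisResult reference for your API version.

    # Sketch: analyze a conversation and read the inferred topics.
    from google.cloud import contact_center_insights_v1

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    conversation = (
        "projects/PROJECT_ID/locations/us-central1/conversations/CONVERSATION_ID"
    )

    # No annotator selector is set, so all annotators run.
    operation = client.create_analysis(
        parent=conversation, analysis=contact_center_insights_v1.Analysis()
    )
    analysis = operation.result(timeout=600)

    # Assumed field path; check the analysisResult reference for your version.
    issue_result = analysis.analysis_result.call_analysis_metadata.issue_model_result
    for assignment in issue_result.issues:
        print(assignment.display_name, assignment.score)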

Undeploy a model

To undeploy a model, call the undeploy method of the issueModel resource.

Undeploying a model is a long-running operation. You can poll the status of this operation to see if it has completed. For more information, see the long-running operations documentation.
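
The undeploy call follows the same pattern as deploy. A minimal sketch with the Python client library, using a placeholder model resource name:

    # Sketch: undeploy a topic (issue) model that is no longer needed at runtime.
    from google.cloud import contact_center_insights_v1

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    model_name = "projects/PROJECT_ID/locations/us-central1/issueModels/ISSUE_MODEL_ID"

    # undeploy_issue_model starts a long-running operation; wait for it to finish.
    client.undeploy_issue_model(name=model_name).result(timeout=600)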

Delete a model

To delete a model, call the delete method of the issueModel resource.

Deleting a model is a long-running operation. You can poll the status of this operation to see if it has completed. For more information, see the long-running operations documentation.
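
The delete call follows the same pattern. A minimal sketch with the Python client library, using a placeholder model resource name:

    # Sketch: delete a topic (issue) model.
    from google.cloud import contact_center_insights_v1

    client = contact_center_insights_v1.ContactCenterInsightsClient()

    model_name = "projects/PROJECT_ID/locations/us-central1/issueModels/ISSUE_MODEL_ID"

    # delete_issue_model starts a long-running operation; wait for it to finish.
    client.delete_issue_model(name=model_name).result(timeout=600)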