public sealed class InferenceParameter : IMessage<InferenceParameter>, IEquatable<InferenceParameter>, IDeepCloneable<InferenceParameter>, IBufferMessage, IMessage
Reference documentation and code samples for the Google Cloud Dialogflow v2 API class InferenceParameter.
The parameters of inference.
Implements
IMessage<InferenceParameter>, IEquatable<InferenceParameter>, IDeepCloneable<InferenceParameter>, IBufferMessage, IMessage
Namespace
Google.Cloud.Dialogflow.V2
Assembly
Google.Cloud.Dialogflow.V2.dll
Constructors
InferenceParameter()
public InferenceParameter()
InferenceParameter(InferenceParameter)
public InferenceParameter(InferenceParameter other)
Parameter

| Name | Description |
| --- | --- |
| other | InferenceParameter |
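A minimal sketch of both constructors, assuming the standard Google.Protobuf message pattern: the parameterless constructor creates an empty message, and the copy constructor deep-copies `other`.

```csharp
using Google.Cloud.Dialogflow.V2;

// Parameterless constructor plus object-initializer syntax.
var parameters = new InferenceParameter
{
    Temperature = 0.2,
    MaxOutputTokens = 256
};

// The copy constructor produces an independent deep copy of `other`.
var copy = new InferenceParameter(parameters);
copy.Temperature = 0.9; // leaves parameters.Temperature at 0.2
```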
Properties
HasMaxOutputTokens
public bool HasMaxOutputTokens { get; }
Gets whether the "max_output_tokens" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasTemperature
public bool HasTemperature { get; }
Gets whether the "temperature" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasTopK
public bool HasTopK { get; }
Gets whether the "top_k" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
HasTopP
public bool HasTopP { get; }
Gets whether the "top_p" field is set
Property Value

| Type | Description |
| --- | --- |
| bool | |
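Taken together, the four Has* properties expose proto3 field presence: they report whether a field was explicitly assigned, which distinguishes an intentional default value from a field that was never set. A short sketch, assuming standard Google.Protobuf presence semantics:

```csharp
using System;
using Google.Cloud.Dialogflow.V2;

var p = new InferenceParameter();
Console.WriteLine(p.HasTemperature); // False: never assigned
Console.WriteLine(p.Temperature);    // 0: the implicit default

p.Temperature = 0.0;                 // explicitly assign the default value
Console.WriteLine(p.HasTemperature); // True: presence is now recorded
```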
MaxOutputTokens
public int MaxOutputTokens { get; set; }
Optional. Maximum number of output tokens for the generator.
Property Value

| Type | Description |
| --- | --- |
| int | |
Temperature
public double Temperature { get; set; }
Optional. Controls the randomness of LLM predictions: lower temperatures produce less random output, higher temperatures more random output. If unset, defaults to 0.
Property Value

| Type | Description |
| --- | --- |
| double | |
TopK
public int TopK { get; set; }
Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled; those tokens are then further filtered based on top-p, and the final token is chosen using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.
Property Value

| Type | Description |
| --- | --- |
| int | |
TopP
public double TopP { get; set; }
Optional. Top-p changes how the model selects tokens for output. Tokens are selected from most probable to least probable (within the top-k pool; see the TopK property) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.
Property Value

| Type | Description |
| --- | --- |
| double | |
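Putting the four generation knobs together: a sketch of a low-randomness configuration consistent with the property descriptions above (lower temperature, top-k, and top-p all narrow the sampling pool). Attaching the message to a generator request is outside the scope of this class and not shown.

```csharp
using Google.Cloud.Dialogflow.V2;

// Conservative, low-randomness settings within the documented ranges.
var inference = new InferenceParameter
{
    MaxOutputTokens = 512, // cap on generated output tokens
    Temperature = 0.1,     // low = less random; acceptable range [0.0, 1.0]
    TopK = 5,              // sample from only the 5 most probable tokens; range [1, 40]
    TopP = 0.8             // keep tokens until cumulative probability reaches 0.8
};
```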