public interface InferenceParameterOrBuilder extends MessageOrBuilder

Implements

MessageOrBuilder

Methods
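Before the individual methods, here is a minimal usage sketch. It assumes the `com.google.cloud.dialogflow.cx.v3beta1` package (the package is not shown on this page) and the standard protobuf-generated `InferenceParameter` message, whose `Builder` and built message both implement this interface; the builder setters follow the usual protobuf naming convention.

```java
import com.google.cloud.dialogflow.cx.v3beta1.InferenceParameter;
import com.google.cloud.dialogflow.cx.v3beta1.InferenceParameterOrBuilder;

public class InferenceParameterExample {

  public static void main(String[] args) {
    // The built message and its Builder both implement InferenceParameterOrBuilder.
    InferenceParameter params =
        InferenceParameter.newBuilder()
            .setMaxOutputTokens(256)
            .setTemperature(0.2)
            .setTopK(40)
            .setTopP(0.95)
            .build();

    describe(params);
    describe(params.toBuilder()); // the Builder view works too
  }

  // Read through the OrBuilder view; has* reports explicit field presence.
  static void describe(InferenceParameterOrBuilder p) {
    if (p.hasMaxOutputTokens()) {
      System.out.println("maxOutputTokens: " + p.getMaxOutputTokens());
    }
    if (p.hasTemperature()) {
      System.out.println("temperature: " + p.getTemperature());
    }
    if (p.hasTopK()) {
      System.out.println("topK: " + p.getTopK());
    }
    if (p.hasTopP()) {
      System.out.println("topP: " + p.getTopP());
    }
  }
}
```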
getMaxOutputTokens()
public abstract int getMaxOutputTokens()
Optional. Maximum number of output tokens for the generator.
optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| int | The maxOutputTokens. |
getTemperature()
public abstract double getTemperature()
Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| double | The temperature. |
getTopK()
public abstract int getTopK()
Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled; tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.
optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| int | The topK. |
getTopP()
public abstract double getTopP()
Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable (see the topK parameter) to the least probable until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.
optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| double | The topP. |
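The descriptions above imply a fixed filtering order: top-k first, then top-p over the survivors, then temperature sampling. The server-side decoder is not published here, so the following is only an illustrative Java sketch of that order, using the token probabilities from the top-p example above:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

/** Illustrative sketch of top-k -> top-p filtering; not the actual server decoder. */
public class SamplingSketch {

  /** Returns the {index, probability} pairs that survive top-k then top-p filtering. */
  static List<double[]> filter(double[] probs, int topK, double topP) {
    // Pair each token index with its probability and rank by probability, descending.
    List<double[]> ranked = new ArrayList<>();
    for (int i = 0; i < probs.length; i++) {
      ranked.add(new double[] {i, probs[i]});
    }
    ranked.sort(Comparator.comparingDouble((double[] t) -> t[1]).reversed());

    // Step 1: keep only the top-k most probable tokens.
    List<double[]> kept = ranked.subList(0, Math.min(topK, ranked.size()));

    // Step 2: walk from most to least probable, keeping tokens until the
    // cumulative probability reaches top-p. The final token would then be
    // drawn from this set using temperature sampling (not shown).
    List<double[]> nucleus = new ArrayList<>();
    double cumulative = 0.0;
    for (double[] token : kept) {
      if (cumulative >= topP) {
        break;
      }
      nucleus.add(token);
      cumulative += token[1];
    }
    return nucleus;
  }

  public static void main(String[] args) {
    // Tokens A, B, C with probabilities 0.3, 0.2, 0.1, as in the top-p doc.
    double[] probs = {0.3, 0.2, 0.1};
    // With top-p = 0.5: A (0.3) and B (0.3 + 0.2 = 0.5) survive; C does not.
    for (double[] token : filter(probs, 40, 0.5)) {
      System.out.printf("token %d, p=%.2f%n", (int) token[0], token[1]);
    }
  }
}
```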
hasMaxOutputTokens()
public abstract boolean hasMaxOutputTokens()
Optional. Maximum number of output tokens for the generator.
optional int32 max_output_tokens = 1 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the maxOutputTokens field is set. |
hasTemperature()
public abstract boolean hasTemperature()
Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0.
optional double temperature = 2 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the temperature field is set. |
hasTopK()
public abstract boolean hasTopK()
Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). At each token-selection step, the top-k tokens with the highest probabilities are sampled; tokens are then further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [1, 40]; the default is 40.
optional int32 top_k = 3 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the topK field is set. |
hasTopP()
public abstract boolean hasTopP()
Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable (see the topK parameter) to the least probable until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable values are in the range [0.0, 1.0]; the default is 0.95.
optional double top_p = 4 [(.google.api.field_behavior) = OPTIONAL];
Returns

| Type | Description |
| --- | --- |
| boolean | Whether the topP field is set. |
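All four fields are declared `optional`, so the generated message tracks explicit presence, which is what the has* accessors expose. Note that for temperature the field docs say an unset value and an explicit 0 behave the same server-side, but the message itself still distinguishes them. A minimal sketch (same assumed package as the earlier example):

```java
import com.google.cloud.dialogflow.cx.v3beta1.InferenceParameter;

public class PresenceCheckExample {
  public static void main(String[] args) {
    InferenceParameter unset = InferenceParameter.newBuilder().build();
    InferenceParameter explicitZero =
        InferenceParameter.newBuilder().setTemperature(0.0).build();

    // Both getters return the proto default 0.0, but only the second
    // message reports the field as set.
    System.out.println(unset.hasTemperature());        // false
    System.out.println(explicitZero.hasTemperature()); // true
    System.out.println(unset.getTemperature());        // 0.0
    System.out.println(explicitZero.getTemperature()); // 0.0
  }
}
```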