Reference documentation and code samples for the Vision AI V1 API class Google::Cloud::VisionAI::V1::VertexCustomConfig.
Message describing VertexCustomConfig.
Inherits
Object
Extended By
Google::Protobuf::MessageExts::ClassMethods
Includes
Google::Protobuf::MessageExts
Methods
#attach_application_metadata
def attach_application_metadata() -> ::Boolean
Returns
(::Boolean) — If true, the prediction request received by the custom model will also contain
metadata with the following schema:
'appPlatformMetadata': {
'ingestionTime': DOUBLE; (UNIX timestamp)
'application': STRING;
'instanceId': STRING;
'node': STRING;
'processor': STRING;
}
#attach_application_metadata=
def attach_application_metadata=(value) -> ::Boolean
Parameter
value (::Boolean) — If true, the prediction request received by the custom model will also contain
metadata with the following schema:
'appPlatformMetadata': {
'ingestionTime': DOUBLE; (UNIX timestamp)
'application': STRING;
'instanceId': STRING;
'node': STRING;
'processor': STRING;
}
Returns
(::Boolean) — If true, the prediction request received by the custom model will also contain
metadata with the following schema:
'appPlatformMetadata': {
'ingestionTime': DOUBLE; (UNIX timestamp)
'application': STRING;
'instanceId': STRING;
'node': STRING;
'processor': STRING;
}
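When attach_application_metadata is true, the custom model container receives this metadata alongside the prediction payload. A minimal plain-Ruby sketch of reading it from a request body (the field values here are made up for illustration; only the field names come from the schema above):

```ruby
require "json"

# Hypothetical request body, shaped like the appPlatformMetadata schema
# that is attached when attach_application_metadata is true.
body = JSON.parse(<<~JSON)
  {
    "appPlatformMetadata": {
      "ingestionTime": 1694250000.25,
      "application": "my-app",
      "instanceId": "instance-1",
      "node": "node-a",
      "processor": "processor-b"
    }
  }
JSON

meta = body["appPlatformMetadata"]
# ingestionTime is a UNIX timestamp (DOUBLE), so it converts directly to Time.
ingested_at = Time.at(meta["ingestionTime"]).utc
puts "frame from #{meta['application']} ingested at #{ingested_at}"
```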
#dedicated_resources
def dedicated_resources() -> ::Google::Cloud::VisionAI::V1::DedicatedResources
Returns
(::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration.
#dedicated_resources=
def dedicated_resources=(value) -> ::Google::Cloud::VisionAI::V1::DedicatedResources
Parameter
value (::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration.
Returns
(::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel and that need a higher degree of manual configuration.
#dynamic_config_input_topic
def dynamic_config_input_topic() -> ::String
Returns
(::String) — Optional. If dynamic_config_input_topic is set, the processor subscribes to the
given topic; only Pub/Sub topics are currently supported. Example
channel:
//pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic
The message schema should be:
message Message {
  // The ID of the stream associated with the application instance.
  string stream_id = 1;
  // The target fps. By default, the custom processor will not send any
  // data to the Vertex Prediction container. Note that once
  // dynamic_config_input_topic is set, max_prediction_fps no longer applies
  // and is superseded by the fps set inside the topic.
  int32 fps = 2;
}
#dynamic_config_input_topic=
def dynamic_config_input_topic=(value) -> ::String
Parameter
value (::String) — Optional. If dynamic_config_input_topic is set, the processor subscribes to the
given topic; only Pub/Sub topics are currently supported. Example
channel:
//pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic
The message schema should be:
message Message {
  // The ID of the stream associated with the application instance.
  string stream_id = 1;
  // The target fps. By default, the custom processor will not send any
  // data to the Vertex Prediction container. Note that once
  // dynamic_config_input_topic is set, max_prediction_fps no longer applies
  // and is superseded by the fps set inside the topic.
  int32 fps = 2;
}
Returns
(::String) — Optional. If dynamic_config_input_topic is set, the processor subscribes to the
given topic; only Pub/Sub topics are currently supported. Example
channel:
//pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic
The message schema should be:
message Message {
  // The ID of the stream associated with the application instance.
  string stream_id = 1;
  // The target fps. By default, the custom processor will not send any
  // data to the Vertex Prediction container. Note that once
  // dynamic_config_input_topic is set, max_prediction_fps no longer applies
  // and is superseded by the fps set inside the topic.
  int32 fps = 2;
}
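The per-stream message above carries just two fields. A plain-Ruby sketch of the same shape (the real message is a protobuf; this struct and its values are illustrative only):

```ruby
# Minimal stand-in for the dynamic-config message described above:
# stream_id names the stream, fps is the target prediction rate.
DynamicConfig = Struct.new(:stream_id, :fps, keyword_init: true)

msg = DynamicConfig.new(stream_id: "input-stream-1", fps: 5)
# Once such a message arrives on the topic, its fps value supersedes
# any max_prediction_fps configured on the VertexCustomConfig.
puts "stream #{msg.stream_id} capped at #{msg.fps} fps"
```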
#max_prediction_fps
def max_prediction_fps() -> ::Integer
Returns
(::Integer) — The maximum prediction frames per second. This attribute sets how fast the
operator sends prediction requests to the Vertex AI endpoint. The default value is
0, which means there is no max prediction fps limit; the operator sends
prediction requests at the input fps.
#max_prediction_fps=
def max_prediction_fps=(value) -> ::Integer
Parameter
value (::Integer) — The maximum prediction frames per second. This attribute sets how fast the
operator sends prediction requests to the Vertex AI endpoint. The default value is
0, which means there is no max prediction fps limit; the operator sends
prediction requests at the input fps.
Returns
(::Integer) — The maximum prediction frames per second. This attribute sets how fast the
operator sends prediction requests to the Vertex AI endpoint. The default value is
0, which means there is no max prediction fps limit; the operator sends
prediction requests at the input fps.
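To make the capping behavior concrete, here is a minimal sketch (not the service implementation) of how a max_prediction_fps cap thins an input frame stream, including the special case where 0 means no limit:

```ruby
# Count how many frames would be sent to the prediction endpoint over
# duration_s seconds, given the input fps and a max_prediction_fps cap.
def frames_to_send(input_fps:, max_prediction_fps:, duration_s:)
  total = input_fps * duration_s
  return total if max_prediction_fps.zero? # 0 means no limit: send every frame
  stride = (input_fps.to_f / max_prediction_fps).ceil
  (0...total).count { |i| (i % stride).zero? } # keep every stride-th frame
end

frames_to_send(input_fps: 30, max_prediction_fps: 5, duration_s: 2) # => 10
frames_to_send(input_fps: 30, max_prediction_fps: 0, duration_s: 2) # => 60
```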
#post_processing_cloud_function
def post_processing_cloud_function() -> ::String
Returns
(::String) — If not empty, the prediction result will be sent to the specified cloud
function for post processing.
The cloud function will receive AppPlatformCloudFunctionRequest, where
the annotations field will be the JSON format of the proto PredictResponse.
The cloud function should return AppPlatformCloudFunctionResponse with
PredictResponse stored in the annotations field.
To drop the prediction output, simply clear the payload field in the
returned AppPlatformCloudFunctionResponse.
#post_processing_cloud_function=
def post_processing_cloud_function=(value) -> ::String
Parameter
value (::String) — If not empty, the prediction result will be sent to the specified cloud
function for post processing.
The cloud function will receive AppPlatformCloudFunctionRequest, where
the annotations field will be the JSON format of the proto PredictResponse.
The cloud function should return AppPlatformCloudFunctionResponse with
PredictResponse stored in the annotations field.
To drop the prediction output, simply clear the payload field in the
returned AppPlatformCloudFunctionResponse.
Returns
(::String) — If not empty, the prediction result will be sent to the specified cloud
function for post processing.
The cloud function will receive AppPlatformCloudFunctionRequest, where
the annotations field will be the JSON format of the proto PredictResponse.
The cloud function should return AppPlatformCloudFunctionResponse with
PredictResponse stored in the annotations field.
To drop the prediction output, simply clear the payload field in the
returned AppPlatformCloudFunctionResponse.
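A minimal plain-Ruby sketch of such a post-processing handler. The function name, the keep condition, and the JSON shapes are hypothetical; only the contract from the description above is assumed: annotations in, annotations out, and an empty payload to drop the output.

```ruby
require "json"

# Hypothetical post-processing handler: receives the request JSON whose
# "annotations" field holds the JSON form of a PredictResponse, and either
# passes the annotations through or returns an empty body to drop them.
def post_process(request_json)
  request = JSON.parse(request_json)
  annotations = request["annotations"]

  if annotations.to_s.include?("person") # hypothetical keep condition
    { "annotations" => annotations }.to_json
  else
    {}.to_json # cleared payload: the prediction output is dropped
  end
end
```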
Last updated 2025-09-09 UTC.