Reference documentation and code samples for the Vision AI V1 API class Google::Cloud::VisionAI::V1::VertexCustomConfig.
Message describing VertexCustomConfig.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
Methods
#attach_application_metadata
def attach_application_metadata() -> ::Boolean
- (::Boolean) — If true, the prediction request received by the custom model will also contain metadata with the following schema:
  'appPlatformMetadata': {
    'ingestionTime': DOUBLE; (UNIX timestamp)
    'application': STRING;
    'instanceId': STRING;
    'node': STRING;
    'processor': STRING;
  }
#attach_application_metadata=
def attach_application_metadata=(value) -> ::Boolean
- value (::Boolean) — If true, the prediction request received by the custom model will also contain metadata with the following schema:
  'appPlatformMetadata': {
    'ingestionTime': DOUBLE; (UNIX timestamp)
    'application': STRING;
    'instanceId': STRING;
    'node': STRING;
    'processor': STRING;
  }
- (::Boolean) — If true, the prediction request received by the custom model will also contain metadata with the following schema:
  'appPlatformMetadata': {
    'ingestionTime': DOUBLE; (UNIX timestamp)
    'application': STRING;
    'instanceId': STRING;
    'node': STRING;
    'processor': STRING;
  }
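For example, a minimal sketch of enabling this flag, assuming the generated protobuf classes from the google-cloud-vision_ai gem and its google/cloud/vision_ai/v1 require path:

require "google/cloud/vision_ai/v1"

# Ask App Platform to attach the appPlatformMetadata block
# (ingestionTime, application, instanceId, node, processor)
# to every prediction request sent to the custom model.
config = Google::Cloud::VisionAI::V1::VertexCustomConfig.new(
  attach_application_metadata: true
)

config.attach_application_metadata # => true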
#dedicated_resources
def dedicated_resources() -> ::Google::Cloud::VisionAI::V1::DedicatedResources
- (::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
#dedicated_resources=
def dedicated_resources=(value) -> ::Google::Cloud::VisionAI::V1::DedicatedResources
- value (::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
- (::Google::Cloud::VisionAI::V1::DedicatedResources) — A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
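A hedged sketch of wiring in dedicated serving resources. The DedicatedResources and MachineSpec field names below (machine_spec, machine_type, min_replica_count, max_replica_count) mirror the Vertex AI shape and are assumptions to verify against this gem's generated types:

require "google/cloud/vision_ai/v1"

# Describe the manually configured serving resources for the deployed model.
resources = Google::Cloud::VisionAI::V1::DedicatedResources.new(
  machine_spec: Google::Cloud::VisionAI::V1::MachineSpec.new(
    machine_type: "n1-standard-8" # assumed field names; check the generated types
  ),
  min_replica_count: 1,
  max_replica_count: 2
)

config = Google::Cloud::VisionAI::V1::VertexCustomConfig.new(
  dedicated_resources: resources
)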
#dynamic_config_input_topic
def dynamic_config_input_topic() -> ::String
- (::String) — Optional. When this configuration input topic is set, the processor subscribes to the given topic; only Pub/Sub topics are currently supported. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:
  message Message {
    // The ID of the stream associated with the application instance.
    string stream_id = 1;
    // The target fps. By default, the custom processor does not send any data to the Vertex Prediction container. Note that once dynamic_config_input_topic is set, max_prediction_fps no longer applies and is superseded by the fps set inside the topic.
    int32 fps = 2;
  }
#dynamic_config_input_topic=
def dynamic_config_input_topic=(value) -> ::String
- value (::String) — Optional. When this configuration input topic is set, the processor subscribes to the given topic; only Pub/Sub topics are currently supported. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:
  message Message {
    // The ID of the stream associated with the application instance.
    string stream_id = 1;
    // The target fps. By default, the custom processor does not send any data to the Vertex Prediction container. Note that once dynamic_config_input_topic is set, max_prediction_fps no longer applies and is superseded by the fps set inside the topic.
    int32 fps = 2;
  }
- (::String) — Optional. When this configuration input topic is set, the processor subscribes to the given topic; only Pub/Sub topics are currently supported. Example channel: //pubsub.googleapis.com/projects/visionai-testing-stable/topics/test-topic. The message schema should be:
  message Message {
    // The ID of the stream associated with the application instance.
    string stream_id = 1;
    // The target fps. By default, the custom processor does not send any data to the Vertex Prediction container. Note that once dynamic_config_input_topic is set, max_prediction_fps no longer applies and is superseded by the fps set inside the topic.
    int32 fps = 2;
  }
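A sketch of pointing the processor at a Pub/Sub topic for dynamic fps control; the project and topic names are placeholders:

require "google/cloud/vision_ai/v1"

# Subscribe the processor to a Pub/Sub topic that carries the
# Message { stream_id, fps } payloads described above. Once set,
# the fps published on the topic takes precedence over max_prediction_fps.
config = Google::Cloud::VisionAI::V1::VertexCustomConfig.new(
  dynamic_config_input_topic:
    "//pubsub.googleapis.com/projects/my-project/topics/my-config-topic"
)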
#max_prediction_fps
def max_prediction_fps() -> ::Integer
- (::Integer) — The maximum prediction frames per second (fps). This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no max prediction fps limit; the operator sends prediction requests at the input fps.
#max_prediction_fps=
def max_prediction_fps=(value) -> ::Integer
- value (::Integer) — The maximum prediction frames per second (fps). This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no max prediction fps limit; the operator sends prediction requests at the input fps.
- (::Integer) — The maximum prediction frames per second (fps). This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no max prediction fps limit; the operator sends prediction requests at the input fps.
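For example, capping the prediction rate (a minimal sketch using the generated protobuf class):

require "google/cloud/vision_ai/v1"

# Send at most 5 prediction requests per second to the Vertex AI endpoint.
# Leaving the field at 0 (the default) means requests are sent at the input fps.
config = Google::Cloud::VisionAI::V1::VertexCustomConfig.new(
  max_prediction_fps: 5
)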
#post_processing_cloud_function
def post_processing_cloud_function() -> ::String
- (::String) — If not empty, the prediction result will be sent to the specified cloud function for post processing.
  - The cloud function will receive AppPlatformCloudFunctionRequest, where the annotations field will be the JSON format of the proto PredictResponse.
  - The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  - To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
#post_processing_cloud_function=
def post_processing_cloud_function=(value) -> ::String
- value (::String) — If not empty, the prediction result will be sent to the specified cloud function for post processing.
  - The cloud function will receive AppPlatformCloudFunctionRequest, where the annotations field will be the JSON format of the proto PredictResponse.
  - The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  - To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
- (::String) — If not empty, the prediction result will be sent to the specified cloud function for post processing.
  - The cloud function will receive AppPlatformCloudFunctionRequest, where the annotations field will be the JSON format of the proto PredictResponse.
  - The cloud function should return AppPlatformCloudFunctionResponse with PredictResponse stored in the annotations field.
  - To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
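A sketch of routing predictions through a post-processing Cloud Function; the function URL is a hypothetical placeholder:

require "google/cloud/vision_ai/v1"

# The function at this (hypothetical) URL receives an
# AppPlatformCloudFunctionRequest whose annotations field is the JSON form
# of PredictResponse, and returns an AppPlatformCloudFunctionResponse.
# Returning a response with a cleared payload drops the prediction output.
config = Google::Cloud::VisionAI::V1::VertexCustomConfig.new(
  post_processing_cloud_function:
    "https://us-central1-my-project.cloudfunctions.net/post-process"
)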