Reference documentation and code samples for the Vertex AI V1 API class Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization.
Visualization configurations for image explanation.
Inherits
- Object
Extended By
- Google::Protobuf::MessageExts::ClassMethods
Includes
- Google::Protobuf::MessageExts
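The snippets on this page are minimal sketches, not part of the generated reference. They assume the class is loaded through the google-cloud-ai_platform-v1 gem (the require path shown is the conventional one for that gem) and rely only on the standard protobuf message constructor, which accepts field names as keyword arguments and enum values as symbols.

```ruby
require "google/cloud/ai_platform/v1"

# Construct a Visualization message. Any field left unset is interpreted
# server-side using the defaults documented for each attribute below.
visualization =
  Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization.new(
    type:      :OUTLINES,
    polarity:  :POSITIVE,
    color_map: :PINK_GREEN
  )
```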
Methods
#clip_percent_lowerbound
def clip_percent_lowerbound() -> ::Float
- (::Float) — Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.
#clip_percent_lowerbound=
def clip_percent_lowerbound=(value) -> ::Float
- value (::Float) — Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.
- (::Float) — Excludes attributions below the specified percentile from the highlighted areas. Defaults to 62.
#clip_percent_upperbound
def clip_percent_upperbound() -> ::Float
- (::Float) — Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
#clip_percent_upperbound=
def clip_percent_upperbound=(value) -> ::Float
- value (::Float) — Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
- (::Float) — Excludes attributions above the specified percentile from the highlighted areas. Using the clip_percent_upperbound and clip_percent_lowerbound together can be useful for filtering out noise and making it easier to see areas of strong attribution. Defaults to 99.9.
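As noted above, tightening both clip bounds together filters out low-signal attributions. A short sketch using the setters documented here, continuing the visualization object from the earlier example (the specific percentile values are illustrative):

```ruby
# Show only attributions between the 75th and 99.9th percentiles;
# raising the lower bound above the documented default of 62 hides
# weaker attributions and reduces visual noise.
visualization.clip_percent_lowerbound = 75.0
visualization.clip_percent_upperbound = 99.9
```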
#color_map
def color_map() -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::ColorMap
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::ColorMap) — The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue.
#color_map=
def color_map=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::ColorMap
- value (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::ColorMap) — The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue.
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::ColorMap) — The color scheme used for the highlighted areas. Defaults to PINK_GREEN for [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution], which shows positive attributions in green and negative in pink. Defaults to VIRIDIS for [XRAI attribution][google.cloud.aiplatform.v1.ExplanationParameters.xrai_attribution], which highlights the most influential regions in yellow and the least influential in blue.
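To override the per-attribution default described above, set the color map explicitly; for example, forcing the VIRIDIS scheme (a sketch continuing the same object):

```ruby
# Render attributions on the yellow (most influential) to blue
# (least influential) VIRIDIS scale regardless of attribution method.
visualization.color_map = :VIRIDIS
```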
#overlay_type
def overlay_type() -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::OverlayType
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::OverlayType) — How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
#overlay_type=
def overlay_type=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::OverlayType
- value (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::OverlayType) — How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::OverlayType) — How the original image is displayed in the visualization. Adjusting the overlay can help increase visual clarity if the original image makes it difficult to view the visualization. Defaults to NONE.
#polarity
def polarity() -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Polarity
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Polarity) — Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.
#polarity=
def polarity=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Polarity
- value (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Polarity) — Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Polarity) — Whether to highlight only pixels with positive contributions, only negative ones, or both. Defaults to POSITIVE.
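To surface negative evidence as well, the polarity can be widened. The BOTH value used below is implied by the description above but is an assumption about the Polarity enum (a sketch continuing the same object):

```ruby
# Highlight both positively and negatively contributing pixels.
# :BOTH is assumed from the field description; only POSITIVE is
# named explicitly as the default above.
visualization.polarity = :BOTH
```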
#type
def type() -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Type
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Type) — Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
#type=
def type=(value) -> ::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Type
- value (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Type) — Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
- (::Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization::Type) — Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution]. OUTLINES shows regions of attribution, while PIXELS shows per-pixel attribution. Defaults to OUTLINES.
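Putting the pieces together, the sketch below configures a visualization for an Integrated Gradients explanation using all of the fields documented on this page. It assumes, based on the nesting of this class, that the parent InputMetadata message exposes this configuration through a visualization field.

```ruby
# Outlined regions, positive attributions only, pink/green color scheme,
# default overlay, and the documented default clip percentiles.
input_metadata = Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata.new(
  visualization: Google::Cloud::AIPlatform::V1::ExplanationMetadata::InputMetadata::Visualization.new(
    type:                    :OUTLINES,
    polarity:                :POSITIVE,
    color_map:               :PINK_GREEN,
    overlay_type:            :NONE,
    clip_percent_lowerbound: 62.0,
    clip_percent_upperbound: 99.9
  )
)
```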