BagOfFeatures
The tensor represents a bag of features where each index maps to a feature.
[InputMetadata.index_feature_mapping][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.index_feature_mapping]
must be provided for this encoding.
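For example, a minimal sketch using the generated .NET types (the input
values and feature names are hypothetical, chosen only to illustrate the
index-to-feature mapping):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// An input tensor such as [27, 6.0, 150] is read as the features
// "age", "height", and "weight" via the index-to-feature mapping.
var inputMetadata = new ExplanationMetadata.Types.InputMetadata
{
    Encoding = ExplanationMetadata.Types.InputMetadata.Types.Encoding.BagOfFeatures,
    IndexFeatureMapping = { "age", "height", "weight" },
};
```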
BagOfFeaturesSparse
The tensor represents a bag of features where each index maps to a feature.
Zero values in the tensor indicate that the corresponding feature is absent.
[InputMetadata.index_feature_mapping][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.index_feature_mapping]
must be provided for this encoding.
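For example, a minimal sketch using the generated .NET types (the feature
names and input values are hypothetical):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// A sparse input such as [2, 0, 5, 0, 1]: the zeros at indices 1 and 3
// mean the features "b" and "d" are absent from this instance.
var inputMetadata = new ExplanationMetadata.Types.InputMetadata
{
    Encoding = ExplanationMetadata.Types.InputMetadata.Types.Encoding.BagOfFeaturesSparse,
    IndexFeatureMapping = { "a", "b", "c", "d", "e" },
};
```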
CombinedEmbedding
The tensor is encoded into a 1-dimensional array represented by an encoded tensor.
[InputMetadata.encoded_tensor_name][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.encoded_tensor_name]
must be provided for this encoding.
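For example, a minimal sketch using the generated .NET types (the tensor
name "text_embedding" and the values in the comments are hypothetical):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// An input such as ["This", "is", "a", "test"] might be embedded by the
// model into a single 1-D vector, e.g. [0.1, 0.2, 0.3, 0.4], which the
// model exposes under a separate (hypothetical) tensor name.
var inputMetadata = new ExplanationMetadata.Types.InputMetadata
{
    Encoding = ExplanationMetadata.Types.InputMetadata.Types.Encoding.CombinedEmbedding,
    EncodedTensorName = "text_embedding",
};
```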
ConcatEmbedding
Select this encoding when the input tensor is encoded into a 2-dimensional array represented by an encoded tensor.
[InputMetadata.encoded_tensor_name][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.encoded_tensor_name]
must be provided for this encoding. The first dimension of the encoded
tensor's shape is the same as the input tensor's shape.
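For example, a minimal sketch using the generated .NET types (the tensor
name "token_embeddings" and the shapes in the comments are hypothetical):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// Each element of the input gets its own embedding row, so the encoded
// tensor's first dimension matches the input tensor's shape:
//   input   = ["This", "is", "a", "test"]   -> shape [4]
//   encoded = [[0.1, ...], [0.2, ...],
//              [0.3, ...], [0.4, ...]]      -> shape [4, d]
var inputMetadata = new ExplanationMetadata.Types.InputMetadata
{
    Encoding = ExplanationMetadata.Types.InputMetadata.Types.Encoding.ConcatEmbedding,
    EncodedTensorName = "token_embeddings",
};
```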
Indicator
The tensor is a list of binary values indicating whether a feature exists
or not (1 indicates existence).
[InputMetadata.index_feature_mapping][google.cloud.aiplatform.v1beta1.ExplanationMetadata.InputMetadata.index_feature_mapping]
must be provided for this encoding.
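For example, a minimal sketch using the generated .NET types (the feature
names and input values are hypothetical):

```csharp
using Google.Cloud.AIPlatform.V1Beta1;

// A binary input such as [1, 0, 1, 0, 1] means features "a", "c", and "e"
// are present, while "b" and "d" are not.
var inputMetadata = new ExplanationMetadata.Types.InputMetadata
{
    Encoding = ExplanationMetadata.Types.InputMetadata.Types.Encoding.Indicator,
    IndexFeatureMapping = { "a", "b", "c", "d", "e" },
};
```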
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-14 UTC."],[[["\u003cp\u003eThis documentation details the \u003ccode\u003eExplanationMetadata.Types.InputMetadata.Types.Encoding\u003c/code\u003e enum within the Vertex AI v1beta1 API for .NET, defining how features are encoded.\u003c/p\u003e\n"],["\u003cp\u003eThe \u003ccode\u003eEncoding\u003c/code\u003e enum provides several options including \u003ccode\u003eIdentity\u003c/code\u003e, \u003ccode\u003eBagOfFeatures\u003c/code\u003e, \u003ccode\u003eBagOfFeaturesSparse\u003c/code\u003e, \u003ccode\u003eCombinedEmbedding\u003c/code\u003e, \u003ccode\u003eConcatEmbedding\u003c/code\u003e, \u003ccode\u003eIndicator\u003c/code\u003e, and \u003ccode\u003eUnspecified\u003c/code\u003e which defaults to \u003ccode\u003eIdentity\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eDifferent encoding methods represent data in distinct formats, such as bag-of-features, sparse representations, or various types of embeddings, with specific requirements for accompanying metadata like \u003ccode\u003eindex_feature_mapping\u003c/code\u003e or \u003ccode\u003eencoded_tensor_name\u003c/code\u003e.\u003c/p\u003e\n"],["\u003cp\u003eEach encoding type has a specific use case and description, that can be used to better understand the input when looking for explanations, such as the \u003ccode\u003eBagOfFeatures\u003c/code\u003e encoding, which requires a list of input index to feature names.\u003c/p\u003e\n"],["\u003cp\u003eThis API documentation also has reference links to previous versions, such as 1.0.0-beta20, and gives context as to the latest version, 1.0.0-beta21.\u003c/p\u003e\n"]]],[],null,[]]