# Introduction to Vertex AI Feature Store (Legacy)

Vertex AI Feature Store (Legacy) provides a centralized repository for organizing, storing, and serving ML features. Using a central featurestore enables an organization to efficiently share, discover, and re-use ML features at scale, which can increase the velocity of developing and deploying new ML applications.

Vertex AI Feature Store (Legacy) is a fully managed solution, which manages and scales the underlying infrastructure such as storage and compute resources. This means that data scientists can focus on the feature computation logic instead of worrying about the challenges of deploying features into production.

Vertex AI Feature Store (Legacy) is an integrated part of Vertex AI. You can use Vertex AI Feature Store (Legacy) independently or as part of Vertex AI workflows. For example, you can fetch data from Vertex AI Feature Store (Legacy) to train custom or AutoML models in Vertex AI.

Vertex AI Feature Store (Legacy) is the predecessor of Vertex AI Feature Store. To learn more about Vertex AI Feature Store, see the [Vertex AI Feature Store documentation](/vertex-ai/docs/featurestore/latest/overview).
Overview
--------

Use Vertex AI Feature Store (Legacy) to create and manage *featurestores*, *entity types*, and *features*. A featurestore is a top-level container for your features and their values. When you set up a featurestore, permitted users can add and share their features without additional engineering support. Users can define features and then import (ingest) feature values from various data sources. [Learn more about Vertex AI Feature Store (Legacy) data model and resources](/vertex-ai/docs/featurestore/concepts).

Any permitted user can search and retrieve values from the featurestore. For example, you can find features and then do a batch export to get training data for ML model creation. You can also retrieve feature values in real time to perform fast online predictions.

Benefits
--------

Before using Vertex AI Feature Store (Legacy), you might have computed feature values and saved them in various locations, such as tables in BigQuery and files in Cloud Storage. Moreover, you might have built and managed separate solutions for the storage and consumption of feature values. In contrast, Vertex AI Feature Store (Legacy) provides a unified solution for batch and online storage as well as the serving of ML features. The following sections detail the benefits that Vertex AI Feature Store (Legacy) provides.

### Share features across your organization

If you produce features in a featurestore, you can quickly share them with others for training or serving tasks. Teams don't need to re-engineer features for different projects or use cases. Also, because you can manage and serve features from a central repository, you can maintain consistency across your organization and reduce duplicate efforts, particularly for high-value features.

Vertex AI Feature Store (Legacy) provides search and filter capabilities so that others can discover and reuse existing features. For each feature, you can view relevant metadata to determine the quality and usage patterns of the feature. For example, you can view the fraction of entities that have a valid value for a feature (also known as *feature coverage*) and the statistical distribution of feature values.
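To make the *feature coverage* metric concrete, here is a minimal, self-contained Python sketch of how such a fraction could be computed over a set of entities. This is illustrative only and does not use the Vertex AI SDK; the dictionary layout and the `ratings` data are invented for the example.

```python
from typing import Mapping, Optional


def feature_coverage(values: Mapping[str, Optional[float]]) -> float:
    """Fraction of entities that have a valid (non-null) value for one feature."""
    if not values:
        return 0.0
    valid = sum(1 for v in values.values() if v is not None)
    return valid / len(values)


# Feature values keyed by entity ID; None marks a missing value.
ratings = {"movie_01": 4.5, "movie_02": None, "movie_03": 3.0, "movie_04": 5.0}
print(feature_coverage(ratings))  # 0.75
```

A coverage well below 1.0, as here, can signal that a feature is sparsely populated and may be less useful for training.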
### Managed solution for online serving at scale

Vertex AI Feature Store (Legacy) provides a managed solution for online feature serving (low-latency serving), which is critical for making timely online predictions. You don't need to build and operate low-latency data serving infrastructure; Vertex AI Feature Store (Legacy) does this for you and scales as needed. You code the logic to generate features but offload the task of serving them. All of this included management reduces the friction of building new features, enabling data scientists to do their work without worrying about deployment.
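The key property of an online store is that point reads of the latest feature values are cheap. As a rough mental model only (this is a toy in-memory sketch, not how the managed service is implemented, and the class and entity names are invented), an online store can keep just the newest value per entity and feature:

```python
from collections import defaultdict


class OnlineStore:
    """Toy in-memory online store: keeps only the latest value per
    (entity_id, feature_id), so point reads stay O(1)."""

    def __init__(self):
        # entity_id -> {feature_id: (timestamp, value)}
        self._latest = defaultdict(dict)

    def write(self, entity_id, feature_id, value, ts):
        current = self._latest[entity_id].get(feature_id)
        if current is None or ts >= current[0]:
            self._latest[entity_id][feature_id] = (ts, value)

    def read(self, entity_id, feature_ids):
        row = self._latest.get(entity_id, {})
        return {f: row[f][1] for f in feature_ids if f in row}


store = OnlineStore()
store.write("user_42", "avg_rating", 4.1, ts=100)
store.write("user_42", "avg_rating", 4.3, ts=200)  # newer value wins
store.write("user_42", "avg_rating", 3.9, ts=150)  # stale write is ignored
print(store.read("user_42", ["avg_rating"]))  # {'avg_rating': 4.3}
```

The stale-write check mirrors the idea that online serving returns the most recent value for each feature regardless of import order.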
### Mitigate training-serving skew

*Training-serving skew* occurs when the feature data distribution that you use in production differs from the feature data distribution that was used to train your model. This skew often results in discrepancies between a model's performance during training and its performance in production. The following examples describe how Vertex AI Feature Store (Legacy) can address potential sources of training-serving skew:

- Vertex AI Feature Store (Legacy) ensures that a feature value is imported once into a featurestore and that the same value is reused for both training and serving. Without a featurestore, you might have different code paths for generating features for training and serving, so feature values might differ between the two.
- Vertex AI Feature Store (Legacy) provides point-in-time lookups to fetch historical data for training. With these lookups, you can mitigate data leakage by fetching only the feature values that were available before a prediction, not after.

For more information about how to detect training-serving skew, see [View feature value anomalies](/vertex-ai/docs/featurestore/monitoring#view_feature_value_anomalies).
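The point-in-time lookup semantics above can be sketched in a few lines of plain Python (this is a conceptual illustration, not the service's implementation; the `clicks` data is invented): for each prediction timestamp, take the latest feature value at or before that timestamp, never a later one.

```python
import bisect


def point_in_time_value(history, prediction_ts):
    """history: list of (timestamp, value) pairs sorted by timestamp.
    Return the latest value with timestamp <= prediction_ts, or None.
    Values recorded after the prediction time are never returned,
    which is what prevents data leakage."""
    timestamps = [ts for ts, _ in history]
    i = bisect.bisect_right(timestamps, prediction_ts)
    return history[i - 1][1] if i > 0 else None


clicks = [(100, 3), (200, 7), (300, 9)]
print(point_in_time_value(clicks, 250))  # 7 (the value at ts=300 is excluded)
print(point_in_time_value(clicks, 50))   # None (no value existed yet)
```

Building a training set this way, one lookup per labeled example, yields features exactly as they would have appeared at serving time.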
### Detect drift

Vertex AI Feature Store (Legacy) helps you detect significant changes to your feature data distribution over time, also known as *drift*. Vertex AI Feature Store (Legacy) constantly tracks the distribution of feature values that are imported into the featurestore. As feature drift increases, you might need to retrain models that use the affected features. For more information about how to detect drift, see [View feature value anomalies](/vertex-ai/docs/featurestore/monitoring#view_feature_value_anomalies).
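As a rough illustration of what "tracking a distribution for drift" means (this sketch uses total variation distance on a categorical feature; it is not the statistic the managed service uses, and the data and threshold are invented):

```python
from collections import Counter


def total_variation(baseline, current):
    """Total variation distance between two empirical categorical
    distributions: 0.0 means identical, 1.0 means fully disjoint."""
    p, q = Counter(baseline), Counter(current)
    n_p, n_q = len(baseline), len(current)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p[k] / n_p - q[k] / n_q) for k in keys)


train_values = ["premium"] * 80 + ["basic"] * 20   # distribution at training time
serve_values = ["premium"] * 50 + ["basic"] * 50   # distribution seen in production
drift = total_variation(train_values, serve_values)
print(round(drift, 3))  # 0.3
if drift > 0.25:  # illustrative threshold, not a recommended value
    print("feature has drifted; consider retraining")
```

A rising distance between the training-time distribution and the recently imported values is the signal that a retrain may be needed.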
Quotas and limits
-----------------

Vertex AI Feature Store (Legacy) enforces quotas and limits to help you manage resources by setting your own usage limits, and to protect the community of Google Cloud users by preventing unforeseen spikes in usage. To avoid hitting unplanned constraints, review the Vertex AI Feature Store (Legacy) quotas on the [Quotas and limits](/vertex-ai/quotas#featurestore) page. For example, Vertex AI Feature Store (Legacy) sets a quota on the number of online serving nodes and a quota on the number of online serving requests that you can make per minute.
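One common client-side tactic for staying under a per-minute request quota is a sliding-window limiter. The sketch below is a generic illustration (the class name and limits are invented, and real clients often prefer retries with exponential backoff instead):

```python
import time


class QuotaLimiter:
    """Client-side sliding-window limiter: allow at most `limit`
    requests within any `window` seconds."""

    def __init__(self, limit, window=60.0):
        self.limit = limit
        self.window = window
        self._stamps = []

    def try_acquire(self, now=None):
        now = time.monotonic() if now is None else now
        cutoff = now - self.window
        self._stamps = [t for t in self._stamps if t > cutoff]
        if len(self._stamps) < self.limit:
            self._stamps.append(now)
            return True
        return False


limiter = QuotaLimiter(limit=2, window=60.0)
print(limiter.try_acquire(now=0.0))   # True
print(limiter.try_acquire(now=1.0))   # True
print(limiter.try_acquire(now=2.0))   # False (per-minute budget spent)
print(limiter.try_acquire(now=61.5))  # True  (window slid past t=1.0)
```

Throttling before the service rejects a request keeps bursts from consuming the shared quota.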
Data retention
--------------

Vertex AI Feature Store (Legacy) keeps feature values up to the [data retention limit](/vertex-ai/quotas#featurestore). This limit is based on the timestamp associated with the feature values, not on when the values were imported. Vertex AI Feature Store (Legacy) schedules the deletion of values whose timestamps exceed the limit.
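The distinction between the feature timestamp and the import time can be made concrete with a small sketch (illustrative only; the data and the retention period are invented):

```python
def expired_values(values, now, retention_seconds):
    """Return the (timestamp, value) pairs whose *feature timestamp*,
    not import time, is older than the retention limit."""
    cutoff = now - retention_seconds
    return [(ts, v) for ts, v in values if ts < cutoff]


# (feature_timestamp, value) pairs; the order they were imported in
# does not matter for retention.
history = [(100, "old"), (500, "recent"), (900, "new")]
print(expired_values(history, now=1000, retention_seconds=600))  # [(100, 'old')]
```

A value imported yesterday but stamped with a two-year-old timestamp would still be scheduled for deletion once that timestamp falls outside the retention window.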
Pricing
-------

Vertex AI Feature Store (Legacy) pricing is based on several factors, such as how much data you store and the number of featurestore online nodes you use. Charges start right after you create a featurestore. For more information, see [Vertex AI Feature Store (Legacy) pricing](/vertex-ai/pricing#featurestore).
What's next
-----------

- Learn about the Vertex AI Feature Store (Legacy) [data model and its resources](/vertex-ai/docs/featurestore/concepts).
- Learn [how to set up a project and set Identity and Access Management permissions for Vertex AI Feature Store (Legacy)](/vertex-ai/docs/featurestore/setup).
- View Vertex AI Feature Store (Legacy) quotas on the [Quotas and limits page](/vertex-ai/quotas#featurestore).

Last updated (UTC): 2025-08-18.