Name of the Experiment to filter results by. If not set, results of the currently active experiment are returned.

include_time_series

bool

Optional. Whether to include time series metrics in the returned DataFrame. Defaults to True. Setting this to False can significantly improve execution time and reduce quota-contributing calls; this is recommended when time series metrics are not needed or when the number of runs in the Experiment is large. For time series metrics, consider querying a specific run using get_time_series_data_frame.
Logs time series metrics to this Experiment Run.

Requires that the experiment or experiment run have a backing Vertex Tensorboard resource.
    my_tensorboard = aiplatform.Tensorboard(...)
    aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
    aiplatform.start_run('my-run')

    # increments steps as logged
    for i in range(10):
        aiplatform.log_time_series_metrics({'loss': loss})

    # explicitly log steps
    for i in range(10):
        aiplatform.log_time_series_metrics({'loss': loss}, step=i)
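The implicit-step behavior in the first loop (each call without `step` advances past the latest step already logged) can be sketched with a hypothetical in-memory logger. The class and its starting step are illustrative stand-ins, not SDK API:

```python
class FakeTimeSeriesLogger:
    """Illustrative stand-in: when `step` is omitted, the next step is one
    past the latest step already logged across all time series metrics.
    (Starting step and storage are assumptions, not the SDK's internals.)"""

    def __init__(self):
        self.points = []  # list of (step, metrics) tuples

    def log(self, metrics, step=None):
        if step is None:
            latest = max((s for s, _ in self.points), default=0)
            step = latest + 1
        self.points.append((step, metrics))
        return step

logger = FakeTimeSeriesLogger()
for loss in [0.9, 0.7, 0.5]:
    logger.log({'loss': loss})          # steps auto-increment: 1, 2, 3
logger.log({'loss': 0.4}, step=10)      # explicit step
logger.log({'loss': 0.35})              # resumes past the latest: 11
print([s for s, _ in logger.points])    # [1, 2, 3, 10, 11]
```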
Parameters

| Name | Type | Description |
|------|------|-------------|
| metrics | Dict[str, Union[str, float]] | Required. Dictionary where keys are metric names and values are metric values. |
| step | int | Optional. Step index of this data point within the run. If not provided, the latest step among all time series metrics already logged is used. |
| wall_time | timestamp_pb2.Timestamp | Optional. Wall clock timestamp when this data point is generated by the end user. If not provided, it is generated from the value of time.time(). |
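A protobuf `Timestamp` carries whole seconds plus a nanosecond remainder, and the default `wall_time` is derived from `time.time()`. A minimal, pure-Python sketch of that float-to-fields conversion (independent of the SDK; the helper name is ours):

```python
def to_timestamp_fields(t: float) -> tuple[int, int]:
    """Split a Unix time (as returned by time.time()) into the
    (seconds, nanos) fields a timestamp_pb2.Timestamp carries."""
    seconds = int(t)
    nanos = round((t - seconds) * 1_000_000_000)
    if nanos == 1_000_000_000:  # rounding spilled into the next second
        seconds, nanos = seconds + 1, 0
    return seconds, nanos

print(to_timestamp_fields(1700000000.25))  # (1700000000, 250000000)
```

In practice you would build the message itself, e.g. `timestamp_pb2.Timestamp(seconds=..., nanos=...)`, or let the library fill it via `Timestamp.GetCurrentTime()`.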
# Package preview (1.95.1)

Last updated 2025-08-07 UTC.
API documentation for `preview` package.
Packages
--------

### [tuning](/python/docs/reference/vertexai/latest/vertexai.preview.tuning)

API documentation for `tuning` package.

### [reasoning_engines](/python/docs/reference/vertexai/latest/vertexai.preview.reasoning_engines)

API documentation for `reasoning_engines` package.

Modules
-------

### [generative_models](/python/docs/reference/vertexai/latest/vertexai.preview.generative_models)

Classes for working with the Gemini models.

### [prompts](/python/docs/reference/vertexai/latest/vertexai.preview.prompts)

API documentation for `prompts` module.

### [language_models](/python/docs/reference/vertexai/latest/vertexai.preview.language_models)

Classes for working with language models.

### [vision_models](/python/docs/reference/vertexai/latest/vertexai.preview.vision_models)

Classes for working with vision models.

Functions
---------

### end_run

    end_run(
        state: google.cloud.aiplatform_v1.types.execution.Execution.State = State.COMPLETE,
    )

Ends the current experiment run.
    aiplatform.start_run('my-run')
    ...
    aiplatform.end_run()

### get_experiment_df

    get_experiment_df(
        experiment: typing.Optional[str] = None, *, include_time_series: bool = True
    ) -> pd.DataFrame

Returns a Pandas DataFrame of the parameters and metrics associated with one experiment.

Example:

    aiplatform.init(experiment='exp-1')
    aiplatform.start_run(run='run-1')
    aiplatform.log_params({'learning_rate': 0.1})
    aiplatform.log_metrics({'accuracy': 0.9})

    aiplatform.start_run(run='run-2')
    aiplatform.log_params({'learning_rate': 0.2})
    aiplatform.log_metrics({'accuracy': 0.95})

    aiplatform.get_experiment_df()

Will result in the following DataFrame:

    experiment_name | run_name | param.learning_rate | metric.accuracy
    exp-1           | run-1    | 0.1                 | 0.9
    exp-1           | run-2    | 0.2                 | 0.95

### log_classification_metrics

    log_classification_metrics(
        *,
        labels: typing.Optional[typing.List[str]] = None,
        matrix: typing.Optional[typing.List[typing.List[int]]] = None,
        fpr: typing.Optional[typing.List[float]] = None,
        tpr: typing.Optional[typing.List[float]] = None,
        threshold: typing.Optional[typing.List[float]] = None,
        display_name: typing.Optional[str] = None
    ) -> (
        google.cloud.aiplatform.metadata.schema.google.artifact_schema.ClassificationMetrics
    )

Creates an artifact for classification metrics and logs it to the ExperimentRun. Currently supports confusion matrices and ROC curves.

    my_run = aiplatform.ExperimentRun('my-run', experiment='my-experiment')
    classification_metrics = my_run.log_classification_metrics(
        display_name='my-classification-metrics',
        labels=['cat', 'dog'],
        matrix=[[9, 1], [1, 9]],
        fpr=[0.1, 0.5, 0.9],
        tpr=[0.1, 0.7, 0.9],
        threshold=[0.9, 0.5, 0.1],
    )

### log_metrics

    log_metrics(metrics: typing.Dict[str, typing.Union[float, int, str]])

Logs single or multiple metrics with specified key-value pairs.

Metrics with the same key will be overwritten.
    aiplatform.start_run('my-run', experiment='my-experiment')
    aiplatform.log_metrics({'accuracy': 0.9, 'recall': 0.8})

### log_params

    log_params(params: typing.Dict[str, typing.Union[float, int, str]])

Logs single or multiple parameters with specified key-value pairs.

Parameters with the same key will be overwritten.

    aiplatform.start_run('my-run')
    aiplatform.log_params({'learning_rate': 0.1, 'dropout_rate': 0.2})

### log_time_series_metrics

    log_time_series_metrics(
        metrics: typing.Dict[str, float],
        step: typing.Optional[int] = None,
        wall_time: typing.Optional[google.protobuf.timestamp_pb2.Timestamp] = None,
    )

Logs time series metrics to this Experiment Run.

Requires that the experiment or experiment run have a backing Vertex Tensorboard resource.

    my_tensorboard = aiplatform.Tensorboard(...)
    aiplatform.init(experiment='my-experiment', experiment_tensorboard=my_tensorboard)
    aiplatform.start_run('my-run')

    # increments steps as logged
    for i in range(10):
        aiplatform.log_time_series_metrics({'loss': loss})

    # explicitly log steps
    for i in range(10):
        aiplatform.log_time_series_metrics({'loss': loss}, step=i)

### start_run

    start_run(
        run: str,
        *,
        tensorboard: typing.Optional[
            typing.Union[
                google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str
            ]
        ] = None,
        resume=False
    ) -> google.cloud.aiplatform.metadata.experiment_run_resource.ExperimentRun

Starts a run in the current session.

    aiplatform.init(experiment='my-experiment')
    aiplatform.start_run('my-run')
    aiplatform.log_params({'learning_rate': 0.1})

Use as a context manager.
The run will be ended on context exit:

    aiplatform.init(experiment='my-experiment')
    with aiplatform.start_run('my-run') as my_run:
        my_run.log_params({'learning_rate': 0.1})

Resume a previously started run:

    aiplatform.init(experiment='my-experiment')
    with aiplatform.start_run('my-run', resume=True) as my_run:
        my_run.log_params({'learning_rate': 0.1})
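The context-manager pattern can be sketched generically: the run is guaranteed to be ended when the block exits, whether it completed normally or raised. `FakeRun` and `start_fake_run` below are hypothetical stand-ins for illustration, not the SDK classes, and the FAILED-on-exception state is an assumption of this sketch:

```python
from contextlib import contextmanager

class FakeRun:
    """Hypothetical stand-in for an experiment run."""
    def __init__(self, name):
        self.name = name
        self.state = 'RUNNING'

@contextmanager
def start_fake_run(name):
    """Yields a run and guarantees it is marked ended on context exit:
    FAILED if the body raised, COMPLETE otherwise."""
    run = FakeRun(name)
    try:
        yield run
        run.state = 'COMPLETE'
    except Exception:
        run.state = 'FAILED'
        raise

with start_fake_run('my-run') as r:
    pass  # log params / metrics here
print(r.state)  # COMPLETE
```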