Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
This page shows you how to enable Cloud Trace on your agent and view traces to analyze query response times and executed operations.

A trace is a timeline of requests as your agent responds to each query. For example, the following Gantt chart shows a sample trace from a `LangchainAgent`:

The first row in the Gantt chart is for the trace. A trace is composed of individual spans, which represent a single unit of work, like a function call or an interaction with an LLM, with the first span representing the overall request. Each span provides details about a specific operation, such as the operation's name, start and end times, and any relevant attributes, within the request. For example, the following JSON shows a single span that represents a call to a large language model (LLM):
```json
{
  "name": "llm",
  "context": {
    "trace_id": "ed7b336d-e71a-46f0-a334-5f2e87cb6cfc",
    "span_id": "ad67332a-38bd-428e-9f62-538ba2fa90d4"
  },
  "span_kind": "LLM",
  "parent_id": "f89ebb7c-10f6-4bf8-8a74-57324d2556ef",
  "start_time": "2023-09-07T12:54:47.597121-06:00",
  "end_time": "2023-09-07T12:54:49.321811-06:00",
  "status_code": "OK",
  "status_message": "",
  "attributes": {
    "llm.input_messages": [
      {
        "message.role": "system",
        "message.content": "You are an expert Q&A system that is trusted around the world.\nAlways answer the query using the provided context information, and not prior knowledge.\nSome rules to follow:\n1. Never directly reference the given context in your answer.\n2. Avoid statements like 'Based on the context, ...' or 'The context information ...' or anything along those lines."
      },
      {
        "message.role": "user",
        "message.content": "Hello?"
      }
    ],
    "output.value": "assistant: Yes I am here",
    "output.mime_type": "text/plain"
  },
  "events": []
}
```
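Because each span carries its start and end times, per-operation latency can be computed directly from the span fields. The following is a minimal sketch (the helper name is illustrative, not part of any API) that reads the timestamps from the span above:

```python
import json
from datetime import datetime

# A trimmed-down version of the example span, keeping only the timing fields.
span_json = """
{"name": "llm",
 "start_time": "2023-09-07T12:54:47.597121-06:00",
 "end_time": "2023-09-07T12:54:49.321811-06:00"}
"""

def span_latency_seconds(span: dict) -> float:
    """Return the span's duration in seconds from its ISO-8601 timestamps."""
    start = datetime.fromisoformat(span["start_time"])
    end = datetime.fromisoformat(span["end_time"])
    return (end - start).total_seconds()

span = json.loads(span_json)
print(span_latency_seconds(span))  # 1.72469
```

This shows that the example LLM call took roughly 1.7 seconds, which matches the gap visible between the span's start and end times.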
Last updated 2025-08-25 UTC.
**Note:** The format of the trace(s) and span(s) depends on the instrumentation option that you use. The example span is experimental and subject to change, so you shouldn't rely on the format being stable for now.
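Spans reference their parent through `parent_id`, with the root span (the overall request) having no parent. A flat list of spans can therefore be regrouped into a tree for analysis; a minimal sketch with hypothetical span dicts:

```python
from collections import defaultdict

# Hypothetical flat list of spans from one trace; the root span
# (the overall request) has no parent_id.
spans = [
    {"span_id": "a", "parent_id": None, "name": "agent_run"},
    {"span_id": "b", "parent_id": "a", "name": "llm"},
    {"span_id": "c", "parent_id": "a", "name": "tool_call"},
]

def build_tree(spans):
    """Group spans by parent_id and return (root span, children map)."""
    children = defaultdict(list)
    root = None
    for span in spans:
        if span["parent_id"] is None:
            root = span
        else:
            children[span["parent_id"]].append(span)
    return root, children

root, children = build_tree(spans)
print(root["name"], [s["name"] for s in children[root["span_id"]]])
# agent_run ['llm', 'tool_call']
```

The same parent/child relationship is what the Gantt chart in Trace Explorer renders as nested rows under the first span.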
For details, see the [Semantic Conventions for Generative AI systems](https://opentelemetry.io/docs/specs/semconv/gen-ai/) being developed in OpenTelemetry.

For details, see the Cloud Trace documentation on [Traces and spans](/trace/docs/traces-and-spans) and [Trace context](/trace/docs/trace-context).

Write traces for an agent

To write traces for an agent:

ADK

To enable tracing for `AdkApp`, specify `enable_tracing=True` when you [develop an Agent Development Kit agent](/vertex-ai/generative-ai/docs/agent-engine/develop/adk). For example:

```python
from vertexai.preview.reasoning_engines import AdkApp
from google.adk.agents import Agent

agent = Agent(
    model=model,
    name=agent_name,
    tools=[get_exchange_rate],
)

app = AdkApp(
    agent=agent,          # Required.
    enable_tracing=True,  # Optional.
)
```

LangchainAgent

To enable tracing for `LangchainAgent`, specify `enable_tracing=True` when you [develop a LangChain agent](/vertex-ai/generative-ai/docs/agent-engine/develop/langchain). For example:

```python
from vertexai.preview.reasoning_engines import LangchainAgent

agent = LangchainAgent(
    model=model,                # Required.
    tools=[get_exchange_rate],  # Optional.
    enable_tracing=True,        # [New] Optional.
)
```

LanggraphAgent

To enable tracing for `LanggraphAgent`, specify `enable_tracing=True` when you [develop a LangGraph agent](/vertex-ai/generative-ai/docs/agent-engine/develop/langgraph). For example:

```python
from vertexai.preview.reasoning_engines import LanggraphAgent

agent = LanggraphAgent(
    model=model,                # Required.
    tools=[get_exchange_rate],  # Optional.
    enable_tracing=True,        # [New] Optional.
)
```

LlamaIndex

**Preview:** This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1). Pre-GA features are available "as is" and might have limited support. For more information, see the [launch stage descriptions](/products#product-launch-stages).

To enable tracing for `LlamaIndexQueryPipelineAgent`, specify `enable_tracing=True` when you [develop a LlamaIndex agent](/vertex-ai/generative-ai/docs/agent-engine/develop/llama-index/query-pipeline). For example:

```python
from vertexai.preview import reasoning_engines

def runnable_with_tools_builder(model, runnable_kwargs=None, **kwargs):
    from llama_index.core.query_pipeline import QueryPipeline
    from llama_index.core.tools import FunctionTool
    from llama_index.core.agent import ReActAgent

    llama_index_tools = []
    for tool in runnable_kwargs.get("tools"):
        llama_index_tools.append(FunctionTool.from_defaults(tool))
    agent = ReActAgent.from_tools(llama_index_tools, llm=model, verbose=True)
    return QueryPipeline(modules={"agent": agent})

agent = reasoning_engines.LlamaIndexQueryPipelineAgent(
    model="gemini-2.0-flash",
    runnable_kwargs={"tools": [get_exchange_rate]},
    runnable_builder=runnable_with_tools_builder,
    enable_tracing=True,  # Optional.
)
```

Custom

To enable tracing for [custom agents](/vertex-ai/generative-ai/docs/agent-engine/develop/custom), see [Tracing using OpenTelemetry](/vertex-ai/generative-ai/docs/agent-engine/develop/custom#tracing) for details.

This exports traces to Cloud Trace under the project in [Set up your Google Cloud project](/vertex-ai/generative-ai/docs/agent-engine/set-up#project).

View traces for an agent

You can view your traces using the [Trace Explorer](/trace/docs/finding-traces):

1. To get the permissions that you need to view trace data in the Google Cloud console or select a trace scope, ask your administrator to grant you the [Cloud Trace User](/iam/docs/understanding-roles#cloudtrace.user) (`roles/cloudtrace.user`) IAM role on your project.

2. Go to **Trace Explorer** in the Google Cloud console: [Go to the Trace Explorer](https://console.cloud.google.com/traces/list)

3. Select your Google Cloud project (corresponding to PROJECT_ID) at the top of the page.

To learn more, see the [Cloud Trace documentation](/trace/docs/finding-traces).

Quotas and limits

Some attribute values might get truncated when they reach quota limits. For more information, see [Cloud Trace Quota](/trace/docs/quotas).

Pricing

Cloud Trace has a free tier. For more information, see [Cloud Trace Pricing](/trace#pricing).
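Since long attribute values, such as full prompts, may be truncated once Cloud Trace quota limits are reached, you may prefer to shorten them client-side before export. A hypothetical helper (the 256-byte cap is illustrative, not the documented limit; check the quota documentation for actual values):

```python
def truncate_attribute(value: str, max_bytes: int = 256) -> str:
    """Truncate a string attribute to at most max_bytes of UTF-8."""
    encoded = value.encode("utf-8")
    if len(encoded) <= max_bytes:
        return value
    # Cut at the byte limit, ignoring any multi-byte character split at the edge.
    return encoded[:max_bytes].decode("utf-8", errors="ignore")

print(truncate_attribute("x" * 300))  # prints 256 "x" characters
```

Truncating before export keeps the cut point under your control instead of leaving it to server-side enforcement.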