Starting April 29, 2025, the Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
# List and count tokens

| **Preview**
|
| This product or feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the [Service Specific Terms](/terms/service-terms#1).
|
| Pre-GA products and features are available "as is" and might have limited support.
|
| For more information, see the [launch stage descriptions](/products#product-launch-stages).
This page shows you how to list the tokens and their token IDs of a prompt,
and how to get a total token count of a prompt, by using the Google Gen AI SDK.
Tokens and the importance of token listing and counting
-------------------------------------------------------
Generative AI models break down the text and other data in a prompt into units called tokens for processing. The way that data is converted into tokens depends on the
tokenizer used. A token can be a character, a word, or a phrase.
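As a toy illustration (this is not the Gemini tokenizer), the same text yields different token counts depending on the tokenizer's granularity; for comparison, the `compute_tokens` example on this page splits this same prompt into 11 subword tokens.

```python
# Toy illustration only: real tokenizers (such as the one Gemini uses) apply
# subword rules, but the principle is the same -- token boundaries depend on
# the tokenizer, so the same text produces different token counts.
text = "What's the longest word in the English language?"

char_tokens = list(text)    # character-level tokenization
word_tokens = text.split()  # naive whitespace word-level tokenization

print(len(char_tokens))  # 48
print(len(word_tokens))  # 8
```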
Each model has a maximum number of tokens that it can handle in a prompt and response. Knowing the token count of your prompt lets you know whether you've exceeded this limit. Additionally, counting tokens also returns the billable characters for the prompt, which helps you estimate cost.
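For example, a minimal pre-flight check might compare a prompt's token count against a model's input limit before sending the request. This is a sketch: the limit value below is an illustrative placeholder, not an official quota; check the model's documentation for its actual limits.

```python
# Sketch: pre-flight check of a prompt's token count against a model's input
# token limit. The limit below is a placeholder for illustration; look up the
# real limit for the model you use.
MODEL_INPUT_TOKEN_LIMITS = {
    "gemini-2.5-flash": 1_048_576,  # placeholder value, not an official quota
}

def fits_input_limit(model: str, prompt_token_count: int) -> bool:
    """Return True if the prompt's token count is within the model's limit."""
    limit = MODEL_INPUT_TOKEN_LIMITS[model]
    return prompt_token_count <= limit

print(fits_input_limit("gemini-2.5-flash", 6))          # a short prompt fits
print(fits_input_limit("gemini-2.5-flash", 2_000_000))  # too large
```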
Listing tokens returns a list of the tokens that your prompt is broken down into.
Each listed token is associated with a token ID, which helps you perform troubleshooting and analyze model behavior.
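As a sketch of how you might inspect such a list, the snippet below pairs token IDs with their decoded text. The values are copied from the `compute_tokens` example output on this page; no API call is made.

```python
# Sketch: pair each token ID with its decoded text for troubleshooting.
# These values come from the compute_tokens example output on this page.
token_ids = [1841, 235303, 235256, 573, 32514]
tokens = [b"What", b"'", b"s", b" the", b" longest"]

pairs = [(tid, tok.decode("utf-8")) for tid, tok in zip(token_ids, tokens)]
for tid, text in pairs:
    print(f"{tid}\t{text!r}")
```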
Supported models
----------------

The following models support token listing and token counting:

- [Gemini 2.5 Flash Image Preview](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash#image) (Preview)
- [Gemini 2.5 Flash-Lite](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash-lite)
- [Gemini 2.0 Flash with image generation](/vertex-ai/generative-ai/docs/models/gemini/2-0-flash) (Preview)
- [Vertex AI Model Optimizer](/vertex-ai/generative-ai/docs/model-reference/vertex-ai-model-optimizer) (Experimental)
- [Gemini 2.5 Pro](/vertex-ai/generative-ai/docs/models/gemini/2-5-pro)
- [Gemini 2.5 Flash](/vertex-ai/generative-ai/docs/models/gemini/2-5-flash)
- [Gemini 2.0 Flash](/vertex-ai/generative-ai/docs/models/gemini/2-0-flash)
- [Gemini 2.0 Flash-Lite](/vertex-ai/generative-ai/docs/models/gemini/2-0-flash-lite)
Get a list of tokens and token IDs for a prompt
-----------------------------------------------
The following code sample shows you how to get a list of tokens and token IDs for
a prompt. The prompt must contain only text; multimodal prompts are not supported.
### Python

#### Install

```
pip install --upgrade google-genai
```

To learn more, see the
[SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```python
from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.compute_tokens(
    model="gemini-2.5-flash",
    contents="What's the longest word in the English language?",
)

print(response)
# Example output:
# tokens_info=[TokensInfo(
#     role='user',
#     token_ids=[1841, 235303, 235256, 573, 32514, 2204, 575, 573, 4645, 5255, 235336],
#     tokens=[b'What', b"'", b's', b' the', b' longest', b' word', b' in', b' the', b' English', b' language', b'?']
# )]
```

### Go

Learn how to install or update the [Go SDK](/vertex-ai/generative-ai/docs/sdks/overview).

To learn more, see the
[SDK reference documentation](https://pkg.go.dev/google.golang.org/genai).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```go
import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// computeWithTxt shows how to compute tokens with text input.
func computeWithTxt(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "What's the longest word in the English language?"},
			},
			Role: "user",
		},
	}

	resp, err := client.Models.ComputeTokens(ctx, modelName, contents, nil)
	if err != nil {
		return fmt.Errorf("failed to compute tokens: %w", err)
	}

	type tokenInfoDisplay struct {
		IDs    []int64  `json:"token_ids"`
		Tokens []string `json:"tokens"`
	}
	// See the documentation: https://pkg.go.dev/google.golang.org/genai#ComputeTokensResponse
	for _, instance := range resp.TokensInfo {
		display := tokenInfoDisplay{
			IDs:    instance.TokenIDs,
			Tokens: make([]string, len(instance.Tokens)),
		}
		for i, t := range instance.Tokens {
			display.Tokens[i] = string(t)
		}

		data, err := json.MarshalIndent(display, "", "  ")
		if err != nil {
			return fmt.Errorf("failed to marshal token info: %w", err)
		}
		fmt.Fprintln(w, string(data))
	}

	// Example response:
	// {
	//   "token_ids": [
	//     1841,
	//     235303,
	//     235256,
	//     ...
	//   ],
	//   "tokens": [
	//     "What",
	//     "'",
	//     "s",
	//     ...
	//   ]
	// }

	return nil
}
```

Get the token count and billable characters of a prompt
-------------------------------------------------------

The following code sample shows you how to get the token count and the number of
billable characters of a prompt. Both text-only and multimodal prompts are
supported.

### Python

#### Install

```
pip install --upgrade google-genai
```

To learn more, see the
[SDK reference documentation](https://googleapis.github.io/python-genai/).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```python
from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))

prompt = "Why is the sky blue?"

# Send text to Gemini
response = client.models.generate_content(
    model="gemini-2.5-flash", contents=prompt
)

# Prompt and response token counts
print(response.usage_metadata)

# Example output:
# cached_content_token_count=None
# candidates_token_count=311
# prompt_token_count=6
# total_token_count=317
```

### Go

Learn how to install or update the [Go SDK](/vertex-ai/generative-ai/docs/sdks/overview).

To learn more, see the
[SDK reference documentation](https://pkg.go.dev/google.golang.org/genai).

Set environment variables to use the Gen AI SDK with Vertex AI:

```bash
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

```go
import (
	"context"
	"encoding/json"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateTextAndCount shows how to generate text and obtain token count metadata from the model response.
func generateTextAndCount(w io.Writer) error {
	ctx := context.Background()

	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "Why is the sky blue?"},
			},
			Role: "user",
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	usage, err := json.MarshalIndent(resp.UsageMetadata, "", "  ")
	if err != nil {
		return fmt.Errorf("failed to convert usage metadata to JSON: %w", err)
	}
	fmt.Fprintln(w, string(usage))

	// Example response:
	// {
	//   "candidatesTokenCount": 339,
	//   "promptTokenCount": 6,
	//   "totalTokenCount": 345
	// }

	return nil
}
```

Last updated 2025-08-25 UTC.