This guide shows you how to use the code execution feature in the Gemini API, which lets the model generate and run Python code. The model learns iteratively from the execution results to produce a final output. You can use this capability to build applications that benefit from code-based reasoning and produce text output, such as an application that solves equations or processes text.

The Gemini API provides code execution as a tool, similar to function calling. After you add code execution as a tool in your request, the model decides when to use it.

The code execution environment includes the following libraries. You can't install your own libraries.

## Supported models

The following models support code execution:

## Get started with code execution

This section assumes that you've completed the setup and configuration steps in the Gemini API quickstart.

## Enable code execution on the model

To enable code execution, add the `code_execution` tool when you send a prompt to the model.

### Python

Install the Gen AI SDK for Python:

```
pip install --upgrade google-genai
```

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
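```

With the environment configured, a minimal sketch of enabling the tool with the Gen AI SDK for Python looks like the following. The model ID and prompt are examples; substitute a model that supports code execution.

```python
from google import genai
from google.genai import types

client = genai.Client()  # Reads the environment variables set above.

response = client.models.generate_content(
    model="gemini-2.0-flash-001",  # Example model ID.
    contents=(
        "What is the sum of the first 50 prime numbers? "
        "Generate and run code for the calculation."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# The response interleaves text parts, the generated code, and the
# results of running that code.
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    if part.executable_code:
        print(part.executable_code.code)
    if part.code_execution_result:
        print(part.code_execution_result.output)
```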
### Go

Learn how to install or update the Gen AI SDK for Go.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
### Node.js

Install the Gen AI SDK for Node.js:

```
npm install @google/genai
```

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```
### REST

Before using any of the request data, make the following replacements:

- `GENERATE_RESPONSE_METHOD`: The type of response that you want the model to generate. Choose a method that returns the model's response how you want it:
  - `streamGenerateContent`: The response is streamed as it's being generated to reduce the perception of latency to a human audience.
  - `generateContent`: The response is returned after it's fully generated.
- `LOCATION`: The region to process the request. Available options include the following (partial list):
  - `us-central1`
  - `us-west4`
  - `northamerica-northeast1`
  - `us-east4`
  - `us-west1`
  - `asia-northeast3`
  - `asia-southeast1`
  - `asia-northeast1`
- `PROJECT_ID`: Your project ID.
- `MODEL_ID`: The model ID of the model that you want to use.
- `ROLE`: The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
  - `USER`: Specifies content that's sent by you.
  - `MODEL`: Specifies the model's response.
- `TEXT`: The text instructions to include in the prompt.
To send your request, choose one of these options:
#### curl

Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

```
cat > request.json << 'EOF'
{
  "tools": [{"codeExecution": {}}],
  "contents": {
    "role": "ROLE",
    "parts": {
      "text": "TEXT"
    }
  }
}
EOF
```

Then execute the following command to send your REST request:

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json; charset=utf-8" \
  -d @request.json \
  "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD"
```
#### PowerShell

Save the request body in a file named `request.json`. Run the following command in the terminal to create or overwrite this file in the current directory:

```
@'
{
  "tools": [{"codeExecution": {}}],
  "contents": {
    "role": "ROLE",
    "parts": {
      "text": "TEXT"
    }
  }
}
'@ | Out-File -FilePath request.json -Encoding utf8
```

Then execute the following command to send your REST request:

```
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
  -Method POST `
  -Headers $headers `
  -ContentType: "application/json; charset=utf-8" `
  -InFile request.json `
  -Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD" | Select-Object -Expand Content
```
You should receive a JSON response similar to the following.
## Use code execution in chat
You can also use code execution as part of a chat session.
### REST

```
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://aiplatform.googleapis.com/v1/projects/test-project/locations/global/publishers/google/models/gemini-2.0-flash-001:generateContent -d \
  '{
    "tools": [{"code_execution": {}}],
    "contents": [
      {
        "role": "user",
        "parts": {
          "text": "Can you print \"Hello world!\"?"
        }
      },
      {
        "role": "model",
        "parts": [
          {
            "text": ""
          },
          {
            "executable_code": {
              "language": "PYTHON",
              "code": "\nprint(\"hello world!\")\n"
            }
          },
          {
            "code_execution_result": {
              "outcome": "OUTCOME_OK",
              "output": "hello world!\n"
            }
          },
          {
            "text": "I have printed \"hello world!\" using the provided python code block.\n"
          }
        ]
      },
      {
        "role": "user",
        "parts": {
          "text": "What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50."
        }
      }
    ]
  }'
```
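The same conversation can be run with the Gen AI SDK for Python by passing the tool in the chat configuration. This is a minimal sketch; the model ID is an example.

```python
from google import genai
from google.genai import types

client = genai.Client()

chat = client.chats.create(
    model="gemini-2.0-flash-001",  # Example model ID.
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

chat.send_message('Can you print "Hello world!"?')
response = chat.send_message(
    "What is the sum of the first 50 prime numbers? "
    "Generate and run code for the calculation, and make sure you get all 50."
)

# Print the generated code and its execution output from the last turn.
for part in response.candidates[0].content.parts:
    if part.executable_code:
        print(part.executable_code.code)
    if part.code_execution_result:
        print(part.code_execution_result.output)
```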
## Code execution versus function calling

The following table compares code execution with function calling.

| Feature | Description | Pros | Use case |
|---|---|---|---|
| Code execution | The model runs Python code in a fixed, isolated environment that the API backend manages. | Simpler to use and resolves in a single API request. | Use when you want the model to write and run Python code for you and return the result. |
| Function calling | The model requests to run functions in your own environment. | You have full control over the execution environment and can call any external API or local function. | Use when you have your own functions or external APIs that you want the model to call. |
In general, use code execution if it can handle your use case, because it's simpler and resolves in a single API request. Use function calling if you need to run your own local functions or call external APIs, which requires an additional API request to send back the output from each function call.
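To make the difference concrete, the following sketch shows how each tool is declared with the Gen AI SDK for Python. The `get_weather` function is hypothetical and included only for illustration.

```python
from google.genai import types

# Code execution: the API backend runs model-generated Python for you.
code_execution_tool = types.Tool(code_execution=types.ToolCodeExecution())

# Function calling: the model asks you to run a function that you
# declared and that executes in your own environment.
get_weather = types.FunctionDeclaration(
    name="get_weather",  # Hypothetical function, for illustration only.
    description="Returns the current weather for a city.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"city": types.Schema(type=types.Type.STRING)},
        required=["city"],
    ),
)
function_calling_tool = types.Tool(function_declarations=[get_weather])
```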
## Billing
There is no additional charge for using the code execution tool. You are billed at the standard rate for the Gemini model that you use.
The billing process for requests that use the code execution tool is as follows:
- You're billed for input and output tokens based on the Gemini model that you use.
- When the model uses code execution, the original prompt, the generated code, and the code execution results are considered intermediate tokens. These are billed as input tokens.
- The model's final response, which can include a summary, the generated code, and the execution results, is billed as output tokens.
- The API response includes an intermediate token count so that you can track any additional input tokens beyond your initial prompt.
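As an illustration, you can read these counts from the response's usage metadata. This sketch assumes the field names shown, including `tool_use_prompt_token_count` for the intermediate tokens; check the SDK reference for the exact fields in your SDK version.

```python
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.0-flash-001",  # Example model ID.
    contents="What is the sum of the first 50 prime numbers?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

usage = response.usage_metadata
print("prompt tokens:", usage.prompt_token_count)
print("output tokens:", usage.candidates_token_count)
# Intermediate tokens (generated code and execution results fed back
# to the model) are billed as input; field name assumed here.
print("intermediate tokens:", usage.tool_use_prompt_token_count)
print("total tokens:", usage.total_token_count)
```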
The output from the generated code can include text and multimodal content, such as images from generated graphs.
## Limitations

- The model can only generate and execute code. It can't return other artifacts like media files directly.
- File I/O: The tool doesn't support file URIs for input or output. However, you can provide file input and receive graph output as inlined bytes. For example, you can upload a CSV file, ask questions about it, and receive a generated Matplotlib graph (see the sketch after this list).
- Supported MIME types: The supported MIME types for inlined bytes are `.cpp`, `.csv`, `.java`, `.jpeg`, `.js`, `.png`, `.py`, `.ts`, and `.xml`.
- Code execution times out after 30 seconds.
- In some cases, enabling code execution can lead to regressions in other areas of model output, such as writing a story.
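The following sketch illustrates the inline-bytes workflow described in the File I/O limitation: it sends a local CSV file as inline data with the prompt and saves any inline PNG (such as a generated Matplotlib graph) from the response. The file names and prompt are placeholders.

```python
from google import genai
from google.genai import types

client = genai.Client()

# Read a local CSV file and send it as inline bytes with the prompt.
with open("sales.csv", "rb") as f:  # Placeholder file name.
    csv_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash-001",  # Example model ID.
    contents=[
        types.Part.from_bytes(data=csv_bytes, mime_type="text/csv"),
        "Plot monthly sales from this CSV as a bar chart.",
    ],
    config=types.GenerateContentConfig(
        tools=[types.Tool(code_execution=types.ToolCodeExecution())],
    ),
)

# Save any inline image parts, such as a generated graph.
for i, part in enumerate(response.candidates[0].content.parts):
    if part.inline_data and part.inline_data.mime_type == "image/png":
        with open(f"graph_{i}.png", "wb") as out:
            out.write(part.inline_data.data)
```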