DeepSeek models on Vertex AI offer fully managed and serverless
models as APIs. To use a DeepSeek model on Vertex AI, send
a request directly to the Vertex AI API endpoint. Because
DeepSeek models use a managed API, there's no need to provision or
manage infrastructure.

You can stream your responses to reduce the end-user latency
perception. A streamed response uses server-sent events (SSE) to
incrementally stream the response.

Available DeepSeek models

The following models are available from DeepSeek to use in
Vertex AI. To access a DeepSeek model, go to its
Model Garden model card.

DeepSeek-R1-0528

DeepSeek-R1-0528 is the latest version of the DeepSeek R1 model.
Compared to DeepSeek-R1, it has significantly improved depth of
reasoning and inference capabilities. DeepSeek-R1-0528 excels in a
wide range of tasks, such as creative writing, general question
answering, editing, and summarization.

Go to the DeepSeek-R1-0528 model card

Considerations

You can use curl commands to send requests to the Vertex AI
endpoint using the following model names:

- deepseek-r1-0528-maas

Use DeepSeek models

To use DeepSeek models with Vertex AI, you must perform the
following steps. The Vertex AI API (aiplatform.googleapis.com) must
be enabled to use Vertex AI. If you already have an existing project
with the Vertex AI API enabled, you can use that project instead of
creating a new project.

Before you begin

In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
Verify that billing is enabled for your Google Cloud project.
Enable the Vertex AI API.
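If you prefer to authenticate from code rather than with the gcloud
CLI, the following is a minimal Python sketch for obtaining an access
token with Application Default Credentials. It assumes the google-auth
package is installed and that ADC is configured in your environment
(for example, with gcloud auth application-default login):

# Minimal sketch: obtain a Bearer token for Vertex AI requests using
# Application Default Credentials (assumes `pip install google-auth`
# and that ADC is configured in this environment).
import google.auth
import google.auth.transport.requests

# Load default credentials scoped for Google Cloud APIs.
credentials, project_id = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)

# Refresh to populate credentials.token, then use it as a Bearer token.
credentials.refresh(google.auth.transport.requests.Request())
access_token = credentials.token
print(f"Authenticated for project {project_id}")

The resulting access_token can be used in place of the output of
gcloud auth print-access-token in the curl examples that follow.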
Make a streaming call to a DeepSeek model

The following sample makes a streaming call to a DeepSeek model:

REST

After you set up your environment, you can use REST to test a text
prompt. The following sample sends a request to the publisher model
endpoint.

Before using any of the request data, make the following replacements:

- LOCATION: A region that supports DeepSeek models.
- MODEL: The model name you want to use.
- ROLE: The role associated with a message. You can specify a user or
  an assistant. The first message must use the user role. The models
  operate with alternating user and assistant turns. If the final
  message uses the assistant role, then the response content continues
  immediately from the content in that message. You can use this to
  constrain part of the model's response.
- CONTENT: The content, such as text, of the user or assistant message.
- MAX_OUTPUT_TOKENS: Maximum number of tokens that can be generated in
  the response. A token is approximately four characters. 100 tokens
  correspond to roughly 60-80 words. Specify a lower value for shorter
  responses and a higher value for potentially longer responses.
- STREAM: A boolean that specifies whether the response is streamed or
  not. Stream your response to reduce the end-user latency perception.
  Set to true to stream the response and false to return the response
  all at once.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions

Request JSON body:
{
  "model": "deepseek-ai/MODEL",
  "messages": [
    {
      "role": "ROLE",
      "content": "CONTENT"
    }
  ],
  "max_tokens": MAX_OUTPUT_TOKENS,
  "stream": true
}
To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the
following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions"PowerShell
request.json
,
and execute the following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions" | Select-Object -Expand Content
Make a non-streaming call to a DeepSeek model
The following sample makes a non-streaming call to a DeepSeek model:
REST
After you set up your environment, you can use REST to test a text prompt. The following sample sends a request to the publisher model endpoint.
Before using any of the request data, make the following replacements:
- LOCATION: A region that supports DeepSeek models.
- MODEL: The model name you want to use.
- ROLE: The role associated with a message. You can specify a user or
  an assistant. The first message must use the user role. The models
  operate with alternating user and assistant turns. If the final
  message uses the assistant role, then the response content continues
  immediately from the content in that message. You can use this to
  constrain part of the model's response (see the sketch after this
  list).
- CONTENT: The content, such as text, of the user or assistant message.
- MAX_OUTPUT_TOKENS: Maximum number of tokens that can be generated in
  the response. A token is approximately four characters. 100 tokens
  correspond to roughly 60-80 words. Specify a lower value for shorter
  responses and a higher value for potentially longer responses.
- STREAM: A boolean that specifies whether the response is streamed or
  not. Stream your response to reduce the end-user latency perception.
  Set to true to stream the response and false to return the response
  all at once.
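To illustrate the alternating-turn rule and how a trailing assistant
message constrains the response, here is a hypothetical messages array
(the conversation content is invented for this sketch):

# Hypothetical example of a valid messages array: roles alternate
# user/assistant, and the first message is a user message. The final
# assistant message "prefills" the start of the model's reply, so the
# response continues directly from "The three main options are".
messages = [
    {"role": "user", "content": "What databases does my app support?"},
    {"role": "assistant", "content": "Which framework are you using?"},
    {"role": "user", "content": "Django. List my options briefly."},
    # The model's response continues from this text.
    {"role": "assistant", "content": "The three main options are"},
]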
HTTP method and URL:
POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions
Request JSON body:
{ "model": "deepseek-ai/MODEL", "messages": [ { "role": "ROLE", "content": "CONTENT" } ], "max_tokens": MAX_OUTPUT_TOKENS, "stream": false }
To send your request, choose one of these options:
curl
Save the request body in a file named request.json, and execute the
following command:
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions"
PowerShell
Save the request body in a file named request.json, and execute the
following command:
$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }
Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/endpoints/openapi/chat/completions" | Select-Object -Expand Content
You should receive a JSON response similar to the following.
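Because the endpoint exposes an OpenAI-compatible chat completions
schema (note the /endpoints/openapi/chat/completions path), OpenAI-style
client libraries can typically call it as well. The following is a
minimal, non-streaming Python sketch; it assumes the openai package is
installed and that a Google Cloud access token works where the client
expects an API key. Treat this usage pattern as an assumption to verify
against your environment, not a documented recipe:

# Minimal sketch: non-streaming call through the OpenAI-compatible
# endpoint using the `openai` client (assumes `pip install openai`).
# The base_url mirrors the REST URL used above; passing an access
# token as the API key is an assumption to verify.
from openai import OpenAI

PROJECT_ID = "PROJECT_ID"      # replace with your project ID
LOCATION = "us-central1"
ACCESS_TOKEN = "ACCESS_TOKEN"  # e.g. from `gcloud auth print-access-token`

client = OpenAI(
    base_url=(
        f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/"
        f"{PROJECT_ID}/locations/{LOCATION}/endpoints/openapi"
    ),
    api_key=ACCESS_TOKEN,
)

response = client.chat.completions.create(
    model="deepseek-ai/deepseek-r1-0528-maas",
    messages=[{"role": "user", "content": "Summarize SSE in one sentence."}],
    max_tokens=256,
)
print(response.choices[0].message.content)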
DeepSeek model region availability and quotas
For DeepSeek models, a quota applies for each region where the model is available. The quota is specified in queries per minute (QPM).
Model | Region | Quotas | Context length
---|---|---|---
DeepSeek-R1-0528 | us-central1 | | 163,840
If you want to increase any of your quotas for Generative AI on Vertex AI, you can use the Google Cloud console to request a quota increase. To learn more about quotas, see Work with quotas.