Last updated: March 18, 2025
Gemini 2.0 general FAQ
Which Gemini 2.0 models are available on Vertex AI as of February 25, 2025?
The following 2.0 models are available as of February 25, 2025:
- Gemini 2.0 Flash (GA)
- Gemini 2.0 Flash-Lite (GA)
- Gemini 2.0 Flash Thinking (Experimental)
- Gemini 2.0 Pro (Experimental)
How do the Gemini 2.0 models compare to the 1.5 generation?
The Gemini 2.0 models feature the following upgrades over our 1.5 models:
- Improved multilingual capabilities: Gemini 2.0 models show strong advancements in multilingual understanding, with increased scores in the Global MMLU (Lite) benchmark.
- Significant gains in reasoning and knowledge factuality: Gemini 2.0 Pro shows substantial improvements in GPQA (domain expert reasoning) and SimpleQA (world knowledge factuality), indicating an enhanced ability to understand and provide accurate information.
- Enhanced mathematical problem-solving: Both Gemini 2.0 Flash and Gemini 2.0 Pro demonstrate notable progress in handling complex mathematical problems, as evidenced by the MATH and HiddenMath benchmarks.
The following table shows the comparison between our 2.0 models:
Model name | Description | Upgrade path for
---|---|---
Gemini 2.0 Pro | Strongest model quality (especially for code and world knowledge), with a 2M-token long context window | Gemini 1.5 Pro users who want better quality, or who are particularly invested in long context and code
Gemini 2.0 Flash | Workhorse model for all daily tasks; features enhanced performance and supports the real-time Live API |
Gemini 2.0 Flash-Lite | Our most cost-effective offering to support high throughput |
To see all benchmark capabilities for Gemini 2.0, visit the Google DeepMind documentation.
How do I migrate Gemini on Google AI Studio to Vertex AI Studio?
Migrating to Google Cloud's Vertex AI platform offers a suite of MLOps tools that streamline the usage, deployment, and monitoring of AI models for efficiency and reliability. To migrate your work to Vertex AI, import and upload your existing data to Vertex AI Studio and use the Gemini API with Vertex AI.
For more information, see Migrate from Gemini on Google AI to Vertex AI.
How does Gemini 2.0 image generation compare to Imagen 3?
While the experimental version of Gemini 2.0 Flash supports image generation, image generation is not currently supported in the generally available Gemini 2.0 models. The experimental version of Gemini 2.0 Flash shouldn't be used in production-level code.
If you need image generation in production code, use Imagen 3. This powerful model offers high-quality images, low-latency generation, and flexible editing options.
Does Gemini 2.0 in Vertex AI support compositional function calling?
Compositional function calling is only available in Google AI Studio.
What locations are supported for Gemini 2.0?
For the full list of locations that are supported for Gemini 2.0 models, see Locations.
What are the default quotas for Gemini 2.0?
Gemini 2.0 Flash and Gemini 2.0 Flash-Lite use dynamic shared quota and have no default quota.
Gemini 2.0 Pro is an experimental model and has a 10 queries per minute (QPM) limit.
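Because the experimental model's limit is enforced server-side, one common pattern is to pace requests client-side so you never exceed it. The sketch below is a minimal, hypothetical helper (not part of any Vertex AI SDK) that spaces calls to stay under a QPM budget:

```python
import time


def make_qpm_throttle(qpm: int):
    """Return a callable that blocks just long enough to stay under
    `qpm` queries per minute. Hypothetical client-side helper; the
    actual limit is enforced by the service."""
    min_interval = 60.0 / qpm  # seconds between consecutive requests
    last_call = [float("-inf")]  # mutable closure state

    def wait_for_slot():
        now = time.monotonic()
        sleep_for = last_call[0] + min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        last_call[0] = time.monotonic()

    return wait_for_slot


# Gemini 2.0 Pro (Experimental): 10 QPM -> one request every 6 seconds.
throttle = make_qpm_throttle(10)
# Call throttle() before each request to the model.
```

Call `throttle()` immediately before each model request; retries after a 429 response should still go through the same throttle.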
Monitoring
Why does my quota usage show as 0% on the API dashboard when I'm sending requests?
For Gemini models on Vertex AI, we use a Dynamic Shared Quota (DSQ) system. This approach automatically manages capacity across all users in a region, ensuring optimal performance without the need for manual quota adjustments or requests. As a result, you won't see traditional quota usage displayed in the Quotas & System Limits tab. Your project automatically receives the necessary resources based on real-time availability.
Use the Vertex AI Model Garden (Monitoring) dashboard to monitor usage.
Provisioned Throughput
When should I use Provisioned Throughput?
For generative AI applications in production requiring consistent throughput, we recommend using Provisioned Throughput (PT). PT ensures a predictable and consistent user experience, critical for time-sensitive workloads. Additionally, it provides deterministic monthly or weekly cost structures, enabling accurate budget planning.
For more information, see Provisioned Throughput overview.
What models are supported for Provisioned Throughput?
The models supported for Provisioned Throughput, including their throughput per GSU, purchase increment, and burndown rate, are listed on our Supported models page.
Partner models like Claude and Llama are not available for PT purchase using the console. For PT for Anthropic models, contact anthropic-gtm.
How can I monitor my Provisioned Throughput usage?
There are three ways to measure your Provisioned Throughput usage:
- Use the Model Garden monitoring dashboard
- Use the built-in monitoring metrics
- Use the HTTP response headers
When using the built-in monitoring metrics or HTTP response headers, you can create a chart in the Metrics Explorer to monitor usage.
What permissions are required to purchase and use Provisioned Throughput?
To buy and manage Provisioned Throughput, follow the instructions in the Permissions section of Purchase Provisioned Throughput. The same permissions that apply to pay-as-you-go usage also apply to Provisioned Throughput usage.
If you still run into issues placing an order, you likely need to add one of the following roles:
- Vertex AI Administrator
- Vertex AI Platform Provisioned Throughput Admin
What is a GSU?
A generative AI scale unit (GSU) is an abstract measure of capacity for throughput provisioning that is fixed and standard across all Google models that support Provisioned Throughput. A GSU's price and capacity is fixed, but throughput may vary between models because different models may require different amounts of capacity to deliver the same throughput.
How can I estimate my GSU needs for Provisioned Throughput?
To estimate your Provisioned Throughput needs:
1. Gather your requirements.
2. Calculate your throughput:
$$ \text{Throughput per sec} = (\text{Inputs per query converted to input chars} + \text{Outputs per query converted to input chars}) \times \text{QPS} $$
3. Calculate your GSUs: Use the estimation tool provided in the purchasing console to calculate the corresponding number of GSUs needed to cover that usage for the given model and details.
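The arithmetic in steps 2 and 3 can be sketched as below. The per-GSU throughput and the output burndown rate are model-specific and published on the Supported models page; the numbers used here are placeholders for illustration, not real rates.

```python
import math


def throughput_per_sec(input_chars_per_query: float,
                       output_chars_per_query: float,
                       qps: float,
                       output_burndown_rate: float = 1.0) -> float:
    """Total input-equivalent characters per second for a query profile.
    `output_burndown_rate` converts output chars to input-char equivalents
    (model-specific; 1.0 here is a placeholder assumption)."""
    return (input_chars_per_query
            + output_chars_per_query * output_burndown_rate) * qps


def gsus_needed(chars_per_sec: float, chars_per_sec_per_gsu: float) -> int:
    """Round up to whole GSUs, since capacity is purchased in fixed units."""
    return math.ceil(chars_per_sec / chars_per_sec_per_gsu)


# Example: 1,000 input chars and 500 output chars per query at 10 QPS,
# against a placeholder rate of 3,360 chars/sec per GSU.
demand = throughput_per_sec(1_000, 500, qps=10)  # 15,000 chars/sec
print(gsus_needed(demand, 3_360))                # 5 GSUs at this placeholder rate
```

In practice, use the estimation tool in the purchasing console, which applies the correct per-model rates for you.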
How often am I billed for Provisioned Throughput?
Provisioned Throughput charges incurred during a month are invoiced at the end of that month.
How long does it take to activate my Provisioned Throughput order?
- For small orders or small incremental increases, the order will be auto-approved and ready within minutes if capacity is available.
- Larger increases or orders may take longer and may require us to communicate with you directly to prepare capacity for your order. We aim to reach a decision on each request (approved or denied) within 1 week of order submission.
Can I test Provisioned Throughput before placing an order?
While a direct test environment is not available, a 1-week order with a limited number of GSUs provides a cost-effective way to experience its benefits and assess its suitability for your requirements.
For more information, see Purchase Provisioned Throughput.