Llama 4 Maverick 17B-128E is the largest and most capable model in the Llama 4
family. It uses a Mixture-of-Experts (MoE) architecture with early fusion for
native multimodality, providing coding, reasoning, and image-understanding
capabilities.
Model availability

ML processing is available in the United States (us-east5).
Property                 Description
Model ID                 llama-4-maverick-17b-128e-instruct-maas
Capabilities
Knowledge cutoff date    August 2024
Versions                 llama-4-maverick-17b-128e-instruct-maas
Supported regions        us-east5
Multi-region
Quota limits
Pricing                  See Pricing.
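As a rough sketch of how the model ID above might be used, the following Python snippet builds an OpenAI-style chat request body. The endpoint URL pattern, the project placeholder, and the function name are assumptions for illustration, not taken from this page; check the Vertex AI documentation for the exact request path and whether a publisher prefix (for example "meta/") is required on the model name.

```python
import json

# Assumptions: the region comes from the table above; the project ID is a
# hypothetical placeholder; the OpenAI-compatible endpoint path is an
# illustrative guess and should be verified against the Vertex AI docs.
REGION = "us-east5"
PROJECT = "your-project-id"
ENDPOINT = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/endpoints/openapi/chat/completions"
)

def build_request(prompt: str) -> dict:
    """Build an OpenAI-style chat request body for the Maverick model."""
    return {
        # Model ID from the table above.
        "model": "llama-4-maverick-17b-128e-instruct-maas",
        "messages": [{"role": "user", "content": prompt}],
    }

body = build_request("Summarize the MoE architecture in one sentence.")
print(json.dumps(body, indent=2))
```

The body is plain JSON, so the same payload can be sent with any HTTP client once authentication (for example an OAuth access token) is attached.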
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-21 UTC.