Llama 4 Scout 17B-16E is a multimodal model that uses the Mixture-of-Experts
(MoE) architecture and early fusion, delivering state-of-the-art results for its
size class.
Model availability

ML processing: United States (us-east5)

Property: Description
Model ID: llama-4-scout-17b-16e-instruct-maas
Capabilities:
Knowledge cutoff date: August 2024
Versions: llama-4-scout-17b-16e-instruct-maas
Supported regions: us-east5, Multi-region
Quota limits:
Pricing: See Pricing.
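The properties above (model ID and supported region) are the pieces needed to call the model as a service. The sketch below is a minimal, hedged example of assembling such a request in Python; it assumes the OpenAI-compatible chat completions URL pattern that Vertex AI commonly exposes for models as a service, and `your-project-id` is a placeholder you must replace. It only constructs the request; it does not send it.

```python
import json

# Placeholder -- substitute your own Google Cloud project ID.
PROJECT_ID = "your-project-id"
# Region and model ID taken from the availability table above.
REGION = "us-east5"
MODEL_ID = "llama-4-scout-17b-16e-instruct-maas"

# Assumed URL pattern for the OpenAI-compatible chat completions
# endpoint that Vertex AI exposes for models as a service.
ENDPOINT = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/"
    f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/openapi/"
    "chat/completions"
)

# Request body in the OpenAI chat format.
payload = {
    "model": MODEL_ID,
    "messages": [
        {"role": "user", "content": "Summarize Mixture-of-Experts in one sentence."}
    ],
    "max_tokens": 128,
}

print(ENDPOINT)
print(json.dumps(payload, indent=2))
```

Actually sending the request additionally requires an OAuth bearer token (for example, from `gcloud auth print-access-token`) in the `Authorization` header.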
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-08-21 UTC.