Meta Broadens Llama 4 Access via Amazon Bedrock

Users can now build with Meta's Llama 4 Scout and Maverick models, known for image understanding and large context, via Amazon Bedrock's managed platform.

Meta is pushing its latest Llama 4 AI models out through multiple channels, making the technology available both as a managed service via partners like Amazon Web Services and through its own newly previewed developer API.

Llama 4 Scout 17B and Llama 4 Maverick 17B are now accessible as fully managed, serverless endpoints on Amazon's Bedrock platform. This gives developers a ready-to-use option without managing underlying infrastructure, although access must first be requested via the Amazon Bedrock console.

The timing closely followed Meta’s own LlamaCon event on April 29, where the company previewed the Llama API in limited release, signaling a strategy to meet developers with different infrastructure preferences and customization needs.

Llama 4 on Bedrock: Managed Multimodal AI

The Bedrock offering provides a simplified path for integrating Llama 4 Scout and Maverick, models initially detailed by Meta on April 6. These models feature a Mixture-of-Experts (MoE) architecture — routing each token to only a subset of specialized expert subnetworks for efficiency — with Scout having 16 experts (17B active/109B total parameters) and Maverick having 128 experts (17B active/400B total parameters).
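The routing idea behind MoE can be illustrated with a toy sketch. The code below is purely illustrative — the dimensions, router, and "experts" are made-up stand-ins, not Llama 4's actual implementation — but it shows the core mechanism: a learned router scores all experts per token, and only the top-k experts actually run.

```python
import numpy as np

def top_k_route(token_scores, k=1):
    """Pick the k highest-scoring experts for one token, with softmax weights."""
    idx = np.argsort(token_scores)[-k:]
    weights = np.exp(token_scores[idx])
    return idx, weights / weights.sum()

# Toy setup: 16 "experts" (Scout-like count), each just a small linear map.
rng = np.random.default_rng(0)
d = 8
experts = [rng.standard_normal((d, d)) for _ in range(16)]
router = rng.standard_normal((d, 16))

def moe_layer(x, k=1):
    """Route each token through only k of the 16 experts."""
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = x[t] @ router              # router logits over all experts
        idx, w = top_k_route(scores, k)
        for e, wi in zip(idx, w):           # only k expert matmuls run per token
            out[t] += wi * (experts[e] @ x[t])
    return out

tokens = rng.standard_normal((4, d))
y = moe_layer(tokens, k=1)
```

This is why a model can have 109B total parameters but only 17B "active": per token, most expert weights are never touched.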

They also incorporate native multimodal capabilities through ‘early fusion’, processing images and text jointly from the pretraining stage.

On Bedrock, developers can access these features through the unified Bedrock Converse API, a consistent interface across various Bedrock models that handles inputs like text and images and supports streaming output. AWS provides SDK support, including Python examples detailed in its blog post, to facilitate integration.
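A minimal sketch of calling Llama 4 through the Converse API with boto3 might look like the following. The model ID shown is an assumption for illustration — check the Bedrock console for the exact identifier (often a cross-region inference profile) available in your Region — and the call requires a requested-and-granted model access plus configured AWS credentials.

```python
# Illustrative model ID; verify the exact identifier in the Bedrock console.
MODEL_ID = "us.meta.llama4-scout-17b-instruct-v1:0"

def build_messages(prompt, image_bytes=None):
    """Assemble a Converse API message; text and images share one content list."""
    content = [{"text": prompt}]
    if image_bytes is not None:
        content.append(
            {"image": {"format": "png", "source": {"bytes": image_bytes}}}
        )
    return [{"role": "user", "content": content}]

def ask_llama(prompt, image_bytes=None):
    import boto3  # AWS SDK for Python; needs configured credentials

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=MODEL_ID,
        messages=build_messages(prompt, image_bytes),
        inferenceConfig={"maxTokens": 512, "temperature": 0.5},
    )
    return response["output"]["message"]["content"][0]["text"]
```

Because Converse presents the same request shape across Bedrock models, swapping Scout for Maverick should, in principle, only require changing the model ID.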

AWS suggests these models are suited for tasks like building multilingual assistants or enhancing customer support with image analysis. Bedrock currently supports substantial context windows: 3.5 million tokens for Scout and 1 million for Maverick, allowing for extensive inputs. Specific pricing for Llama 4 on Bedrock is available on the AWS website.

Distinct Paths: Bedrock Simplicity vs. API Customization

While Bedrock offers ease of use, Meta’s own Llama API preview targets developers seeking deeper control. This dual approach serves different needs, with Bedrock appealing to those wanting managed infrastructure and the Llama API catering to users prioritizing customization.

A key distinction, based on the features announced for each platform, is that the fine-tuning and evaluation tools highlighted in the Llama API preview are not initially available within the Amazon Bedrock environment.

Developers needing to fine-tune Llama 4 would currently need to use Meta’s API, potentially leveraging experimental serving options with Cerebras and Groq, or self-host the models.

This multi-avenue distribution strategy reflects the significant investment needed for large model development; reports from mid-April indicated Meta had previously sought co-funding from AWS and Microsoft for Llama training, potentially offering feature influence in return.

Model Background and Broader Rollout

Users engaging with Llama 4 via Bedrock or other means should note Meta’s stated objectives during its development. The company publicly discussed efforts to tune Llama 4 to address perceived political biases often found in models trained on broad internet data.

In its official Llama 4 announcement, Meta stated, “It’s well-known that all leading LLMs have had issues with bias—specifically, they historically have leaned left when it comes to debated political and social topics… This is due to the types of training data available on the internet.”

This AI tuning occurred alongside platform policy changes, such as ending Meta’s US third-party fact-checking program in January 2025. At the time, Meta’s global policy chief Joel Kaplan cited moderation complexities, noting internal reviews suggested “One to two out of every 10 of these actions may have been mistakes.”

Furthermore, the models’ training data remains subject to ongoing copyright lawsuits alleging the use of large datasets of books obtained from sources like LibGen via BitTorrent. 

Beyond developer access (which also includes SageMaker JumpStart, Azure AI Foundry, and Azure Databricks), Llama 4 powers Meta's consumer products, including the new standalone Meta AI app launched today, which integrates with the company's Ray-Ban smart glasses.

Currently, Llama 4 on Bedrock is available in the US East (N. Virginia) and US West (Oregon) AWS Regions, with cross-region access from US East (Ohio). Further details for developers can be found in the Meta Llama models section of the Bedrock User Guide.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
