MENLO PARK / SAN FRANCISCO — In its most aggressive move yet to achieve “silicon independence,” Meta Platforms has unveiled a comprehensive roadmap for four new in-house AI chips. The announcement, made on March 11, 2026, marks a major escalation in the company’s effort to reduce its multi-billion-dollar reliance on external vendors like Nvidia and Broadcom.

The MTIA Roadmap: Six-Month Release Cadence

The new processors are part of the Meta Training and Inference Accelerator (MTIA) program. To keep pace with the “exploding” demand for AI, Meta Vice President of Engineering Yee Jiun Song confirmed the company will now release a new iteration of its custom silicon roughly every six months.

Chip Model | Status / Target Deployment | Primary Function
MTIA 300   | Deployed now  | Ranking and recommendation for Facebook/Instagram feeds
MTIA 400   | Late 2026     | Generative AI inference (text, image, and video generation)
MTIA 450   | Early 2027    | High-bandwidth GenAI inference; optimized for Llama models
MTIA 500   | Late 2027     | Flagship “superintelligence” chip with 10 petaflops of performance

Key Technical Breakthroughs

  • Generative AI Shift: While previous MTIA generations focused solely on social media algorithms (ranking and ads), the MTIA 400 and its successors are designed for inference, the process of serving already-trained large language models such as Llama 4 to generate outputs.

  • Liquid-Cooled Systems: Meta has designed an entire hardware ecosystem around these chips, featuring server racks roughly the size of refrigerators that utilize advanced liquid cooling to manage the massive thermal output of AI processing.

  • The “Rubin” Bridge: Despite the in-house push, Meta remains one of Nvidia’s largest customers. The custom chips are intended to complement, rather than immediately replace, Nvidia’s high-end GPUs, including the forthcoming Rubin platform, by handling specific internal workloads more efficiently.

“We are building out capacity so quickly that we need to have the state-of-the-art chip to deploy at any given time. Custom silicon allows us to squeeze more performance per dollar out of our data center fleet.” — Yee Jiun Song, VP of Engineering, Meta

Strategic Context: The $135 Billion Bet

The chip roadmap is a core pillar of Meta’s 2026 capital expenditure plan, which has been revised upward to between $115 billion and $135 billion. Much of this spend is directed toward “gigawatt-scale” data centers, including the $50 billion “Prometheus” site in Louisiana, which will be among the largest AI facilities on Earth.

By controlling the full stack—from the PyTorch software framework down to the actual silicon—Meta aims to achieve up to 6x higher throughput and better energy efficiency than what is possible with generic, off-the-shelf hardware.

