Meta custom AI processor

Meta, the parent company of Facebook, Instagram, and WhatsApp, has been actively developing its own custom artificial intelligence (AI) processors to enhance its AI capabilities and reduce its reliance on third-party hardware such as Nvidia GPUs. The company's efforts are part of a broader trend among tech giants to create domain-specific silicon optimized for particular workloads, especially as AI applications become more central to their services.

Meta's custom-designed AI processors, known as the Meta Training and Inference Accelerator (MTIA), form a family of chips targeting inference workloads, the stage at which a trained model applies what it has learned to new data. The MTIA is designed to deliver greater compute power and efficiency than traditional CPUs and is customized for Meta's internal workloads[1][2][4][5][6][14][15].

The MTIA chip is fabricated on a 7 nm TSMC process and operates at 800 MHz. It is built around the RISC-V instruction set architecture (ISA), an open alternative to the x86 and Arm architectures. The chip is designed to handle large volumes of concurrent operations, often using lower-precision arithmetic, which is typically sufficient for AI workloads and allows more computations per watt of power[2][3].
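To make the lower-precision point concrete, here is a minimal NumPy sketch of the general idea behind int8 inference: weights and activations are quantized to 8-bit integers, the matrix multiply runs in integer arithmetic (cheaper per operation than fp32), and the result is rescaled afterwards. This is an illustrative sketch of the generic technique, not MTIA's actual datapath; the scaling scheme and tolerances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 64)).astype(np.float32)   # activations
w = rng.standard_normal((64, 8)).astype(np.float32)   # weights

def quantize(t):
    """Symmetric int8 quantization: t is approximately scale * q."""
    scale = np.abs(t).max() / 127.0
    q = np.clip(np.round(t / scale), -127, 127).astype(np.int8)
    return q, scale

qx, sx = quantize(x)
qw, sw = quantize(w)

# Integer matmul accumulated in int32, then rescaled back to float.
y_int8 = (qx.astype(np.int32) @ qw.astype(np.int32)) * (sx * sw)
y_fp32 = x @ w

# The low-precision result closely tracks the full-precision one.
err = np.abs(y_int8 - y_fp32).max() / np.abs(y_fp32).max()
print(f"max relative error: {err:.3f}")
```

For many AI workloads a small error of this kind is acceptable, which is why inference accelerators trade precision for throughput and energy efficiency.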

Meta's custom silicon program reflects the industry-wide shift from general-purpose CPUs to domain-specific silicon optimized for specific tasks. The company has built AI hardware around off-the-shelf GPUs since 2016, but began designing its own chips in 2020 after finding that GPUs were not always the most efficient way to run its recommendation workloads[2][3].

In addition to the MTIA, Meta has introduced the Meta Scalable Video Processor (MSVP), an ASIC designed to accelerate live-streaming and video-on-demand (VOD) content. The MSVP specializes in encoding video for both production and delivery, a significant challenge for platforms like Facebook that serve billions of video views per day[2][5].

Meta's transition to its custom chips could result in substantial savings, potentially reducing annual energy expenses by hundreds of millions of dollars and cutting down on third-party chip purchases. The deployment of Artemis, Meta's second-generation inference chip, will not only optimize the power consumption of Meta's datacenters but also free up Nvidia's popular H100 processors for AI training[1][7].

The company's ambitions do not end with Artemis and inference acceleration. Meta is reportedly developing a more sophisticated processor capable of running AI training workloads, a role currently filled by Nvidia's H100 GPUs. This initiative is part of Meta's broader strategy to develop in-house silicon and lessen its dependence on Nvidia's processors, although Meta has no plans to eliminate Nvidia GPUs from its datacenters entirely[1][7].

The MTIA and MSVP chips are part of an ambitious plan to build the next generation of Meta's AI infrastructure. These efforts will enable the company to develop larger, more sophisticated AI models and deploy them efficiently at scale, which is crucial to its long-term vision of the metaverse and AI-driven applications[5][11].

Citations:
[1] https://www.tomshardware.com/tech-industry/meta-to-deploy-custom-designed-artemis-ai-processor-alongside-commercial-gpus
[2] https://www.datacenterfrontier.com/servers/article/33005340/closer-look-metas-custom-asic-for-ai-computing
[3] https://encord.com/blog/meta-ai-chip-mtia-explained/
[4] https://www.theverge.com/2023/5/18/23728678/meta-ai-new-chip-mtia-msvp-datacenter
[5] https://about.fb.com/news/2023/05/metas-infrastructure-for-ai/
[6] https://www.zdnet.com/article/meta-unveils-first-custom-artificial-intelligence-chip/
[7] https://www.theregister.com/2024/02/02/meta_ai_chips/
[8] https://www.techradar.com/pro/meta-has-done-something-that-will-get-nvidia-and-amd-very-very-worried-it-gave-up-on-gpu-and-cpu-to-take-a-risc-y-route-for-ai-training-and-inference-acceleration
[9] https://www.youtube.com/watch?v=dzsQFJL-guA
[10] https://engineering.fb.com/2023/10/18/ml-applications/meta-ai-custom-silicon-olivia-wu/
[11] https://techcrunch.com/2023/05/18/meta-bets-big-on-ai-with-custom-chips-and-a-supercomputer/
[12] https://www.hpcwire.com/2024/01/25/metas-zuckerberg-puts-its-ai-future-in-the-hands-of-600000-gpus/
[13] https://www.cnbc.com/2023/11/15/microsoft-reveals-maia-ai-processor-and-cobalt-arm-based-chip.html
[14] https://ai.meta.com/blog/meta-training-inference-accelerator-AI-MTIA/
[15] https://ai.meta.com/blog/meta-ai-infrastructure-overview/
[16] https://www.reuters.com/technology/meta-deploy-in-house-custom-chips-this-year-power-ai-drive-memo-2024-02-01/
[17] https://www.reuters.com/technology/nvidia-chases-30-billion-custom-chip-market-with-new-unit-sources-2024-02-09/
