According to a Reuters report, the tech giant developed these AI chipsets in collaboration with the chipmaker Taiwan Semiconductor Manufacturing Company (TSMC). Meta reportedly completed the tape-out, the final stage of the chip design process, recently and has now begun deploying the chips at a small scale.
This is not the company's first AI-focused chipset. Last year, it unveiled inference accelerators, processors designed to run AI inference workloads. However, Meta has not had any in-house hardware accelerators for training its Llama family of large language models (LLMs).
Citing unnamed sources within the company, the publication claimed that Meta's larger vision behind developing in-house chipsets is to bring down the infrastructure costs of deploying and running complex AI systems for internal use, consumer-focused products, and developer tools.
Interestingly, in January, Meta CEO Mark Zuckerberg announced that the expansion of the company's Mesa Data Center in Arizona, USA was finally complete and the facility had begun operations. It is likely that the new training chipsets are also being deployed at this location.
The report stated that the new chipsets will first power Meta's recommendation engine, which drives content across its various social media platforms, and will later be expanded to generative AI products such as Meta AI.
In January, Zuckerberg revealed in a Facebook post that the company plans to invest as much as $65 billion (roughly Rs. 5,61,908 crore) in 2025 on AI-related projects. This spending covers the expansion of the Mesa Data Center as well as hiring more employees for its AI teams.