Meta plans to deploy a new version of its custom chip in its data centers this year to support its artificial intelligence efforts, according to an internal document reviewed by Reuters.
The chip is intended to reduce the company's heavy reliance on NVIDIA's market-dominating processors and to rein in the soaring cost of running artificial intelligence workloads as it races to launch AI products.
The new chip, referred to internally as Artemis, is the second generation of the in-house silicon line Meta announced last year.
Meta has been racing to expand its computing capacity for the generative AI products it is pushing into apps such as Facebook, Instagram, and WhatsApp, as well as hardware like its Ray-Ban smart glasses.
Meta is investing billions of dollars to acquire specialized chips and modify data centers to accommodate them.
According to the document, a successful rollout of the new chip could save the company hundreds of millions of dollars in annual energy costs and billions of dollars in chip purchasing costs.
The chips, infrastructure, and power needed to run AI applications have become a huge sink of investment for tech companies, to some extent offsetting the gains made amid the excitement around the technology.
A Meta spokesperson confirmed the company's plan to put the updated chip into production in 2024, noting that it would work alongside the hundreds of thousands of off-the-shelf AI-focused GPUs the company is buying.
In the statement, the spokesperson said: "We see our internally developed accelerators as highly complementary to commercially available GPUs in delivering the optimal mix of performance and efficiency on Meta's workloads."
Last month, Meta CEO Mark Zuckerberg said the company planned to have roughly 350,000 NVIDIA H100 processors by the end of the year, which, combined with its other GPUs, would give it computing capacity equivalent to about 600,000 H100s.
Deploying its own chip would mark a positive turn for Meta's in-house AI silicon program.
The move follows a 2022 decision by executives to shelve the chip's first iteration, opting instead to buy billions of dollars' worth of NVIDIA GPUs.
NVIDIA holds a near-monopoly on the chips used for AI training, the process of feeding massive amounts of data to large language models to teach them how to perform tasks.
The new chip is designed to perform inference, the process in which a trained model is run to generate responses to user prompts.
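To make the distinction concrete, the sketch below contrasts the two workloads in general terms using a tiny PyTorch toy model; it is purely illustrative, with placeholder data and a hypothetical model, and does not describe Meta's internal software or the Artemis chip itself.

    # Illustrative sketch only: a toy model standing in for a large language model.
    # This does not reflect Meta's systems; names and data are placeholders.
    import torch
    import torch.nn as nn

    model = nn.Linear(16, 4)  # hypothetical tiny "model"
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Training: feed labeled data and update the model's weights.
    inputs = torch.randn(8, 16)             # placeholder training batch
    labels = torch.randint(0, 4, (8,))      # placeholder labels
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()                         # compute gradients
    optimizer.step()                        # adjust weights

    # Inference: run the already-trained model to produce an answer, with no
    # weight updates. This is the kind of workload an inference chip targets.
    with torch.no_grad():
        prediction = model(torch.randn(1, 16)).argmax(dim=-1)
    print(prediction.item())

In rough terms, training is the expensive, data-hungry phase where weights are adjusted, while inference is the cheaper but constantly repeated phase of serving answers to users, which is why a dedicated inference chip can pay off at Meta's scale.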
Reuters reported last year that Meta was also developing a more ambitious chip that, like GPUs, could handle both training and inference.
Last year, the company shared details about the first generation of its Meta Training and Inference Accelerator (MTIA) program, describing that release as a learning opportunity.