The Facebook parent said that it is working on a new AI-optimized data center design and the second phase of its 16,000-GPU supercomputer for AI research. Credit: Sdecoret / Getty Images

Facebook parent company Meta has revealed plans to develop its own custom chip for running artificial intelligence models, along with a new data center architecture for AI workloads.

"We are executing on an ambitious plan to build the next generation of Meta's AI infrastructure and today, we're sharing some details on our progress. This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research," Santosh Janardhan, head of infrastructure at Meta, wrote in a blog post Thursday.

Meta's custom chip for running AI models, called the Meta Training and Inference Accelerator (MTIA), is designed to provide greater compute power and efficiency than the CPUs on the market today, according to Janardhan. MTIA is customized for internal workloads such as content understanding, feeds, generative AI, and ad ranking, the company said, adding that the first version of the chip was designed in 2020.

Meta's announcement of the strides it is making to produce its own custom chips for running AI models comes at a time when other large technology companies, driven by the proliferation of large language models and generative AI, are either working on or have already launched their own chips for AI workloads. Earlier this month, news reports claimed that Microsoft was working with chipmaker AMD to develop its own chip for running AI workloads. AWS has also released its own chips for running AI workloads.

For its part, Meta also said Thursday that its new data center design will be optimized for training AI models, a process that enables them to improve their performance as they ingest more data.
"This new data center will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips together for data center-scale AI training clusters," Janardhan wrote, adding that the new data center systems will be faster and more cost-effective to build than earlier facilities.

In addition to the new data center design, the company said that it was working on developing AI supercomputers that will support training of next-generation AI models, power augmented reality tools, and support real-time translation technology.