– Ampere and Qualcomm are partnering to offer an AI-focused server that pairs Ampere’s CPUs with Qualcomm’s Cloud AI 100 Ultra AI inferencing chips.
– Ampere is looking to capitalize on the AI boom while staying focused on fast, power-efficient server chips.
– Ampere also announced a partnership with NETINT to build a server that pairs Ampere’s CPUs with NETINT’s video processing chips for video transcoding and speech-to-text workloads.
Ampere and Qualcomm are teaming up to offer an AI-focused server that uses Ampere’s CPUs and Qualcomm’s Cloud AI 100 Ultra AI inferencing chips. Ampere specializes in power-efficient server chips, and by joining forces with Qualcomm it aims to capitalize on the AI boom, delivering a server-level solution for the growing demand to run large AI models efficiently.
The collaboration with Qualcomm is part of Ampere’s goal of offering best-of-breed solutions for the data center market. The two companies share an interest in building highly efficient silicon, with Qualcomm addressing several parts of the market and Ampere concentrating on server CPUs. Ampere’s roadmap update also includes a new 256-core AmpereOne chip with 12-channel DDR5 RAM, giving data center customers improved memory bandwidth and access.
Ampere emphasizes not just raw performance but also the power consumption and cost-efficiency of its chips in the data center, particularly for AI inferencing tasks, where the company benchmarks them favorably against Nvidia’s A10 GPUs. Ampere is not phasing out its existing chips, stressing that even its older parts still have plenty of use cases.

In addition to the Qualcomm partnership, Ampere announced a collaboration with NETINT to build a solution that combines Ampere’s CPUs with NETINT’s video processing chips, targeting workloads such as transcoding multiple live video channels while running OpenAI’s Whisper speech-to-text model. Ampere CEO Renee James highlighted the company’s focus on efficiency and high performance in computing.