Intel Announces New GPUs and AI Accelerators for Workstations and AI Workloads

At Computex 2025, Intel introduced new hardware aimed at the rising demands of AI, high-performance computing, and professional creative workflows: the Intel Arc Pro B60 and Arc Pro B50 GPUs, along with updates to its Gaudi 3 AI accelerators. Built for scalability, the new products target demanding applications such as AI model inference and complex rendering, and are intended for developers, AI researchers, and enterprises running modern workloads.

Intel Arc Pro B-Series GPUs for Workstations and AI Tasks

The Arc Pro B60 and Arc Pro B50 GPUs, built on the Xe2 architecture, extend the Arc Pro series. The B60 comes with 24GB of memory, while the B50 offers 16GB, making them ideal for demanding tasks like AI inference, rendering, and simulation. These GPUs are optimized for use in professional workstations across industries such as architecture, engineering, and construction (AEC).

The Arc Pro B-Series GPUs feature Xe Matrix Extensions (XMX) AI cores for accelerating AI workloads and advanced ray tracing units to enhance graphical performance. Designed for both small and large-scale projects, these GPUs provide high memory capacity and multi-GPU scalability, ensuring they can meet the growing demands of modern workloads.

Compatible with both Windows and Linux, the Arc Pro GPUs offer flexibility for developers. The Linux versions come with a containerized software stack, simplifying AI model deployment. Additionally, the GPUs will receive ongoing software updates and optimizations.

Intel also introduced Project Battlematrix, a workstation platform that supports up to eight Arc Pro B60 GPUs, providing up to 192GB of video RAM for large-scale AI models with up to 150 billion parameters.
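
As a rough sanity check on that figure, the sketch below estimates the memory footprint of a 150-billion-parameter model at common inference precisions and compares it with the 192GB pool of an eight-GPU configuration. The bytes-per-parameter values and the 10% overhead factor are illustrative assumptions, not Intel-published numbers.

```python
# Rough estimate of inference memory for a large language model versus the
# 192 GB of aggregate VRAM in an eight-GPU Arc Pro B60 configuration.
# Bytes-per-parameter values and the overhead factor are assumptions for illustration.

AGGREGATE_VRAM_GB = 8 * 24          # eight Arc Pro B60 cards at 24 GB each
PARAMS_BILLIONS = 150               # model size cited for Project Battlematrix
OVERHEAD = 1.10                     # assumed ~10% extra for activations / KV cache

BYTES_PER_PARAM = {
    "FP16": 2.0,   # half precision
    "INT8": 1.0,   # 8-bit quantized weights
    "INT4": 0.5,   # 4-bit quantized weights
}

for precision, bytes_per_param in BYTES_PER_PARAM.items():
    needed_gb = PARAMS_BILLIONS * bytes_per_param * OVERHEAD
    verdict = "fits" if needed_gb <= AGGREGATE_VRAM_GB else "does not fit"
    print(f"{precision}: ~{needed_gb:.0f} GB needed -> {verdict} in {AGGREGATE_VRAM_GB} GB")
```

Under these assumptions, a 150-billion-parameter model fits in the 192GB pool only at 8-bit or lower precision, which suggests the headline figure refers to quantized inference.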

Intel Gaudi 3 AI Accelerators for Scalable AI Inference

The Gaudi 3 AI accelerator line has been expanded with PCIe cards and rack-scale systems. The Gaudi 3 PCIe cards are designed for scalable AI inference in data centers, allowing businesses to run everything from smaller models up to full-scale models such as Llama 4 Scout. These PCIe cards are expected to be available in the second half of 2025.

For large-scale AI models, the Gaudi 3 rack-scale systems can support up to 64 accelerators per rack and deliver 8.2 terabytes of high-bandwidth memory. These systems are optimized for real-time AI inferencing, offering low-latency performance. With an open, modular design, Gaudi 3 systems help avoid vendor lock-in, providing flexible solutions for cloud service providers (CSPs) and large enterprises. Liquid cooling further ensures high performance while maintaining a manageable total cost of ownership (TCO).
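
To put those rack-level figures in per-device terms, the short calculation below divides the quoted totals. It uses only the numbers from the announcement and is a back-of-the-envelope illustration rather than an Intel specification.

```python
# Back-of-the-envelope breakdown of the quoted Gaudi 3 rack-scale figures.
ACCELERATORS_PER_RACK = 64      # maximum accelerators per rack
RACK_HBM_TB = 8.2               # total high-bandwidth memory per rack (terabytes)

hbm_per_accelerator_gb = RACK_HBM_TB * 1000 / ACCELERATORS_PER_RACK
print(f"HBM per accelerator: ~{hbm_per_accelerator_gb:.0f} GB")   # ~128 GB

# Hypothetical sizing example: racks needed for a fleet of 512 accelerators.
fleet_accelerators = 512
racks_needed = -(-fleet_accelerators // ACCELERATORS_PER_RACK)    # ceiling division
print(f"Racks for {fleet_accelerators} accelerators: {racks_needed}")
```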

Intel AI Assistant Builder for Custom AI Agents

Also released is the Intel AI Assistant Builder, now publicly available on GitHub. This open-source framework lets developers quickly create and deploy custom AI agents on Intel-based platforms, helping organizations build tailored AI solutions and tune performance for a wide range of business needs. The release follows its debut at CES 2025 and is intended to simplify the development of local AI agents.

Learn more about Intel’s new releases in the announcement post on Intel’s website.

