Server news roundup from Supermicro, Dell, Intel and Lenovo

News
Mar 04, 2024 | 7 mins
CPUs and Processors | Data Center | Generative AI

Supermicro and Lenovo are expanding their AI hardware offerings, Intel is previewing chips designed for 5G and AI workloads, and Dell is embracing telecom.


There has been a lot of activity in the hardware market over the past few weeks. Here are some of the announcements from major hardware OEMs.

Supermicro drives AI to the edge

Supermicro announced an expansion of its portfolio of AI solutions for edge locations, such as public spaces, retail stores, and industrial infrastructure. Supermicro says its application-optimized servers with Nvidia GPUs are meant to make it easier to fine-tune pre-trained models and to deploy AI inference solutions at the edge, where the data is generated.

This eliminates the need to send data to the cloud for processing and then send it back to the edge, where it’s required. Customers can now use pre-trained large language models (LLMs), optimized for performance and available with Nvidia AI Enterprise software, at their edge locations and use the models for real-time decision-making.
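
To make the workflow concrete, here is a minimal sketch of what edge-side inference against a locally hosted LLM could look like, assuming the model is served behind an OpenAI-compatible HTTP endpoint on the edge system; the URL, port, and model name below are illustrative placeholders, not details from Supermicro's or Nvidia's announcement.

```python
import requests

# Hypothetical local endpoint exposed by an LLM inference server running on
# the edge system; no round trip to a cloud API is involved.
EDGE_ENDPOINT = "http://localhost:8000/v1/chat/completions"
MODEL_NAME = "example-edge-llm"  # placeholder model identifier

def classify_event(event_description: str) -> str:
    """Ask the locally hosted model for a real-time decision on an edge event."""
    payload = {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system",
             "content": "You triage retail-floor sensor events. Reply with one word: NORMAL or ALERT."},
            {"role": "user", "content": event_description},
        ],
        "max_tokens": 5,
        "temperature": 0.0,
    }
    resp = requests.post(EDGE_ENDPOINT, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"].strip()

if __name__ == "__main__":
    print(classify_event("Freezer aisle temperature rose 6C in 10 minutes."))
```

The point of the sketch is locality: the request never leaves the site, so decision latency is bounded by the edge server rather than a WAN round trip to the cloud.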

Supermicro’s Hyper-E server, the SYS-221HE, is optimized for edge training and inferencing and supports dual-socket CPUs along with three Nvidia GPUs from a variety of models, including H-series, A-series, and L-series GPU cards.

The Supermicro SYS-621C-TNR12R is an all-in-one 2U rackmount platform for cloud data centers that supports two double-width GPUs. The Supermicro SYS-111E-FWTR is a high-density 1U edge system that comes with a single Intel Xeon processor and two PCIe 5.0 x16 FHFL slots, designed more for networking and edge applications.

The compact Supermicro SYS-E403-13E promises data-center-level performance in a box PC form factor and features one Xeon processor and up to three Nvidia GPUs. The ultra short-depth SYS-211E-FRN2T, with a system depth of 300mm, is specifically designed to fit in space-constrained environments found at the networking edge and features up to a 5th Gen Intel Xeon processor.

There are also systems without GPUs, starting with the 2U SYS-211SE-31D/A, a multi-node system with three independent nodes, each with a 5th Gen Intel Xeon processor, three PCIe 5.0 x16 slots, and up to 2TB of DDR5 memory.

Finally, there is the SYS-E300-13AD compact IoT server, which offers a 13th Gen Intel Core processor and measures just 265×226 mm (10×8 in), making it Supermicro’s smallest system that can accommodate an Nvidia GPU.

Dell gets a nod from Nokia for 5G

Nokia has selected Dell Technologies as its preferred infrastructure partner for Nokia AirFrame servers used in private wireless and enterprise edge use cases. AirFrame is designed to run telco-related virtualized and cloud-native software workloads in a secure environment for carrier-grade networks. Enhancements, including advanced packet, crypto, GPU, and workload-specific acceleration, are designed to boost AirFrame’s performance compared to traditional IT servers.

The two companies will work together to help transition existing AirFrame customers to Dell’s infrastructure portfolio, which includes purpose-built Dell PowerEdge servers for telecom network workloads from core to edge to RAN.

The Nokia Digital Automation Cloud (NDAC) private wireless solution will become Dell’s preferred private wireless platform for enterprise customers’ edge use cases. The companies will work together to integrate Nokia’s NDAC solution with Dell NativeEdge, its edge operations software platform, to provide a scalable solution for enterprises.

The two companies will also collaborate on platform and application testing and lifecycle management in the Dell Open Telecom Ecosystem Lab. Dell and Nokia plan to certify workloads on Dell Telecom Infrastructure Blocks that support Nokia Cloud offerings, while also continuing to collaborate on OEM engagements.

Intel has its own AI story

Intel at MWC previewed its latest Xeon processors for telcos, codenamed Sierra Forest and Granite Rapids-D, along with a new product called Edge Platform.

The chips are designed specifically for 5G, edge, and AI workloads. Sierra Forest, due in the first half of this year, is built around a design with 288 Efficient-cores (E-cores) per socket. E-cores are smaller and more power-efficient than the larger Performance-cores (P-cores) and are intended for lighter-weight processing. Intel claims Sierra Forest will deliver a 2.7x performance-per-rack improvement over its prior-generation platform.

Granite Rapids-D, due in 2025, is designed for edge solutions, comes with a built-in AI accelerator, and will feature the latest generation of Performance-cores (P-cores). It follows Intel’s 4th Gen Intel Xeon Scalable processors, also known as Sapphire Rapids EE, which reduced power consumption of vRAN workloads by 20% while doubling network capacity.

The two CPUs will also come with an updated version of the Intel Infrastructure Power Manager (IPM) software, which Intel says will reduce power consumption by 30% without any impact on major telco performance metrics.

Lenovo launches integrated Edge AI solutions

At Mobile World Congress, Lenovo announced custom AI applications for the global telco industry designed to help businesses efficiently deploy new AI use cases and turn data into faster insights. Launch customers include Deutsche Telekom, Orange and Telefonica.

The products are part of a larger pocket-to-cloud portfolio of Lenovo hybrid AI solutions officially known as Edge AI Solutions for Telco, and they’re designed to utilize data at the far edge of the network while reducing energy consumption. They include:

  • A new multi-cloud edge computing architecture with Telefonica that’s aimed at enabling mission-critical applications for smart cities, such as allowing a city to use AI for video analytics to identify smoke and fire, support public safety and improve emergency response times. This architecture combines Telefonica’s Telco Cloud with Lenovo ThinkEdge servers and Motorola’s push-to-talk technology to offer multi-cloud at the edge.
  • An expanded and extended partnership with Orange Group that’s aimed at bringing high performance, availability and energy efficiency to telco clients. The program will cover areas critical for future-proofing telecommunication infrastructure, such as new open radio access and automation.
  • A jointly validated end-to-end deployment and orchestration for Open RAN with the Lenovo ThinkEdge SE455 V3 to offer a cloud-based, software-defined networking solution. Lenovo claims that the ThinkEdge SE455 V3 server can double the number of orchestrated workloads per socket and offer more than a 50% reduction in overall Open RAN power consumption.
  • Lenovo launched Intel’s Edge Platform with Lenovo ThinkEdge for developing, deploying, running, and managing edge applications at scale with cloud-like simplicity. It is built on the Lenovo ThinkEdge SE350 V2, SE360 V2, and SE450 products and uses Lenovo’s Open Cloud Automation (LOC-A) software to deploy AI applications at the edge worldwide in a matter of minutes, with one-touch provisioning from a single device.

Lenovo partners with Anaconda

Lenovo also announced a partnership with Anaconda, provider of a Python and R distribution for artificial intelligence, machine learning, and data science, to power Lenovo’s data science workstations.

The partnership will involve preloading Anaconda’s open-source tools on Lenovo’s ThinkStation and ThinkPad workstation products, which come with high-end Intel processors and Nvidia GPUs. The combination allows data scientists to create and deploy AI solutions safely and securely. Most AI resources are in the cloud, but there has been a concerted effort to bring AI to the client side to minimize traffic and data movement.

Lenovo’s workstation portfolio includes systems designed for a broad range of AI workflows, varying by industry, size and price point. These are not your average desktops; they start with single CPU and GPU mobile workstations on the low end and scale to dual CPU and four GPU configurations for advanced AI workflows.

Of the many Python distributions, Anaconda is one of the most enterprise-oriented. Anaconda Python is targeted at developers working in math, statistics, engineering, data analysis, machine learning, and related applications. It comes with many of the most common libraries used in commercial and scientific Python work, such as SciPy, NumPy, and Numba.
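
As a small illustration of the kind of work those libraries cover, the snippet below uses NumPy for array handling and Numba’s just-in-time compiler to speed up a simple numerical loop; it is a generic example of the stack, not something specific to the Lenovo or Anaconda announcement.

```python
import numpy as np
from numba import njit  # Numba JIT-compiles the decorated function to machine code

@njit
def moving_average(values, window):
    """Simple sliding-window mean over a 1-D array."""
    out = np.empty(values.size - window + 1)
    for i in range(out.size):
        out[i] = values[i:i + window].mean()
    return out

if __name__ == "__main__":
    data = np.random.rand(1_000_000)     # synthetic sensor readings
    smoothed = moving_average(data, 50)  # first call triggers compilation
    print(smoothed[:5])
```

On a workstation-class CPU the compiled loop typically runs far faster than an equivalent pure-Python loop, which is the kind of local acceleration a preloaded data-science stack is meant to enable.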

Anaconda Navigator is available for download for use on current and future-generation Lenovo workstations.

Andy Patrizio is a freelance journalist based in southern California who has covered the computer industry for 20 years and has built every x86 PC he’s ever owned, laptops not included.

The opinions expressed in this blog are those of the author and do not necessarily represent those of ITworld, Network World, its parent, subsidiary or affiliated companies.