The small-form-factor HPE Edgeline EL8000 is designed for AI tasks such as computer vision and natural-language processing.

Later this month, Hewlett Packard Enterprise will ship what looks to be the first server aimed specifically at AI inferencing for machine learning.

Machine learning is a two-part process: training and inferencing. Training uses powerful GPUs from Nvidia and AMD, or other high-performance chips, to "teach" the AI system what to look for, as in image recognition. Inferencing determines whether a subject matches the trained models. A GPU is overkill for that task; a much lower-power processor can be used.

Enter Qualcomm's Cloud AI 100 chip, which is designed for artificial intelligence at the edge. It has up to 16 "AI cores" and supports the FP16, INT8, INT16, and FP32 data formats, all of which are used in inferencing. These are not custom Arm processors; they are entirely new SoCs designed for inferencing.

The Cloud AI 100 is part of the HPE Edgeline EL8000 edge gateway system, which integrates compute, storage, and management in a single edge device. Inference workloads are often large in scale and require low latency and high throughput to deliver real-time results.

The HPE Edgeline EL8000 is a 5U system that supports up to four independent server blades, clustered using dual-redundant chassis-integrated switches. Its little brother, the HPE Edgeline EL8000t, is a 2U design that supports two independent server blades.

In addition to performance, the Cloud AI 100 has a low power draw. It comes in two form factors: a PCI Express card and dual M.2 chips mounted on the motherboard. The PCIe card has a 75-watt power envelope, while the two M.2 units draw either 15 watts or 25 watts. A typical CPU draws more than 200 watts, and a GPU over 400 watts.

Qualcomm says the Cloud AI 100 supports all key industry-standard model formats, including ONNX, TensorFlow, PyTorch, and Caffe; pre-trained models can be imported and prepared, then compiled and optimized for deployment. Qualcomm offers a set of tools for model porting and preparation, including support for custom operations (a sketch of this flow appears at the end of this article).

Qualcomm says the Cloud AI 100 is targeting manufacturing and industrial customers, as well as others with edge AI requirements. Use cases for AI inference computing at the edge include computer vision and natural-language processing (NLP) workloads. For computer vision, that could mean quality control and quality assurance in manufacturing, object detection and video surveillance, and loss prevention and detection. For NLP, it includes programming-code generation, smart-assistant operations, and language translation.

Edgeline servers will be available for purchase or lease through HPE GreenLake later this month.
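Qualcomm's own porting and compilation tools aren't detailed here, so as a rough, hypothetical illustration of the import-and-prepare step described above, the sketch below exports a pre-trained PyTorch vision model to ONNX, one of the interchange formats Qualcomm lists, using only standard PyTorch and torchvision APIs. The model choice and file name are arbitrary stand-ins; the Qualcomm-specific compile-and-optimize stage is vendor tooling and is not shown.

```python
# Minimal sketch (assumptions noted): export a pre-trained PyTorch
# vision model to ONNX, the kind of interchange format a vendor
# compiler would then quantize (e.g., to INT8 or FP16) and optimize
# for an inference accelerator. Qualcomm's own toolchain is not shown.
import torch
import torchvision.models as models

# Load a pre-trained image classifier as a stand-in for any
# computer-vision workload (object detection, QA inspection, etc.).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A dummy input fixes the graph's input shape for the export.
dummy_input = torch.randn(1, 3, 224, 224)

# Write the ONNX file; "resnet18.onnx" is an arbitrary name.
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=13,
)
```

From an ONNX file like this, off-the-shelf tools can reduce weights to lower-precision formats such as INT8, the sort of reduced-precision arithmetic the Cloud AI 100 supports natively for inferencing.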