Run:AI is launching software that orchestrates shared use of organizations' GPU resources rather than dedicating entire processors to specific AI workloads.

Among the greatest component shortages bedeviling everyone is that of GPUs, both from Nvidia and AMD. GPUs are used in cryptocurrency mining, and with massive mining farms around the world gobbling up every GPU card, getting one is nigh impossible or prohibitively expensive. So customers need to squeeze every last cycle out of the GPUs they have in service.

An Israeli company called Run:AI claims it has a fix with a pair of technologies that pool GPU resources and maximize their use. The technologies are called Thin GPU Provisioning and Job Swapping. Not the most creative of names, but they describe what the two do in tandem to automate the allocation and utilization of GPUs.

Data scientists and other AI researchers often receive an allocation of GPUs, with the GPUs reserved for individuals to run their processes and no one else's. That's how high-performance computing (HPC) and supercomputers operate, and getting processor allocation just right is something of a black art for administrators.

With Thin GPU Provisioning and Job Swapping, whenever a running workload is not utilizing its allocated GPUs, those resources are pooled and can be automatically provisioned for use by a different workload. It's similar to the thin provisioning first introduced by VMware for storage-area networks, where available disk space is allocated but not provisioned until necessary, according to a statement from Run:AI.

Thin GPU Provisioning creates over-provisioned GPUs, while Job Swapping uses preset priorities to reassign unused GPU capacity. Together, Run:AI says, the two technologies maximize overall GPU utilization; a conceptual sketch of the idea appears at the end of this article.

Data scientists, whose expertise isn't always in infrastructure, don't have to deal with scheduling and provisioning. At the same time, IT departments retain control over GPU utilization across their networks, the company says.

"Researchers are no longer able to 'hug' GPUs—making them unavailable for use by others," said Dr. Ronen Dar, CTO and co-founder of Run:AI, in a statement. "They simply run their jobs and Run:AI's quota management, Thin GPU Provisioning and Job Swapping features seamlessly allocate resources efficiently without any user intervention."

Thin GPU Provisioning and Job Swapping are currently in testing in Run:AI customer labs. They are expected to be generally available in Q4 2021.

Run:AI was founded in 2018 and has $43 million in funding.
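Run:AI has not published implementation details, but the behavior described above can be illustrated with a small scheduler simulation. The sketch below is purely hypothetical: the GpuPool and Job names, fields, and priority scheme are illustrative assumptions, not Run:AI's API. The idea it demonstrates is that quota can be over-committed (thin provisioning) and that physically idle GPUs are handed to whichever job has the highest priority and still wants capacity (job swapping).

```python
# Conceptual sketch of thin GPU provisioning and priority-based job swapping.
# All class and field names are illustrative; this is not Run:AI's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    priority: int          # higher number = higher priority
    gpus_requested: int    # quota the job would like to use
    gpus_active: int = 0   # GPUs the job is actually using right now

class GpuPool:
    def __init__(self, total_gpus: int):
        self.total_gpus = total_gpus
        self.jobs: List[Job] = []

    def submit(self, job: Job) -> None:
        self.jobs.append(job)

    def idle_gpus(self) -> int:
        # "Thin provisioning": quota can be over-committed, so free capacity
        # is what is physically unused, not what is merely unreserved.
        used = sum(j.gpus_active for j in self.jobs)
        return self.total_gpus - used

    def schedule(self) -> None:
        # "Job swapping": hand idle GPUs to the highest-priority jobs that
        # still want capacity; lower-priority work simply borrows whatever
        # is left over instead of waiting for a dedicated allocation.
        for job in sorted(self.jobs, key=lambda j: -j.priority):
            want = job.gpus_requested - job.gpus_active
            grant = min(want, self.idle_gpus())
            if grant > 0:
                job.gpus_active += grant

if __name__ == "__main__":
    pool = GpuPool(total_gpus=8)
    pool.submit(Job("training-run", priority=10, gpus_requested=6))
    pool.submit(Job("notebook-experiment", priority=1, gpus_requested=4))
    pool.schedule()
    for j in pool.jobs:
        print(j.name, "->", j.gpus_active, "GPUs")
    # Prints: training-run -> 6 GPUs, notebook-experiment -> 2 GPUs.
```

In a real system the scheduler would also preempt the borrower when the higher-priority owner becomes active again; that reclamation step is what the article's "swapping" refers to, and it is omitted here to keep the sketch short.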