10 predictions for the data center and the cloud in 2019

Opinion
Dec 04, 2018 | 7 mins
Data Center, Hybrid Cloud

What will 2019 hold for data centers and the cloud, both public and private? Here are our educated guesses.

It’s that time of year again, when vacations are planned, going to the mall looks like something out of “Braveheart,” package theft from doorsteps is rampant, and people try their best not to offend. In other words, it’s Christmastime.

This leads to an inevitable tradition of looking back at the year and at what will come. For some time, I’ve done general looks back, but this year we are narrowing the focus to the data center and cloud, since the real battle these days is to find a balance between the cloud and on-premises implementations.

Much of what I am predicting has been hinted at in research or emerging trends, so I’m not sticking my neck out too far. I’m just making logical assumptions and conclusions based on past evidence. Hopefully that will improve my accuracy rate. And now, here are 10 predictions for data centers and the cloud.

1. Edge computing matures but needs a business model

This is not hard to figure out. Everyone loves the idea of edge computing. Data center operators see it as a chance to lighten the load on central servers, and businesses see it as a chance to have sub-10 millisecond response time. Vendors such as Vapor IO and Schneider Electric are coming out with different models for placement at base stations, and 5G is beginning its national rollout.

The problem is who pays for it. That still hasn’t been worked out. Will it fall to cellular providers, or will it fall to the car makers who want connected cars? This industry has a long track record of dreaming up technologies and thinking of the business model later, and edge computing is an expensive idea in search of an owner. That needs to be sorted out in 2019.

2. Water cooling expands

When Google launched version 3.0 of its Tensor Processing Unit AI chip, it also revealed it had switched to water cooling because air was no longer sufficient. With CPUs drawing over 200 watts and GPUs hitting 300 watts, air cooling simply doesn’t cut it anymore. By volume, water is thousands of times more efficient at heat removal than air, and more companies are overcoming their apprehension about the coolant springing a leak. In some cases, they have no choice: the demand for more processing power is driving the move to liquid cooling as much as anything.
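
To put rough numbers behind that claim, here is a back-of-envelope comparison of the volumetric heat capacity of water and air. The property values are textbook approximations I’m supplying for illustration, not figures from Google or any cooling vendor.

```python
# Back-of-envelope: heat absorbed per cubic meter of coolant per degree of
# temperature rise. Property values are approximations at around 25 C.
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
WATER_DENSITY = 997.0         # kg/m^3
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)
AIR_DENSITY = 1.184           # kg/m^3

water_volumetric = WATER_SPECIFIC_HEAT * WATER_DENSITY  # J/(m^3*K)
air_volumetric = AIR_SPECIFIC_HEAT * AIR_DENSITY        # J/(m^3*K)

print(f"Water: {water_volumetric:,.0f} J per cubic meter per kelvin")
print(f"Air:   {air_volumetric:,.0f} J per cubic meter per kelvin")
print(f"Ratio: roughly {water_volumetric / air_volumetric:,.0f}x")
# The ratio works out to roughly 3,500x -- hence "thousands of times."
```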

3. More AI to cover for human error

Data centers have thousands and thousands of moving parts: the individual servers, the cooling system, the power system, and the network to connect it all. Up to now, that has been manually configured and, once in place, left alone. But there is a new class of artificial intelligence (AI), as demonstrated by startup Concertio, that puts AI in charge of optimizing the equipment through continuous monitoring and adjustment. I’ve seen other instances where AI was used as a constant, tireless monitor to adjust systems, and I expect to see more such efforts in 2019.
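
For a sense of what continuous monitor-and-adjust automation looks like in principle, here is a toy hill-climbing loop. It is purely illustrative: the metric, the tuning knob, and the placeholder functions are my own inventions, not a representation of Concertio’s product or any specific vendor’s system.

```python
import random
import time

def read_throughput() -> float:
    """Placeholder for a real telemetry source (e.g., requests per second)."""
    return random.uniform(900.0, 1100.0)

def apply_setting(value: int) -> None:
    """Placeholder for pushing a tuning knob out to a server or controller."""
    print(f"applying setting: {value}")

def tune(initial: int, step: int, iterations: int) -> int:
    """Naive hill climbing: keep a change only if the metric improves."""
    current = initial
    apply_setting(current)
    best_metric = read_throughput()

    for _ in range(iterations):
        candidate = current + random.choice([-step, step])
        apply_setting(candidate)
        time.sleep(0.1)            # let the system settle before measuring
        metric = read_throughput()
        if metric > best_metric:   # improvement: keep the new setting
            current, best_metric = candidate, metric
        else:                      # regression: roll back
            apply_setting(current)
    return current

if __name__ == "__main__":
    print("best setting found:", tune(initial=50, step=5, iterations=10))
```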

4. Data-center growth continues

The data center is not dying, OK? Let’s get that straight. There are more compute demands than ever, especially with the advent of AI, and the cloud has proven to have its expensive downsides. What it means, though, is that the data center is being repurposed. Some workloads are going to public cloud providers, while others are being assigned to the data center. This includes anything with massive data sets, such as BI, analytics, and AI/ML, because moving that much data to the cloud is expensive. The data center is changing, becoming more versatile and more powerful.

5. Workloads move from endpoints to data centers

Data in and of itself is worthless unless it is acted upon and processed, and a smartphone really isn’t the device for that. Smartphones, tablets, and PCs are heavy data collectors but not suited for analytics or any type of AI, so that data is being sent up to the cloud for processing. The same applies to the Internet of Things (IoT). Your 2019 model car isn’t going to process the data it generates; the data will be sent up to a data center for processing.
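
As a sketch of that collect-at-the-edge, process-in-the-data-center split, the device side amounts to little more than batching raw readings and shipping them to an ingest endpoint. The URL, payload shape, and field names below are hypothetical placeholders, not any particular vendor’s API.

```python
import json
import urllib.error
import urllib.request

# Hypothetical ingest endpoint; a real fleet would use its provider's actual
# API plus authentication. Placeholder only.
INGEST_URL = "https://example.com/telemetry"

def upload_readings(device_id: str, readings: list) -> int:
    """Batch raw sensor readings and ship them upstream; analysis happens server-side."""
    payload = json.dumps({"device": device_id, "readings": readings}).encode()
    request = urllib.request.Request(
        INGEST_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return response.status

if __name__ == "__main__":
    sample = [{"sensor": "wheel_speed", "value": 63.2, "ts": 1546300800}]
    try:
        print("ingest responded with status", upload_readings("vehicle-001", sample))
    except urllib.error.URLError as exc:
        # Expected here, since the endpoint above is a placeholder.
        print("upload failed:", exc)
```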

6. Microservices and serverless computing take off

Virtualization is nice, but it’s resource-heavy. It requires a full instance of the operating system, and that can limit the number of VMs on a server, even one with a lot of memory. The solution is containers/microservices and, at the extreme, serverless computing. A container can be as small as 10MB, versus the several gigabytes a full virtual machine consumes, and serverless, where you run a single-function app, is smaller still.

As apps go from monolithic designs to smaller, modular pieces, containers and serverless will become more appealing, both in the cloud and on premises. Key to their success is that both technologies were created with the cloud, on-premises systems, and easy migration between the two in mind, which only adds to their appeal. A minimal example of the serverless model follows below.
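
For a sense of scale, here is what a complete serverless “app” can look like: a single function. The (event, context) signature follows AWS Lambda’s Python convention; the function body is just a stand-in for whatever one task the function owns.

```python
import json

# A complete "app" in the serverless model: one function, no server to manage.
# The (event, context) signature follows AWS Lambda's Python convention.
def lambda_handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local smoke test; in production the cloud platform invokes the handler.
if __name__ == "__main__":
    print(lambda_handler({"name": "data center"}, None))
```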

7. AWS and Google focus on hybrid cloud

Amazon Web Services (AWS) and Google entered the cloud market with no legacy and a sales pitch of a pure cloud play. Microsoft and IBM, on the other hand, had a huge legacy software installed base and pitched hybrid cloud, striking a balance between cloud and on-premises systems. It enabled Microsoft to rocket to the number two position in the cloud market very quickly and has given IBM a considerable boost, as well. And now AWS and Google are wising up. AWS has introduced a raft of new on-premises offerings, while Google is also shoring up its on-premises services and has hired ex-Oracle cloud chief Thomas Kurian, who was making a hybrid pitch at Oracle that brought him into conflict with Larry Ellison.

8. Bare metal continues to grow

Bare metal means renting the raw hardware: CPUs, memory, and storage, with no software preinstalled. After that, you provide your own software stack, all of it. So far, IBM has been the biggest proponent of bare-metal hosting, followed by Oracle, and with good reason. Bare metal is ideal for what’s called “lift and shift,” where you take your compute environment from the data center to a cloud provider unchanged. You just put the OS, apps, and data in someone else’s data center.

And since IBM and Oracle are two major enterprise software vendors, it’s only natural they would want customers to keep using their software but run it in IBM’s or Oracle’s cloud data centers rather than shift to a SaaS provider. However, AWS is getting into the bare-metal scene, as are major hosting and cloud providers such as Internap, Equinix, and Rackspace. It appeals to enterprises and SMBs alike, and for the same reason: they don’t have to host the hardware.

9. It’s a year of reckoning for Oracle

Oracle really needs to make some hard decisions this year, and quickly. Its cloud business is faltering and not keeping up with the big four (AWS, Microsoft, Google, IBM). Its licensing is still too complicated. It tried to grab a massive Defense Department project called JEDI from AWS and lost because it didn’t have the reach of AWS. It has been fairly quiet about its hardware business, and now it has lost its cloud business leader. Oracle hasn’t made the leap to the cloud as gracefully as Microsoft has, but if it’s going to do it, it has to be now.

10. Cloud providers battle for desktops

Microsoft isn’t the only vendor looking at the desktop as a means of connecting to the cloud; all of the major cloud vendors are interested in the virtual desktop market. And with Windows 7 reaching end of life in January 2020, 2019 will be a year of transition. The question is, will people simply jump to Windows 10 and cement Microsoft’s hold, or will they embrace alternatives such as Amazon WorkSpaces or Google Chromebooks?