The 3 Real Reasons for Sky-High Cloud Costs (They’re Not What You Think)

Cloud repatriation is a hot topic, with IT industry analysts predicting that more organizations will migrate workloads from public cloud platforms back onto on-premises infrastructure. Cost is the usual reason. Many organizations have seen their cloud spend spiral out of control and are looking to rein in those costs.

The problem is that few organizations understand why their cloud costs are skyrocketing, and many assume they have no control over them. In reality, there are concrete steps they can take to reduce their cloud expenses while still enjoying the benefits of a public cloud environment.

Let’s look at the three real reasons for sky-high cloud costs.

Giving developers too much freedom.

If you turn developers loose in the cloud, they’ll spend money hand over fist. They’ll provision 16 cores and 128GB of RAM for a service because it’s not their money. Cloud providers offer tools to control that spending, but they don’t promote them because they have no incentive to do so. As a result, very few organizations use them.

These tools compare the resources provisioned for a service against actual usage and let you set soft rules that send out alerts. For example, an alert might note that you've spun up 16 cores but the service has never used more than two. The developer receives the alert first; if the developer doesn't respond within a couple of days, it escalates to the manager. You can also set hard rules that automatically throttle resources that are wildly out of proportion to actual usage.
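A soft rule of this kind boils down to a simple policy check. The sketch below is illustrative, not any provider's actual API: the threshold values, class, and function names are all assumptions chosen to mirror the escalation flow described above.

```python
from dataclasses import dataclass

@dataclass
class UsageRecord:
    provisioned_cores: int
    peak_cores_used: float
    days_unacknowledged: int  # days since the developer was first alerted

# Hypothetical thresholds -- real cost tools let you tune these.
SOFT_RATIO = 0.25           # flag if peak usage is under 25% of provisioned
ESCALATE_AFTER_DAYS = 2     # "a couple of days" before the manager is looped in

def evaluate(record: UsageRecord) -> str:
    """Return the action a soft rule would take for this service."""
    ratio = record.peak_cores_used / record.provisioned_cores
    if ratio >= SOFT_RATIO:
        return "ok"
    if record.days_unacknowledged > ESCALATE_AFTER_DAYS:
        return "escalate-to-manager"
    return "alert-developer"

# 16 cores provisioned, never more than 2 used: developer is alerted first,
# then the manager after the alert sits unacknowledged.
print(evaluate(UsageRecord(16, 2.0, 0)))  # alert-developer
print(evaluate(UsageRecord(16, 2.0, 3)))  # escalate-to-manager
```

A hard rule would replace the returned strings with an automated throttling action; the decision logic stays the same.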

Not using the cloud correctly to begin with.

Most organizations use the cloud by spinning up a bunch of virtual machines (VMs), and that's the most wasteful way to consume cloud services. The cloud wasn't built to operate that way. When the cloud was conceived, the only way into it was to be cloud-native: you used cloud-specific tooling to summon resources into existence, and you didn't have to think about them. Instead of saying, "connect to database X at this IP address," you would say, "give me a database," and the platform figured out everything else.

Because so many organizations complained that their custom binaries wouldn't run in the cloud environment, public cloud providers shoehorned VM support into the cloud after they built it, and they priced it accordingly. Providers don't want you to run that way, even though that's how the vast majority of cloud workloads run today.

It's okay to run that way for a year as a transition, but if you're still running that way after a year, you weren't ready for the cloud. When you adopt cloud resources, you need to be prepared to redesign the application to be cloud-native. An application that requires 16 or more VM instances and costs tens of thousands of dollars a month could be re-implemented as a set of Lambda functions, refactored to consume cloud-native services, or packaged into Docker containers. This lets you achieve the same objectives at greater scale for far less cost.
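To make the Lambda option concrete, here is a minimal sketch of what one always-on, VM-hosted HTTP endpoint might look like as an AWS Lambda function. The event shape and greeting logic are illustrative assumptions; a real migration would wire the handler to a trigger such as API Gateway, and each endpoint of the original application would become its own small function.

```python
import json

def lambda_handler(event, context):
    """Handle one request; the platform runs and scales instances on demand,
    so nothing sits idle the way a 24/7 VM does."""
    # Hypothetical event shape modeled on an API Gateway proxy request.
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }

# Local smoke test -- in production, AWS invokes the handler for you.
print(lambda_handler({"queryStringParameters": {"name": "cloud"}}, None))
```

The cost difference comes from the billing model: the VM bills for every hour it exists, while the function bills only for the milliseconds it actually runs.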

Failing to understand the fully burdened cost of workloads.

When you purchase something from a cloud service provider, you pay the fully burdened cost of operating that object. You don't get to make decisions like: maybe I don't need the latest update or the latest patch; maybe I don't need somebody walking through the data center every day making sure every light is blinking as it should; maybe I don't need to back up this data because it's ephemeral.

In the cloud, there isn’t a way to scale back cost by accepting risk. If you’re willing to assume risk, you can generally run a workload cheaper on-premises. If not, you’re better off running in the cloud because you’re going to pay that fully burdened cost one way or another.
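The arithmetic behind that trade-off is simple to sketch. Every number below is a made-up, illustrative assumption, not real pricing; the point is only the structure: the cloud price bundles every operational line item, while on-premises you can decline some of them and accept the risk instead.

```python
# Illustrative figures only -- all dollar amounts are invented for the example.
raw_compute = 1_000  # monthly hardware/VM capacity cost
burdened_extras = {
    "patching_and_updates": 150,
    "round_the_clock_monitoring": 200,
    "backups": 120,
    "facilities_and_power": 180,
}

# The cloud charges the whole bundle, like it or not.
fully_burdened = raw_compute + sum(burdened_extras.values())
print(fully_burdened)  # 1650

# On-premises, you can skip line items and carry the risk yourself.
accepted_risk = {"backups", "round_the_clock_monitoring"}
on_prem = raw_compute + sum(
    cost for item, cost in burdened_extras.items() if item not in accepted_risk
)
print(on_prem)  # 1330
```

If you would buy every line item anyway, the two columns converge, which is exactly when the cloud's fully burdened price stops being a penalty.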
