Why Hardware Matters in the Cloud

Like it or not, every cloud runs on hardware and operating systems, and your application may perform better on one particular cloud or instance type than on others. The cloud runs on physical devices that were optimized in different ways to achieve different objectives. Those differences are usually invisible from a user's perspective, but their impact on performance and cost is real.

Operating Systems

On Azure, you’re most likely running on a Windows-based host. On AWS, you’re most likely running on a Linux-based host. For most applications, that’s not going to make a difference. However, if you are running a Linux-based virtual machine on a Windows-based host, you incur roughly 3 percent additional overhead because of endianness.

In computing, endianness describes which end of a multi-byte number is stored first. Windows was built primarily for accounting purposes, so it favors a little-endian view: the number starts with its least significant byte and grows as you read toward the most significant one. Linux was designed with scientific computing in mind, so it favors a big-endian view; it is less concerned with pinpoint accuracy than with computing very large numbers quickly. Because the two view numbers in different orders, you incur extra translation overhead when you virtualize one on top of the other.
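
To make the byte-order idea concrete, here is a minimal Python sketch that packs the same 32-bit number both ways. It doesn’t measure any virtualization overhead; it just shows that the two conventions store the bytes in opposite orders.

```python
import struct
import sys

value = 0x0A0B0C0D  # one 32-bit integer, stored two different ways

little = struct.pack("<I", value)  # least significant byte first
big = struct.pack(">I", value)     # most significant byte first

print("this machine's byte order:", sys.byteorder)
print("little-endian bytes:", little.hex(" "))  # 0d 0c 0b 0a
print("big-endian bytes:   ", big.hex(" "))     # 0a 0b 0c 0d
```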

This is a big mistake that many people make without even realizing it. If you take a Windows platform and virtualize it on top of Linux, you pay a penalty. If you take a Linux platform and virtualize it on top of Windows, you also pay a penalty.

On AWS, your instances are generally hosted on Linux, but on Azure it gets muddier: you can buy Windows or Linux instances, and either may be hosted on Windows or Linux. GCP is even harder to pin down because Google primarily runs its own hosting stack.

Processors

Certain instances have Intel-based processors, while others might be ARM- or AMD-based. That can have a significant impact on your application’s performance depending on what it’s doing.

Each of these processors speaks a different instruction set. The ARM processor has the most reduced instruction set at the hardware level, which lets it run simple workloads faster. However, when you start doing things like encrypt and decrypt operations, it’s at a major disadvantage: it has to take additional steps compared to the AMD or Intel platforms, because encryption and decryption can’t be done in a single instruction on the ARM.
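
If you want to check whether the hardware underneath your instance exposes AES instructions at all, one quick (Linux-only) way is to read the CPU flags. This is a best-effort sketch; on x86 the relevant /proc/cpuinfo flag is "aes" (AES-NI), and on 64-bit ARM it shows up under "Features" as "aes".

```python
from pathlib import Path

def hardware_aes_available(cpuinfo_path="/proc/cpuinfo"):
    """Best-effort check for hardware AES support on a Linux guest.

    On x86 the flag is listed under 'flags' as 'aes' (AES-NI); on 64-bit
    ARM it appears under 'Features' as 'aes'. Returns False if the file
    is missing (for example, on Windows or macOS).
    """
    try:
        text = Path(cpuinfo_path).read_text()
    except OSError:
        return False
    for line in text.splitlines():
        if line.lower().startswith(("flags", "features")):
            if "aes" in line.lower().split():
                return True
    return False

if __name__ == "__main__":
    print("hardware AES instructions:", hardware_aes_available())
```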

There is also a distinct difference between the AMD and Intel platforms, and that generally comes down to mathematics. The Intel platform has an instruction set extension called AES-NI that goes beyond the base x86/x64 instruction set. It exists explicitly to accelerate encrypt/decrypt operations in hardware, including the encryption behind secure network communications. Intel also has more capacity for handing off floating-point operations. If you’re doing things like statistical analysis, the Intel processor series will usually execute those instructions in fewer hardware steps than the other processor types. Your workload could take up significantly less processor time and thus require a smaller instance than if you chose an instance type that wasn’t aligned with what your application is trying to do.
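
The practical way to see this difference is to time the same workload on different instance types. Here is a rough sketch using the third-party cryptography package, which calls into OpenSSL and therefore picks up AES-NI or the ARM crypto extensions automatically when the CPU exposes them; the number it prints is only meaningful when compared across instance types.

```python
import os
import time

# pip install cryptography
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_throughput_mb_per_s(total_mb=256, chunk_mb=4):
    """Rough AES-256-CTR encryption throughput in MB/s on this machine."""
    key = os.urandom(32)
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    chunk = os.urandom(chunk_mb * 1024 * 1024)

    start = time.perf_counter()
    for _ in range(total_mb // chunk_mb):
        encryptor.update(chunk)  # OpenSSL uses hardware AES when available
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

if __name__ == "__main__":
    print(f"AES-256-CTR: ~{aes_throughput_mb_per_s():.0f} MB/s")
```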

If you have an application that’s constantly running encrypt/decrypt operations, you’ll lose performance trying to run it on an ARM. If you have an application that is doing floating-point math, you’ll lose performance on an AMD and an ARM. The Intel processor is the Ferrari, but it comes with the Ferrari price tag.

If you specify an Intel processor when choosing a cloud instance, it will be more expensive. However, if your application performs those kinds of workloads intensively, your overall cost will be less because you’ll need fewer instances. If you’re just running a Python script that doesn’t encrypt anything, you can use the cheapest processor available.
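
When it’s time to pick an instance family, the cloud APIs let you filter by processor architecture. As one illustration (not a recommendation of any specific family), here is a boto3 sketch that lists the ARM-based instance types in a region, assuming AWS credentials and permissions are already configured; swap the filter value to x86_64 to see the Intel and AMD families instead.

```python
import boto3  # assumes AWS credentials and permissions are already configured

def arm_instance_types(region="us-east-1"):
    """List EC2 instance types that run on 64-bit ARM processors."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instance_types")
    pages = paginator.paginate(
        Filters=[{
            "Name": "processor-info.supported-architecture",
            "Values": ["arm64"],  # use "x86_64" for Intel/AMD families
        }]
    )
    names = []
    for page in pages:
        names.extend(item["InstanceType"] for item in page["InstanceTypes"])
    return sorted(names)

if __name__ == "__main__":
    for name in arm_instance_types():
        print(name)
```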

How DeSeMa Can Help

This “last mile” optimization won’t impact most developers. If you’re running an application on one or two instances, you’re probably not going to care. But if you’re running, say, Kubernetes clusters across millions of instances and could save 10 percent, that adds up quickly. If you’re Visa or Bank of America, with millions of servers processing hundreds of millions of transactions, 3 percent matters a lot.

This is where in-depth knowledge of cloud environments comes into play. The DeSeMa team can look at your application’s language, dependencies and preferred operating system and determine whether it would run more efficiently on Windows or Linux. We can then identify which cloud and instance type is most cost-effective for your application.

Get Started Today!