These 3 avoidable costs should be easy home runs for anyone in public cloud today!
We frequently see customers overspending on these 3 items:
Failure to size instances properly
Failure to tier storage appropriately
Failure to plan capacity
Let’s dive in.
Failure to Size Instances Properly
It’s painful to see this one happen over and over again. It typically comes from a hasty lift-and-shift migration that rebuilds what already exists in the datacenter 1:1. And worst of all, it’s EX-PEN-SIVE.
Public cloud instance sizes should not (under most circumstances) be identical to self-hosted VMs. Why? Tools like image templates and auto-scaling make large VMs running 24/7 a thing of the past. Taking advantage of the elasticity of public cloud should change how organizations consume and scale compute.
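As a concrete illustration, here’s a minimal sketch of the kind of elasticity that replaces an oversized always-on fleet, assuming AWS with boto3 and a hypothetical Auto Scaling group named web-asg (the 50% CPU target is just an example, not a recommendation):

```python
import boto3

# Assumes an existing Auto Scaling group named "web-asg" (hypothetical).
autoscaling = boto3.client("autoscaling")

# Target tracking: the group adds and removes instances automatically
# to hold average CPU near the target, instead of running a fixed
# fleet of large instances 24/7.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```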
How can you size instances properly?
My simplest advice is to use the tools available to you to assess your utilization and get recommended sizing! Do not settle for a self-made spreadsheet (unless you’re … and you have it mapped to pricing APIs on the backend…). From native migration tools in AWS, Azure, and GCP to third-party assessment tools like Cloudamize, organizations have a wide variety of products and capabilities to help inform sizing recommendations. Please do not settle for “close enough” if you value your wallet. Doing the due diligence here will pay off massively.
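If you want a quick first-pass signal before reaching for those tools, here’s a minimal sketch (assuming AWS, boto3, and a hypothetical instance ID) that pulls two weeks of CPU utilization from CloudWatch and flags obvious downsizing candidates. The thresholds are illustrative only:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def rightsizing_signal(instance_id: str) -> str:
    """Rough first-pass check: average/peak CPU over the last 14 days."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=14),
        EndTime=end,
        Period=3600,  # hourly datapoints
        Statistics=["Average", "Maximum"],
    )
    points = stats["Datapoints"]
    if not points:
        return "no data"
    avg = sum(p["Average"] for p in points) / len(points)
    peak = max(p["Maximum"] for p in points)
    # Illustrative thresholds only; tune for your workloads.
    if avg < 10 and peak < 40:
        return f"downsize candidate (avg {avg:.1f}%, peak {peak:.1f}%)"
    return f"looks reasonably sized (avg {avg:.1f}%, peak {peak:.1f}%)"

print(rightsizing_signal("i-0123456789abcdef0"))  # hypothetical instance ID
```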
Failure to Tier Storage Appropriately
This one can be shockingly expensive at large volume. Sure, object storage is supposed to be cheap, but it’s an absolute head-scratcher when a simple storage tier change could save 50-75% and it never gets made.
Why does it happen? Because organizations don’t know how frequently their data is accessed, or how long it needs to reside in one place. Throwing a pile of data into a standard storage tier eventually turns into a multi-year commitment, and that commitment costs 2-4x as much as tiering it properly.
Making use of infrequent access or archive storage tiers is a massive money saver for large volumes of data. I suspect the minimum storage durations (often 90-180 days) and the slow retrieval times scare some organizations away, but the analysis to determine what’s appropriate is worth it.
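To make that analysis concrete, here’s a back-of-the-envelope sketch. The prices are illustrative placeholders roughly in line with published S3 list prices, not quotes; plug in your region’s actual rates:

```python
# Back-of-the-envelope tiering math. Prices are illustrative
# placeholders (roughly S3 us-east-1 list prices); use your own.
STANDARD_PER_GB_MONTH = 0.023
IA_PER_GB_MONTH = 0.0125
IA_RETRIEVAL_PER_GB = 0.01

def monthly_saving(gb_stored: float, gb_retrieved_per_month: float) -> float:
    """Saving from moving a dataset from Standard to Infrequent Access."""
    standard_cost = gb_stored * STANDARD_PER_GB_MONTH
    ia_cost = (gb_stored * IA_PER_GB_MONTH
               + gb_retrieved_per_month * IA_RETRIEVAL_PER_GB)
    return standard_cost - ia_cost

# 500 TB of data with ~1% of it read back each month:
print(f"${monthly_saving(500_000, 5_000):,.0f}/month")  # ~$5,200/month saved
```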
Making use of something as simple as a lifecycle policy to shift that data periodically is a good step. AWS even offers S3 Intelligent-Tiering, a storage class that moves objects between tiers on your behalf (for a small monitoring fee).
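For the do-it-yourself lifecycle route, here’s a minimal sketch assuming AWS with boto3; the bucket name, prefix, and transition windows are all hypothetical examples:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix; transition windows are examples only.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-down-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```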
Even better is identifying those requirements beforehand and migrating directly into the appropriate storage tier. Check them out!
Failure to Plan Capacity
Alas, this one is perhaps the hardest to do but it’s also the most worth it.
If customers plan to keep using the platform over a multi-year span, they shouldn’t rely solely on pay-as-you-go (PAYG) pricing for compute when they move to it. We prefer they have a long-term plan that might include PAYG but should also make use of savings plans.
Why do customers do this? The main reason we see is this: They are unsure what they’ll actually need down the road and don’t want to overcommit by paying upfront.
This is understandable because needs do change, but the vast majority of the time it’s still better to lock in the reduced rate. How many cloud adopters get onto the platform and back off it within three years? If that’s a real possibility, is public cloud the best option in the first place?
It comes down to planning. If the sizing analysis (see #1) is performed correctly, you know the baseline utilization requirement and a reasonable growth rate, and at that point paying for a longer-term commitment is a no-brainer.
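As a worked example, here’s the arithmetic on a baseline fleet under PAYG versus a committed rate. The rates and the ~35% discount are entirely hypothetical; real savings-plan discounts vary by service, term, and payment option:

```python
# Hypothetical rates; real discounts vary by service, term, and region.
ON_DEMAND_PER_HOUR = 0.40   # PAYG rate for one instance
COMMITTED_PER_HOUR = 0.26   # ~35% discount under a 3-year commitment
HOURS_PER_YEAR = 8760

def three_year_cost(baseline_instances: int, hourly_rate: float) -> float:
    """Total 3-year cost for a steady baseline fleet."""
    return baseline_instances * hourly_rate * HOURS_PER_YEAR * 3

payg = three_year_cost(20, ON_DEMAND_PER_HOUR)
committed = three_year_cost(20, COMMITTED_PER_HOUR)
print(f"PAYG:      ${payg:,.0f}")              # $210,240
print(f"Committed: ${committed:,.0f}")         # $136,656
print(f"Saved:     ${payg - committed:,.0f}")  # $73,584
```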
Additionally, there are tools designed for this problem. Tools like ECO make committing to those reservations a lot less intimidating, because reservations can be sold back at a later date if needs do change.
One thing that can make this a harder decision is modernization initiatives. Modernizing can drastically change what’s required day-to-day, so in that case it’s better to assess usage patterns after the fact. However, if it can be determined that X amount of compute will still be required over Y period of time, leveraging a savings plan in the meantime could still be advantageous.
Planning capacity is a surefire way to save money in public cloud—don’t skip it.
Conclusion
Ultimately, each of these comes down to planning, and lots of it. It’s not always interesting work, and you’ll need buy-in for it, but the savings are well worth it. Making use of existing tools, defining requirements, and leveraging outside expertise can help you dodge these 3 avoidable expenses in public cloud.