
A common selling point for moving a company's data estate to the public cloud is that it eliminates or minimizes the capital expenditures (capex) that would otherwise be commonplace in supporting a private cloud solution.

Eliminating capex, however, doesn’t mean that we no longer have to think about how much we’re paying for our data estate. Not having to dole out a big capital investment every few years to refresh (on-premises) hardware can, indeed, be a boon to many organizations. And a pay-as-you-go, consumption-based billing model can definitely improve the financial bottom line for some business cases.

But a victory over capex doesn’t mean that the war is over. The war to minimize costs and maximize value is never over. Organizations that have made a strategic investment in a public cloud platform must shift their focus to minimizing those (new) operational expenditures.

Any public cloud platform worth its salt is going to have tools to assist with minimizing operational expenditures (opex). And certainly, that’s where organizations should start—by targeting the low-hanging fruit. Do you really still need that proof-of-concept architecture from 2020? If not, delete it. Do those dev/test servers really need to be available 24/7? These are simple examples of opex optimizations that should be considered standard operating procedure. But no automated tool is going to be able to identify fundamental design defects in your data estate. And that’s where the biggest return on your investment could be hiding.
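To make the dev/test example concrete, here is a minimal sketch of the kind of scheduled cleanup that platform tooling makes easy, assuming an AWS environment where non-production EC2 instances carry a hypothetical Environment tag with values like "dev" or "test". The tag name and the scheduling mechanism are assumptions for illustration, not a prescription.

```python
"""Minimal sketch: stop non-production EC2 instances outside business hours.

Assumes a hypothetical Environment tag on dev/test instances. Intended to be
run on a schedule (e.g., nightly); pagination and error handling are omitted
for brevity.
"""
import boto3

ec2 = boto3.client("ec2")


def stop_non_production_instances():
    # Find running instances tagged as dev or test (tag name is an assumption).
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev", "test"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        # Stopped instances stop accruing compute charges (storage still bills).
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids


if __name__ == "__main__":
    stopped = stop_non_production_instances()
    print(f"Stopped {len(stopped)} non-production instance(s)")
```

A companion job can start the same instances before the workday begins. The point is that schedule-based savings like these are well within reach of standard platform tooling; the harder, higher-value problems are the ones that tooling can’t see.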

Public cloud computing exposes a wide variety of tools and possibilities for a broad range of use cases. Matching the right tool with the right problem is more important than ever, because a mismatch can inflate your resource utilization, which will directly affect your operational expenditures. For example, using a tool designed to process unstructured data (video, audio, images, etc.) for your structured or semi-structured data might simplify your solution space, but it’s going to create a lot of inefficiencies both in the processing itself and in the satellite processes that would be necessary to support that architecture. Similarly, trying to process image data using a tool designed to store and process structured data would be extremely inefficient.

That being said, at very small scales, there’s a case to be made that minimizing flavors (that is, minimizing the number of tools in play) can be a worthy tradeoff against computational efficiency. Moreover, a capital expenditure model can sometimes provide safe harbor for inefficient data management practices. For example, if your organization only runs Monday through Friday, then is it really a problem that your data models take 24 hours to process every weekend? Maybe not. Maybe that’s okay, and fixing it might not yield much of a return. On the surface, at least, the cost of that inefficiency is some electricity in your data center, the dollar value of which could be extremely difficult to quantify against lower CPU/memory consumption over the same timeframe. But with a pay-as-you-go model, those compute resources directly and significantly affect your operational spend. Furthermore, if we’re talking about a public cloud solution, then we’re most likely talking about a scale where these sorts of inefficiencies are not trivial, and their consequences will have a material effect on your bottom line.
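To put rough numbers on it: suppose, purely hypothetically, that the weekend job occupies compute billed at $8 per hour under a pay-as-you-go model. None of these figures come from a real bill; they only show how quickly "it just runs on weekends" adds up once the meter is visible.

```python
# Hypothetical figures only; substitute your own rates and runtimes.
hourly_rate = 8.00        # assumed cost of the compute the job occupies ($/hour)
job_hours = 24            # the weekend job's current runtime
weekends_per_year = 52

annual_cost = hourly_rate * job_hours * weekends_per_year
print(f"Annual cost of the weekend job: ${annual_cost:,.2f}")   # $9,984.00

# If tuning the underlying data model cut the runtime to 6 hours:
tuned_cost = hourly_rate * 6 * weekends_per_year
print(f"Annual cost after tuning:       ${tuned_cost:,.2f}")    # $2,496.00
```

On-premises, that same inefficiency was buried in the electricity bill; under consumption-based billing it becomes a line item you can see, and shrink.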

When we work with clients to optimize their data estates, we start by helping them identify high-value opportunities; then we break these down into specific, actionable recommendations. For example, we often perform Well-Architected Cloud Reviews (based on the AWS Well-Architected Framework) to find cost-saving opportunities, maximize the business value of data assets, and ensure that data estates adhere to security and reliability best practices.

If you’re in the planning phase of a public cloud implementation—or maybe you’re staring down the barrel of another big capex investment because you think you’re outgrowing your existing resources—the same advice applies: Make sure you’ve got your data estate in order. Use the right data management strategies for the right problems. You don’t want to discover that you have fundamental defects in how you’re managing your data after you’ve completed a costly migration project and are now being taxed for your technical debt every month.

Similarly, what if you don’t need to pull the trigger on those big new servers? What if your existing infrastructure could actually support your needs for another five years? Your existing capex investment could start to look a lot better if that were true, and it might be true if your data estate isn’t currently following industry best practices: that gap is optimization headroom waiting to be reclaimed. Start optimizing for opex—that is, start optimizing your compute—before you start getting billed for it. The results could have a dramatic effect on your strategy.

For organizations that have already made a strategic public cloud investment and found themselves with monthly sticker shock as the consumption-based bills roll in, addressing foundational data management problems becomes urgent. The public cloud doesn’t ameliorate those problems; it accentuates them. Now you’re getting taxed every time those inefficient data management practices are invoked. In your private cloud, every time that one-hour report runs, it’s inconvenient. Time is money, but you can let it run over lunch. In the public cloud, though, the meter is running for that hour. “We can just spin up a bunch of resources to make it finish in one minute though, right?” Perhaps we can, but that just drives up our consumption even faster, and those compute resources aren’t free.
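A rough, hypothetical calculation shows why scaling out rarely rescues an inefficient workload: if the work parallelized perfectly, sixty times the resources for one-sixtieth of the time would cost about the same, and real workloads never parallelize perfectly. The rates below are made up; the shape of the math is the point.

```python
# Hypothetical rates; the point is the shape of the math, not the exact dollars.
single_node_rate = 4.00      # $/hour for the original configuration
report_hours = 1.0           # current runtime of the report

baseline_cost = single_node_rate * report_hours                    # $4.00 per run

# "Just spin up a bunch of resources": 60 nodes to finish in ~1 minute,
# assuming perfect parallelism and zero startup overhead.
nodes = 60
scaled_cost = nodes * single_node_rate * (report_hours / nodes)    # still $4.00

# With a modest 25% parallelization/startup overhead, the scaled run costs more.
realistic_scaled_cost = scaled_cost * 1.25                         # $5.00

print(baseline_cost, scaled_cost, realistic_scaled_cost)
```

The report finishes faster, which has real value, but the spend doesn’t go down; only fixing the underlying data management problem does that.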

The rule of thumb has always been that you can’t solve software problems with hardware solutions. The public cloud doesn’t make overpowering design flaws with compute resources any wiser—but it’ll certainly let you try, for a price.



Author

Chris Klingeisen is a lead developer in the Logic20/20 Advanced Analytics practice.