Public vs. Private Cloud: Price isn’t enough

Last October Simon Wardley and I stood on a rainy sidewalk at 28th St. in NYC arguing politely (he’s British) about the future of cloud adoption. He argued, rightly, that the cost advantages from scale would be overwhelming compared to home-brew private clouds. He went on to argue, less certainly in my view, that this would lead inevitably to their wholesale and deep adoption across the enterprise market. 
 
I think Simon bases his argument on something like the Rational Economic Man theory of the enterprise. Or, more specifically, the Rational Economic CFO. If the costs of a service provider are destined to be lower than the costs of internally-operated alternatives, and your CFO is rational (most tend to be), then the conclusion is foregone.
 
And of course, costs are going down just as predicted. Look at Avi Deitcher's post, Does Amazon's Web Services Pricing Follow Moore's Law? I think the question posed in the title has a fairly obvious answer: no. Services aren't just silicon; they include all manner of linear cost terms, like labor, so price decreases will almost certainly be slower than Moore's Law. But his analysis of the costs of a modestly sized AWS deployment versus an in-house alternative is really useful. 
 

Not only is AWS' price dropping fast (56% in three years), but it's significantly cheaper than building and operating a platform in house. Avi does the math for 600 instances over three years and finds that the cost for AWS would be $1.1M (I don't think this number considers out-year price decreases) vs. $2.3M for DIY. Your mileage may vary, but these numbers are a nice starting point for further discussion.
 
These results raise an interesting question: if the numbers are so compelling, why did Walmart just reveal that it is building a ginormous private cloud? Why would anyone?
 
Let's look at some numbers. There is a large east coast bank that has about 120,000 server images. To simplify things, let's assume those aren't just virtual images but map to hardware nearly one-to-one. 
 
I'm almost certain that BigBank's costs aren't as high per unit as 600 Corp's. At over 100,000 instances they'll experience at least some of the scaling benefits that AWS itself has. Rather than do a careful estimate, let's just assume that their efficiency is halfway between AWS and 600 Corp. If that were the case, their cost to buy, operate, and manage those servers might look like: ($1.1M + $2.3M)/2 × 120,000/600 ≈ $340M over three years. I've seen it reported that they have 4,000 developers and a total annual IT budget of $750M, so these numbers seem at least reasonable.
 
So, if Rational CFO makes BigBank switch everything to AWS, and ignoring switching costs for now, what will they save?
 
$340M − ($1.1M × 120,000/600) ≈ $340M − $220M = $120M in savings over three years, or about $40M per year. Again, this leaves out transition costs, and it also ignores out-year AWS price decreases, but it's probably in the ballpark.
 
$40M annually is a lot of money and no CFO would ignore it, rational or not. But total revenues at this company are $34B, with profits of $10B/year. So the incremental benefit of moving all compute to AWS is only 0.4% of profit. Again, that's not nothing, but it's not a top-line driver of business results.
 
Interestingly, even if AWS were to continue improving its cost advantage through a combination of increasing scale and Moore's Law, all the way to free, a move would still only improve this company's bottom line by about 1.1% annually ($340M/3 ≈ $113M per year against $10B in profit). 
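The back-of-envelope arithmetic above can be sketched in a few lines of Python. All figures are the post's own estimates, and the linear scaling from the 600-instance example is the post's own simplification:

```python
# Three-year cost figures from Avi Deitcher's 600-instance analysis.
AWS_600 = 1.1e6    # AWS cost for 600 instances over 3 years
DIY_600 = 2.3e6    # in-house (DIY) cost for 600 instances over 3 years
SERVERS = 120_000  # BigBank's estimated server count
PROFIT = 10e9      # BigBank's annual profit

scale = SERVERS / 600

# Assume BigBank's efficiency is halfway between AWS and 600 Corp.
bigbank_diy = (AWS_600 + DIY_600) / 2 * scale  # ≈ $340M over 3 years
bigbank_aws = AWS_600 * scale                  # ≈ $220M over 3 years

savings_annual = (bigbank_diy - bigbank_aws) / 3  # ≈ $40M/year

print(f"DIY 3-yr cost:  ${bigbank_diy / 1e6:.0f}M")
print(f"AWS 3-yr cost:  ${bigbank_aws / 1e6:.0f}M")
print(f"Annual savings: ${savings_annual / 1e6:.0f}M "
      f"({savings_annual / PROFIT:.1%} of profit)")

# Even if AWS were free, the savings cap out at the full DIY cost.
free_annual = bigbank_diy / 3
print(f"If AWS were free: ${free_annual / 1e6:.0f}M/yr "
      f"({free_annual / PROFIT:.1%} of profit)")
```

Note that the free-AWS ceiling works out to roughly 1.1% of profit when recomputed this way; the exact number matters less than how small it stays.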
 
I think this is the point that some public cloud proponents miss. We are talking about decisions that at least feel high risk, and they don't seem to produce the material levels of ROI necessary to justify giving up control.
 
This is not unlike the choices consumers make every day when they buy a car and choose the convenience of an SUV over the fuel economy of alternatives. For many people, the incremental fuel cost just isn’t that big of a deal in the context of their total household budget. If they do choose not to go with an SUV it’s often because of other concerns.
 
I think private cloud will be around, at least in very large enterprises, for a long time and for similar reasons. The control the CIO (and general counsel) seeks will trump the narrower interests of Rational Economic CFO. And I don't see lots of CIOs taking huge risks and kicking off expensive five-year transition plans to improve profitability by 0.4%.
 
Two more thoughts before I wrap this up.
 
It's possible that the meta-trend of corporate digitization (meaning, IT as a front-end business enabler rather than just a back-end record keeper) will make IT costs a more material component of a lot of businesses. This might change the character of an analysis like this for some businesses. However, I used a bank as my example, and banks are already using IT aggressively at the front end of the business and have high IT costs relative to revenues. These guys have the most to gain by switching and, so far, aren't. In fact, so far they are the kinds of companies most interested in projects like Open Compute, because they see their future in their own data centers.
 
On the other hand, we might argue that companies that use IT less aggressively would be more likely to take advantage of public cloud. The argument here would be that, since they run fewer servers, their internal costs are higher on a per-server basis (they are less efficient because of lower scale), so their apparent savings per server would be higher. This is probably true, but their savings *as a percentage of profit* would also be even lower and less material. Better savings on a per-unit basis, but perhaps even lower down the CIO's and CFO's priority list on a magnitude-of-impact basis.
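That tension can be illustrated with a toy comparison. The company names, server counts, and DIY unit costs below are invented for illustration; only the AWS unit cost and the BigBank row are derived from the post's figures:

```python
# Illustrative only: hypothetical companies showing how a smaller IT shop can
# save more per server from public cloud, yet less as a share of profit.

AWS_PER_SERVER = 1.1e6 / 600  # ≈ $1,833 per server over 3 years (post's figure)

companies = [
    # (name, servers, annual profit, assumed DIY cost/server over 3 years)
    ("BigBank", 120_000, 10e9, 340e6 / 120_000),  # halfway efficiency, per the post
    ("MidCo",     5_000,  1e9, 2.3e6 / 600),      # assumed: 600 Corp unit costs
    ("SmallCo",     600, 0.2e9, 2.3e6 / 600),     # the post's 600-server example
]

for name, n, profit, diy in companies:
    saving_per_server = diy - AWS_PER_SERVER       # 3-year savings per server
    annual_saving = saving_per_server * n / 3
    print(f"{name:8s} saves ${saving_per_server:,.0f}/server, "
          f"{annual_saving / profit:.2%} of profit per year")
```

Under these assumptions the smaller shops save roughly twice as much per server as BigBank but a smaller fraction of profit, which is the priority-list point above.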
 
Another common argument toward public cloud is "well, of course the legacy stuff is stuck, but the new stuff will go to the cloud." This may be true, and there are obvious examples of it happening, but I don't think it's any more of an ironclad argument than its more general cousin.
 
Moving some workloads to the cloud while maintaining core systems in house adds complexity and almost as much perceived risk as moving everything, but for much lower apparent savings (how much would that bank I mentioned save by putting 100 machines on AWS?). This will certainly happen, especially for discrete workloads with time-varying demand, but I'm not convinced that moving all new workloads to AWS is anyone's low-energy state.
 
I’ll caveat all of this with a “who knows?” and a shrug. However, if you’re confused as to why enterprises are taking so long to adopt public cloud, it might not be because they are stupid, it might just be that the risk relative to the savings isn’t enough to drive the behavior you were expecting (or depending on).
 
Simon, what did I get wrong?

Comments

  1. David Mytton - March 6, 2015 @ 12:37 pm

    If you’re comparing pure cloud compute (e.g. EC2, Google Compute Engine) costs then it’s almost always cheaper to run your own environment in a colo facility. You can get very low rates on commodity hardware either prepaid or leased. Indeed, this is often the way companies scale, ending up with running their own kit.

    The thing with WalMart is that’s not where it ends. They’re building a lot of custom systems on top of those components, for example their own DBaaS. This is where the scale of AWS or Google comes in – they have the resources to dedicate to building and optimising software projects that are created on top of their already heavily optimised infrastructure, and it’s these costs that are much harder to compare.

    It includes things like hiring engineering talent, building a new system (or using components from and customising OpenStack), ongoing maintenance, updates, improvements, security, etc. This is where the costs add up. It’s the whole SaaS argument – why build something internally that isn’t your core business and could be outsourced to someone else who will do a much better job?

  2. Dmourati - March 7, 2015 @ 1:47 pm

    I like the SUV analogy and I appreciate the overall analysis.

    I agree price is not the only dynamic to consider. Two additional factors:

    1. Elasticity

    Think seasonality of demand, scaling up and down, rightsizing and so on. This impacts price but also impacts monitoring, alerting, and repairing because of the law of large numbers.

    2. Time to market

    Data centers right in your backyard take time to build. Data centers across the globe take much longer. One concept that has stuck with me from Chris Anderson’s The Long Tail: “it is easier to move bits than atoms.”

  3. swardley - March 7, 2015 @ 2:41 pm

    Hi Jim,

    Not quite how I remember the conversation. Factors involved in adoption – efficiency in provision, increase in demand (price elasticity effects, long tail of unmet demand, evolution of higher order systems), rate of innovation of new services (ecosystem effects), ability to take advantage of new sources of wealth (development speed, reduced cost of failure), inertia (16 different forms) & competitor actions. When talking about the shift of infrastructure from product to utility then all these factors come into play.

    Efficiency of Amazon’s provision. Do remember that since IT is price elastic and Amazon has a likely constraint e.g. acquiring land and building data centres then Amazon will certainly have to manage its pricing carefully i.e. if it dropped pricing too quickly then demand could exceed supply. So, you need to consider future pricing as well. In all likelihood AWS EC2 is operating at 60%+ margin but this will reduce over time.

    Increase in demand. One of most amusing cloud ‘tales’ is the one that it’ll save money. Infrastructure is a million times cheaper today than 30 years ago but has my budget dropped a million fold in that time? No. We don’t tend to save money, we tend towards doing more stuff. This is Jevons Paradox. What we need to be mindful of is that our competitors will do more stuff. Which is why you need to be careful about future pricing. If your IT budget is 2% of total budget and your competitor has a 10x advantage then you might shrug it off as a small part of the budget. But, your competitor is likely to end up doing more stuff and suddenly (just to keep up) you’ll find you’re spending a lot more than 2%.

    Rate of Innovation of new service. There are numerous ecosystem games to play in a utility world (such as ILC) which enables a provider to simultaneously be innovative, efficient and customer focused. This seems to be happening with Amazon as all three of those metrics appear to be accelerating. This provides direct benefit for the users of that environment in terms of new service release.

    Ability to take advantage of new source of wealth. Key here is speed and reducing the cost of failure both of which a utility provider offering volume operations of good enough but standard commodity components provides.

    Inertia. We all have inertia to change (loss of previous investment, changes in governance / practice, loss of social capital, loss of political capital etc – 16 different forms in total). There will be a counter to any change

    HOWEVER …

    The competitive pressure to adapt are often not linear but have exponential effects. If an adaptation gives competitors greater efficiency, increasing access to new services, increases their ability to take advantage of new sources of wealth then as more of your competitors adapt then the pressure on you mounts. This creates the Red Queen Effect (prof van. valen).

    As a result these forms of change are not linear but exponential. It can take 10 years for such a technology change to reach 3% of the market and a further 5 years to hit 50%. Because of inertia to change and due to its non linear nature many companies (especially competing vendors) get caught out. However, in all such markets there are usually small niches that remain.

    There is also no reason why commoditisation has to lead to centralisation. Many of the forces can be countered. Unfortunately due to the incredibly sucky play of often past executives within competitors then in this case centralisation (to AWS, MSFT and Google a distant third) seems very likely. Some of those past executives were warned in 2008 about how to fragment the market by creating a price war with AWS clones forcing demand beyond Amazon’s ability to supply due to the data centre constraint. It’s shocking that they were so blinded that they’ve got large companies into this state.

    So,

1) Will infrastructure centralise to those players of AWS, MSFT and GooG? Yes, plus clones of those environments. Competitors have shown pretty poor strategic play in the past and this is now the most likely outcome.

    2) Will everything go to public infrastructure clouds? No. There will be niches. There is also inertia to the change but the pressure will mount (Red Queen) as competitors adopt cloud. The change usually catches people out due to its exponential nature.

    3) Is it just price? No. Multiple factors involved. Price is one of those factors.

    4) Why is Walmart building a modest sized private cloud? Probably because it’s concerned over Amazon’s encroachment into its own retail industry. In all likelihood they will end up adopting Azure or GooG over time.

  4. PerlDean - March 9, 2015 @ 1:52 am

The public vs private debate is more or less the same as commercial vs open source.

    Why would anyone buy an Oracle database when PostgreSQL (and others) are so excellent and free?

    The answers are many. But they do.

  5. Jim Stogdill - March 10, 2015 @ 4:04 pm

    Thanks for the comments everyone. Simon, you are right that I didn’t consider all of the reasons why public cloud will be important, I only looked at price. I meant this is as a simple analysis that says “no matter how cheap cloud computing gets, even if it’s free, that won’t on its own be a compelling argument for everyone in the face of real or perceived loss of control.” The point of my back of the envelope calculation was just to show that the numbers aren’t so ridiculously compelling that control will end up being a less important factor.

    I agree that elasticity is important – and in cases where the compute required isn’t connected to customer records etc. it will be easier to take advantage of.

    I reached out to a friend of mine (who will remain anonymous) with the question “Are you guys considering moving anything to public cloud?” Here is his partial response:

    “Public cloud has our security people in a frenzy. We are so risk averse to putting client sensitive data in the cloud. It has been such a battle to even have the conversation. I think we are slowly warming up to the idea that we can do this and that the industry is moving this way. It really comes down to what kind of data. The moment it touches client account information, it becomes off limits. Segmentation data about our market, firm data, etc, those tend to be okay but is it worth it is the question. Worth isn’t just the cost of moving it from the data center to the cloud but the dev to unwind it or build new services. In general, I think our data center costs after a zillion years of doing this are very low.”

  6. pwb - March 10, 2015 @ 4:15 pm

    Sweet spot? https://www.brkt.com

  7. Keith Shinn - March 12, 2015 @ 1:32 pm

One other point: typically there are additional costs beyond the AWS charges to running an app. Those costs are typically included in recovery costs for an enterprise but not considered in the comparison. In general, I think it depends. If enterprises continue to operate as they have in the past, the answer would be simple: Simon is right. But if the enterprise learns from AWS and uses an open source model to develop shared tools, then it may still be a race.

    Either way, nice thoughtful write up!
