Too big to use utility computing?

Dear users of S3, EC2, and other ‘utility’ computing services:

Here’s a crude and completely oversimplified sketch of the evolving infrastructure needs of a growing website, with one assumption:

[Figure: Evolution of web infrastructure]

Have you ‘outgrown’ your original use of utility computing? If so, what was the reason? Financial? Technical?

Why I’m asking:

I’m in the process of writing a book on the topic of capacity planning for web architectures, so I’m interested in what you’ve got to say.

13 Comments

  1. Xaprb

    I wouldn’t use Ask as an example of how to scale. Their advertising API is nothing but errors, downtime, and slow responses… it is not pretty to work with.

  2. john

    I’m not suggesting asking Ask how to scale.

    I’m using them as an example of a company large enough not to benefit from utility computing services like S3 and EC2.

    For example, Google and Yahoo! don’t use S3 and EC2, probably because at their scale they wouldn’t see any technical or financial advantage in doing so.

    I’m looking to hear from people who can say: “We used to use S3 or EC2, but we got so big that it was cheaper/easier/more efficient to run our own servers.”

  3. Pingback: Jengates Blog » Blog Archive » links for 2008-02-28

  4. Pingback: tecosystems » links for 2008-03-01

  5. Jenni

    SonicLiving uses EC2 for some of our event parsing, automation, and static content, but we use dedicated hosts through SoftLayer for the rest.

  6. prakash

    Somewhere in between “bigger” and “OMG” a CDN would fit in, after which you can think about building your own data centers.

  7. Gary

    I work for a large company that rolled its own “S3”. It’s not that what Amazon is doing wouldn’t technically fit our needs; we built our own for the following reasons:

    1. Lawyers – our lawyers don’t like Amazon’s EULA.
    2. Who owns the data – if we store our users’ bits on Amazon’s servers, then Amazon really owns our data.
    3. Cost – after you factor in the costs of getting a data center and buying the hardware, running our own starts being cheaper than using S3.
    4. Control your own destiny – if you want to run a successful operation, it’s pretty important to know what you’re doing at this level.
    5. Side effects – since we now have a really big disk farm, we also have spare CPU to do other interesting things with.

  8. john

    Gary: that is great information… I’d love to hear any more detail you might have, off the record of course. Email me at jallspaw @ yahoo . com if you wouldn’t mind being interviewed a bit. 🙂

  9. Ask Bjørn Hansen

    Over at http://www.yellowbot.com/ we launched on “our own stuff”. For our particular application, the “cloud computing” stuff just doesn’t quite scale the right way; or rather, it would have taken us a lot more time to develop for it. We also needed enough boxes anyway (more than a handful, less than a dozen) that it was cheap enough to get a little extra for redundancy…

    For the “rent a server” option we calculated the costs, and (not counting sysadmin time spent dealing with hardware) it was clearly much cheaper over ~18 months or so to run our own stuff.
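
    The back-of-the-envelope version of that math looks something like the sketch below; every number is made up, just to show the shape of the comparison:

    ```python
    # Hypothetical "buy and colo" vs. "rent a server" comparison.
    # None of these figures are our real costs.
    OWN_UPFRONT_PER_BOX = 3000    # purchase price of one server (USD)
    OWN_MONTHLY_PER_BOX = 75      # colo space, power, bandwidth per box
    RENT_MONTHLY_PER_BOX = 300    # dedicated-host rental fee per box
    BOXES = 10                    # more than a handful, less than a dozen

    for month in range(1, 37):
        own = BOXES * (OWN_UPFRONT_PER_BOX + OWN_MONTHLY_PER_BOX * month)
        rent = BOXES * RENT_MONTHLY_PER_BOX * month
        if own <= rent:
            print(f"owning becomes cheaper at month {month}")
            break
    ```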

    We run on very standard hardware, but we max it out: memory (32GB per box), 4 disks in some of the 1U servers, the sort of thing that gets expensive fast at the typical “rent a server” places. The support and response times from the cheap rent-a-server places have also been pretty bad.

    – ask

  10. leo

    People often forget that it’s not only “S3, EC2, and other ‘utility’ computing stuff” that is dirt cheap and easy; hosting your own can be as well. At some scale it might be easier (and certainly cheaper) to have your own hardware.

    The thing that makes the Amazon stuff so cool, from my point of view, is the scaling and the business math you can do with it.

    When you’re a startup (or you’re planning a new product) you usually have no idea how big the impact will be or how many users and how much traffic you will get. If you’re not that successful, at least you have no horribly expensive hardware sitting idle. And if you have a really good business model, one where you can calculate the income per user and the requests/traffic per user, you might even be able to start in the black.
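
    A rough sketch of that per-user math (the prices and per-user figures below are assumptions for illustration, not real S3 rates or numbers from our project):

    ```python
    # Hypothetical unit economics for a pay-as-you-go launch on S3.
    INCOME_PER_USER = 0.50           # assumed revenue per user per month (USD)
    STORAGE_GB_PER_USER = 0.2        # assumed data stored per user
    TRANSFER_GB_PER_USER = 1.0       # assumed data served per user per month
    PRICE_STORAGE_GB_MONTH = 0.15    # assumed S3 storage price (USD/GB-month)
    PRICE_TRANSFER_GB = 0.18         # assumed S3 transfer price (USD/GB)

    cost_per_user = (STORAGE_GB_PER_USER * PRICE_STORAGE_GB_MONTH
                     + TRANSFER_GB_PER_USER * PRICE_TRANSFER_GB)
    margin = INCOME_PER_USER - cost_per_user

    # If the margin is positive, you start "in the black": costs only
    # grow when paying usage grows, with no idle hardware up front.
    print(f"cost/user: ${cost_per_user:.2f}, margin/user: ${margin:.2f}")
    ```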

    We had a little project with a customer that could have become a big success, in which case we would have needed more disk and more bandwidth. But it was hard to predict, so we did it with S3. If it made a big impact we would have had to pay Amazon a lot, but we probably could have passed that bill on to our customer; if it was no success, fine, we would only have to pay a few cents (much cheaper than the disks and the bandwidth).

    By the way, it was no big success, and we never got an Amazon bill bigger than a few dollars.

    On the other hand, we once tried to push some traffic through a Squid proxy on EC2 (we know that’s not a very clever use of EC2, but it was mainly for testing). We ran it for 8 hours and the bill was a bit high.
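
    The catch with a proxy is that you pay for every byte in and every byte out; a hypothetical version of that 8-hour bill (made-up prices and traffic volume):

    ```python
    # Rough cost of proxying traffic through an EC2 instance.
    # All prices and volumes below are assumptions for illustration.
    PRICE_IN_PER_GB = 0.10     # assumed inbound transfer price (USD/GB)
    PRICE_OUT_PER_GB = 0.18    # assumed outbound transfer price (USD/GB)
    INSTANCE_PER_HOUR = 0.10   # assumed instance hourly rate (USD)
    HOURS = 8
    GB_PROXIED = 200           # traffic pushed through the Squid proxy

    # A proxy pays transfer twice: once coming in, once going out.
    bill = (HOURS * INSTANCE_PER_HOUR
            + GB_PROXIED * (PRICE_IN_PER_GB + PRICE_OUT_PER_GB))
    print(f"8-hour proxy experiment: ${bill:.2f}")
    ```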

  11. Adam Jacob

    We work with lots of startups that are below the “somewhere in here” threshold, or who are about to cross over.

    With managed hosting, at places like Rackspace or ServePath, the “cheaper to run your own stuff” threshold is usually between 8 and 12 machines. Even if you include some allowance for the additional effort involved in the rack-and-stack and system-load phases, you’ll probably start saving money in about 12-18 months.

    With EC2, I think the threshold is quite a bit harder to compute. At $72/month for one of their standard nodes, you’ll pay about half of what a similarly sized managed host costs (and a Rackspace host costs even more than that). On the flip side, you still have to solve some difficult technical problems: how do you monitor your elastic infrastructure? How will you solve the ephemeral storage issue? What about MySQL/Postgres failover and replication?
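
    The $72/month figure is just the hourly rate times a month of hours; the managed-host price in the sketch below is an assumed number for comparison:

    ```python
    # EC2 standard node vs. a similarly sized managed host.
    EC2_HOURLY = 0.10              # USD per instance-hour
    HOURS_PER_MONTH = 24 * 30      # ~720 hours
    MANAGED_MONTHLY = 150          # assumed managed-hosting fee (USD/month)

    ec2_monthly = EC2_HOURLY * HOURS_PER_MONTH   # -> $72
    print(f"EC2: ${ec2_monthly:.0f}/month vs. managed: ${MANAGED_MONTHLY}/month")
    print(f"EC2 is {ec2_monthly / MANAGED_MONTHLY:.0%} of the managed price")
    ```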

    When we see people using EC2, it’s often when they’re quite small (brand new). They usually wind up migrating to either physical colocation or managed hosting as they grow, often because their application wasn’t designed for the cloud in the first place, so the technical hurdles of scaling it there become overwhelming.

    My $0.02.

    Adam

  12. Randy Bias

    Cheap is relative. It’s really only ‘cheaper’ at the far end of the scale (Yahoo, Google, Microsoft, Amazon).

    At the lower end it may be cheaper in total dollar cost, but it certainly isn’t in terms of time and opex.

    One problem I have with your curve is that it’s usually more of a step function. For example, at the low end you can usually build out your own datacenter without a network engineer, but once you hit a certain scale you need a specialist. The same happens with people, hardware, etc.
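
    A toy model of that step function; the thresholds and salaries are invented for illustration:

    ```python
    # Monthly cost as a step function of machine count: specialists
    # arrive in whole-person increments at scale thresholds.
    def monthly_cost(machines: int) -> int:
        cost = machines * 300          # smooth part: per-machine run rate (USD)
        cost += 8000                   # one generalist sysadmin from day one
        if machines > 40:
            cost += 9000               # step: dedicated network engineer
        if machines > 150:
            cost += 9000 + 12000       # step: second netops hire plus a DBA
        return cost

    for m in (10, 40, 41, 150, 151):
        print(f"{m:4d} machines -> ${monthly_cost(m):,}/month")
    ```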

    It’s only when you are fully ‘at scale’ that cost savings can be achieved across the board.

    Still, everyone makes different choices based on their level of sensitivity around cost or time.

    –Randy

  13. Pingback: And we’re back « Dan McKinley
