I'm currently hashing out my price points for a hosting service that I plan on launching on Amazon AWS.
The biggest line item in the budget is currently the machine time. I know that if I give a user x GB of transfer it costs me a known number of cents, and the same with storage space. How do I calculate how much that customer costs me as a share of the server hardware overhead?
To be honest, some figures are needed (for me at least), or some idea of whether the machine time is really a fixed cost or whether it will vary with the number and size of the user accounts.
I think more pertinent is what your competitors charge and how you can be competitive (e.g. rapid deployment for RoR, like Heroku).
Mathematically (from scratch, for readability, in case others with no previous knowledge search for this in the future):
profit = total revenue (TR) - total costs (TC)
TR = price (P) x quantity (Q)
total variable costs (TVC) = VC x Q
TC = fixed costs (FC) + VC x Q
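For readers who prefer code, the identities above can be sketched directly (a minimal illustration; the function names and sample numbers are my own, not from the question):

```python
def total_revenue(price, quantity):
    """TR = P x Q"""
    return price * quantity

def total_cost(fixed_costs, variable_cost, quantity):
    """TC = FC + VC x Q"""
    return fixed_costs + variable_cost * quantity

def profit(price, fixed_costs, variable_cost, quantity):
    """profit = TR - TC"""
    return total_revenue(price, quantity) - total_cost(fixed_costs, variable_cost, quantity)

# e.g. P = $2, FC = $100, VC = $1, Q = 150 customers:
# profit = 2*150 - (100 + 1*150) = 300 - 250 = 50
```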
From your question, you are stating that the machine time/server hardware is a fixed cost (FC).
Note, though, that FC is variable in the long run and dependent on (VC x Q).
Assume VC = $1 per customer (i.e. 1 GB of transfer and storage each). You then just need to find the capacity that each unit of FC (one piece of hardware) can sustain, divide FC by that capacity to get the fixed cost allocated to each customer, and multiply by Q.
e.g. if FC is $100 and one unit of hardware can bear 100 customers, then
FC/capacity = 100/100 = $1 per customer, then
(FC/capacity) x Q + VC x Q = your TC
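The worked example is easy to check in code. This sketch uses the numbers above (FC = $100, capacity 100 customers, VC = $1) and includes the VC x Q term from the TC formula earlier in this answer:

```python
def cost_per_customer(fixed_cost, capacity, variable_cost):
    """Allocated fixed cost per customer, plus the per-customer variable cost."""
    return fixed_cost / capacity + variable_cost

def total_cost(fixed_cost, capacity, variable_cost, q):
    """TC under the allocated-fixed-cost approximation."""
    return cost_per_customer(fixed_cost, capacity, variable_cost) * q

# FC = $100, capacity = 100, VC = $1:
# cost_per_customer = 100/100 + 1 = $2 per customer
```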
I realise that this answer is ridiculously simplified, but based on the info provided I would be amazed if anyone could provide a better one.
Anyway I hope that this helps.
It's a fair and simple question. There isn't a simple answer.
Suppose you have a fixed cost of $1,000. You are confident that this will service up to 250 customers, and you have built a business case with an expectation of selling to 50-150 customers. What is the correct cost allocation per customer?
Maybe it's $20. That sets a conservative expectation - the $1,000 is spread across the 'low case' 50 customers.
Maybe it's $10. That's the midpoint expectation - $1,000 spread across the central view of 100 customers.
Maybe it's $4. That's $1,000 divided by the 250 customers the infrastructure could actually support.
And maybe it's $0. That's recognising that in the general case, a new customer is not generating new cost.
All these (based on that very basic sketch) are valid, but that range understates real-life complexities.
For instance, what happens as you approach 250 customers? Maybe there's another $1,000 cost slug coming your way. Or maybe there are economies of scale, and the next chunk of cost will be $500. Or maybe there are dis-economies, and you'll have to spend $1,500. Each of those calculations needs revising, and there are some more methodologies that you could use to address that 'semi-fixed' character.
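That 'semi-fixed' character can be modelled as a step function. As a sketch, assuming (hypothetically) that each additional $1,000 cost slug buys capacity for another 250 customers:

```python
import math

def semi_fixed_cost(customers, capacity_per_slug=250, cost_per_slug=1_000):
    """Each block of capacity incurs another fixed-cost 'slug'."""
    slugs = max(1, math.ceil(customers / capacity_per_slug))
    return slugs * cost_per_slug

# semi_fixed_cost(100) -> 1000  (first slug covers up to 250 customers)
# semi_fixed_cost(251) -> 2000  (customer 251 triggers the next slug)
```

Economies or dis-economies of scale would make `cost_per_slug` itself a function of how many slugs you already have, rather than a constant.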
But then step back. If your expectation were 20 customers not 100, would you have done things differently - could you have chosen a fixed cost of $500 in this case, or even avoided fixed costs altogether? Often. So where does that leave those numbers?
Before you despair, life becomes rather simpler when instead of talking about 'cost,' you qualify by saying, for instance, 'cost from the perspective of the pricing decision.' And I suspect this is the type of cost you may be concerned about.
Then you should be looking first and foremost at the marginal cost - which is probably the lowest of all the figures you can work out. You should have alarm bells ringing if your pricing goes below this cost - and either eliminate these cases by price and proposition design, or build in risk management if, for instance, this arises from patterns of use you can't know in advance.
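A minimal guard for that floor check might look like this (illustrative only; `marginal_cost` is whatever per-customer figure your own analysis produces):

```python
def check_price(price, marginal_cost):
    """Raise the alarm if a price falls below marginal cost."""
    if price < marginal_cost:
        raise ValueError(
            f"price ${price:.2f} is below marginal cost ${marginal_cost:.2f}"
        )
    return price
```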
Then your pricing process will be (I would hope) pricing based on opportunity, which will be based on views including
As a final note to anyone who's horrified by the thought that the cost floor might essentially ignore fixed costs, consider the following: