Instant Estimate For Application Hosting

Get an instant estimate for the leading cloud platforms including Cloudflare, Amazon Web Services, Google Cloud, Microsoft Azure, Alibaba Cloud, Oracle, and IBM Cloud. The tool is designed to work without asking you a thousand technical questions.

The estimate results should give you a ballpark figure for the hosting costs to run your serverless application, but ultimately the tool can't compensate for inefficient architectures, clumsy code, and poor DevOps. Likewise, it cannot account for optimisations that are unique to your application.

We offer a free consultation to answer any of your questions and assist with fine-tuning the estimates for your application.

DEVELOPMENT PREVIEW

This page is a development preview. Please do not share it. Content is subject to change, pending verification.

Get an Instant Estimate

For further details on each parameter, see the sections below.

Your email address is optional and not required to get an instant estimate. By providing your email address, you may be contacted by a representative via email.

Questions & Answers

What's the difference between the app models' data storage needs?

Let's imagine two apps that have very contrasting persistent data needs: one a video streaming app like Netflix, the other a logistics management app. The video streaming app's data storage would be optimised for relatively large chunks of data that are organised at a low level, with no particular importance other than how often the chunks are being read. The video streaming app typically needs a simple key-value store. On the other hand, the logistics management app would have many complex calculations over a huge amount of relational data. The persistent data storage for the logistics app would be a database optimised for relational data, possibly over thousands of data points.

Why are the regions broken down into global, Australia, and China?

The regions are separated into global, Australia, and China based on significant pricing differences charged by most of the cloud platforms. For China there are also regulatory reasons. Please read the bandwidth tab for more info.

What isn't this estimation tool suitable for?

This tool is not suitable for apps that have large amounts of infrequently accessed data. You're still welcome to use it to get a ballpark figure, but consider that a more appropriate data storage solution may cost only a tenth of the estimated amount.

Pre-defined App Models

To provide an instant estimate with as few questions as possible, the tool automatically matches your input details to one of several pre-defined models, which is then used as the basis of the estimate.

The infrastructure & software engineering needs of most apps can be oversimplified into just a handful of models. A video streaming app would have very different needs to a logistics planning app.

Cloud providers including GCP, AWS, Alibaba Cloud and Microsoft Azure offer many different services for data storage that are relevant to the needs of different apps. This is the case not only with data storage but with almost every aspect, which can make it a bit of a minefield to determine the right services for your app's needs. Choosing one service over another can have drastically different outcomes in performance and cost. The oversimplified models are used to define which services offered by the various cloud providers are applicable, and the costings are then calculated from your input parameters.


The 4 Fundamental Billing Units

Ultimately, all these metrics need to be broken down into their fundamental billable units of requests, data storage, bandwidth, and regions to give you a normalised comparison that's easily understood.

Each of these fundamental billing units has its own intricacies, which results in a complex calculation to reach a final estimate. Furthermore, the exact way the price of one unit is calculated varies from one provider to the next.

Once again, the pre-defined app models are used to specify all the intricate details each cloud provider requires in its own way of calculating the final price per unit.
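For illustration only, here's a minimal TypeScript sketch of how the normalised units could roll up into a per-provider figure. The type names, structure, and rates are hypothetical and not the tool's actual internals; regional surcharges are covered in the next section.

// Illustrative sketch only: rolling the fundamental units up into one figure
interface UsageInputs {
  requestsPerMonth: number;
  storageGB: number;
  bandwidthGB: number;
}

interface ProviderUnitPrices {
  perMillionRequests: number;
  perGBStoredPerMonth: number;
  perGBBandwidth: number;
}

function estimateMonthly(usage: UsageInputs, prices: ProviderUnitPrices): number {
  const requests = (usage.requestsPerMonth / 1_000_000) * prices.perMillionRequests;
  const storage = usage.storageGB * prices.perGBStoredPerMonth;
  const bandwidth = usage.bandwidthGB * prices.perGBBandwidth;
  // Regional surcharges (Australia, China) are added as separate rows; see the next section.
  return requests + storage + bandwidth;
}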


China & Australia Regions

The instant estimate tool will first calculate your pricing for the adjusted global average excluding Australia and China, because these two regions have distinctly higher rates for bandwidth. Furthermore, each cloud platform has its own complications in China, which require vastly different solutions and pricing surrounding those solutions.

The tool offers inputs to specify the percentage of users in Australia and China. The estimates given for Australia/China are the sum of additional costs to serve those users in Australia/China. In some cases, however, the portion for China is entirely subtracted from requests, data storage, and bandwidth, and then lumped into the China estimate, because an entirely different solution is required that doesn't pertain to the respective provider's estimates.

Serving users in Australia and China incurs costs beyond other regions, so the sum of additional costs is displayed as a separate row in the estimate results. For a simplified example of the regional estimates, let's say the global bandwidth rate is $0.12/GB but bandwidth in the Australian region is $0.19/GB; that additional $0.07 per GB goes into the Australia region sum.
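A short sketch of that example; the bandwidth volume and Australian share below are hypothetical inputs, and the rates are the example figures above rather than live pricing.

// The Australia row holds only the additional cost over the global rate
const globalRatePerGB = 0.12;
const australiaRatePerGB = 0.19;
const totalBandwidthGB = 1000;    // hypothetical monthly bandwidth
const australiaShare = 0.10;      // hypothetical: 10% of users in Australia

const globalEstimate = totalBandwidthGB * globalRatePerGB;          // all traffic at the global rate
const australiaExtra = totalBandwidthGB * australiaShare
  * (australiaRatePerGB - globalRatePerGB);                         // the extra $0.07/GB for AU traffic
// australiaExtra is what appears as the separate "Australia" row in the results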

Requests

Every cloud platform has its own intricacies with regard to the calculation of a request unit. Here's a simplified breakdown of each cloud platform's method of calculating the cost of a single request. Because the cloud platforms calculate requests in different volumes, for the purposes of this breakdown the amounts are normalised to a cost per million.

For comparative simplicity, all the tables below are calculated on a basis of 5ms CPU time and a 128MB memory allocation; however, the instant estimate tool uses a dynamic calculation of CPU time and GB-seconds depending on the app model and input parameters.

Please consider that 5ms CPU time is not the same as wall-time. Using these services as middleware will result in wall-time in the hundreds of milliseconds, and some cloud platforms such as GCP will charge you GHz-seconds based on wall-time, even though the process is mostly idle while waiting for I/O or API sub-requests.

We did not factor wall-time into the cost tables below because there is practically no difference between wall-time and CPU time in the way we develop our serverless applications. The models used in this tool are defined by how we develop serverless applications: we eliminate the large margins between wall-time and CPU time by handling any need for sub-requests differently, and by taking careful consideration of how JavaScript works to achieve very high CPU utilisation and process multiple things in parallel despite being single-threaded.

How do we cut down our margins between wall-time and CPU time?

Any time you need to use I/O or send something out over the network, it will take hundreds of times longer than if you were able to handle it on the CPU. So for starters, let's focus on a typical middleware case that involves your customer-facing API having to communicate with your on-premise backend API.

Let's imagine a situation where you're operating your own on-premise infrastructure for sending SMS, and you have an API where customers can send SMS on demand from their backend in a format like JSON.

Typically we see serverless used as middleware that processes minimal logic on the edge, makes a sub-request to your backend, waits for a response, and finally returns a response to your customer's API request based on the outcome of that sub-request.

The problem with this typical middleware request-response cycle is that your API hangs for potentially hundreds of milliseconds while it reaches out over the internet to your on-premise servers and waits for the response.

The way to speed up your API responses and cut out a massive amount of wall-time is to move all the validation logic to the edge. Because all the logic needed to guarantee a successful sub-request has been moved to the edge, you don't need to process the sub-request immediately; instead, you can respond to your customer's API request right away and enqueue their SMS to be sent asynchronously.

Your on-premise infrastructure then reaches out to the edge nodes and fetches queued SMS for processing, at whatever concurrency it is configured to handle.

Correctly implemented, SMS won't take any longer to be sent than before, but you will have cut out massive amounts of wall-time and incidentally added protection against demand surges overwhelming your on-premise servers.
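As a rough sketch of the edge half of this pattern (not our production code), written as a Cloudflare Workers-style module in TypeScript: the SMS_QUEUE binding, payload shape, and validation rules are all hypothetical.

// Hypothetical sketch only: validate at the edge, enqueue, respond immediately
interface SmsRequest {
  to: string;
  message: string;
}

interface Env {
  SMS_QUEUE: { put(key: string, value: string): Promise<void> };  // assumed KV-style binding
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const body = (await request.json()) as SmsRequest;

    // All validation happens at the edge, so no sub-request is needed
    // before we can give the customer a definitive answer.
    const validNumber = /^\+[1-9]\d{6,14}$/.test(body.to);
    const validMessage = typeof body.message === "string"
      && body.message.length > 0 && body.message.length <= 1600;
    if (!validNumber || !validMessage) {
      return new Response("Invalid SMS payload", { status: 400 });
    }

    // Enqueue the SMS for asynchronous delivery instead of calling the
    // on-premise API inline; the on-premise workers pull from this queue
    // at whatever concurrency they can handle.
    const key = `sms:${Date.now()}:${crypto.randomUUID()}`;
    await env.SMS_QUEUE.put(key, JSON.stringify(body));

    // Respond immediately, so wall-time stays close to CPU time.
    return new Response(JSON.stringify({ queued: true, id: key }), { status: 202 });
  },
};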

Depending on how your team develops serverless applications, your mileage may vary.


Cloudflare

Cloudflare Workers have a cap of 50ms of CPU time per request and a fixed memory allocation of 128MB per isolate. Unlike AWS and GCP, there is no flexibility to adjust the memory allocation, and the 50ms CPU time cannot be extended without moving to their Workers Unbound product, which comes at a considerably higher cost. That said, a 128MB memory allocation and 50ms of CPU time is very generous and sufficient under most circumstances. The wall-time limit is 15 minutes.

1,000,000 requests x 0.0000005 = $0.50
Total $0.50 per million

Amazon Web Services (AWS)

Amazon's Lambda@Edge charges a base price for every request, plus GB-seconds. The CPU time is not particularly relevant with Lambda@Edge, as the GB-seconds metric is based on wall-time. Wall-time is typical for GB-seconds because the memory needs to be allocated the whole time your code is executing, even if the process is mostly idle waiting for I/O or an API response.

1,000,000 requests x 0.0000006 = $0.60
625.00 GB-s x 0.00005001 = $0.03
Total $0.63 per million
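As a rough check on where the 625 GB-s figure comes from, here's a sketch assuming wall-time equals the 5ms CPU-time basis used in these tables:

// GB-seconds for one million Lambda@Edge requests at a 128 MB allocation,
// assuming wall-time equals the 5 ms CPU-time basis stated above
const requests = 1_000_000;
const wallTimeSeconds = 0.005;   // 5 ms
const memoryGB = 0.125;          // 128 MB
const gbSeconds = requests * wallTimeSeconds * memoryGB;        // 625 GB-s
const total = requests * 0.0000006 + gbSeconds * 0.00005001;    // ≈ $0.63 per million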

Google Cloud Platform (GCP)

Google's functions hosting charges a base price for every request, plus GHz-seconds and GB-seconds. A big caveat here is that both the GHz-seconds and GB-seconds are based on wall-time and are rounded up to 100ms on every request; this is unusual because GHz-seconds is a measurement of CPU time, and rounding up both metrics to 100ms on every request can produce drastic discrepancies between your billings and your expectations.

To clearly illustrate the point about rounding up on a per-request basis instead of on your hourly/daily sum: imagine doing your shopping at the supermarket and, instead of rounding your total up to the next dollar, they rounded up every line item to the next dollar.

1,000,000 requests x 0.0000004 = $0.40
20,000 GHz-s x 0.0000025 = $0.05
12,800 GB-s x 0.0000100 = $0.128
Total of $0.578 per million

The above table is for tier 1 pricing, which includes only 3 USA, 2 Europe, and 3 Asia points-of-presence. Enabling tier 2 regions gives you access to an additional 14 points-of-presence, including a presence in Australia. The tier 2 pricing is:

1,000,000 requests x 0.0000004 = $0.40
20,000 GHz-s x 0.0000035 = $0.07
12,800 GB-s x 0.0000140 = $0.1792
Total of $0.6492 per million
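To show why the rounding matters, here's a sketch of how the tier 1 figures above can be derived. It assumes the 200MHz (0.2 GHz) CPU tier that accompanies a 128MB allocation and rounds every request up to 100ms:

// Tier 1 GCP figures, derived with per-request rounding up to 100 ms.
// Assumes the 200 MHz (0.2 GHz) CPU tier paired with a 128 MB allocation.
const requests = 1_000_000;
const billedSecondsPerRequest = Math.ceil(0.005 / 0.1) * 0.1;   // 5 ms of work billed as 100 ms
const ghzSeconds = requests * billedSecondsPerRequest * 0.2;     // 20,000 GHz-s
const gbSeconds = requests * billedSecondsPerRequest * 0.128;    // 12,800 GB-s
const total = requests * 0.0000004
  + ghzSeconds * 0.0000025
  + gbSeconds * 0.0000100;                                       // ≈ $0.578 per million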

Data Storage

The calculations for persistent data storage depend firstly on which storage solution is suitable for your app. For this estimation tool we have oversimplified the options to include only a key-value solution and a relational database solution from each cloud provider.

During the planning and development phases, careful and thorough attention needs to be given to the data structure, behaviour, and interactions. Poor data structure decisions and sloppy interactions can have drastic consequences for your app's performance and operating costs.

Although the estimation tool only considers 2 solutions from each provider, most of the cloud providers offer a wide variety of data solutions which should also be given thorough consideration.


Cloudflare Workers KV

Cloudflare only offer one native solution for persistent data storage, which is a simple key-value store. For any truly complex relational data that cannot be resolved cryptographically, it's necessary to use a 3rd-party relational database service.

Per gigabyte of storage: $0.50
Per million reads: $0.50
Per million writes: $5
Per million deletes: $5
Per million lists: $5
Per gigabyte of bandwidth: Not applicable
Per gigabyte of transfer: No charge

A cost example to store a 100MB file on Cloudflare Workers KV and serve that file once to 1,000 users. Given that the maximum size of a single KV value is 25MB, the file would need to be split into 4 parts. Depending on the nature of that file, you may want to divide it up further to optimise delivery between the KV store, the worker node, and the user.

100MB stored persistently: $0.05/month
1 worker invocation to handle the user's request to save the file: $0.0000005
Write the 100MB file to KV in 25MB parts: $0.00002
1,000 worker invocations to handle user requests to download the file: $0.0005
4,000 reads because the file is broken into 25MB parts: $0.002
Data transfer between KV and Worker: No charge
Bandwidth between Worker and user: No charge
Total: $0.0525205
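A quick sketch of the arithmetic behind that total, using the unit prices from the Workers KV table above:

// Reproducing the Workers KV example with the unit prices from the table
const fileGB = 0.1;                                  // 100 MB file
const parts = Math.ceil(100 / 25);                   // 25 MB max per KV value -> 4 parts
const users = 1000;
const storagePerMonth = fileGB * 0.50;               // $0.05
const saveInvocation = 1 * 0.0000005;                // $0.0000005
const writes = parts * (5 / 1_000_000);              // $0.00002
const downloadInvocations = users * 0.0000005;       // $0.0005
const reads = users * parts * (0.50 / 1_000_000);    // $0.002
const total = storagePerMonth + saveInvocation + writes
  + downloadInvocations + reads;                     // $0.0525205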

Cloudflare do not provide any controls over the replication and scaling of your data across multiple regions; instead, they handle this automatically depending on how often each item is read and from which regions. Cloudflare do not charge any additional cost for the replication and scaling of your data across multiple regions and datacenters. Cross-region charges are not applicable; it is free of charge.


Amazon S3 Standard

For persistent data storage of a key-value nature, we will be using Amazon S3 Standard in this example. We will use the Singapore region; please note that prices vary from region to region.

Per gigabyte of storage: $0.025
Per million reads: $0.40
Per million writes: $5
Per million deletes: No charge
Per million lists: $5
Per gigabyte of bandwidth: $0.12
Per gigabyte of transfer: $0.09*

A cost example to store a 100MB file on S3 Standard and serve that file once to 1,000 users. For this example we will assume the source-of-truth is in Singapore but all 1,000 user requests come from somewhere else in the world, meaning cross-region data transfer charges will be incurred. The example applies to using S3 Standard in conjunction with Lambda@Edge.

100MB stored persistently: $0.0025/month
1 Lambda invocation to handle the user's request to save the file: $0.00005061
Same-region bandwidth from Lambda to S3: No charge
Write the 100MB file to S3: $0.000005
1,000 Lambda invocations to handle user requests to download the file: $0.05061
1,000 reads from S3 to Lambda nodes across regions: $0.0004
Data transfer between S3 and Lambda across regions: $9
Bandwidth between Lambda and user: $12
Total: $21.05356561

* Data transfers between the S3 bucket and the node serving requests in the same region do not incur any cost, but this can be complex to arrange in your app, and most likely the data transfer costs for accessing your persistent data will remain significant. Data transfers from S3 directly to users, or to non-Amazon servers, are charged at a higher rate of $0.12 per gigabyte.
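The same kind of sketch for the S3 Standard example shows where the money actually goes: the cross-region transfer and user-facing bandwidth dwarf everything else. The per-invocation Lambda@Edge cost is taken directly from the table's line items.

// Reproducing the S3 Standard example with the line items from the table
const fileGB = 0.1;                                   // 100 MB file
const users = 1000;
const storagePerMonth = fileGB * 0.025;               // $0.0025
const lambdaInvocation = 0.00005061;                  // request + GB-s for one invocation
const writes = 1 * (5 / 1_000_000);                   // $0.000005
const reads = users * (0.40 / 1_000_000);             // $0.0004
const crossRegionTransfer = users * fileGB * 0.09;    // $9 from S3 to Lambda
const bandwidthToUsers = users * fileGB * 0.12;       // $12 from Lambda to users
const total = storagePerMonth + lambdaInvocation + writes
  + users * lambdaInvocation + reads
  + crossRegionTransfer + bandwidthToUsers;           // ≈ $21.05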


Amazon Aurora

Amazon Aurora is a SQL relational database for cloud apps. We will be adding a pricing table and scenario cost example soon.


Google Firestore

For persistent data storage of the key-value nature, we will be using Google Firestore in this example. The price varies across two tiers of regions. For the pricing table below we'll use the Hong Kong zone because Singapore is not an option. The Hong Kong zone is in tier 1 which is cheaper than tier 2.

Per gigabyte of storage: $0.18
Per million reads: $0.60
Per million writes: $1.80
Per million deletes: $0.20
Per million lists: Not applicable
Per gigabyte of bandwidth: $0.14*
Per gigabyte of transfer: $0.14*

A cost example to store a 100MB file on Firestore and serve that file once to 1,000 users. For this example we will assume the source-of-truth is in Hong Kong but all 1,000 user requests come from somewhere else, though not Australia or China, which would incur a higher cost.

100MB stored persistently: $0.018/month
1 Cloud Functions invocation to handle the user's request to save the file: $0.0000129
Same-region bandwidth from Cloud Functions to Firestore: No charge
Write the 100MB file to Firestore: $0.0000018
1,000 Cloud Functions invocations to handle user requests to download the file: $0.0129
1,000 reads from Firestore to Cloud Functions across regions: $0.0000006
Data transfer between Firestore and Cloud Functions across regions: $14
Bandwidth between Cloud Functions and the user: $14
Total: $28.0309153

* The bandwidth and transfer prices per gigabyte depend on the region where your source-of-truth is set up. For example, if your source-of-truth is in the US and you need to move data between other regions located in the US, the cost per gigabyte is only $0.01, and transferring data within the same multi-region is free. The bandwidth and transfer costs per gigabyte quoted in the table above apply when your users are outside your source-of-truth region but not in Australia or China, which have their own rates. For Australia the price per gigabyte is $0.19 if your data is not also located in Australia. For China the price per gigabyte is $0.23, and your data won't be in mainland China because Google has no datacenters there. The price per gigabyte does reduce as your volumes increase.
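For illustration, the per-gigabyte rate used in the Firestore example varies with where the user sits relative to the source-of-truth; a sketch of the cases from the note above (the function name and region labels are hypothetical):

// Per-gigabyte egress rate depending on where the user is relative to the
// source-of-truth region (Hong Kong in this example)
function egressRatePerGB(userRegion: "global" | "australia" | "china"): number {
  if (userRegion === "australia") return 0.19;  // data not also located in Australia
  if (userRegion === "china") return 0.23;      // Google has no mainland China datacenters
  return 0.14;                                  // outside the source-of-truth region, excluding AU/CN
}

// The $14 transfer line above: 1,000 users x 0.1 GB x $0.14/GB
const transferCost = 1000 * 0.1 * egressRatePerGB("global");  // $14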


Google Cloud Spanner

Google's cloud-native relational database. We will be adding a pricing table and scenario cost example soon.

Bandwidth

Peering and transit

Peering is a settlement-free agreement between two or more networks to connect and exchange traffic directly without having to pay a 3rd party to "transit" the traffic across the internet. Peering involves a physical interconnection of the networks, so transit cannot be completely avoided, but the less you need to transit, the lower your bandwidth costs. Because it's ultimately a physical constraint on which networks they're able to interconnect with, it becomes a numbers game where the cloud platform with the biggest global presence will usually have the biggest advantage.


How are the regions defined?

The shape and size of regions are defined by 3 main influences: in part by countries and provinces for obvious reasons; in part by the telecommunications and network operators in those regions; and in part by the cloud platforms' points-of-presence.

The real drawing up of the regions by cloud platforms comes down to price per gigabyte. Multiple bordering regions with the same rates and no special regulatory requirements can be drawn up as a single region. The regions vary from one cloud platform to another because the rates they're able to get from the telecoms and network operators in each region depend on a lot of factors.

A cloud platform with hundreds of datacenters strategically dotted around the globe is able to reshape its regions because it has a much greater presence in more networks. By strategically locating and building each datacenter, it is able to enter into agreements with the local network operators in that region and get direct access to their networks, so that communication between end-users and their closest datacenter can happen within the same network.

Bandwidth costs for a well-designed serverless architecture are almost always a fraction of those of a centralised infrastructure, because with many points-of-presence your users will likely be in the same region as one.

About the Network Quality Score

Network quality refers to latency, throughput, packet loss, and consistency. Latency is typically due to the physical distance between your users and their closest point-of-presence.

The network ratings displayed in the estimation results table do not factor in price. The ratings come from our own testing from hundreds of physical locations globally.

High packet loss alone is only a small consideration in the rating, since it can be mitigated with practically no impact on the user's experience. Every 15% increment of packet loss counts as only a 1-point deduction. Realistically, high packet loss is to be expected for a portion of users, and some causes of packet loss are outside the scope of your cloud provider's capabilities.

Some examples of high packet loss that are outside the scope of a cloud platform's responsibilities include:

Throughput, the download rate experienced by your users, is not necessarily related to distance, but distance can matter in terms of routing or points of congestion encountered as the data makes its way to your users.

The points work on the basis that every platform starts at 10/10 and we subtract points as follows:

Packet loss: Subtract 1 point for every 15% packet loss.
Latency: Subtract 1 point for every 100ms of round-trip latency.
Throughput (speed): Subtract 1 point for every 10% slower than the benchmark. The benchmark is set by the fastest cloud platform at that test location.
Consistency: Subtract a maximum of 2 points relative to the margins of deviation for tests conducted at different times of the day.

Throughput is tested by measuring the download speed over multiple gigabytes at each test location. The fastest cloud platform sets the benchmark that all the other results are graded against. For example, if cloud A gets 95Mbps and cloud B gets 28Mbps, then cloud B will be marked down 7 points because it's 70% slower than cloud A, which set the benchmark. We always round the results down to increments of 10%, so 72% is floored to 70% and 68% is floored to 60%.
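A sketch of how those deductions combine into a score. The consistency deduction is passed in directly as a 0-2 value here because its exact formula isn't spelled out above; everything else follows the listed rules.

// Network quality score from the deduction rules above
function networkScore(packetLossPct: number, latencyMs: number,
                      throughputMbps: number, benchmarkMbps: number,
                      consistencyDeduction: number): number {
  const lossPoints = Math.floor(packetLossPct / 15);        // 1 point per 15% packet loss
  const latencyPoints = Math.floor(latencyMs / 100);        // 1 point per 100 ms round-trip
  const slowerPct = (1 - throughputMbps / benchmarkMbps) * 100;
  const throughputPoints = Math.floor(slowerPct / 10);      // floored to 10% increments
  const score = 10 - lossPoints - latencyPoints - throughputPoints
    - Math.min(Math.max(consistencyDeduction, 0), 2);       // consistency capped at 2 points
  return Math.max(score, 0);
}

// The example above: 28 Mbps against a 95 Mbps benchmark loses 7 points
networkScore(0, 0, 28, 95, 0);  // 3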

Artificial limitations imposed by the cloud platforms are most evident on multi-gigabit connections, and those artificial limitations reflect poorly on those providers, but when averaged out across all test locations it's only a 1-2 point reduction at most.

For example, if one test location has a 2.5Gbps connection, cloud A sets a benchmark of 2.1Gbps, and cloud B caps out at 100Mbps due to artificial limitations, then cloud B will be marked down by 9 points at that location because it was 95% slower than cloud A. But averaged out over all test locations, the final score may still be as high as 9/10 if everything else tested positively.