A Solution to Serverless Adoption — Cloud Run, AWS Fargate

Problem — Solution — Case Study — Cost Optimization — Serverless Transformer Kit


This article discusses serverless as the latest development model, the concerns developers and companies face when adopting it, and how Cloud Run and AWS Fargate can address those concerns. It includes a detailed case study that deploys a real-life application on Cloud Run and Fargate and compares them on performance, scalability, ease of configuration, and cost. The results show that Cloud Run provides about a 70% cost reduction compared with a traditional server architecture on the cloud.

As a result, I have created a Serverless Transformer Kit (STK) to ease the adoption of serverless on Google Cloud using Cloud Run + Cloud SQL for three-tier applications that use Docker containers as their packaging model. It also provides an option to use Cloud Build as the CI/CD for the application. The CI/CD pipeline and the entire application infrastructure are provisioned using Terraform.

Here’s the GitHub repository to access the STK Code

Serverless Adoption:

Today, serverless is emerging as the development model of choice for many companies because of the flexibility it offers. It is essentially an abstraction over infrastructure: resources are provisioned and managed on demand.

Serverless can be the right fit for applications that process requests for only a few hours a day and perform basic CRUD operations with non-compute-intensive business logic.

One of the most widely used serverless platforms is AWS Lambda. Lambda is FaaS (Function as a Service): the developer focuses on writing the business logic, and Lambda takes care of the rest by provisioning compute on invocation. Other FaaS examples are Google Cloud Functions, Azure Functions, IBM Cloud Functions,...

The major concern in adopting FaaS is that existing applications have to be rewritten in a language and implementation style the FaaS platform supports, within its maximum memory and execution-time limits. This demands significant investment in development effort and cost.


As a solution, cloud providers now offer services that run Docker images as containers under a serverless model. This approach is still in its early stages in the market and is evolving with improvements that ease its usage.

Cloud Run:
It is a Google Cloud offering that runs any Docker image as containers on demand, automatically provisioning and scaling compute. Google built it on the open standard Knative, which extends Kubernetes to provide serverless operations using containers.

AWS Fargate:
Fargate is offered as a launch type on Amazon ECS. Amazon ECS is a container management service that orchestrates and manages containers. Fargate uses a serverless model to provision resources based on the container's CPU and memory requirements.

Cloud Run and AWS Fargate are the dominant offerings among comparable services from various cloud providers. Hence, I chose these two services for a case study to find out the value each one offers.

Case Study:

This case study was done to effectively choose a service to deploy containers that provide the following attributes:

  1. Ease of deployment
  2. Cost Savings
  3. Flexibility with configuration
  4. Performance
  5. Changes needed to application code


Consider a web application with 120 users, an average of 100 API calls per user session, and daily usage of 30 user sessions. This use case maps to many internal enterprise applications, blog sites, public sites of institutions,…

Assume this application is currently deployed on AWS using Amazon EC2 for the backend service, Amazon RDS for the MySQL database, and an Elastic Load Balancer to receive traffic from the frontend. This platform runs 24/7, and costs accrue even when there is zero traffic on the application.

How can Cloud Run enhance this application?

By using Cloud Run as the infrastructure unit, we can eliminate the EC2 instance and Elastic Load Balancer that run 24/7 and replace them with the fully managed Cloud Run service. All that is needed is the Docker image of the application, pushed to Google Container Registry (GCR). GCR is a store for Docker images.

Today, Docker containers have become the default packaging norm for applications because they run seamlessly on any platform without code changes. As the tagline says, “Build once, run anywhere”.

Create a Cloud Run service by providing the Docker image from GCR along with the vCPU and memory, and the rest is managed by Cloud Run. It creates a service with a secure HTTPS endpoint that scales from 0 up to a maximum of 1,000 container instances, each handling up to 80 concurrent requests.
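As a rough sketch, deploying such a service from the command line might look like this; the project, image, and region names below are placeholder assumptions, not values from the case-study application:

```shell
# Hypothetical example: build the image, push it to GCR, and deploy to Cloud Run.
# "my-project", "my-app", and the region are placeholder values.
gcloud builds submit --tag gcr.io/my-project/my-app

gcloud run deploy my-app \
  --image gcr.io/my-project/my-app \
  --platform managed \
  --region us-central1 \
  --cpu 1 \
  --memory 512Mi \
  --allow-unauthenticated   # prints the service's secure HTTPS endpoint when done
```

Everything beyond this — provisioning, scaling between 0 and the maximum instance count, TLS termination — is handled by the platform.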

How can AWS Fargate enhance this application?

Fargate can eliminate the EC2 component by launching an ECS service with a Fargate launch-type task. Here the configuration and setup involve additional items beyond the vCPU and memory: networking and autoscaling have to be configured, and a load balancer is required to expose the Fargate service to the frontend.

  1. Create an ECS cluster.
  2. Create a task definition by providing the Docker image URL, vCPU and memory, and execution role.
  3. Create a service for the task definition by providing networking information such as the VPC, subnets,… and autoscaling settings. Attach a load balancer to route incoming traffic.

The steps above create a serverless infrastructure using Fargate. Additionally, use AWS Certificate Manager (ACM) to attach an SSL certificate to the load balancer to secure network communication.
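The three steps could be sketched with the AWS CLI roughly as follows; every name, account ID, ARN, subnet, and security-group ID below is a placeholder assumption:

```shell
# 1. Create an ECS cluster (cluster name is a placeholder).
aws ecs create-cluster --cluster-name demo-cluster

# 2. Register a Fargate task definition with the image URL, vCPU/memory, and execution role.
aws ecs register-task-definition \
  --family demo-task \
  --requires-compatibilities FARGATE \
  --network-mode awsvpc \
  --cpu 256 --memory 512 \
  --execution-role-arn arn:aws:iam::111122223333:role/ecsTaskExecutionRole \
  --container-definitions '[{"name":"app","image":"111122223333.dkr.ecr.us-east-1.amazonaws.com/demo:latest","portMappings":[{"containerPort":8080}]}]'

# 3. Create the service with networking and a load balancer target group attached.
aws ecs create-service \
  --cluster demo-cluster \
  --service-name demo-service \
  --task-definition demo-task \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc12345],securityGroups=[sg-0abc12345],assignPublicIp=ENABLED}' \
  --load-balancers 'targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/demo/0123456789abcdef,containerName=app,containerPort=8080'
```

Note how much more setup this involves compared with the single Cloud Run deploy: the VPC, subnets, security groups, target group, and load balancer all have to exist beforehand.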

Cost — The most important aspect:

There is a 70% reduction in costs when using Cloud Run compared with the existing server-based infrastructure on the cloud. Cloud Run offers 2 million free requests each month, after which the cost is $0.40 per million requests. Hence our application's costs come down, as its usage stays within the free limits. The complete Cloud Run pricing table can be found here.
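A quick back-of-the-envelope check of why the case-study application stays inside the free tier, using its usage numbers (30 sessions/day, 100 API calls per session, roughly 30 days a month):

```shell
# 30 user sessions/day * 100 API calls/session * ~30 days/month
monthly_requests=$(( 30 * 100 * 30 ))
echo "$monthly_requests"   # 90000 — far below the 2 million free requests per month
```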

The surge in AWS Fargate pricing comes from its service model, which keeps a task (container) always running behind a load balancer. AWS Fargate does not yet support scaling to 0.

There is an open GitHub issue on the AWS Containers Roadmap — a proposal to provide managed Knative as a competitor to Cloud Run.

Here’s a calculation of how the cost may grow for the same application use case considered above: Cloud Run costs around $2.90 per month for a million requests and $33.14 per month for 10 million requests.

This definitely unlocks a lot of capability within the serverless approach and fills a void that AWS Lambda couldn't, i.e., running Docker containers fully managed, on demand.

This is not over yet! Here’s something more interesting!

Serverless Transformer Kit:

With all the observations above on serverless, I have created the Serverless Transformer Kit (STK) to create or transform your application infrastructure on Google Cloud with Cloud Run + Cloud SQL + Cloud Build for CI/CD. All the infrastructure resources are provisioned using Terraform.

STK works on existing three-tier applications that use Docker containers.


The architecture above shows the resources that STK creates. The CI/CD can also be skipped, and Cloud Run used on its own for experimentation.

Here’s the GitHub repository to access the STK Code

Three Simple Steps:

Step 0:

Create a Google Cloud account and set up a new project.
Note down the Project ID and Project Number, as these will be used later.
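If you prefer the CLI over the Console, the project setup can be sketched as follows; the project ID is a placeholder:

```shell
# Hypothetical project ID; replace with your own.
gcloud projects create my-stk-project
gcloud config set project my-stk-project

# Prints the Project Number needed in Step 2.
gcloud projects describe my-stk-project --format='value(projectNumber)'
```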

Step 1:

From the Cloud Console, navigate to the Cloud Build service page.
Click Triggers > Connect Repository.
Select the GitHub or Bitbucket option and authenticate to grant access to your repository.

Note: This creates a mirrored repository in Cloud Source Repositories, a Git service managed by Google Cloud.

Step 2:

Use the gcp-cloud-build-tf folder from the downloaded repo to create the CI/CD.

terraform init
terraform plan -var repo_name=<SOURCE_REPO_NAME> -var project_number=<PROJECT_NUMBER> -var infra_tf_bucket=<GCS_BUCKET_NAME> -out plan
terraform apply plan

Note: Use the PROJECT_NUMBER here. This step creates a Cloud Build trigger for the mirrored Cloud Source repository and attaches the roles the Cloud Build service account needs to provision Cloud Run and Cloud SQL resources.

Step 3:

Copy the infra folder and cloudbuild.yaml from the downloaded repo to your application repository.
Push the code changes to your repository.

Note: The application should contain a Dockerfile. Pushing the code triggers a build, and Cloud Build executes the commands stated in the cloudbuild.yaml file: 1. performs a Gradle build, 2. creates a Docker image and pushes it to GCR, 3. executes terraform apply.
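Run locally, the three build actions amount to something like the following sketch; the image name is a placeholder, and in practice Cloud Build runs these steps itself from cloudbuild.yaml:

```shell
# 1. Gradle build of the application.
./gradlew build

# 2. Build the Docker image and push it to GCR (placeholder project/image names).
docker build -t gcr.io/my-project/my-app .
docker push gcr.io/my-project/my-app

# 3. Apply the Terraform in the infra folder to (re)deploy Cloud Run + Cloud SQL.
cd infra && terraform apply -auto-approve
```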

That’s It!

Check the Cloud Build page in the Console for the build progress. Once the build is complete, you can get the Cloud Run service URL from the output section.

Things to Consider:

No approach is ideal. The selection has to be determined by the requirements of your application. Here are a few things to consider before adopting the solution we discussed:

  1. With Cloud Run starting containers on demand comes the problem of cold starts. A cold start is the lag in response to the first API call after a long period of no traffic — essentially the container startup time. Container startup times can be optimized using various techniques.
  2. Fargate can be a better fit for predictable loads and batch processing, as it supports up to 4 vCPU and 30 GB of task memory, whereas Cloud Run is limited to 2 vCPU and 2 GB of memory in fully managed mode; higher limits are available by running Cloud Run on a GKE cluster.
  3. If this approach can't be adopted for production for business reasons, it can still be used in lower environments such as DEV and QA to cut costs significantly. It is also useful for developers working on proofs of concept or research who want to focus on the application and let STK take care of the infrastructure.