Building a Multi-Tenant SaaS Solution with Amazon EKS
As more businesses adopt the software-as-a-service (SaaS) delivery model, many are building their solutions on Amazon Elastic Kubernetes Service (Amazon EKS). For SaaS companies, the programming model, cost efficiency, security, deployment, and operational characteristics of EKS make it a compelling fit.
The EKS model also introduces a number of new multi-tenant considerations for SaaS architects and developers. You must now think about how the core SaaS concepts of isolation, onboarding, and identity are realized in an EKS environment.
To give developers and architects a working example of these concepts in action, we have built a sample EKS SaaS solution. It demonstrates multi-tenant architecture and design best practices in an EKS setting.
In this post, we'll walk through the main architectural elements of the EKS sample architecture, looking at how to automate tenant onboarding, manage tenant workloads, and isolate tenants within an EKS cluster.
The solution includes a complete, working sample SaaS application as well as an administrative console for managing your SaaS environment.
Selecting a Model for Isolation
There are several ways to design and build a multi-tenant SaaS system on Amazon EKS, each with its own set of tradeoffs. EKS gives you a range of options that can affect implementation effort, operational complexity, and cost efficiency.
For example, some teams might choose to isolate their tenants with a cluster-per-tenant strategy. This offers a simple isolation story, but it can be very costly.
Others might choose a shared compute model, where all tenants are intermingled within the same cluster and namespace and isolation is handled at the application level. As you can imagine, this is quite cost- and operationally efficient, but it represents a weaker isolation model.
Namespace-per-tenant isolation sits between these two extremes: every tenant is deployed into the same cluster, but each is isolated from the others using namespaces and a range of native and add-on Kubernetes constructs. This type of model, where tenant resources are not shared with other tenants, is referred to as a "silo" model.
This namespace-per-tenant configuration strikes a good balance between cost efficiency and isolation, which is why we chose it for the EKS SaaS sample solution. The choice shapes every aspect of the system, including tenant onboarding, tenant isolation, and tenant traffic routing.
Before digging into the specifics of the EKS SaaS solution, let's look at the high-level components of the architecture this sample uses. Each of the layers that make up the EKS SaaS solution is shown below.
First, the EKS SaaS experience includes three distinct applications, corresponding to the application types found in many SaaS environments. The first, the landing page, is the public-facing page where customers can find and sign up for our service. Visitors to this site can start the registration process and create a new tenant in the system.
Next is the sample commerce application: a simple e-commerce application that provides basic order and product functionality, backed by tenant-specific microservices running in the EKS cluster. This is where your multi-tenant SaaS application would live.
The last application is the SaaS provider administrative console, with access likewise managed by Amazon Cognito. As the SaaS provider, you would use this application to manage and configure your tenants' settings and policies. All of these applications interact with services running inside an EKS cluster.
Two different categories of services run in this cluster. First, the shared services layer represents all the common services needed to support the operational, management, identity, onboarding, and configuration capabilities of a SaaS environment.
The other category of services belongs to the managed tenant environments. The services running here represent the deployed environments for each tenant, running the microservices of our application. Each tenant has its own separate deployment, and we'll explore the rationale for that architectural decision below.
What Is Provided
Now that we're familiar with the high-level architecture, let's examine what gets provisioned when the EKS SaaS solution is installed.
Before we can think about tenants and applications, we must deploy the baseline version of our environment.
At the core of this infrastructure is the EKS cluster that hosts our solution, along with the supporting infrastructure it needs: Amazon CloudFront distributions, AWS Identity and Access Management (IAM) roles, and backing Amazon Simple Storage Service (Amazon S3) buckets.
The EKS cluster, together with its Amazon Virtual Private Cloud (VPC), subnets, and network address translation (NAT) gateways, is deployed with the eksctl CLI. This command-line tool simplifies creating the AWS CloudFormation stacks needed to stand up a ready-to-use EKS cluster. This cluster runs both the tenant and shared environments of your EKS SaaS solution.
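As a sketch, a cluster with similar characteristics can be described declaratively for eksctl. The names, region, and sizes below are illustrative placeholders, not the solution's actual settings:

```yaml
# cluster.yaml -- minimal eksctl configuration (illustrative values only)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-saas        # placeholder cluster name
  region: us-east-1

# eksctl provisions the VPC, subnets, and NAT gateway automatically
vpc:
  nat:
    gateway: Single

nodeGroups:
  - name: workers
    instanceType: m5.large
    desiredCapacity: 3
    privateNetworking: true   # keep worker nodes in private subnets
```

Running `eksctl create cluster -f cluster.yaml` generates the CloudFormation stacks for the VPC, subnets, NAT gateway, and the cluster itself.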
Although we used standard practices to configure and deploy the cluster, you should consider how to further secure the cluster's network based on the specific requirements of your environment. The Amazon EKS documentation has more guidance on building a fully secured cluster.
Once the cluster is set up, we deploy several Kubernetes objects into it, including the open-source NGINX ingress controller and ExternalDNS.
The ingress controller plays a key role in routing multi-tenant requests from client applications. It is complemented by ExternalDNS, which automatically creates Amazon Route 53 DNS entries for any subdomains referenced in our ingress resources.
In our case, that is simply api.DOMAIN.com, where DOMAIN is a custom domain you configure at deployment time.
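To illustrate how these pieces fit together, an ingress resource along the following lines would cause ExternalDNS to publish the api subdomain. The tenant namespace, path, and service names here are hypothetical, not the solution's actual resource names:

```yaml
# Illustrative ingress for one tenant's product service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: product-service
  namespace: tenant-a                 # hypothetical tenant namespace
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: api.DOMAIN.com            # ExternalDNS creates the Route 53 record for this host
      http:
        paths:
          - path: /tenant-a/products  # path-based routing into the tenant namespace
            pathType: Prefix
            backend:
              service:
                name: product-service
                port:
                  number: 80
```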
The baseline architecture also includes Amazon S3, which hosts each of the web applications in this solution as a static website, served through CloudFront distributions with custom domain names. Each website is built and copied to its respective S3 bucket at deployment time.
Along with the CloudFront distributions and S3 buckets, the baseline stack deploys a wildcard certificate for the specified custom domain. This certificate provides HTTPS for the three web applications described above, as well as for the public-facing shared and tenant-specific web services.
Configuring this baseline environment also deploys the shared services of our SaaS system (registration, tenant management, and user management). These services let us onboard and manage tenants as well as manage admin and tenant users.
The shared services rely on various AWS resources. Tenant management stores and manages tenant data in an Amazon DynamoDB table, and user management works with users stored in Cognito.
Each microservice running in EKS is implemented with the Java Spring Boot framework. During deployment of the baseline stack, an Amazon Elastic Container Registry (Amazon ECR) repository is created for each of the system's microservices, and each service is built and pushed to its repository at deploy time.
The final step of the baseline configuration adds Amazon Route 53 DNS records for two of the three web applications: the admin console at admin.DOMAIN.com and the landing page at www.DOMAIN.com. The sample e-commerce application does not get a Route 53 alias record until a tenant is onboarded.
With the baseline infrastructure in place, you can start to think about the infrastructure needed to support tenants as they sign up for your SaaS service.
The architecture we've chosen implements isolation with a namespace-per-tenant model, which requires deploying separate resources for each tenant. We'll cover this isolation model in depth below.
The architecture shown above illustrates how our application's microservices land in this baseline infrastructure. They are deployed into the same cluster that hosts the environment's shared services. The key difference is that none of these namespaces or microservices are created until a tenant actually onboards.
Of course, there are more moving parts involved in making these microservices functional. The following offers a closer look at the components of our tenant environments.
This basic flow includes the resources the SaaS applications use to access each of the tenant namespaces. Tenants are authenticated and routed into the environment using separate user pools and custom domains.
Moving downstream, the solution uses an NGINX ingress controller to route traffic to the tenant namespaces. The recently announced AWS Load Balancer Controller is another option that could be used here.
Our order and product services represent the backend of the sample e-commerce application. Although these services are launched from ECR repositories shared by all tenants, each tenant receives its own copy of the microservices, configured with tenant-specific settings at deploy time.
The NGINX ingress resources and all tenant-specific assets, including the microservices, are deployed into the tenant's namespace. For additional security, we also apply pod and network security policies, along with an IAM policy for the tenant's service account.
The "machine" that orchestrates the configuration and deployment of these objects into our EKS cluster is the set of AWS Code* services shown at the bottom of Figure 4. The Code* projects are defined as CloudFormation resources with placeholder parameters for tenant-specific data.
On the right are our DynamoDB tables. In the EKS SaaS solution, we wanted to demonstrate several data-partitioning scenarios. The order microservice, for instance, uses a siloed storage model with a separate DynamoDB table for each tenant; a new order table is created each time a tenant is added to the system.
The product microservice, by contrast, uses a pooled partitioning model: each tenant's data is comingled with other tenants' data in the same table and retrieved with a partition key that includes the tenant's unique identifier.
IAM roles prevent any cross-tenant access to the order tables, and they also control access to the product table. For the pooled table, we've applied IAM conditions that scope access to the partition keys unique to each tenant.
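A tenant-scoped policy for the pooled product table might look like the following CloudFormation sketch. The resource names and the TENANT_ID placeholder are illustrative; the actual solution injects tenant data through its own templates:

```yaml
# Sketch: IAM policy limiting access to one tenant's items in the pooled table
TenantProductTablePolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            - dynamodb:GetItem
            - dynamodb:PutItem
            - dynamodb:Query
          Resource: !GetAtt ProductTable.Arn
          Condition:
            # Only items whose partition key begins with this tenant's ID
            ForAllValues:StringLike:
              dynamodb:LeadingKeys:
                - "TENANT_ID*"
```

The `dynamodb:LeadingKeys` condition key restricts requests to items whose partition key matches the given pattern, which is what confines each tenant's role to its own rows.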
Onboarding New Tenants
New tenants are welcomed into the system by a seamless onboarding process that orchestrates all the services needed to get them up and running. An automated, low-friction onboarding experience is essential for SaaS companies that need a repeatable, scalable way to onboard new tenants.
Our system has many moving parts, and the onboarding process must account for all of them. First, we must create the new tenant and its administrator user. Then, the tenant's Kubernetes namespace and policies must be configured, and the application microservices deployed into that namespace.
Figure 5 shows the onboarding process. It begins when a prospective tenant fills out a sign-up form, simulating the page you'd typically present as a first impression. Onboarding can also be initiated from the administration application; this second flow shows what the same onboarding process looks like when run internally. The key takeaway is that both paths rely on the same underlying mechanism to onboard a tenant into the system.
The following steps describe the order of events during tenant onboarding:
1. The landing page or admin application sends a request to the tenant registration service with the tenant's onboarding data.
2. The registration service calls the tenant management service to store the tenant data in Amazon DynamoDB.
3. The registration service creates a new Amazon Cognito user pool for the tenant.
4. The registration service calls the user management service to create the tenant's administration user in the newly created user pool.
5. The registration service initiates provisioning of the tenant's application services, orchestrating the deployment of their resources with AWS CodeBuild and AWS CodePipeline. This involves creating a namespace for the tenant and deploying the order and product microservices into it.
6. Tenant isolation security controls are applied at the network and data levels.
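To make the provisioning step concrete, here is a minimal sketch of the kind of Kubernetes objects the pipeline creates for a tenant. All names, labels, and the image URI are placeholders rather than the solution's actual values:

```yaml
# Sketch: namespace created for a new tenant
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
  labels:
    saas/tenant: tenant-a          # label used to target tenant-wide policies
---
# Sketch: one tenant-specific copy of the order microservice
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: tenant-a
spec:
  replicas: 1
  selector:
    matchLabels: { app: order-service }
  template:
    metadata:
      labels: { app: order-service }
    spec:
      containers:
        - name: order-service
          # image comes from the ECR repository shared by all tenants
          image: ACCOUNT.dkr.ecr.REGION.amazonaws.com/order-service:latest
          env:
            - name: TENANT_ID      # tenant-specific configuration injected at deploy time
              value: tenant-a
```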
Namespace Isolation and Beyond
Within our Amazon EKS cluster, we use a namespace-per-tenant model to create an isolation layer around each tenant and its resources.
However, a Kubernetes namespace alone does not provide a strict isolation boundary for the resources inside it. We must add further constructs to prevent any cross-tenant access.
In this example, we've used network policies and pod security policies to prevent cross-tenant access at the namespace and pod level. To further guarantee isolation, we've employed IAM Roles for Service Accounts (IRSA).
This approach scopes credentials so that a container can only obtain credentials for the IAM role associated with its own service account. It prevents a container from acquiring the credentials of a pod running in a different namespace.
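A minimal sketch of this IRSA wiring follows; the namespace, service account, and role names are hypothetical placeholders:

```yaml
# Sketch: service account annotated with a tenant-scoped IAM role
apiVersion: v1
kind: ServiceAccount
metadata:
  name: order-service
  namespace: tenant-a
  annotations:
    # IRSA: pods using this service account can assume only this role
    eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/tenant-a-order-role
```

Pods that set `serviceAccountName: order-service` then receive credentials only for this role, and pods in other namespaces cannot use it.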
Figure 6 shows how isolation is applied as each namespace attempts to access tenant resources (in this case, DynamoDB tables) as you move down the diagram. Two different flavors of isolation are applied to the order and product tables.
For the order microservice, each tenant has its own order table (a siloed storage model), and IAM policies restrict access at the table level.
The product microservice has a single table shared by all tenants. In this case, our IAM policies restrict access to the individual items in the table.
While these constructs help enforce our isolation model, you must also consider isolation at the network level. Kubernetes allows all pod-to-pod traffic by default, so all pods in an EKS cluster can freely communicate with one another, both within and across namespaces.
To prevent this cross-namespace access, we've implemented network policies with Tigera Calico, which lets us achieve network isolation by applying fine-grained network policies at the namespace level.
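As a sketch, a default-deny-style policy like the following restricts each tenant namespace to its own traffic. The namespace names and the label selecting the ingress controller's namespace are hypothetical:

```yaml
# Sketch: limit a tenant namespace to in-namespace traffic
# (plus ingress from the NGINX ingress controller's namespace)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tenant-isolation
  namespace: tenant-a
spec:
  podSelector: {}                   # applies to every pod in the namespace
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector: {}           # allow pods in this same namespace
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx   # allow the ingress controller's namespace
```

Because no other sources are listed, traffic from every other tenant namespace is dropped; Calico enforces these standard Kubernetes NetworkPolicy resources.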
Conclusion
This post has looked at some of the key considerations that can shape how you design and build a SaaS solution with Amazon EKS. As you can see, taking full advantage of the flexibility and power of EKS requires careful thought about how to realize the fundamental tenets of SaaS architecture.
Exploring the EKS SaaS sample application we've provided here will give you a better sense of the end-to-end process of building a working EKS SaaS solution on AWS. It should give you a solid head start while still letting you tailor it to the policies that fit your SaaS environment's requirements.
We encourage you to explore the solution repository for a more detailed look. There you'll find step-by-step deployment instructions, as well as a more developer-focused guide to help you understand all the moving parts of the system.
About AWS SaaS Factory
AWS SaaS Factory helps organizations at every stage of their SaaS journey, whether building new products, migrating existing applications, or optimizing SaaS solutions on AWS. Visit the AWS SaaS Factory Insights Hub for best practices and other technical and business content.
The AWS SaaS Factory team encourages SaaS builders to reach out to their account representative to learn more about engagement models.
Register to receive updates on events, resources, and news related to SaaS on AWS.