
Achieve High Availability across Clusters

With TSB and TSE, you have several options to configure your platform so that applications operate in an HA fashion across clusters. In each case, once the Platform Owner ("Platform") has prepared the platform suitably, the Application Owner ("Apps") need only deploy and publish their service from two or more clusters to take advantage of HA.

Options for High Availability

  1. Option 1: Use Tetrate's Edge Gateway solution

    Use Tetrate's Edge Gateway to front-end several clusters and distribute traffic across them.

  2. Option 2: Use Tetrate's AWS Controller with AWS Route 53

    Use the AWS Controller to configure Route 53 automatically for services published from TSE or TSB.

  3. Option 3: Manually configure a GSLB Solution

    Manually configure a GSLB solution to distribute traffic across cluster entry-points and to perform health checks.

Before You Begin

When load-balancing across clusters, you will likely need to rely on a DNS GSLB solution to distribute traffic to the entry-points (edge gateways, etc.) for each service. In this case, you need to consider how health checks should function.

Detailed, per-application health checks may be required once applications are deployed on the clusters, but infrastructure-level health checks are a good starting point. The purpose of the health checks is two-fold:

  • Verify that the workload cluster is functioning and reachable: for this, it's often sufficient to run a simple 'canary' service such as httpbin and verify that it is reachable via each entry point (see the sketch after this list).
  • Determine the optimal way to reach each workload cluster: Edge and Internal Load Balancers are often configured to load-balance across local and remote proxies or clusters, so that they can always satisfy a request, even if it means using a remote target. For GSLB health-check requests, configure each hop to use only the next local hop, failing if it's unavailable, so that the health check succeeds only for entry points that use fast local paths.
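
For the first check, a minimal canary sketch is shown below. It assumes a standard Kubernetes cluster and the public kennethreitz/httpbin image; the namespace, names and ports are illustrative and should be adapted to your environment.

```yaml
# Canary workload used only for infrastructure health checks.
# The namespace, labels and image are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: canary
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
  namespace: canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
      - name: httpbin
        image: docker.io/kennethreitz/httpbin
        ports:
        - containerPort: 80   # httpbin listens on port 80 in this image
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  namespace: canary
spec:
  selector:
    app: httpbin
  ports:
  - name: http
    port: 8000
    targetPort: 80
```

Publish the canary via each entry point in the same way as a normal application, then point your GSLB health checks at a simple httpbin path such as /status/200.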

Option 1: Configure Tetrate's Edge Gateway

The Edge Gateway solution is described in detail in the HA Design Guide. With Edge Gateway, you deploy edge load balancers in workload or dedicated clusters. The purpose of these gateways is to receive traffic and forward (load-balance) it to working Ingress Gateways for the target service.

Background Information

Edge Gateways are managed by the Tetrate Platform with stability and reliability in mind. They are infrequently updated and operate with as simple a configuration as appropriate. They are often deployed in dedicated K8s clusters to minimise the possibility of noisy-neighbour activity or interruption from adjacent workloads. Should you wish to deploy multiple Edge Gateways for maximum high availability, you can use a basic GSLB solution to distribute traffic across the Edge Gateways.

Check out the following background resources:

Onboarding a New Application

When you onboard a new application with Tetrate's Edge Gateway, there are multiple touchpoints where you need to configure the traffic flow:

  • Configure the Gateway resource for the Ingress Gateways on the Workload Cluster(s), to publish the application from the cluster. Refer to the Deploy a Service content for details; a sketch of this step follows below. It's generally not necessary to configure DNS for the Workload Cluster instances of an application.
  • Configure the Gateway resource for the Edge Gateways on the Edge Cluster(s), to publish the application from the Edge Gateway and distribute traffic to the functioning Workload Cluster instances. Refer to the Tetrate HA Design Guide for details on how this can be done, and for the High-Availability considerations that may apply.
  • Configure DNS for the application FQDN, to direct traffic to functioning instances of the Edge Gateway. This is typically performed using a third-party, DNS-based GSLB service such as one provided by your cloud provider (for example, AWS Route 53) or a cloud-neutral solution such as NS1, CloudFlare or Akamai.

The specific steps are determined by the Edge Gateway configuration you have selected and the nature of the DNS GSLB solution in use.
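
As an illustration of the first step, the sketch below publishes an application from a workload cluster. It assumes the cluster's Ingress Gateways are driven by the Kubernetes Gateway API and that a suitable GatewayClass is installed; the class name, hostnames, namespaces and ports are all placeholders, so follow the Deploy a Service guide for the exact resources your platform expects.

```yaml
# Hypothetical example: publish 'bookinfo' from a workload cluster using
# the Kubernetes Gateway API. The GatewayClass name, hostname and backend
# details are placeholders for this sketch.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-ingress
  namespace: bookinfo
spec:
  gatewayClassName: tse-gateway   # assumed class name; use the class your platform provides
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    hostname: bookinfo.example.com
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
  namespace: bookinfo
spec:
  parentRefs:
  - name: bookinfo-ingress
  hostnames:
  - bookinfo.example.com
  rules:
  - backendRefs:
    - name: productpage
      port: 9080
```

The Edge Gateway configuration for the second step, and the DNS configuration for the third, are covered by the HA Design Guide and your GSLB provider's documentation respectively.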

Option 2: Use Tetrate's AWS Controller with AWS Route 53

When deploying workloads or Edge Gateways on Amazon EKS, the Tetrate platform can automatically maintain Route 53 DNS entries that reflect the intent of the Application Owner or Platform Owner with respect to exposed applications and services. Tetrate's AWS Controller monitors the Gateway resources and identifies the hostname DNS values within. Provided that matching Route 53 hosted zones exist and the Platform Owner has permitted access, the AWS Controller will then configure and maintain the necessary DNS entries so that clients can access the workloads through the gateways.

Background Information

Check out the following background resources:

Platform: Prepare the Cluster

To make this capability available to your Application Owners, you need to do three things:

  1. Create the Route 53 Hosted Zones

    Create the necessary Route 53 Hosted Zones for the DNS entries (domains) you plan to use, e.g. .tetratelabs.io

  2. Deploy the AWS Controller on each cluster

    Enable an appropriate IAM service account on each cluster, and deploy the Tetrate AWS Controller. You can use the spec.providerSettings.route53.domainFilter setting to limit which of the Route 53 Hosted Zones can be managed from the cluster (a configuration sketch follows this list).

    We strongly recommend installing the AWS Load Balancer Controller on each cluster, for the best integration between the Tetrate Platform, the Ingress Gateways and the Route 53 configuration and health checks.

  3. Explain what an Application Owner needs to know

    Share the details an Application Owner needs to know to use the Route 53 automation.
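
Returning to step 2, the fragment below shows where the domainFilter setting sits. Only the spec.providerSettings.route53.domainFilter path is taken from this guide; the value shown, its exact type and the surrounding resource are assumptions, so consult the AWS Controller installation reference for the full schema.

```yaml
# Fragment of the Tetrate AWS Controller configuration (not a complete resource).
# Only the spec.providerSettings.route53.domainFilter path comes from this guide;
# the value and its type are assumptions to adapt from the installation reference.
spec:
  providerSettings:
    route53:
      domainFilter: tetratelabs.io   # limit this cluster to the tetratelabs.io hosted zone
```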

Option 3: Manually configure a GSLB solution

You also have the option of using a third-party GSLB solution to distribute traffic across Tetrate-managed endpoints, either Edge Gateways or Ingress Gateways. Alternatively, a CDN can provide a front-end for the collection of Edge Gateways and Ingress Gateways.
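
As one concrete possibility, the sketch below uses Route 53 as the GSLB, expressed as a CloudFormation fragment: a health check per entry point, plus weighted records for the application hostname. The hosted zone, hostnames, health-check path and weights are placeholders, and other providers such as NS1, CloudFlare or Akamai offer equivalent constructs.

```yaml
# Hypothetical Route 53 GSLB configuration (CloudFormation fragment).
# Zone, hostnames, path and weights are placeholders; repeat the pair of
# resources for each additional entry point with a distinct SetIdentifier.
Resources:
  Edge1HealthCheck:
    Type: AWS::Route53::HealthCheck
    Properties:
      HealthCheckConfig:
        Type: HTTPS
        FullyQualifiedDomainName: edge1.example.com   # entry point for the first edge/cluster
        ResourcePath: /status/200                     # e.g. the httpbin canary path
        Port: 443
        RequestInterval: 30
        FailureThreshold: 3
  AppRecordEdge1:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.
      Name: app.example.com.
      Type: CNAME
      TTL: "60"
      SetIdentifier: edge-1
      Weight: 50
      HealthCheckId: !Ref Edge1HealthCheck
      ResourceRecords:
      - edge1.example.com
```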

Apps: Deploy an Application

Your administrator (Platform Owner) will explain what you need to know to deploy an application across clusters, configure health checks and test high availability. The specifics depend on the approach they have taken to prepare the platform.

With suitable configuration, you should have good control over how your services are published and how traffic is shared across clusters. This will allow you to create an HA deployment, manage health checks and operationalize common tasks such as draining a cluster in preparation for an application upgrade.