Tetrate Service Bridge (version: next)

Introducing High Availability Clusters with Tetrate Edge Gateway

This Guide explains how to configure a High Availability deployment that spans multiple workload clusters in multiple cloud regions.

Introducing Tetrate's Edge Gateway solution

Tetrate's Edge Gateway is a front-end proxy that is deployed to manage traffic and distribute it across multiple back-end workload clusters. It is tightly integrated with the Tetrate Management Plane, offering a simple and effective user experience, and it scales to very large numbers of clusters, regions, and levels of traffic.

In particular, the Edge Gateway deployment pattern:

  • Supports both public and private IP addresses, bridging from public to private if necessary. An Edge Gateway deployment can reduce the number of public IP endpoints, which in turn reduces the attack surface and potentially the cost
  • Consolidates the functionality of the workload cluster Ingress Gateway, bringing it forward and closer to the client. Performing rate limiting or authorization at the Edge reduces the load on the Workload clusters
  • Secures the entire data-path, from the very first Edge Gateway to the destination service, using Tetrate's mTLS and tunneling capabilities
  • Can be optimized to reduce failover time and eliminate unnecessary hops in a failover scenario

Making the Edge Gateways highly available

The Edge Gateway pattern is a two-tier pattern, where the Edge Gateways provide a first tier of load-balancing in front of a tier of Ingress Gateways in Workload clusters.

The Edge Gateways manage failover between the Workload clusters, and you can use an external solution, such as a DNS-based GSLB, to manage the (less frequent) [failover of Edge Gateways](edge-failover).

In this Guide

This guide explains how to serve critical services from multiple regions, with a configuration optimized for high availability and scalability. The initial design in this solution brief uses the following architecture:

  • Two separate cloud regions, potentially spanning multiple different cloud providers
  • One 'Edge Gateway' receiving internet traffic
  • Two 'Workload Clusters', one in each region, hosting the named service
  • A single named service accessed through a single DNS name
  • A third-party DNS service to distribute traffic to the Edge Gateway
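To make the architecture above concrete, the mapping from the single DNS name to the two Workload Clusters can be expressed as a Tier1Gateway resource in the Management Plane. The sketch below is illustrative only: the hostname `app.example.com`, the cluster names, the namespace, and the certificate secret are placeholder assumptions, and field names may differ between TSB versions, so consult the TSB API reference for the exact schema.

```yaml
# Hypothetical sketch of an Edge (Tier-1) Gateway configuration.
# All names and values below are placeholders for illustration.
apiVersion: gateway.tsb.tetrate.io/v2
kind: Tier1Gateway
metadata:
  name: edge-gateway
spec:
  workloadSelector:
    namespace: edge              # namespace hosting the Edge Gateway proxies
    labels:
      app: edge-gateway
  externalServers:
    - name: app
      hostname: app.example.com  # the single DNS name for the named service
      port: 443
      tls:
        mode: SIMPLE
        secretName: app-certs    # TLS certificate for the external hostname
      clusters:                  # the two Workload Clusters, one per region
        - name: cluster-region-1
          weight: 50
        - name: cluster-region-2
          weight: 50
```

With equal weights, traffic is split evenly across the two regions; shifting the weights (or letting the Edge Gateway fail over when one cluster becomes unhealthy) keeps the service reachable through the same DNS name.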

Begin with the Getting Started documentation to create this example deployment.