Deploying Gateways

In the Kubernetes Gateway API, ownership is split between Gateways and Routes: the platform team administering TEG typically owns Gateways, and app teams own Routes. Sometimes the platform team delegates ownership of Gateways to individual app teams too — we cover that as well.

Deployment Model

As we discussed in the Gateway API introduction, a deployment of Envoys is one-to-one with a Gateway resource. Alongside deploying Envoy itself, Envoy Gateway also instantiates a LoadBalancer-type Service to make that Envoy deployment accessible outside the cluster. There are two primary models for Gateway ownership: shared, or ingress-per-team/ingress-per-app. Which one we pick is a tradeoff of operability versus risk tolerance.

Having many Gateways, each with its own Load Balancer and public address, can be challenging to manage, but shared Gateways are shared failure domains and present an opportunity for one team to cause an outage for others. In practice, we see most organizations end up with a mix: a shared gateway serves most application traffic (say, 80%), and a small number of highly critical applications (say, 20%) get their own gateways.

Fortunately, Envoy Gateway is flexible enough to support the full range of deployment models.

Setting up the GatewayClass

Before we can deploy any Envoys, we need to tell Envoy Gateway what those Envoys should look like. We use the GatewayClass API for this. A GatewayClass can be very simple:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller

Quite a lot of settings about Envoy's deployment can be configured at the GatewayClass level — we cover that at length in the Configuring Envoy Deployment Parameters section below.
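
For example, Envoy Gateway lets a GatewayClass reference an EnvoyProxy resource through spec.parametersRef to customize the managed deployment. The following is a minimal sketch: the field names follow the gateway.envoyproxy.io/v1alpha1 EnvoyProxy API, and the resource name custom-proxy-config and the replica count are purely illustrative.

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
  # Point the class at deployment parameters defined in an EnvoyProxy resource.
  parametersRef:
    group: gateway.envoyproxy.io
    kind: EnvoyProxy
    name: custom-proxy-config
    namespace: envoy-gateway-system
---
apiVersion: gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: custom-proxy-config      # illustrative name
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        replicas: 2              # illustrative: run two Envoy replicas per Gateway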

Deploying a Gateway into the Cluster

The Detailed Installation Guide covers installing TEG and explains how writing a Gateway resource results in a set of Envoys being deployed. The principle is the same here: whenever we deploy a Gateway resource in the cluster, Envoy Gateway deploys a set of Envoys that it manages in its own namespace.

We can apply a Gateway:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: default
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: http
    port: 32227
    protocol: HTTP

and see the resulting Deployment, Pods, and Service in Kubernetes:

$ kubectl get deploy,pod,service --namespace envoy-gateway-system
NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/envoy-default-default-64656661   1/1     1            1           4s
...

NAME                                                  READY   STATUS    RESTARTS   AGE
pod/envoy-default-default-64656661-8657ff9786-hfl7w   1/1     Running   0          4s
...

NAME                                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)           AGE
service/envoy-default-default-64656661   LoadBalancer   10.98.17.15   127.0.0.1     32227:31690/TCP   4s
...
Envoy Gateway deploys into envoy-gateway-system

Note that Envoy Gateway deploys Envoy and its Service in the envoy-gateway-system namespace and not in the application namespace. This streamlines the permissions EG requires: otherwise it would need permission to create Deployments and Services in every app namespace that wants to manage a Gateway.
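
The Gateway resource itself, and its status, still live in the application namespace, so we can check there that the Gateway has been accepted and assigned an address. The output below is illustrative:

$ kubectl get gateway --namespace default
NAME      CLASS   ADDRESS     PROGRAMMED   AGE
default   eg      127.0.0.1   True         4s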

Whether that Gateway is shared or not comes down to how application developers write Routes to target it. Note the allowedRoutes parameter — this, combined with Kubernetes RBAC governing which namespaces application developers can publish their Routes in, effectively allows you to control who can attach configuration to each Gateway; see the sketch below.
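
As an illustration, a shared Gateway could open its listener to Routes from any namespace carrying an agreed label rather than only its own namespace. This is a sketch, not configuration from this guide: the Gateway name shared and the shared-gateway-access label are assumptions you would replace with your own conventions.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: shared                # hypothetical shared Gateway
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
  - name: http
    port: 32227
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Selector        # admit Routes from namespaces matching the selector
        selector:
          matchLabels:
            shared-gateway-access: "true"   # assumed label convention

Combined with RBAC controlling who can label namespaces and create Routes in them, this keeps attachment to a shared Gateway an explicit, auditable decision.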

Deploying Multiple Gateways

Since Envoy deployments are one-to-one with Gateway resources, deploying more separate Envoy instances is as simple as authoring additional Gateway resources. We can easily create two more and have three Gateways in our cluster.

First we'll create a new Gateway just like before:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: two
  namespace: default
spec:
  gatewayClassName: eg
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: http
    port: 32227
    protocol: HTTP

So far all of our examples have deployed into the default namespace, but Envoy Gateway can work across all namespaces: there's no requirement to use default. To show this, we can create a new namespace and apply a Gateway there:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: example
  name: example
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: three
  namespace: example
spec:
  gatewayClassName: eg
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: http
    port: 32227
    protocol: HTTP

Regardless of the namespace we create the Gateway resource in, Envoy Gateway will create and manage the Envoy deployment in the envoy-gateway-system namespace:

$ kubectl get deploy,pod,service --namespace envoy-gateway-system
NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/envoy-default-default-64656661   1/1     1            1           1m
deployment.apps/envoy-default-two-64656661       1/1     1            1           12s
deployment.apps/envoy-example-three-6578616d     1/1     1            1           4s
...

NAME                                                  READY   STATUS    RESTARTS   AGE
pod/envoy-default-default-64656661-8657ff9786-hfl7w   1/1     Running   0          1m
pod/envoy-default-two-64656661-75bb6fb5d5-5thj7       1/1     Running   0          12s
pod/envoy-example-three-6578616d-78d56d48cf-k4kgk     1/1     Running   0          4s
...

NAME                                     TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/envoy-default-default-64656661   LoadBalancer   10.98.17.15     127.0.0.1     32227:31690/TCP   1m
service/envoy-default-two-64656661       LoadBalancer   10.103.33.83    127.0.0.1     32227:30257/TCP   12s
service/envoy-example-three-6578616d     LoadBalancer   10.111.175.76   127.0.0.1     32227:32020/TCP   4s
...

That's it! We have multiple Envoys running in our cluster, and a clue about how to start managing them: restricting where the Routes that attach to them can be published. Read our section on managing app developer access for more guidance on avoiding shared-fate outages, and read the next section to find out how to tune the configuration of the Envoys that EG deploys and manages. To round out the picture, the sketch below shows the app-team side of the contract.
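
This is a sketch of an HTTPRoute attaching to the default Gateway we deployed above — the hostname, backend Service name, and port are placeholders, not values from this guide:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: example-app          # hypothetical app-team Route
  namespace: default         # must be a namespace the Gateway's allowedRoutes admits
spec:
  parentRefs:
  - name: default            # target the Gateway named "default" in this namespace
  hostnames:
  - "app.example.com"        # placeholder hostname
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: example-app      # placeholder backend Service
      port: 8080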