Deploy the Istio Gateway
The Istio Gateway should be deployed separately from the rest of the Istio installation, because the gateway architecture requires additional planning work.
These instructions explain how the Istio Gateway can be deployed once that architecture work is complete. The gateway receives external traffic and forwards it to the applications running inside the EKS cluster.
To deploy an Istio Gateway in an EKS cluster, the following objects are required:
- Envoy Gateway Deployment
- Envoy Gateway Pod
- Envoy Gateway Kubernetes Service
These are hosted in the EKS cluster. Creating the Kubernetes Service (type LoadBalancer) triggers the creation of an AWS Classic Load Balancer or Network Load Balancer, depending on the EKS cluster configuration.
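For example, with the legacy in-tree AWS cloud provider, the load balancer type can be selected through the service.beta.kubernetes.io/aws-load-balancer-type annotation on the Service. The snippet below is a minimal sketch, assuming the in-tree controller manages Services in the cluster; clusters running the AWS Load Balancer Controller use a different annotation scheme, so verify which controller applies to your cluster:

apiVersion: v1
kind: Service
metadata:
  name: tid-ingressgateway
  namespace: tid-ingress-ns
  annotations:
    # Request an AWS Network Load Balancer; without this annotation the
    # in-tree controller provisions a Classic Load Balancer by default
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"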
The example below is taken from the Istio 'Deploying a Gateway' documentation, with slight changes:
We begin with our Namespace definition:
apiVersion: v1
kind: Namespace
metadata:
  name: tid-ingress-ns
The Service object will trigger the creation of the AWS load balancer and will allow calls to reach the gateway pod:
apiVersion: v1
kind: Service
metadata:
  name: tid-ingressgateway
  namespace: tid-ingress-ns
spec:
  type: LoadBalancer
  selector:
    istio: tid-ingress-gw
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
The gateway Deployment should then be placed in the namespace we just created:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tid-ingressgateway-gw
  namespace: tid-ingress-ns
spec:
  selector:
    matchLabels:
      istio: tid-ingress-gw
  template:
    metadata:
      annotations:
        # Select the gateway injection template (rather than the default sidecar template)
        inject.istio.io/templates: gateway
      labels:
        # Set a unique label for the gateway. This is required to ensure Gateways can select this workload
        istio: tid-ingress-gw
        # Enable gateway injection. If connecting to a revisioned control plane, replace with "istio.io/rev: revision-name"
        sidecar.istio.io/inject: "true"
    spec:
      containers:
      - name: istio-proxy
        image: auto # The image will automatically update each time the pod starts.
A Role and RoleBinding may also be required. These allow the gateway to access Kubernetes Secrets, such as those used to store TLS certificates:
# Set up roles to allow reading credentials for TLS
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tid-ingressgateway-sds
  namespace: tid-ingress-ns
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tid-ingressgateway-sds
  namespace: tid-ingress-ns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tid-ingressgateway-sds
subjects:
- kind: ServiceAccount
  # The gateway Deployment does not set a serviceAccountName, so it runs as "default"
  name: default
  namespace: tid-ingress-ns
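For context, an Istio Gateway resource is what ultimately binds listeners to this workload, selecting it by the istio: tid-ingress-gw label set in the Deployment above. The following is a minimal sketch, not part of the manifest set above; the resource name and the wildcard host are illustrative:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: tid-gateway
  namespace: tid-ingress-ns
spec:
  # Matches the pod label set in the gateway Deployment
  selector:
    istio: tid-ingress-gw
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"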
To confirm that the steps worked as expected, apply the manifests above (for example, with kubectl apply -f) and query the tid-ingress-ns namespace:
kubectl get pods -n tid-ingress-ns
The result should show the ingress gateway pod running.
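For illustration, the output will resemble the following (the generated pod-name suffix, restart count, and age will differ):

NAME                                     READY   STATUS    RESTARTS   AGE
tid-ingressgateway-gw-7d4f9c6b5d-x2k8q   1/1     Running   0          2m

You can also retrieve the DNS name of the provisioned load balancer from the EXTERNAL-IP column of kubectl get svc tid-ingressgateway -n tid-ingress-ns.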