
Publishing a Service

Services are exposed through an Ingress Gateway, with AWS load balancing and optional Route 53-managed DNS

In this exercise, we'll configure an Ingress Gateway to expose and manage traffic to the productpage service in the Bookinfo application. We'll also outline the optional process to preconfigure the AWS Controller for the cluster.

If TSE's AWS Controller is enabled and configured, TSE will also ensure that the required DNS name is published and correctly resolves to the AWS load balancers that manage traffic to the Ingress Gateway(s) in the EKS clusters.


Publish a Service with an Ingress Gateway and DNS entry

  1. Prepare AWS Controller Integration (optional)

    If you would like TSE to manage the Route 53 DNS entries for the services that you expose, enable the AWS Controller Integration.

  2. Deploy an Ingress Gateway

    Deploy an Ingress Gateway in the bookinfo namespace.

  3. Expose the Bookinfo productpage service

    Publish a Gateway resource to expose Bookinfo's productpage service. TSE will configure the Ingress Gateway and trigger the AWS Controller to deploy the DNS records.

Prepare AWS Controller Integration (optional)

This step is optional

If you don't want to configure the AWS Controller integration at this point, you can skip this step.

Prepare a Hosted Zone in Route 53

The Route 53 integration adds and manages records in a Route 53 Hosted Zone. In this exercise, we use the Hosted Zone for tse.tetratelabs.io.

Example Route 53 / DNS Configuration

tetratelabs.io was hosted on a third-party DNS server, and we delegated tse.tetratelabs.io to Route 53 and created a Hosted Zone for the subdomain:

  • We added tse.tetratelabs.io to Route 53 as a 'Hosted zone', and noted the NS records that Route 53 selected for that domain
  • We added matching NS records for tse.tetratelabs.io. to the third-party DNS servers that served the tetratelabs.io domain

Refer to the AWS Route 53 documentation for detailed instructions for your scenario.

You will need to identify a hosted zone that you can use for this optional integration.
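If you prefer to work from the command line, the AWS CLI can create and inspect hosted zones. The commands below are a sketch using the example tse.tetratelabs.io subdomain; substitute your own domain, and the hosted zone ID returned by Route 53:

# List existing hosted zones to find one you can reuse
aws route53 list-hosted-zones

# Create a Hosted Zone for the delegated subdomain (example name; use your own)
aws route53 create-hosted-zone \
--name tse.tetratelabs.io \
--caller-reference "tse-$(date +%s)"

# Show the NS records Route 53 assigned, to copy into the parent domain's DNS
aws route53 list-resource-record-sets \
--hosted-zone-id <HOSTED_ZONE_ID> \
--query "ResourceRecordSets[?Type=='NS']"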

Create an IAM service account to perform DNS changes

The AWS Controller requires an IAM policy that allows CRUD operations on Route 53 resources. Example policy JSON:

cat <<EOF > AllowRoute53Updates.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
EOF

aws iam create-policy \
--policy-name "AllowRoute53Updates" \
--policy-document file://AllowRoute53Updates.json

Then, on each workload cluster, create an IAM service account. Check that you have correctly set $EKS_CLUSTER_NAME and $REGION, and set $ACCOUNT to your 12-digit AWS account ID.
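For example, you might set these variables as follows (the cluster name and region are illustrative; substitute your own):

# Illustrative values; substitute your own cluster name and region
export EKS_CLUSTER_NAME=my-eks-cluster
export REGION=us-east-1

# Look up the 12-digit account ID from the active AWS credentials
export ACCOUNT=$(aws sts get-caller-identity --query Account --output text)

With those variables set, create the service account: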

eksctl create iamserviceaccount \
--cluster $EKS_CLUSTER_NAME \
--name "route53-controller" \
--namespace "istio-system" \
--region $REGION \
--attach-policy-arn "arn:aws:iam::${ACCOUNT}:policy/AllowRoute53Updates" \
--approve
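
You can optionally confirm that the service account was created and annotated with the IAM role before moving on:

eksctl get iamserviceaccount \
--cluster $EKS_CLUSTER_NAME \
--region $REGION \
--namespace istio-system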

Configure the AWS Controller integration in the TSE cluster

We will redeploy the TSE control plane on each workload cluster with the additional configuration. The spec.providerSettings.route53.serviceAccountName setting must reference the IAM service account created above.

helm get values tse-cp -n istio-system > cp-values.yaml

helm upgrade tse-cp tse/controlplane \
  --version 1.8.0+tse \
  -n istio-system -f cp-values.yaml \
  --set spec.providerSettings.route53.serviceAccountName=route53-controller 

For more details, refer to the Route 53 Integration Guide.

Deploy an Ingress Gateway

Instruct TSE to deploy an Ingress Gateway (Envoy proxy), running it in the bookinfo namespace:

cat <<EOF > ingress-gw-install.yaml
apiVersion: install.tetrate.io/v1alpha1
kind: IngressGateway
metadata:
  name: bookinfo-ingress-gw
  namespace: bookinfo
spec:
  kubeSpec:
    service:
      type: LoadBalancer
EOF

kubectl apply -f ingress-gw-install.yaml

This operation deploys an Envoy proxy and exposes it through a load balancer so that it is accessible from outside the Kubernetes cluster. You can discover the external Load Balancer address of the bookinfo-ingress-gw as follows:

kubectl get svc -n bookinfo bookinfo-ingress-gw

Load Balancer Access Policy

TSE's default load balancer access policy is internet-facing, ensuring that by default, ingress gateways are reachable from external (Internet) networks. This is in contrast to the stricter AWS Load Balancer Controller default of internal (see documentation for aws-load-balancer-scheme).

To restrict the Ingress Gateway to internal, you can annotate the LoadBalancer configuration in the IngressGateway resource:

spec:
  kubeSpec:
    service:
      type: LoadBalancer
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal

For more information, refer to the Amazon Load Balancing integration documentation.

Expose the Bookinfo productpage service

In TSE, you use the Gateway resource to expose services that are present in a Workspace. This involves two steps:

  1. Create a Gateway Group in the Workspace. The Gateway Group contains gateway-related configuration for services in part or all of the Workspace
  2. Add a Gateway resource to the Gateway Group. This resource specifies which services are to be exposed

Replace the hostname value with the DNS name you wish to use.

cat <<EOF > bookinfo-group-ingress.yaml
apiVersion: gateway.tsb.tetrate.io/v2
kind: Group
metadata:
  displayName: bookinfo
  name: bookinfo-gwgroup
  organization: tse
  tenant: tse
  workspace: bookinfo-ws
spec:
  displayName: bookinfo
  namespaceSelector:
    names:
      - "*/bookinfo"
---
apiVersion: gateway.tsb.tetrate.io/v2
kind: Gateway
metadata:
  organization: tse
  tenant: tse
  group: bookinfo-gwgroup
  workspace: bookinfo-ws
  name: bookinfo-gw
spec:
  workloadSelector:
    namespace: bookinfo
    labels:
      app: bookinfo-ingress-gw
  http:
    - name: bookinfo
      port: 80
      hostname: bookinfo.tse.tetratelabs.io
      routing:
        rules:
          - route:
              serviceDestination:
                host: bookinfo/productpage.bookinfo.svc.cluster.local
                port: 9080
EOF

tctl apply -f bookinfo-group-ingress.yaml

The Gateway resource associates an FQDN and port with a Kubernetes service. This configuration is translated to the appropriate proxy configuration to listen for matching requests and load-balance them to the named bookinfo/productpage.bookinfo.svc.cluster.local service.

The Ingress Gateway proxy can route and manage traffic in a variety of ways, as specified by the Gateway resource. For example, you can specify routing based on the HTTP path and other parameters in the incoming request, and you can manage traffic by rate-limiting, redirecting, and rewriting requests. You can specify additional behavior, such as authentication requirements, that is applied in the proxy.
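For example, the HTTP rules in the Gateway above could match on the request path before routing. The snippet below is a sketch based on the resource we applied earlier; the match syntax shown is an assumption, so consult the TSE Gateway reference for the full set of matching and traffic-management options:

http:
  - name: bookinfo
    port: 80
    hostname: bookinfo.tse.tetratelabs.io
    routing:
      rules:
        # Assumed match syntax: route only requests whose path starts with /productpage
        - match:
            - uri:
                prefix: /productpage
          route:
            serviceDestination:
              host: bookinfo/productpage.bookinfo.svc.cluster.local
              port: 9080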

Testing and Understanding the Configuration

Take a moment...

It may take several minutes for the AWS load balancer to complete provisioning, and for the external AWS DNS name (GATEWAY_IP in the example below) to become available.
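You can watch the gateway Service until the external address appears (press Ctrl-C to stop watching):

kubectl get svc -n bookinfo bookinfo-ingress-gw -w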

Access the Service without DNS

Acquire the external address (DNS hostname or IP) of the load balancer that manages traffic to the Ingress Gateway, and send a request through that address using the appropriate DNS name:

export GATEWAY_IP=$(kubectl -n bookinfo get service bookinfo-ingress-gw -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")

curl -s --connect-to bookinfo.tse.tetratelabs.io:80:$GATEWAY_IP \
"http://bookinfo.tse.tetratelabs.io/productpage" | \
grep "<title>"

Access the Service using DNS

If you configured AWS Controller or have otherwise mapped a DNS name to the GATEWAY_IP, you can access the service directly from your browser. Load http://bookinfo.tse.tetratelabs.io (replace with the DNS name you used), or use curl:

curl http://bookinfo.tse.tetratelabs.io/productpage
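
If the request fails, you can first confirm that the DNS name resolves to the AWS load balancer (replace the hostname with the DNS name you used):

# Check that the published DNS name resolves
dig +short bookinfo.tse.tetratelabs.io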

Troubleshooting

Route 53 Integration

For more information, check out the Route 53 Integration documentation.

DNS changes can take time to propagate. In the interim, you can verify configuration from three sources:

  1. What DNS names do your Istio Gateway resources use?

    kubectl get gateway bookinfo-gw -n bookinfo -o yaml
  2. Is AWS Controller running? What do the logs tell us?

    kubectl get pods -n istio-system -l app=aws-controller
    kubectl logs -n istio-system -l app=aws-controller
  3. What is the current Route 53 configuration?

    Check your AWS Console or other source for Route 53 configuration.

    AWS Controller configures health checks for the services it publishes. Legacy load balancers (Amazon CLB) may use inactive ports in their health checks. You can disable health checks (set Evaluate Target Health to No), or verify that they target an appropriate port (the NodePort value for port 80 or 443 in kubectl get svc -n bookinfo bookinfo-ingress-gw -o yaml).
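
    As a quick check, a jsonpath query such as the following prints the NodePort that backs port 80 on the gateway Service (adjust the namespace and service name for your environment):

    # Print the NodePort assigned to port 80 of the ingress gateway Service
    kubectl get svc -n bookinfo bookinfo-ingress-gw \
      -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'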

Advanced Configuration

The AWS Controller supports advanced configuration options through annotations. You can add annotations to the Gateway resource to modify the default options for DNS records. For example, to change the TTL to 60 seconds, add the following annotation to the Gateway resource:

apiVersion: gateway.tsb.tetrate.io/v2
kind: Gateway
metadata:
  organization: tse
  tenant: tse
  group: bookinfo-gwgroup
  workspace: bookinfo-ws
  name: bookinfo-gw
  annotations:
    route53.v1beta1.tetrate.io/ttl: "60"

For a list of all available annotations, refer to the Route 53 Integration documentation.

Cleaning Up

You can remove the Ingress Gateway and related services as follows:

tctl delete gwt --org tse --tenant tse --workspace bookinfo-ws --group bookinfo-gwgroup bookinfo-gw
tctl delete gg --org tse --tenant tse --workspace bookinfo-ws bookinfo-gwgroup
kubectl delete ingressgateway -n bookinfo bookinfo-ingress-gw