
TEG Demo Install Guide

This guide is aimed at engineers who are starting out with Kubernetes and the Gateway API and want to try TEG and learn more about its features.

At the end of this guide you will have:

  • TEG running in your cluster
  • httpbin deployed into its own namespace
  • A dedicated gateway for httpbin
  • A setup ready for you to continue to learn and experiment with the features of TEG
tip

If you are experienced with Kubernetes, the Gateway API, and Envoy Gateway, feel free to head straight to the Installation Guide to dive deeper.

The TEG demo installation chart comes with the necessary configurations to set up TEG and the required dependencies for rate limiting and observability.

The demo installation chart installs:

  • TEG
  • Redis for rate limiting
  • Grafana for visualisations
  • OpenTelemetry Collector for collecting OTEL format observability data
  • Loki as the log collection backend
  • Tempo as the tracing backend
  • Prometheus as metrics backend
  • Demo app for testing traffic

Prerequisites​

There are two options for you to try the demo out:

  • On a hosted cluster
  • On a cluster on your local machine

Demo Setup on a Hosted Cluster​

Demo Setup on your local machine

If you are trying this out on a cluster on your local machine, make sure you have the following tools installed before you get started:

  • kind
  • kubectl
  • helm
  • curl
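
If you want to double-check that these tools are available before continuing, a quick sanity check (the exact output will vary with your versions):

kind version
kubectl version --client
helm version
curl --version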

Let's create a cluster​

kind create cluster
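
If you want to confirm the cluster is up before installing anything, you can check it with kubectl (assuming the default context name for a cluster called kind, which is kind-kind):

kubectl cluster-info --context kind-kind
kubectl get nodes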

Deploy the Demo Setup​

In this section, we will:

  1. Install TEG
  2. Deploy httpbin
  3. Set up a dedicated TEG gateway for httpbin
  4. Send a request to httpbin via the gateway
  1. Install TEG​

    To make it easy to kick the tires, TEG is distributed as a demonstration Helm chart. Alongside TEG itself, the chart installs a full Prometheus + Grafana observability stack with ephemeral storage.

    Install TEG using the teg-demo-helm chart:

    helm install teg oci://docker.io/tetrate/teg-demo-helm \
    --version v0.1.0 \
    -n envoy-gateway-system --create-namespace

    Check out what is running

    kubectl get pod -n envoy-gateway-system

    Example output

    NAME                                                        READY   STATUS    RESTARTS   AGE
    envoy-default-eg-e41e7b31-7b77fbcc64-6fx2h                  1/1     Running   0          109m
    envoy-gateway-7f9f4997cb-8rxq9                              2/2     Running   0          109m
    envoy-httpbin-dedicated-gateway-c4239473-54487f477f-xlgs9   1/1     Running   0          107m
    envoy-ratelimit-56589c85dc-fg256                            1/1     Running   0          108m
    grafana-64867f7d-26zxj                                      2/2     Running   0          109m
    loki-0                                                      1/1     Running   0          109m
    loki-canary-hnvrg                                           1/1     Running   0          109m
    loki-gateway-84dd6454f-cldpl                                1/1     Running   0          109m
    loki-logs-bcmhf                                             2/2     Running   0          109m
    otel-collector-54f57cc44d-m5rpn                             1/1     Running   0          109m
    prometheus-79c754d584-72gxs                                 2/2     Running   0          109m
    teg-envoy-gateway-bbc9c47d6-pgdz5                           1/1     Running   0          109m
    teg-grafana-agent-operator-75f947f68b-992n4                 1/1     Running   0          109m
    teg-redis-86bb7d9b9d-ltw7q                                  1/1     Running   0          109m
    tempo-0                                                     1/1     Running   0          109m
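
    If you prefer to block until the whole stack is up rather than eyeballing the pod list, a hedged alternative is to wait until every pod in the namespace reports Ready (the 300s timeout here is arbitrary):

    kubectl wait --for=condition=Ready pod --all -n envoy-gateway-system --timeout=300s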
  2. Deploy an App​

    We are going to deploy httpbin to the cluster in a namespace called httpbin.

    1. Create Namespace

    kubectl create namespace httpbin

    2. Deploy sample

    kubectl apply -n httpbin -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml

    3. Verify Deployment

    kubectl get pod -n httpbin

    You should see a httpbin pod running in the httpbin namespace.

    Its name will look something like this: httpbin-86b8ffc5ff-w2zlb
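
    Before wiring up the gateway, it can also help to confirm that the Service the HTTPRoute will reference later (httpbin on port 8000) exists:

    kubectl get svc httpbin -n httpbin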

  3. Deploy a Gateway​

    In this example you will apply the configuration from stdin. If you prefer to apply it from a file instead, see the sketch after the Verify step below.

    Apply the config​

    Copy and paste the following into your terminal.

    cat <<EOF | kubectl apply -f -
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: Gateway
    metadata:
      name: dedicated-gateway
      namespace: httpbin
    spec:
      gatewayClassName: teg
      listeners:
      - name: http
        protocol: HTTP
        port: 80
    EOF

    Verify​

    kubectl get pods -n envoy-gateway-system \
    -l gateway.envoyproxy.io/owning-gateway-namespace=httpbin
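
    If you would rather apply the manifest from a file than from stdin, save the same YAML to a file (the name dedicated-gateway.yaml here is just an example) and apply it:

    kubectl apply -f dedicated-gateway.yaml
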
  4. Configure the Gateway​

    Apply config​

    cat <<EOF | kubectl apply -n httpbin -f -
    apiVersion: gateway.networking.k8s.io/v1beta1
    # This is a simple HTTPRoute
    kind: HTTPRoute
    metadata:
      name: httpbin
      namespace: httpbin
    spec:
      parentRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: dedicated-gateway
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /httpbin/
        filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
        backendRefs:
        - group: ""
          kind: Service
          name: httpbin
          port: 8000
    EOF
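
    To confirm the route was created, you can list the HTTPRoutes in the httpbin namespace:

    kubectl get httproute -n httpbin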
  5. Check your Gateway​

    👀 Let's check your Gateway and find out whether you have a load balancer (LB), if you don't already know!

    kubectl get gateway -n httpbin

    ADDRESS will be populated if you have a LoadBalancer (LB). If it is not populated, you do not have an LB; don't worry, this guide will help you work around that on your machine with port forwarding.

    If you have an LB - ADDRESS is populated​

    You will see PROGRAMMED = True and ADDRESS populated

    NAME                CLASS   ADDRESS        PROGRAMMED   AGE
    dedicated-gateway   teg     35.238.21.86   True         2m28s
    Verify​
    kubectl get svc -n envoy-gateway-system \
    -l gateway.envoyproxy.io/owning-gateway-namespace=httpbin

    Example output: you should see an EXTERNAL-IP populated that matches the ADDRESS returned by the kubectl get gateway -n httpbin command.

    NAME                                       TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
    envoy-httpbin-dedicated-gateway-c4239473   LoadBalancer   10.0.7.31    35.238.21.86   80:31583/TCP   5m17s

    If you do NOT have an LB - ADDRESS is not populated​

    You will see PROGRAMMED = False and that ADDRESS is not populated.

    NAME                CLASS   ADDRESS   PROGRAMMED   AGE
    dedicated-gateway   teg               False        13m
    note

    This is likely to happen on a local machine when you are using something like kind or Minikube.

    Do not fret: follow the instructions below to set up port forwarding so that you can try out Envoy Gateway on your local machine without an LB.

  6. Set the $GATEWAY_ADDRESS​

    Next, set $GATEWAY_ADDRESS so that you can send requests to your gateway. If ADDRESS was populated in the previous step, set $GATEWAY_ADDRESS to that external IP:

    export GATEWAY_ADDRESS=$(kubectl get gateway/dedicated-gateway -n httpbin -o jsonpath='{.status.addresses[0].value}')

    Verify​

    echo $GATEWAY_ADDRESS

    Check the output and make sure $GATEWAY_ADDRESS is set to the value you expect.
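
    If ADDRESS was not populated (no LoadBalancer, for example on kind), a minimal port-forwarding sketch instead, reusing the owning-gateway-namespace label from the Verify steps above; local port 8899 is chosen here to match the Host header in the example output below:

    # Find the Envoy Service created for the dedicated gateway
    export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system \
      -l gateway.envoyproxy.io/owning-gateway-namespace=httpbin \
      -o jsonpath='{.items[0].metadata.name}')

    # Forward local port 8899 to the gateway's HTTP listener; keep this running in a separate terminal
    kubectl port-forward -n envoy-gateway-system svc/${ENVOY_SERVICE} 8899:80

    # Point the rest of the guide at the forwarded address
    export GATEWAY_ADDRESS=localhost:8899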

  7. Try your TEG Demo Setup​

    Now that you've set $GATEWAY_ADDRESS, let's try it out!

    CURL

    curl -i http://${GATEWAY_ADDRESS}/httpbin/get

    Example Output

    HTTP/1.1 200 OK
    server: envoy
    date: Tue, 02 Apr 2024 15:48:29 GMT
    content-type: application/json
    content-length: 418
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 15

    {
      "args": {},
      "headers": {
        "Accept": "*/*",
        "Host": "localhost:8899",
        "Traceparent": "00-d9eaaa5ed02283c641802dd30b7c58c0-a4ec6f331ca7373f-01",
        "Tracestate": "",
        "User-Agent": "curl/8.4.0",
        "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
        "X-Envoy-Internal": "true",
        "X-Envoy-Original-Path": "/httpbin/get"
      },
      "origin": "10.244.0.29",
      "url": "http://localhost:8899/get"
    }
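
    httpbin exposes other handy endpoints behind the same /httpbin/ prefix; for example, you can echo back the request headers the gateway adds:

    curl -s http://${GATEWAY_ADDRESS}/httpbin/headers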
  8. TEG Demo is up​

    Congratulations! 🎉

    You now have the TEG Demo running and can start exploring policies and routing logic, as well as observe traffic and performance in Grafana.


Explore Policies​

First you configure a policy, and then you apply it to your routes.

Below is an example of how to get started with a global rate-limiting policy and apply it to the route.

Global Rate Limiting Policy​

What is it? Global rate limiting applies a shared rate limit to the traffic flowing through all the instances of Envoy proxies where it is configured. For example, if the data plane has 2 replicas of Envoy running and the rate limit is 10 requests per second, that limit is shared: it will be hit if 5 requests pass through the first replica and 5 requests pass through the second replica within the same second.

  1. Configure Policy​

    Config example

    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: RateLimitFilter
    metadata:
      namespace: httpbin
      name: ratelimit-1s
    spec:
      type: Global
      global:
        rules:
        - limit:
            requests: 1
            unit: Second

    Copy and paste the following into your terminal to apply the policy.

    cat <<EOF | kubectl apply -f -
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: RateLimitFilter
    metadata:
      namespace: httpbin
      name: ratelimit-1s
    spec:
      type: Global
      global:
        rules:
        - limit:
            requests: 1
            unit: Second
    EOF
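
    To confirm the filter resource was created (assuming the RateLimitFilter CRD is installed by the demo chart, as it should be for this setup), you can list it:

    kubectl get ratelimitfilter -n httpbin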
  2. Apply the policy as a filter on the HTTPRoute

    This is what we will add to the HTTPRoute named httpbin.

    - type: ExtensionRef
      extensionRef:
        group: gateway.envoyproxy.io
        kind: RateLimitFilter
        name: ratelimit-1s

    Copy and paste the following into your terminal to modify the HTTPRoute and add the RateLimitFilter to it.

    cat <<EOF | kubectl apply -f -
    apiVersion: gateway.networking.k8s.io/v1beta1
    kind: HTTPRoute
    metadata:
      name: httpbin
      namespace: httpbin
    spec:
      parentRefs:
      - group: gateway.networking.k8s.io
        kind: Gateway
        name: dedicated-gateway
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /httpbin/
        filters:
        - type: URLRewrite
          urlRewrite:
            path:
              type: ReplacePrefixMatch
              replacePrefixMatch: /
        - type: ExtensionRef
          extensionRef:
            group: gateway.envoyproxy.io
            kind: RateLimitFilter
            name: ratelimit-1s
        backendRefs:
        - group: ""
          kind: Service
          name: httpbin
          port: 8000
    EOF
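
    To double-check that the ExtensionRef filter landed on the route, you can inspect the applied spec:

    kubectl get httproute httpbin -n httpbin -o yaml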
  3. Let's try the RateLimiter

    Let's send too many requests and observe the output!

    curl -i http://$GATEWAY_ADDRESS/httpbin/get ; echo "===" ; \
    curl -i http://$GATEWAY_ADDRESS/httpbin/get

    Output example

    HTTP/1.1 200 OK
    server: envoy
    date: Tue, 02 Apr 2024 16:43:36 GMT
    content-type: application/json
    content-length: 418
    access-control-allow-origin: *
    access-control-allow-credentials: true
    x-envoy-upstream-service-time: 9
    x-ratelimit-limit: 1, 1;w=1
    x-ratelimit-remaining: 0
    x-ratelimit-reset: 1

    {
      "args": {},
      "headers": {
        "Accept": "*/*",
        "Host": "localhost:8899",
        "Traceparent": "00-1e9bbc4981b753c19e5e2dc21350f166-0c07d3af5a6933c3-01",
        "Tracestate": "",
        "User-Agent": "curl/8.4.0",
        "X-Envoy-Expected-Rq-Timeout-Ms": "15000",
        "X-Envoy-Internal": "true",
        "X-Envoy-Original-Path": "/httpbin/get"
      },
      "origin": "10.244.0.29",
      "url": "http://localhost:8899/get"
    }
    ===
    HTTP/1.1 429 Too Many Requests
    x-envoy-ratelimited: true
    x-ratelimit-limit: 1, 1;w=1
    x-ratelimit-remaining: 0
    x-ratelimit-reset: 1
    date: Tue, 02 Apr 2024 16:43:36 GMT
    server: envoy
    content-length: 0
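
    If you want to see the limit trip more than once, a quick loop that prints just the status code of several back-to-back requests (the count of five is arbitrary):

    for i in 1 2 3 4 5; do curl -s -o /dev/null -w "%{http_code}\n" http://${GATEWAY_ADDRESS}/httpbin/get; done

    You should see mostly 429 responses once the one-request-per-second budget is used up.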

Observe traffic and performance​

See stats in Grafana​

TEG bundles a fully loaded Prometheus + Grafana stack with the demo helm install so you can immediately test drive TEG's observability capabilities.

Set up port forwarding for Grafana​

To access Grafana, establish a port-forward tunnel to the Grafana deployment.

kubectl port-forward -n envoy-gateway-system deployment/grafana 3001:3000

Visit http://localhost:3001 and log in with the following credentials:

  • Username: admin
  • Password: admin

Go to one of the dashboards and check it out; try this one: Envoy Pod Memory and CPU Usage.

For better visualizations, you can send continuous traffic to the demo application by running the following in a separate process:

while true; do curl -i http://${GATEWAY_ADDRESS}/httpbin/get; sleep 5; done

Metrics Dashboards​

The Grafana deployment comes configured with two dashboards:

Envoy Global Dashboard​

This dashboard highlights aggregated metrics for all of your Envoy proxy deployments, visualizing the following:

  • Uptime
  • Resource consumption metrics
  • Upstream and Downstream connection and request metrics
  • Number of Active connections
  • Healthy Endpoints
  • HTTP latencies
  • TCP bytes received and transmitted

Envoy Clusters Dashboard​

This dashboard highlights metrics for clusters configured in your envoy proxy configuration, visualizing the metrics listed above for each cluster.

Logs visualization​

The demo chart configures the Loki data source with the Grafana dashboard. Users can dig into the access logs for the proxies by using the Explore tab in the Grafana deployment and selecting the Loki data source.

Tracing Visualization​

The demo chart also configures the Tempo data source with the Grafana deployment. Users can dig into the traces for the proxies by using the Explore tab in Grafana and selecting the Tempo data source.