Using TEG in Conjunction with an Istio Service Mesh
Prerequisites:
- Get a Kubernetes cluster
- Install TEG
Caveats
- Neither the Pod nor the Service definition for the app can specify http2 for the port (neither as a port-name prefix nor with the `appProtocol` field). Doing so breaks the connection between the TEG sidecar and the app sidecar; see the example below.
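For illustration, a Service like the following (a hypothetical sketch; the names are made up) would hit the problem, because the port is marked as http2 in both of the ways described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
  - name: http2-web        # port name prefixed with "http2" - avoid
    appProtocol: http2     # appProtocol set to http2 - avoid
    port: 8000
    targetPort: 8080
```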
Steps
Prepare TEG to Interoperate with Istio
Label TEG's namespace so that the data plane gets Istio sidecars. The control plane will too, but that's no bad thing.
```shell
kubectl label namespace envoy-gateway-system --overwrite=true istio-injection=enabled
```
The EG control plane Service is mis-defined: it says it takes plaintext traffic when in fact it serves TLS. When Istio is handling traffic to the control plane this actually matters, so patch it.
control-plane-tls.yaml:
```yaml
spec:
  ports:
  - port: 18000
    appProtocol: tls
```

```shell
kubectl patch service -n envoy-gateway-system envoy-gateway \
  --type strategic --patch-file control-plane-tls.yaml
```

We also need to tell the TEG sidecars not to process any traffic entering the gateway Envoys.
Normally, an app's sidecar would process traffic on the way in (auth, rate-limiting, etc), as well as on the way out (routing, mirroring, etc). But since the "app" in this case is an edge proxy, we don't want the sidecar to do its thing to traffic coming in from users, as there are several potential pitfalls:
- Strict mTLS would be unusable, as the Istio Envoys think they're sidecars not gateways, so they'd insist on Istio-style mTLS from external clients.
- It wouldn't be possible to serve TLS from TEG without putting the sidecars into SNI routing mode for inbound traffic.
- Bypassing the sidecar on the inbound path is also a performance optimisation, as TEG offers a superset of the sidecar's features.
The best way to not have the sidecars interfere with inbound traffic is to tell them not to even intercept it within the Pod; to not configure the iptables rules they'd normally use for that. This can be done with an annotation on the Pods:
`traffic.sidecar.istio.io/includeInboundPorts=""`

The default value for `includeInboundPorts` is `"*"`, meaning all ports; the empty string means no ports. In order to add the annotation, we use an EnvoyProxy resource as follows:
teg-sidecars-no-inbound.yaml:
```yaml
apiVersion: config.gateway.envoyproxy.io/v1alpha1
kind: EnvoyProxy
metadata:
  name: data-plane-sidecars
  namespace: envoy-gateway-system
spec:
  provider:
    type: Kubernetes
    kubernetes:
      envoyDeployment:
        pod:
          annotations:
            traffic.sidecar.istio.io/includeInboundPorts: ""
```

```shell
kubectl apply -f teg-sidecars-no-inbound.yaml
```
We then alter the GatewayClass for this instance of the TEG control plane to point to that EnvoyProxy resource, applying its settings to all the proxy Pods it creates.
gtwcls-use-envoyproxy.yaml:
```yaml
spec:
  parametersRef:
    group: config.gateway.envoyproxy.io
    kind: EnvoyProxy
    namespace: envoy-gateway-system
    name: data-plane-sidecars
```

```shell
kubectl patch gatewayclass teg --patch-file gtwcls-use-envoyproxy.yaml --type merge
```
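As a quick sanity check (optional), something like the following should echo the parametersRef back:

```shell
kubectl get gatewayclass teg -o jsonpath='{.spec.parametersRef}'
```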
Install Istio
Left as an exercise to the reader. You probably want to use a profile that doesn't deploy the default Istio Ingress Gateway. They won't interfere, but it avoids confusion.
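For instance, if you install with istioctl, one option (a sketch; pick whatever install method and profile suits you) is the minimal profile, which deploys only istiod and no default Ingress Gateway:

```shell
# Installs the Istio control plane (istiod) only; no default Istio Ingress Gateway.
istioctl install --set profile=minimal -y
```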
Restart TEG Control Plane
Now that Istio is ready with its sidecar-injection webhook, we restart all the TEG control plane Pods, which will come back up with sidecars.
```shell
for d in envoy-gateway envoy-ratelimit teg-envoy-gateway teg-redis; \
do kubectl rollout restart deployment -n envoy-gateway-system $d; \
done
```
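Once the rollouts complete, each control plane Pod should report an extra istio-proxy container (e.g. READY 2/2 rather than 1/1). A quick way to eyeball this:

```shell
kubectl get pods -n envoy-gateway-system
```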
Deploy Test Apps
It's important that this happens after Istio is installed, as the test apps need sidecars too.
For example:

```shell
kubectl label namespace default --overwrite=true istio-injection=enabled
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/httpbin/httpbin.yaml
```
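The httpbin Pod should likewise come up with a sidecar; assuming the sample's standard app=httpbin label, this should show READY 2/2:

```shell
kubectl get pods -n default -l app=httpbin
```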
Configure TEG
We now configure TEG to process traffic at the edge, in the normal way.
For example:

apps-gateway.yaml:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: apps
spec:
  gatewayClassName: teg
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```

```shell
kubectl apply -f apps-gateway.yaml
```
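Before adding routes, you can check that the Gateway has been accepted and assigned an address (exact column output varies by Gateway API version):

```shell
kubectl get gateway apps
```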
httpbin-route.yaml:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: httpbin
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: apps
  hostnames:
  - "www.example.com"
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /httpbin/
    backendRefs:
    - group: ""
      kind: Service
      name: httpbin
      port: 8000
```

```shell
kubectl apply -f httpbin-route.yaml
```
Note: we don't remove the routed path prefix here. This is important, because the later intra-mesh stage will need it to differentiate requests, just like this HTTPRoute did. It can be removed at that later stage, for apps which can't be "mounted" on any path other than `/`.

Configure Istio to Transport Traffic Through the Mesh
Now comes the crux of EG + Istio integration.
EG is listening on the edge of the network, processing requests as a gateway, and then just dumping them "on the wire", aimed at the designated internal service, with no idea that there's anything more than a Layer 3 Kubernetes cluster network in the way.
Likewise, Istio has attached sidecars to the TEG Pods, not knowing they're anything other than "normal" workloads in the mesh. Because TEG is acting as a gateway, it's receiving requests eg for `www.example.com` and putting them back on the wire still with `Host: www.example.com`. Because Istio's sidecars are precisely that - configured to be sidecars, not gateways (which Istio also supports) - they will ignore the IP destination of the requests coming out of TEG (which is correct), and instead try to route on the HTTP host (as if the microservice they sidecar for wanted to call an internet API).

With Istio in outbound traffic mode `ALLOW_ANY`, this request will get sent to the real `www.example.com` - which is not what we want. If this were a production deployment and the host were your real domain, say `acme.com`, the traffic would loop back to TEG indefinitely. With Istio in outbound traffic mode `REGISTRY_ONLY`, these requests get blocked, as there's no ServiceEntry declaring `www.example.com` (Istio returns a `502`, which TEG forwards to the client).

There are a couple of dirty solutions to this, which just about work. The most obvious is to have TEG rewrite the HTTP hostname of the request to the name of the Service it's routing to. This is trivial with a filter in the HTTPRoute (see the sketch below), but it provides a bad user experience, and the new hostname has to be fully qualified (eg `httpbin.default.svc.cluster.local`), which reduces its portability.
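For reference, that workaround would be a filter on the httpbin HTTPRoute rule, roughly like the fragment below (shown only for completeness; we don't recommend it):

```yaml
# Fragment of an HTTPRoute rule: rewrites the Host header to the backend
# Service's in-cluster FQDN. Not recommended, for the reasons above.
filters:
- type: URLRewrite
  urlRewrite:
    hostname: httpbin.default.svc.cluster.local
```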
The correct solution is, having used HTTPRoute to configure routing through TEG, to now configure further routing through the Istio mesh. This is done in the normal way with a VirtualService, but this VirtualService must attach not to an Ingress Gateway but to the special `mesh` Gateway, which represents the normal sidecars. Notice that this is where we can remove the path prefix, if desired.

httpbin-mesh-vs.yaml:
```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: example-mesh
spec:
  gateways: ["mesh"]
  hosts:
  - www.example.com
  http:
  - match:
    - uri: { prefix: /httpbin }
    rewrite:
      uri: /
    route:
    - destination:
        host: httpbin
        port: { number: 8000 }
```

```shell
kubectl apply -f httpbin-mesh-vs.yaml
```
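At this point requests should flow end-to-end. One way to check, assuming the Gateway was assigned a reachable address (e.g. a LoadBalancer IP) and that DNS for www.example.com isn't actually set up, is to supply the Host header by hand:

```shell
# Fetch the address the Gateway was assigned, then send a test request.
GATEWAY_ADDR=$(kubectl get gateway apps -o jsonpath='{.status.addresses[0].value}')
curl -H 'Host: www.example.com' "http://${GATEWAY_ADDR}/httpbin/get"
```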
Info: a ServiceEntry isn't actually needed, because ServiceEntries add entries to the (hidden) Istio service registry, which is only consulted if the traffic falls through all the internal routes (which are precisely what a VirtualService adds) and is thus determined to be external. Neither is a DestinationRule needed, because Istio's service registry entry for our httpbin Pod knows it has a sidecar, as it's under the control of the same mesh control plane.
The easiest way to think about the three different resources is:
- Gateway - punches a port open. This would exist with either TEG or Istio.
- HTTPRoute - configures routing and features at the edge, eg authn, rate-limiting.
- VirtualService - configures routing and features in the mesh, eg mTLS, traffic shifting, traffic mirroring, fault injection.
Of course, we could have used the Gateway API (specifically another HTTPRoute) to configure the intra-mesh routing, but we wanted to make the distinction easy.
Enable Strict mTLS
The above configuration will work just fine. However, so would a configuration where we had an Istio mesh containing our app and a separate TEG without sidecars. The difference is that there would have been no mTLS between TEG and the app, as the traffic "jumped the gap" into the mesh (think of our setup as "lifting TEG into the mesh").
To avoid any chance of configuring this by accident, and as a general best practice, enable strict mutual TLS, and verify things still work.
strict-mtls.yaml:
```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
```

```shell
kubectl apply -f strict-mtls.yaml
```
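To verify, re-run the earlier test request; it should still succeed, with TEG's sidecar now originating mTLS to the httpbin sidecar:

```shell
# Re-uses GATEWAY_ADDR from the earlier check.
curl -H 'Host: www.example.com' "http://${GATEWAY_ADDR}/httpbin/get"
```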