Version: 0.9.x

TSB service

Please follow these recommendations if you can't connect to TCC.

Is IAM up and talking to the LDAP server?

Using the kubectl command, you should see that the IAM pod is Ready (1/1) and Running:

kubectl -n tcc get pod -l app=iam

NAME READY STATUS RESTARTS AGE
iam-65ccb65464-qqzfx 1/1 Running 0 44h

Ensure that the LDAP configuration of IAM is set up correctly, according to your LDAP needs:

kubectl -n tcc get cm ldap-config \
-o jsonpath='{.data.config\.yaml}'

server:
  host: ldap
  port: 389
  tls: false
search:
  basedn: dc=tetrate,dc=io
  recursive: true
iam:
  matchdn: "cn=%s,ou=People,dc=tetrate,dc=io"
  matchfilter: "(&(objectClass=person)(uid=%s))"
sync:
  usersfilter: "(objectClass=person)"
  groupsfilter: "(objectClass=groupOfUniqueNames)"
  membershipattribute: uniqueMember
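
To confirm that the LDAP server itself is reachable from inside the cluster, you can run ldapsearch from a temporary pod. The following is only a sketch: the bitnami/openldap image is an arbitrary choice (any image shipping the OpenLDAP client tools works), and the host, port and base DN are taken from the configuration above; adjust them to your values.

# one-off pod running an anonymous base search against the LDAP server
kubectl -n tcc run ldap-check --rm -it --restart=Never \
  --image=bitnami/openldap --command -- \
  ldapsearch -x -H ldap://ldap:389 -b "dc=tetrate,dc=io" -s base

If your server rejects anonymous binds, add -D and -w with the bind DN and password stored in the ldap-credentials secret.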

LDAP authentication fails

Ensure the LDAP configuration (search filter) is properly set.

  • If the iam-config ConfigMap was changed, the IAM pod has to be restarted:
kubectl -n tcc delete pod -l app=iam
  • Make sure the LDAP secret is created and that the base64 encoding does not add a line return (\n) at the end of the lines. This may be the case if the secret was created outside of the Helm template.
    You can decode the secret using the base64 command with the -d option on Linux or -D on macOS.
    The decoded message should not contain a line return: note the ~$ at the end of the line, which means no line return was added (the exact characters may differ depending on your shell prompt):
# do NOT copy the ~$ characters when testing this command
kubectl -n tcc get secret ldap-credentials \
-o jsonpath='{.data.binddn}' | base64 -d

cn=admin,dc=tetrate,dc=io~$

# Example: output if the encoded string includes a line return at the end
kubectl -n tcc get secret ldap-credentials \
-o jsonpath='{.data.binddn}' | base64 -d

cn=admin,dc=tetrate,dc=io
~$

As an example, on Linux, use the -n option with the echo command so that no line return is appended to the value before it is encoded:

echo -n "cn=admin,dc=tetrate,dc=io" | base64
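
If the stored value does contain a trailing line return, the simplest fix is to recreate the secret and let kubectl handle the encoding. The sketch below assumes the secret holds a binddn key as shown above plus a bindpassword key (the password key name is an assumption; keep whatever keys your Helm chart expects):

# recreate the secret; kubectl encodes the literals without a trailing \n
# bindpassword is an assumed key name, adjust to your chart's values
kubectl -n tcc create secret generic ldap-credentials \
  --from-literal=binddn='cn=admin,dc=tetrate,dc=io' \
  --from-literal=bindpassword='<your-password>' \
  --dry-run=client -o yaml | kubectl apply -f -

As with the ConfigMap, restart the IAM pod afterwards so it picks up the new value.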

Is TSB (TCC) Envoy up?

Envoy is a proxy used to route traffic to the TSB (TCC) components. Ensure the Envoy component in the TSB (or TCC) namespace is running in your cluster:

kubectl -n tcc get pods -l app=envoy

NAME READY STATUS RESTARTS AGE
envoy-8664477cf9-5wps2 1/1 Running 0 45h

If any change is made to the Envoy configuration (ConfigMap tcc-envoy-yaml), the Envoy pod has to be restarted. Simply delete the pod so Kubernetes can start a new one with the updated configuration:

kubectl -n tcc delete pods -l app=envoy

Ensure connections going through Envoy can reach their destination and that no timeout occurs. You can do this by checking Envoy's logs and making sure all requests have a 200 - status.

kubectl -n tcc logs -l app=envoy

[2020-03-25T12:57:54.433Z] "GET /service_inventory/type/v3%7Creviews%7Cbookinfo%7C-%7C-_0_0 HTTP/1.1" 200 - 0 347 5 0 "-" "Apache-HttpAsyncClient/4.1.2 (Java/1.8.0_212)" "aa95ad5c-ec44-45c3-a2e7-737987f4198c" "localhost:9200" "10.28.1.11:9200"
...
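
To spot failing requests without scanning the whole log, you can filter out the successful ones. A simple sketch; the pattern just excludes access log lines whose status field is 200:

# show only requests that did not return a 200 status
kubectl -n tcc logs -l app=envoy | grep -v '" 200 -'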

Configuration flow from TSB to clusters (TSBD)

You can dump the cluster configuration that TSB (TCC) is sending to Istio. First, port-forward port 8080 of istiod, then query it using curl:

# port-forward
kubectl -n istio-system port-forward deployment/istiod 8080

# dump the config
curl -s http://localhost:8080/debug/configz | less

# check for sidecar sync status
curl -s http://localhost:8080/debug/syncz | less
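
If you have istioctl installed (matching your Istio version), you can also check the sidecar sync status without port-forwarding. This is an optional extra check, not part of the TSB tooling itself:

# SYNCED in the CDS/LDS/EDS/RDS columns means the proxy received the latest configuration
istioctl proxy-status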

Mesh or Sidecar troubleshooting

Sidecar not injected

If the Istio sidecar is not injected into your pods, check the following two things:

  • Ensure the Namespace has the istio-injection=enabled label (see the sketch after this list for how to add it):
kubectl get ns --show-labels

NAME STATUS AGE LABELS
bookinfo Active 20h istio-injection=enabled
default Active 20h <none>
istio-operator Active 20h <none>
istio-system Active 20h istio-injection=disabled
kube-public Active 20h <none>
kube-system Active 20h <none>
tcc Active 20h istio-injection=disabled
  • Ensure your deployment does not explicitly disable injection by using the annotation sidecar.istio.io/inject: "false". Here are examples for the productpage and tsb-gateway-bookinfo Deployments in the bookinfo Namespace:
# example when no annotation is present
kubectl -n bookinfo get deployment -l app=productpage \
-o jsonpath='{.items[*].spec.template.metadata.annotations}'

# example with an annotation
kubectl -n bookinfo get deployment -l app=tsb-gateway-bookinfo \
-o jsonpath='{.items[*].spec.template.metadata.annotations}'

map[sidecar.istio.io/inject:false]
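
If the label or the annotation is the cause, note that injection only takes effect when pods are (re)created. A minimal sketch, assuming the bookinfo namespace used in the examples above:

# enable injection on the namespace
kubectl label namespace bookinfo istio-injection=enabled --overwrite

# recreate the workloads so the sidecar gets injected
kubectl -n bookinfo rollout restart deployment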

You can check that all your pods are using a sidecar by listing them. Injected pods have one more container than defined in the Deployment, so the READY column should show at least 2 containers (for example 2/2). In the following example, all pods are injected (2/2) except tsb-gateway, which does NOT need a sidecar as it is an Istio proxy itself:

kubectl -n bookinfo get pods

NAME READY STATUS RESTARTS AGE
details-v1-94d5d794-rtd9v 2/2 Running 0 20h
productpage-v1-665ddb5664-7979q 2/2 Running 0 20h
ratings-v1-744894fbdb-4ksqz 2/2 Running 0 20h
reviews-v1-f7c7c7b45-kc2wk 2/2 Running 0 20h
reviews-v2-6cb744f8ff-nsxhx 2/2 Running 0 20h
reviews-v3-5556b6949-bsbx6 2/2 Running 0 20h
trafficgenerator-bb76446c8-7q4ql 2/2 Running 0 20h
tsb-gateway-7b7fbcdfb7-c54zm 1/1 Running 0 20h

A service can't reach some other services

In TSB (TCC), only services in the same Application can communicate, even if they are in different namespaces. First, ensure that sidecars are injected into all pods by following the procedure above.

Check Sidecar logs from the sender and receiver:

kubectl -n bookinfo logs -l app=productpage -c istio-proxy

All requests in the log should have a 200 - status. If that is not the case, you will see a code such as 500 UC. Refer to the RESPONSE_FLAGS section of the Envoy access log documentation to identify the connection problem.
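
As with the Envoy logs above, you can filter the sidecar access log for failing requests and read the response flag printed right after the status code. A sketch, using productpage as the sender:

# show requests that returned a 5xx status, together with their response flags (e.g. UC, UF)
kubectl -n bookinfo logs -l app=productpage -c istio-proxy | grep -E '" 5[0-9]{2} '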