Version: 1.2.x

Onboarding Clusters

This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.

Before you start:

✓ Set up the TSB management plane. See the requirements and download page for the how-to guide.
✓ Verify that you're logged in to the management plane. If you're not, follow the steps below.

Log into the Management Plane

If you are already logged in with tctl, you can skip this step.

tctl login

The login command will prompt you to set an organization and a tenant, and to provide a username and password. For onboarding clusters, you do not need to specify a tenant.

The username you log in with must have the correct permissions to create a cluster. This will allow you to configure the management plane and onboard a cluster.

Configuring the Management Plane

To create the correct credentials for the cluster to communicate with the management plane, we need to create a cluster object using the management plane API. To configure a cluster object, adjust the YAML object below according to your needs and save it to a file.

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name-in-tsb>
  organization: <organization-name>
spec:
  tokenTtl: "8760h"
Cluster name in TSB

<cluster-name-in-tsb> is the designated name for your cluster in TSB. You use this name in TSB APIs, such as the namespace selectors in workspaces and config groups. You will also use this name when creating a ControlPlane custom resource below.

Cluster token TTL

To make sure communication between the TSB management plane and the cluster is not disrupted, you must renew the cluster token before it expires. You can set tokenTtl to a very high value (e.g. 8760h or 1 year) to avoid having to renew the cluster token frequently.

Please refer to the reference docs for details on the configurable fields of a Cluster object.

To create the cluster object at the management plane, use tctl to apply the YAML file containing the cluster details.

tctl apply -f new-cluster.yaml
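To confirm the object was created, you can try reading it back from the management plane. Whether your tctl version supports a get command is an assumption here; check tctl --help if the command is not recognized.

```shell
# List the cluster objects registered in the management plane.
# Output columns may differ between TSB versions.
tctl get clusters
```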

Deploy Operators

Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.

There are two operators you must deploy. First, the control plane operator, which is responsible for managing Istio, SkyWalking, Zipkin and various other components. Second, the data plane operator, which is responsible for managing gateways. These operators work independently from each other to decouple the upgrade of gateways from sidecar proxies.

tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml

The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:

kubectl apply -f clusteroperators.yaml
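Before proceeding, it can help to check that both operators came up. The namespaces below are assumptions based on a typical install (istio-system for the control plane operator, istio-gateway for the data plane operator); adjust them to match the rendered manifests.

```shell
# Control plane operator (assumed to live in istio-system).
kubectl get pods -n istio-system

# Data plane operator (assumed to live in istio-gateway).
kubectl get pods -n istio-gateway
```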

Secrets

The control plane needs fewer secrets than the management plane, as it only has to connect to the TSB management plane and Elasticsearch. The manifest render command for the cluster uses the tctl tool to automatically retrieve tokens for communicating with the management plane, so you only need to provide the Elasticsearch credentials, the XCP edge certificate secret, and the cluster name (so that the CLI tool can get tokens with the correct scope). Token generation is safe to run multiple times, as it does not revoke any previously created tokens.

tctl install manifest control-plane-secrets \
--elastic-password tsb-elastic-password \
--elastic-username tsb \
--cluster <cluster-name> \
> controlplane-secrets.yaml

You need to set up a TLS secret named xcp-edge-cert so that the control plane can talk to the management plane over mTLS. The certificate must be created using the same chain of trust that was used to create the xcp-central-cert in the management plane, and must have the tls.crt, tls.key, and ca.crt fields set.
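If you already have the certificate, key, and CA bundle as files, one way to create such a secret is with kubectl. The file names and the istio-system namespace here are assumptions; adjust them to your environment.

```shell
# Create the xcp-edge-cert secret with the tls.crt, tls.key and ca.crt keys set.
kubectl create secret generic xcp-edge-cert -n istio-system \
  --from-file=tls.crt=./edge-tls.crt \
  --from-file=tls.key=./edge-tls.key \
  --from-file=ca.crt=./ca.crt
```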

cert-manager

If you have installed cert-manager in your TSB management plane cluster, tctl provides a convenience method to help create a control plane certificate. If you used the demo installation profile for TSB, cert-manager was installed for you. In this case, add the following xcp-certs flag to the install manifest command above to automatically create your control plane cluster's xcp-edge-cert.

tctl install manifest control-plane-secrets \
--xcp-certs "$(tctl install cluster-certs --cluster <cluster-name>)" \
...

Please note that the current context of kubectl must point to your management plane cluster when creating the secrets manifest with tctl install cluster-certs. When applying the resulting secrets manifest, don't forget to switch the current context of kubectl back to the cluster being onboarded.
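Put together, the context switching looks roughly like this, where mgmt-cluster and app-cluster stand in for your own kubeconfig context names:

```shell
# Point kubectl at the management plane cluster so tctl can generate certificates.
kubectl config use-context mgmt-cluster
tctl install manifest control-plane-secrets \
  --xcp-certs "$(tctl install cluster-certs --cluster <cluster-name>)" \
  --elastic-username tsb \
  --elastic-password tsb-elastic-password \
  --cluster <cluster-name> \
  > controlplane-secrets.yaml

# Switch back to the cluster being onboarded before applying the secrets.
kubectl config use-context app-cluster
kubectl apply -f controlplane-secrets.yaml
```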

The install manifest control-plane-secrets command outputs the required Kubernetes secrets. When saved to a file, we can add it to our source control or apply it to the cluster:

kubectl apply -f controlplane-secrets.yaml

For more information, see the CLI reference for the tctl install manifest control-plane-secrets command.

Installation

Finally, we need to create a ControlPlane custom resource in Kubernetes that describes the control plane we wish to deploy.

Cluster name in TSB

Make sure to replace <cluster-name-in-tsb> with the value you set previously when creating the cluster object in the TSB management plane.

apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  hub: <registry-location>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
  meshExpansion: {}
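One way to fill in the placeholders consistently is plain shell substitution over a template copy of the manifest. The values below are hypothetical, and the snippet uses a trimmed-down template just to illustrate the substitution:

```shell
# Hypothetical values; replace with your own.
CLUSTER_NAME="app-cluster-1"
REGISTRY="registry.example.com/tsb"

# A trimmed-down template containing the placeholders we substitute.
cat > controlplane-template.yaml <<'EOF'
spec:
  hub: <registry-location>
  managementPlane:
    clusterName: <cluster-name-in-tsb>
EOF

# Substitute the placeholders to produce the manifest to apply.
sed -e "s|<cluster-name-in-tsb>|${CLUSTER_NAME}|" \
    -e "s|<registry-location>|${REGISTRY}|" \
    controlplane-template.yaml > controlplane.yaml
```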

For more details on what each of these sections describes and how to configure them, please refer to the ControlPlane reference documentation.

This can then be applied to your Kubernetes cluster:

kubectl apply -f controlplane.yaml
note

To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways. For more information, see the section on Gateways in the usage quickstart guide.

Verify Onboarded Cluster

To verify that a cluster has been successfully onboarded, check that all the pods have started correctly.

kubectl get pod -n istio-system
NAME                                          READY   STATUS    RESTARTS   AGE
istio-ingressgateway-658bbbf444-q2gzj         1/1     Running   0          18s
istio-operator-57f9d6767f-h2txb               1/1     Running   0          49s
istiod-784d6d7969-887c9                       1/1     Running   0          27s
oap-deployment-689d9546f5-gfbsx               2/2     Running   0          48s
otel-collector-5f5686f7f4-m9shr               2/2     Running   0          48s
tsb-operator-control-plane-55bb7f64d8-zv958   1/1     Running   0          13m
tsbd-766b56cfd9-p8v4k                         1/1     Running   0          48s
vmgateway-7d665bcb54-f8ws5                    1/1     Running   0          18s
zipkin-77bbf69d84-6l2xq                       2/2     Running   0          49s
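Rather than eyeballing the list, you can also wait for all pods in the namespace to report Ready; the five-minute timeout here is an arbitrary choice.

```shell
# Block until every pod in istio-system is Ready, or fail after 5 minutes.
kubectl wait --for=condition=Ready pod --all -n istio-system --timeout=300s
```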