
Onboarding Clusters

This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.

Before you start, make sure that you've:

✓ Checked the requirements
✓ Installed the TSB management plane or performed a demo installation
✓ Logged in to the management plane with tctl
✓ Checked the TSB control plane components

Isolation Boundaries

TSB 1.6 introduces isolation boundaries, which allow you to have multiple TSB-managed Istio environments within a Kubernetes cluster, or spanning several clusters. One benefit of isolation boundaries is that you can perform canary upgrades of the control plane.

To enable isolation boundaries, you must update the operator deployment with the environment variable ISTIO_ISOLATION_BOUNDARIES=true and update the ControlPlane CR to include the isolationBoundaries field, as sketched below. For more information, see Isolation Boundaries.
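A minimal sketch of the two changes, assuming the control plane operator deployment name from the default install shown later on this page; the isolationBoundaries values are illustrative, so check Isolation Boundaries for the exact schema:

# Add the environment variable to the control plane operator deployment
# (run after the operator has been deployed in the steps below):
kubectl -n istio-system set env deployment/tsb-operator-control-plane \
  ISTIO_ISOLATION_BOUNDARIES=true

Then include an isolationBoundaries section in the ControlPlane CR, for example:

spec:
  isolationBoundaries:
  - name: global
    revisions:
    - name: default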

FIPS installation

To install a FIPS-validated build, you need to download the FIPS-validated tctl binary from the TSB CLI downloads page. Using the FIPS-validated tctl for these steps ensures that the installed control plane operator and its components are FIPS-validated.

The FIPS-validated tctl binary is only available for the Linux amd64 platform.

Creating Cluster Object

To create the correct credentials for the cluster to communicate with the management plane, you need to create a cluster object using the management plane API.

Adjust the YAML object below according to your needs and save it to a file called new-cluster.yaml.

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name-in-tsb>
  organization: <organization-name>
spec: {}

Cluster name in TSB

<cluster-name-in-tsb> is the designated name for your cluster in TSB. You use this name in TSB APIs, such as the namespace selectors in workspaces and config groups. You will also use this name when creating the ControlPlane CR below. This name must be unique.

Please refer to the reference docs for details on the configurable fields of a Cluster object.
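For example, a filled-in Cluster object for a hypothetical cluster registered as app-cluster under an organization named tetrate (both names illustrative) would look like this:

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: app-cluster
  organization: tetrate
spec: {}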

To create the cluster object at the management plane, use tctl to apply the YAML file containing the cluster details.

tctl apply -f new-cluster.yaml
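You can confirm that the object was created by listing the clusters known to the management plane (assuming your tctl configuration points at the right organization):

tctl get clusters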

Deploy Operators

Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.

There are two operators you must deploy. First, the control plane operator, which is responsible for managing Istio, SkyWalking, and various other components. Second, the data plane operator, which is responsible for managing gateways.

tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml

The install manifest cluster-operators command outputs the Kubernetes manifests for the required operators. You can then add this to your source control or apply it to the cluster:

kubectl apply -f clusteroperators.yaml
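To confirm that the operators are running, check their pods. In a default install the control plane operator runs in istio-system and the data plane operator in istio-gateway, but verify the namespaces against your generated clusteroperators.yaml:

kubectl get pods -n istio-system
kubectl get pods -n istio-gateway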
note

For the secrets configuration and control plane installation below, you must create the secrets and custom resource YAMLs for each cluster individually. In other words, repeat the steps for each cluster, making sure to pass the <cluster-name-in-tsb> value that you set above, and then apply both YAMLs to the correct cluster.

Configuring Secrets

The control plane needs secrets in order to authenticate with the management plane. These include a service account key, Elasticsearch credentials, and CA bundles if you are using self-signed certificates for the management plane, XCP, or Elasticsearch. The following is the list of secrets that you need to create.

elastic-credentials
    Elasticsearch username and password.
es-certs
    The CA certificate to validate Elasticsearch connections when Elasticsearch is configured to present a self-signed certificate.
redis-credentials
    Contains:
    1. The Redis password.
    2. A flag indicating whether to use TLS.
    3. The CA certificate to verify Redis connections when Redis is configured to present a self-signed certificate.
    4. The client certificate and private key if Redis is configured with mutual TLS.
xcp-central-ca-bundle
    The CA bundle to validate the certificates presented by XCP Central.
mp-certs
    The CA certificate to validate the TSB management plane APIs when the management plane is configured to present a self-signed certificate. This is the CA that was used to sign the front-envoy TLS certificate.
cluster-service-account
    The service account key (JWK) that the cluster uses to authenticate with the management plane.
front-envoy as Elasticsearch proxy

TSB front-envoy can act as a proxy to the Elasticsearch instance configured in the ManagementPlane CR. If you use this, make sure to set es-certs with the front-envoy TLS certificate.

Using tctl to Generate Secrets

These secrets can be generated in the correct format by passing them as command-line flags to the tctl install manifest control-plane-secrets command.

First, create a service account for the cluster using tctl. This returns a private key, encoded as a JWK, that the cluster will use to authenticate with the management plane. The private key is needed when rendering the secrets for the cluster.

Run the following command to generate a service account private key for the cluster:

tctl install cluster-service-account \
--cluster <cluster-name-in-tsb> \
> cluster-<cluster-name-in-tsb>-service-account.jwk

The TSB management plane does not store the private key, so it is recommended to run the command once and store the private key in cluster-<cluster-name-in-tsb>-service-account.jwk securely. Each time the command is run, a new private key is created and associated with the service account in addition to the older keys. The older keys continue to work, so it is safe to run this command multiple times.
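Since the key cannot be retrieved again later, treat the file like any other credential; for example, restrict its permissions before moving it into your secret store:

chmod 600 cluster-<cluster-name-in-tsb>-service-account.jwk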

Now use tctl to render the Kubernetes secrets for the cluster, providing the cluster name, service account key, and Elasticsearch credentials. If you are using self-signed certificates for the management plane, XCP, or Elasticsearch, the CA bundles must also be provided here.

Self-signed certificates

If you use self-signed certificates, you can use the Demo install as a reference for how to set the necessary CA bundles. You should have the CA bundle from the management plane installation step where you created your self-signed certificates. If you use front-envoy as an Elasticsearch proxy, you must use the front-envoy CA certificate for --elastic-ca-certificate.
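If you are unsure which CA signed the certificate that the management plane presents, you can inspect the served chain directly with standard openssl (not a TSB-specific tool):

openssl s_client -connect <tsb-address>:<tsb-port> -showcerts </dev/null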

note

The following command will generate controlplane-secrets.yaml, which contains the service account key and the Elasticsearch credentials (when supplied via their corresponding flags).

tctl install manifest control-plane-secrets \
--cluster <cluster-name-in-tsb> \
--cluster-service-account="$(cat cluster-<cluster-name-in-tsb>-service-account.jwk)" \
> controlplane-secrets.yaml

For more information, see the CLI reference for the tctl install manifest control-plane-secrets command. You can also check the bundled explanation from tctl by running this help command:

tctl install manifest control-plane-secrets --help

Applying secrets

Once you've created your secrets manifest, you can add it to source control or apply it to your cluster.

kubectl apply -f controlplane-secrets.yaml
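You can confirm that the secrets listed earlier now exist in the control plane namespace:

kubectl get secrets -n istio-system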

Intermediate Istio CA Certificates

By default, the Istio CA generates a self-signed root certificate and key and uses them to sign the workload certificates. If you want to deploy a TSB control plane in a multi-cluster environment, Istio in all clusters must use the same root certificate. See Intermediate Istio CA Certificates for more details on how to set up the Istio CA in the control plane.

Automated Certificate Management

Since 1.7, TSB supports automated certificate management for intermediate Istio CA certificates. See Automated Certificate Management for more details. If you want to use automated certificate management, you can skip this section.

Make sure that you have enabled this in your management plane installation by setting certIssuer.clusterIntermediateCAs in the ManagementPlane custom resource, as sketched below.
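For reference, a minimal fragment of the ManagementPlane CR with this setting might look like the following; clusterIntermediateCAs is shown here as an empty object to enable it with defaults, but check the ManagementPlane reference docs for the exact schema:

spec:
  certIssuer:
    clusterIntermediateCAs: {}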

You also need to enable this in the ControlPlane custom resource by setting components.xcp.centralProvidedCaCert, as shown in the example below.

Demo installation

The Istio control plane installed in the demo cluster uses a generated self-signed root CA. If you want to include the demo control plane in a multi-cluster environment, you must update the Istio cacerts secret in the demo cluster's istio-system namespace with a new one issued from the same root CA that is used for the other clusters.

Then restart istiod and all your workloads in the demo cluster so they pick up the new root CA. You also need to restart the oap-deployment in the istio-system namespace so workloads can send access logs to OAP. A sketch of these steps follows.
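A sketch using standard kubectl; the cacerts file layout follows the usual Istio plugin-CA convention, and <workload-namespace> is a placeholder for each namespace running workloads:

# Replace the cacerts secret with one issued from the shared root CA:
kubectl -n istio-system delete secret cacerts
kubectl -n istio-system create secret generic cacerts \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem

# Restart istiod and OAP, then the workloads:
kubectl -n istio-system rollout restart deployment istiod oap-deployment
kubectl -n <workload-namespace> rollout restart deployment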

Control Plane Installation

Finally, you will need to create a ControlPlane custom resource in Kubernetes that describes the control plane you wish to deploy.

For this step, you will create a manifest file that must include several variables:

hub (<registry-location>)
    URL of your Docker registry.
managementPlane.clusterName (<cluster-name-in-tsb>)
    Name used when the cluster was registered to the TSB management plane.
managementPlane.host (<tsb-address>)
    Address where your TSB management plane is running. This is the external IP of the front-envoy service, or the domain name that you use in your DNS entry. For AWS, use the ELB DNS name.
managementPlane.port (<tsb-port>)
    Port number where your TSB management plane is listening. The default is 8443.
managementPlane.selfSigned (<is-mp-use-self-signed-certificate>)
    Set to true if you use a self-signed certificate for the management plane. If you are not using self-signed certificates, you can either omit this field or specify an explicit false value.
telemetryStore.elastic.host (<elastic-hostname-or-ip>)
    Address where your Elasticsearch instance is running.
telemetryStore.elastic.port (<elastic-port>)
    Port number where your Elasticsearch instance is listening.
telemetryStore.elastic.selfSigned (<is-elastic-use-self-signed-certificate>)
    Set to true if you use a self-signed certificate for Elasticsearch. If you are not using self-signed certificates, you can either omit this field or specify an explicit false value.
telemetryStore.elastic.protocol (<elastic-protocol>)
    Either http or https, with a default of https. Note that the default value is not stored in the CRD.
telemetryStore.elastic.version (<elastic-version>)
    The major version number of your Elasticsearch instance (e.g. if the version is 7.13.0, the value should be 7).

apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: controlplane
  namespace: istio-system
spec:
  hub: <registry-location>
  managementPlane:
    host: <tsb-address>
    port: <tsb-port>
    clusterName: <cluster-name-in-tsb>
    selfSigned: <is-mp-use-self-signed-certificate>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
      selfSigned: <is-elastic-use-self-signed-certificate>
  components:
    xcp:
      centralProvidedCaCert: true
    internalCertProvider:
      certManager:
        managed: INTERNAL

For more details on what each of these sections describes and how to configure them, please refer to the reference docs.

Save this manifest to a file called controlplane.yaml, then apply it to your Kubernetes cluster:

kubectl apply -f controlplane.yaml
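The control plane operator will then start deploying the components. You can watch progress with the following commands; the resource name is derived from the CR's apiVersion and kind:

kubectl get controlplanes.install.tetrate.io -n istio-system
kubectl get pods -n istio-system -w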
note

To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways. For more information, see the section on Gateways in the usage quickstart guide.

Verify Onboarded Cluster

To verify that a cluster has been successfully onboarded, check that the pods have all started correctly.

kubectl get pod -n istio-system
Output:
NAME                                                     READY   STATUS    RESTARTS   AGE
edge-66cbf867c6-rshqh                                    1/1     Running   0          1m25s
istio-operator-78446c59c5-dg28c                          1/1     Running   3          1m25s
istio-system-custom-metrics-apiserver-557ffcfbc8-lpw2f   1/1     Running   0          1m25s
istiod-6d474df64f-2w8s8                                  1/1     Running   0          1m25s
oap-deployment-894544dd6-v2w77                           3/3     Running   0          1m25s
onboarding-operator-f68684bf4-txwxn                      1/1     Running   1          1m25s
otel-collector-765d5c6475-6zfnf                          3/3     Running   0          1m25s
tsb-operator-control-plane-554c56d4f4-cnzjg              1/1     Running   3          1m25s
xcp-operator-edge-787fc64b8d-rhlth                       1/1     Running   5          1m25s

Verify Cluster Status

Then check that the cluster status is being sent to the management plane. You can do this using the TSB UI: log in, go to the Clusters page, and check that your newly onboarded cluster has the following information available: Provider, XCP Version, Istio Version, and Last Sync.

You can also use tctl by executing the following:

tctl get clusters <cluster-name-in-tsb> -o yaml | grep state: -A 5
state:
  istioVersions:
  - 1.12.4-34a16db007
  lastSyncTime: "2022-07-01T06:24:34.562924571Z"
  provider: gke
  xcpVersion: v1.3.0-rc31

If you see the cluster state, it means that the XCP edge in the control plane has connected to XCP Central in the management plane.
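If the state never appears, a useful first check is the logs of the XCP edge deployment (deployment name taken from the pod listing above):

kubectl logs -n istio-system deployment/edge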