
Onboarding Clusters

This page explains how to onboard a Kubernetes cluster to an existing Tetrate Service Bridge management plane.

Before you start:

✓ Set up the TSB management plane. See the requirements and download page for the how-to guide.
✓ Verify that you're logged in to the management plane. If you're not, follow the steps below.

Log into the Management Plane

tctl config clusters set default --bridge-address <address:port>
tctl login

The login command will prompt you to set a tenant and provide a username and password.

The username you log in with must have sufficient permissions to create a cluster. This allows you to configure the management plane and onboard a cluster.
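For illustration, a hypothetical session pointing tctl at a management plane reachable at tsb.example.com:8443 (substitute your own address and port):

tctl config clusters set default --bridge-address tsb.example.com:8443
tctl login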

Configuring the Management Plane

To create the correct credentials for the cluster to communicate with the management plane, we need to create a cluster object using the management plane API. To configure a cluster object, adjust the YAML below to your needs and save it to a file.

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: <cluster-name>
  tenant: <tenant-name>
spec:
  tokenTtl: "1h"

Please refer to the reference docs for details on the configurable fields of a Cluster object.
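As an illustration, a hypothetical new-cluster.yaml for a cluster named app-cluster-1 belonging to the tenant tetrate (both names are examples, not defaults) could look like:

apiVersion: api.tsb.tetrate.io/v2
kind: Cluster
metadata:
  name: app-cluster-1
  tenant: tetrate
spec:
  tokenTtl: "1h"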

To create the cluster object at the management plane, use tctl to apply the yaml file containing the cluster details.

tctl apply -f new-cluster.yaml

Deploy Operators

Next, you need to install the necessary components in the cluster to onboard and connect it to the management plane.

There are two operators you must deploy. First, the control plane operator, which is responsible for managing Istio, SkyWalking, Zipkin, and various other components. Second, the data plane operator, which is responsible for managing gateways. These operators work independently of each other to decouple gateway upgrades from sidecar proxy upgrades.

tctl install manifest cluster-operators \
--registry <registry-location> > clusteroperators.yaml

The install manifest cluster-operators command outputs the Kubernetes manifests of the required operators. We can then add this to our source control or apply it to the cluster:

kubectl apply -f clusteroperators.yaml
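After applying the manifests, you can check that the operator deployments have come up. A minimal check, assuming both operators are installed into the istio-system namespace (the exact namespaces may differ in your environment):

kubectl get deployments -n istio-system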

Secrets

The control plane needs fewer secrets than the management plane, as it only has to connect to the management plane and Elasticsearch. The manifest render command for the cluster uses the logged-in tctl tool to automatically retrieve tokens for communication with the management plane, so we only need to provide the Elasticsearch credentials and the cluster name (so that the CLI tool can get tokens with the correct scope). Token generation is safe to run multiple times, as it does not revoke any previously created tokens.

tctl install manifest control-plane-secrets \
--elastic-password tsb-elastic-password \
--elastic-username tsb \
--cluster <cluster-name> \
> controlplane-secrets.yaml

The install manifest control-plane-secrets command outputs the required Kubernetes secrets. Once saved to a file, we can add them to our source control or apply them to the cluster:

kubectl apply -f controlplane-secrets.yaml
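To confirm the secrets were created, list the secrets in the control plane namespace. This is only a quick sanity check; the exact secret names depend on your configuration:

kubectl get secrets -n istio-system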

Installation

Finally, we need to create a ControlPlane custom resource in Kubernetes that describes the control plane we wish to deploy.

apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: <cluster-name>
  namespace: istio-system
spec:
  hub: <registry-location>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
  managementPlane:
    host: <bridge-address>
    port: <bridge-port>
    clusterName: <cluster-name>
    tenant: <tenant-name>
  meshExpansion: {}

For more details on what each of these sections describes and how to configure them, please refer to the reference docs.
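For illustration, a hypothetical controlplane.yaml that reuses the example names from earlier (app-cluster-1 in tenant tetrate) with placeholder endpoints might look like:

apiVersion: install.tetrate.io/v1alpha1
kind: ControlPlane
metadata:
  name: app-cluster-1
  namespace: istio-system
spec:
  hub: registry.example.com/tsb
  telemetryStore:
    elastic:
      host: elastic.example.com
      port: 9200
      version: 7
  managementPlane:
    host: tsb.example.com
    port: 8443
    clusterName: app-cluster-1
    tenant: tetrate
  meshExpansion: {}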

Save your ControlPlane resource to a file such as controlplane.yaml, then apply it to your Kubernetes cluster:

kubectl apply -f controlplane.yaml
Note: To onboard a cluster, you do not need to create any data plane descriptions at this stage. Data plane descriptions are only needed when adding Gateways.
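Once applied, the control plane operator picks up the ControlPlane resource and starts deploying its components. Assuming the CRD exposes the usual resource name for its kind, you can confirm the resource is present with:

kubectl get controlplane -n istio-system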

Verify Onboarded Cluster

To verify that a cluster has been successfully onboarded, check that the pods have all started correctly.

kubectl get pod -n istio-system
NAME READY STATUS RESTARTS AGE
istio-ingressgateway-658bbbf444-q2gzj 1/1 Running 0 18s
istio-operator-57f9d6767f-h2txb 1/1 Running 0 49s
istiod-784d6d7969-887c9 1/1 Running 0 27s
oap-deployment-689d9546f5-gfbsx 2/2 Running 0 48s
otel-collector-5f5686f7f4-m9shr 2/2 Running 0 48s
tsb-operator-control-plane-55bb7f64d8-zv958 1/1 Running 0 13m
tsbd-766b56cfd9-p8v4k 1/1 Running 0 48s
vmgateway-7d665bcb54-f8ws5 1/1 Running 0 18s
zipkin-77bbf69d84-6l2xq 2/2 Running 0 49s
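Alternatively, rather than scanning the pod list by eye, you can wait until every pod in the namespace reports ready (the timeout here is just an example value):

kubectl wait --for=condition=Ready pod --all -n istio-system --timeout=5m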