
TSB Upgrade

This page walks you through upgrading TSB using the tctl CLI: you render the Kubernetes manifests for the different operators and apply them to your clusters with kubectl to perform the upgrade.

Before you start, make sure that you have:

✓ Checked the new version's requirements

The upgrade procedure between operator-based releases is fairly simple. Once the operator deployments are updated with the new release images, the newly spun-up operator pods will upgrade all the necessary components to the new version for you.

Create Backups

To make sure you can restore everything if something goes wrong, create backups of the Management Plane and of each of your clusters' local Control Planes.

Backup the Management Plane

Backup the tctl binary

Since each new tctl binary potentially comes with new operators and configurations to deploy and configure TSB, you should back up the tctl binary you are currently using. Do this before syncing the new images.

Copy the tctl binary with a version suffix (e.g. -1.3.0) so you can quickly restore the older one if needed.

cp ~/.tctl/bin/tctl ~/.tctl/bin/tctl-{version}

If you have misplaced your binary, you may be able to find the right version from this URL. However, it is strongly recommended that you back up your current copy to be sure.
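
For example, if the tctl binary you are currently using is version 1.9.3 (the version number here is illustrative), the backup and a quick sanity check could look like this:

cp ~/.tctl/bin/tctl ~/.tctl/bin/tctl-1.9.3
~/.tctl/bin/tctl-1.9.3 version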

Backup the ManagementPlane CR

Create a backup of the ManagementPlane CR by executing the following command:

kubectl get managementplane -n tsb -o yaml > mp-backup.yaml

Backup the PostgreSQL database

Create a backup of your PostgreSQL database.

The exact procedure for connecting to the database may differ depending on your environment; please refer to the documentation for your environment.
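
As an example, if the database is reachable directly from your workstation, a logical backup with pg_dump could look like the following. The host, user, database name, and output file are placeholders; the actual values depend on how your PostgreSQL instance was provisioned.

pg_dump -h <postgres-host> -U <postgres-user> -Fc <tsb-database> > tsb-postgres-backup.dump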

Backup the Control Plane Custom Resource

Create a backup of all ControlPlane CRs by executing the following command on each of your onboarded clusters:

kubectl get controlplane -n istio-system -o yaml > cp-backup.yaml
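
If you manage several clusters from a single workstation, you can run the backup against each kubeconfig context and keep one file per cluster (the context and file names here are illustrative):

kubectl --context <cluster-context> get controlplane -n istio-system -o yaml > cp-backup-<cluster-name>.yaml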

Upgrade Procedure

Download tctl and Sync Images

Now that you have taken backups, download the new version's tctl binary, then obtain the new TSB container images.

Details on how to do this are described in the Requirements and Download page.
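
As a rough sketch, assuming the standard image-sync flow described on that page (the exact commands and flags may differ between versions), syncing the images to your private registry looks like this:

tctl install image-sync \
--username <tetrate-registry-username> \
--apikey <tetrate-registry-apikey> \
--registry <your-docker-registry>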

Create the Management Plane Operator

Create the base manifest which will allow you to update the management plane operator from your private Docker registry:

tctl install manifest management-plane-operator \
--registry <your-docker-registry> \
> managementplaneoperator.yaml
Customization

The managementplaneoperator.yaml file created by the install command can now be used as a base template for your Management Plane upgrade. If your existing TSB configuration contains specific adjustments on top of the standard configuration, you should copy them over to the new template.
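
If you keep the previously applied operator manifest in source control, a plain diff against the new template is a quick way to spot the adjustments you need to carry over (the file name for the previous manifest is illustrative):

diff managementplaneoperator-previous.yaml managementplaneoperator.yaml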

Now, add the manifest to source control or apply it directly to the management plane cluster by using the kubectl client:

kubectl apply -f managementplaneoperator.yaml

After applying the manifest, you will see the new operator running in the tsb namespace:

kubectl get pod -n tsb
Output
NAME                                            READY   STATUS    RESTARTS   AGE
tsb-operator-management-plane-d4c86f5c8-b2zb5   1/1     Running   0          8s
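
To double-check that the new operator has started reconciling, you can also tail its logs (the deployment name corresponds to the pod shown above):

kubectl logs -n tsb deployment/tsb-operator-management-plane --tail=20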

For more information on the manifest and how to configure it, please review the ManagementPlane CR reference.

Create the Control and Data Plane operators

To deploy the new Control and Data Plane operators in your application clusters, you must run tctl install manifest cluster-operators to retrieve the Control Plane and Data Plane operator manifests for the new version.

tctl install manifest cluster-operators \
--registry <your-docker-registry> \
> clusteroperators.yaml
Customization

The clusteroperators.yaml file can now be used for your cluster upgrade. If your existing Control and Data Planes have specific adjustments on top of the standard configuration, you should copy them over to the template.

Review tier1gateways and ingressgateways

Due to a fix introduced in Istio 1.14, when both replicaCount and autoscaleEnabled are set, replicaCount is ignored and only the autoscale configuration is applied. This could cause the tier1gateways and ingressgateways to temporarily scale down to 1 replica during the upgrade, until the autoscale configuration is applied. To avoid this, edit the tier1gateway or ingressgateway spec and remove the replicas field, as sketched below; since the current deployment is already managed by the HPA controller, this allows the pods to be upgraded with the desired configuration.

You can get all the tier1gateways or ingressgateways by running:

kubectl get tier1gateway.install -A
kubectl get ingressgateway.install -A
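
A minimal sketch of this edit, assuming an ingress gateway resource named <gateway-name> in namespace <gateway-namespace> (both placeholders): open the resource in your editor and delete the replicas setting from its spec, leaving the autoscaling configuration in place. The same approach applies to Tier1Gateway resources.

kubectl edit ingressgateway.install <gateway-name> -n <gateway-namespace>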

Applying the Manifest

Now, add the manifest to source control or apply it directly to the appropriate clusters by using the kubectl client:

kubectl apply -f clusteroperators.yaml

For more information on each of these manifests and how to configure them, please review the ControlPlane CR and DataPlane CR references.

Check the upgrade is successful

Once the upgrade is completed, review the following information from the Grafana dashboards to verify the upgrade status.

Management Plane dashboards

  • TSB Health provides the status for the different management plane components. All of them must be green before and after the upgrade.

  • TSB Operational Status provides more details from the TSB API. Comparing the values from before and after the upgrade gives an insight into performance. It also shows the error rate and latency, not only for the TSB API but also for the Postgres database.

  • Global Configuration Distribution provides information about config propagation from the management plane to the control planes.

  • MPC Operational Status shows the latency and errors of the config generation system. Compare the Get All Config Objects duration, as it shows the performance after the upgrade. Received configs should not be empty; the dashboard also shows the updates from the TSB API to XCP.

Control Plane dashboards

  • Gitops Operational Status, if GitOps is enabled in the control planes, provides information about accepted/rejected configurations and performance.

  • Control Plane Token Status shows if tokens are being rotated and if they are valid for each control plane.

  • XCP Edge Status shows the status of XCP Edge in each control plane, including latency and performance for the Edges and whether communication with XCP Central is successful.

  • Istio Operational Status shows important information for each control plane, such as the error rate from istiod, the proxy convergence time, and the time to root CA expiration.
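
In addition to the dashboards, you can quickly confirm that the pods themselves are healthy: check the tsb namespace on the Management Plane cluster and the istio-system and istio-gateway namespaces on each onboarded cluster (namespaces here assume the default installation layout).

kubectl get pods -n tsb
kubectl get pods -n istio-system
kubectl get pods -n istio-gateway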

Rollback

In case something goes wrong and you want to roll back TSB to the previous version, you will need to roll back both the Management Plane and the Control Planes.

Rollback the Control Plane

Scale down istio-operator and tsb-operator

kubectl scale deployment \
-l "platform.tsb.tetrate.io/component in (tsb-operator,istio)" \
-n istio-system \
--replicas=0
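
To confirm that the operators are fully scaled down before proceeding, list the matching deployments and check that no replicas remain:

kubectl get deployment -n istio-system \
-l "platform.tsb.tetrate.io/component in (tsb-operator,istio)"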

Delete the IstioOperator Resource

To delete the IstioOperator resource, first remove the finalizer protecting the Istio object, then delete the resource, using the following commands:

kubectl patch iop tsb-istiocontrolplane -n istio-system --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers", "value":""}]'


kubectl delete istiooperator -n istio-system --all

Scale down istio-operator and tsb-operator for the Data Plane

kubectl scale deployment \
-l "platform.tsb.tetrate.io/component in (tsb-operator,istio)" \
-n istio-gateway \
--replicas=0

Delete the IstioOperator Resources for the Data Plane

Since 1.5.11, the IOP containing the ingressgateways has been split so that there is one IOP per ingressgateway. To roll back to the old Istio version, remove the finalizers protecting the Istio objects and delete all the operators with the following commands:

for iop in $(kubectl get iop -n istio-gateway --no-headers | grep -i "tsb-ingress" | awk '{print $1}'); do kubectl patch iop $iop -n istio-gateway --type='json' -p='[{"op": "remove", "path": "/metadata/finalizers", "value":""}]'; done

kubectl delete istiooperator -n istio-gateway --all

Create the Cluster Operators and Roll Back the ControlPlane CR

Using the tctl binary from the previous version, follow the instructions to create the cluster operators.

Then apply the backup of the ControlPlane CR:

kubectl apply -f cp-backup.yaml

Rollback the Management Plane

Scale Down Pods in Management Plane

Scale down the pods in the Management Plane so that it is inactive.

kubectl scale deployment tsb iam -n tsb --replicas=0

Restore PostgreSQL

Restore your PostgreSQL database from your backup. The exact procedure for connecting to the database may differ depending on your environment; please refer to the documentation for your environment.

Restore tctl and create the Management Plane operator

Restore tctl from the backup copy that you made, or download the binary for the specific version you would like to use.

mv ~/.tctl/bin/tctl-{version} ~/.tctl/bin/tctl
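
You can verify that the restored binary reports the expected version before continuing:

tctl version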

Follow the instructions for upgrading to create the Management Plane operator. Then apply the backup of the ManagementPlane CR:

kubectl apply -f mp-backup.yaml

Scale back the deployments

Finally, scale the deployments back up.

kubectl scale deployment tsb iam -n tsb --replicas 1
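
Once the deployments are scaled back up, confirm that the Management Plane pods return to the Running state:

kubectl get pods -n tsb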