Upgrade and Rollback Instructions
This guide explains how to upgrade and roll back Istio installations in your Kubernetes environment. It covers two upgrade strategies, Canary and In-Place, using either Helm or istioctl.
Before proceeding, please take a moment to review the Upgrade and Rollback Strategy document.
Prerequisites
Before beginning the upgrade process, ensure that you have:
- Administrative access to your Kubernetes cluster
- Helm 3.x installed (for Helm-based upgrades)
- The istioctl CLI matching both your current and target versions (you will need the previous version for rollback)
- Access to the Tetrate Istio distribution repository
- A backup storage location for configuration files
- Monitoring tools in place to keep a close eye on the upgrade process
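As a quick sanity check, you can confirm the core tooling before starting. A minimal sketch using standard client-side commands (adjust to your environment):
helm version --short
istioctl version --remote=false   # client version only
kubectl auth can-i '*' '*' --all-namespaces   # rough check for admin access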
Pre-upgrade Checklist
Environment Backup
Creating comprehensive backups is crucial before any upgrade attempt. These backups serve as a safety net and help you quickly recover if issues arise.
Backup Istio CRDs and resources with a timestamp (or other unique identifier):
kubectl get crd -o yaml -l chart=istio > istio-crd-$(date +%Y%m%d).yaml
kubectl get istio-io -A -o yaml > istio-resources-$(date +%Y%m%d).yaml
Backup Helm values (if you are using Helm):
helm get values istio-base -n istio-system > istio-base-values-$(date +%Y%m%d).yaml
helm get values istiod -n istio-system > istiod-values-$(date +%Y%m%d).yaml
helm get values istio-ingress -n istio-ingress > ingress-values-$(date +%Y%m%d).yaml
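Before moving on, it is worth confirming that the backup files are actually populated. A minimal sketch, assuming the file names from the commands above:
ls -lh istio-crd-$(date +%Y%m%d).yaml istio-resources-$(date +%Y%m%d).yaml   # files should be non-empty
grep -c " kind:" istio-resources-$(date +%Y%m%d).yaml   # rough count of backed-up objects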
Prepare Tag, Version, and Revision
Identify your current and target version, tag, and revision. The revision is used for canary upgrades and rollbacks. For detailed instructions on determining version information, please refer to the Helm or istioctl documentation.
For example, this guide uses TID versions 1.21 and 1.23:
export OLD_VERSION=1.21.0+tetrate1
export OLD_TAG=1.21.0-tetrate1
export OLD_REV=1-21-0
export NEW_VERSION=1.23.0+tetrate1
export NEW_TAG=1.23.0-tetrate1
export NEW_REV=1-23-0
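If you are not sure which version and revision are currently installed, they can usually be read from the cluster itself. A minimal sketch:
kubectl get pods -n istio-system -l app=istiod -L istio.io/rev   # running istiod pods and their revision
helm list -n istio-system   # installed chart versions, if Helm was used
istioctl version   # client, control plane, and data plane versions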
If you are using Helm, update your repository and verify that the new version is available:
helm repo update tetratelabs
helm search repo tetratelabs/base --versions
Version Compatibility Check
Before proceeding, verify compatibility between your current and target versions:
- Check Current Version and Components:
istioctl version
- Analyze the Mesh for Potential Upgrade Issues:
istioctl analyze
- Inspect the Cluster for Istio Install and Upgrade Requirements, targeting your cluster with the new version's binary (see the note after this list):
istioctl-new-version x precheck
- Validate Upgrade Compatibility:
istioctl-new-version upgrade --dry-run
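Note that istioctl-new-version above is a placeholder for the new release's istioctl binary, not a literal command. One way to keep both versions side by side (the download path here is hypothetical):
# Extract the new release, then give its binary a distinct name on your PATH
cp ~/Downloads/istio-${NEW_TAG}/bin/istioctl /usr/local/bin/istioctl-new-version
istioctl-new-version x precheck
istioctl-new-version upgrade --dry-run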
Tetrate Repository Access
This guide assumes that you have:
- Valid credentials for the Tetrate private repository
- A configured image pull secret named tetrate-tis-creds in your Istio namespaces
For pull secret configuration instructions, please see the Tetrate Istio distribution documentation.
If you are using TID community images instead:
- Remove imagePullSecrets from all example commands.
- Change the image hub to containers.istio.tetratelabs.com.
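For example, the canary Istiod install shown later in this guide would look roughly like this with community images (same chart and flags, different hub, no pull secret):
helm install istiod-${NEW_REV} tetratelabs/istiod \
  -n istio-system \
  --set global.tag=${NEW_TAG} \
  --set global.hub="containers.istio.tetratelabs.com" \
  --version ${NEW_VERSION} \
  --set revision=${NEW_REV} \
  --wait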
Label and Annotate Base CRDs (TIS < 1.24)
For TIS releases earlier than 1.24, you need to label and annotate the base CRDs before upgrading. This extra step prevents errors during the upgrade process:
for crd in $(kubectl get crds -l chart=istio -o name && kubectl get crds -l app.kubernetes.io/part-of=istio -o name)
do
kubectl label "$crd" "app.kubernetes.io/managed-by=Helm"
kubectl annotate "$crd" "meta.helm.sh/release-name=istio-base" # replace with your Helm release name if different
kubectl annotate "$crd" "meta.helm.sh/release-namespace=istio-system" # replace with your Istio namespace if different
done
Failure to do so may result in errors such as:
Error: UPGRADE FAILED: Unable to continue with update: CustomResourceDefinition "wasmplugins.extensions.istio.io" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "istio-base"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "istio-system"
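To confirm the metadata was applied, you can spot-check one of the CRDs. A minimal sketch:
kubectl get crd wasmplugins.extensions.istio.io \
  -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}{"\n"}{.metadata.annotations.meta\.helm\.sh/release-name}{"\n"}'
# expected output: Helm, then your release name (istio-base by default)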
Canary Upgrade (Recommended)
A canary upgrade allows you to run two Istio control plane versions simultaneously, enabling a gradual migration with the flexibility to roll back quickly if needed.
Using Helm
Upgrade Procedures
Install the New Istio Version
Begin by upgrading the Istio base CRDs:
helm upgrade istio-base tetratelabs/base \
-n istio-system \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION}
Next, deploy the Istiod control plane as a canary using the new revision identifier:
helm install istiod-${NEW_REV} tetratelabs/istiod \
-n istio-system \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION} \
--set revision=${NEW_REV} \
--wait
Likewise, deploy the new Ingress gateway:
helm install istio-ingress-${NEW_REV} tetratelabs/gateway \
-n istio-ingress \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION} \
--set revision=${NEW_REV}
Verify that both the control plane and ingress gateway pods are running with the correct revision:
kubectl get pods -L istio.io/rev -n istio-system
kubectl get pods -L istio.io/rev -n istio-ingress
Migrate Workloads to the New Version
Label your application namespaces to use the new control plane version, removing the legacy istio-injection label if present:
kubectl label namespace <app-namespace> istio.io/rev=${NEW_REV} istio-injection- --overwrite
Perform a rolling restart of all deployments in the namespace:
kubectl rollout restart deployment -n <app-namespace>
Verify that the workloads are running with the new version:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
Keep a close eye on traffic, logs, and service performance to ensure everything is running smoothly.
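To double-check that sidecars were actually re-injected at the new version, you can also inspect the proxy image tags. A minimal sketch:
kubectl get pods -n <app-namespace> \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[?(@.name=="istio-proxy")].image}{"\n"}{end}'
# each image tag should now match ${NEW_TAG}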
Finalize Migration
After confirming that all workloads are stable with the new version, it’s time to remove the old control plane.
Canary Helm Release Name: if you previously used a canary upgrade, your Istiod Helm release might be named istiod-${OLD_REV}.
helm delete istiod -n istio-system
Then, update the default revision of the Istio base chart to the new canary version:
helm upgrade istio-base tetratelabs/base \
--set defaultRevision=${NEW_REV} \
-n istio-system
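To confirm the change took effect, re-read the release values; defaultRevision should now show the new revision:
helm get values istio-base -n istio-system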
Rollback Procedures
If issues are detected after migrating workloads, follow these steps to roll back quickly:
Revert Workload Labels
Reset the namespace labels to point back to the previous control plane version:
kubectl label namespace <app-namespace> istio.io/rev=${OLD_REV} --overwrite
Restart the deployments:
kubectl rollout restart deployment -n <app-namespace>
Validate the Rollback
Confirm that the workloads are now connected to the previous control plane:
kubectl get pods -L istio.io/rev -n istio-system
istioctl proxy-status
Remove the New Istio Version
Uninstall the new components:
helm uninstall istiod-${NEW_REV} -n istio-system
helm uninstall istio-ingress-${NEW_REV} -n istio-ingress
Roll back the Istio base chart to the previous revision:
# Rollback to the immediately previous revision
helm rollback istio-base -n istio-system
# Or rollback to a specific revision number
helm rollback istio-base <REVISION_NUMBER> -n istio-system
To view available Helm revisions if needed:
helm history istio-base -n istio-system
Using Istioctl
Upgrade Procedures
Install the New Istio Version
Deploy the new control plane alongside the existing one using istioctl:
istioctl install --set profile=default \
--set "values.global.imagePullSecrets[-1]=tetrate-tis-creds" \
--set tag=${NEW_TAG} \
--set hub="addon-containers.istio.tetratelabs.com" \
--set revision=${NEW_REV}
Verify that both control planes are running:
kubectl get pods -L istio.io/rev -n istio-system
Migrate Gateways and Workloads
For gateways, create a copy of the deployment and add the new revision label. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-ingressgateway-${NEW_REV} # Use the target revision
  namespace: istio-ingress
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      annotations:
        inject.istio.io/templates: gateway
      labels:
        istio: ingressgateway
        istio.io/rev: ${NEW_REV} # Target revision
    spec:
      containers:
        - name: istio-proxy
          image: auto
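Assuming you save the manifest above to a file (istio-ingressgateway-canary.yaml is just an example name), substitute the ${NEW_REV} placeholder and apply it, then confirm the new gateway pods come up with the target revision:
envsubst < istio-ingressgateway-canary.yaml | kubectl apply -f -
kubectl get pods -n istio-ingress -L istio.io/rev
Then, update your application namespaces to use the new control plane: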
kubectl label namespace <app-namespace> istio.io/rev=${NEW_REV} istio-injection- --overwrite
Restart the deployments:
kubectl rollout restart deployment -n <app-namespace>
Verify that the workloads are now using the new control plane:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
Finalize Migration
Once all workloads are stable on the new version, uninstall the old control plane:
istioctl uninstall --revision ${OLD_REV}
If the old control plane does not have a revision label, run:
istioctl uninstall -y
Finally, set the default revision to the new version:
istioctl tag set default --revision ${NEW_REV}
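To verify, list the revision tags; the default tag should now reference ${NEW_REV}:
istioctl tag list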
Rollback Procedures
If the new revision causes issues, revert as follows:
Revert Workload Labels
Change the namespace labels back to the previous control plane version:
kubectl label namespace <app-namespace> istio.io/rev=${OLD_REV} --overwrite
Restart the deployments:
kubectl rollout restart deployment -n <app-namespace>
Validate the Rollback
Ensure that workloads are connected to the previous control plane:
istioctl proxy-status
Monitor logs and metrics carefully to confirm that everything is stable.
Remove the New Istio Version
If necessary, uninstall the new control plane:
istioctl uninstall --revision ${NEW_REV}
In-Place Upgrade
An In-Place upgrade updates the existing Istio control plane without running parallel versions. This method is simpler but carries a higher risk, since all workloads are affected at once.
Using Helm
Upgrade Procedures
Execute the In-Place Upgrade
Upgrade the Istio base CRDs:
helm upgrade istio-base tetratelabs/base \
-n istio-system \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION}
Then, upgrade the Istiod control plane:
helm upgrade istiod tetratelabs/istiod \
-n istio-system \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION} \
--wait
Likewise, upgrade the Ingress gateway:
helm upgrade istio-ingress tetratelabs/gateway \
-n istio-ingress \
--set global.tag=${NEW_TAG} \
--set global.hub="addon-containers.istio.tetratelabs.com" \
--set "global.imagePullSecrets[0]=tetrate-tis-creds" \
--version ${NEW_VERSION} \
--wait
Verify that the control plane and gateway pods are running as expected:
kubectl get pods -n istio-system
kubectl get pods -n istio-ingress
Migrate Workloads to the New Version
Restart your application deployments to adopt the new configuration:
kubectl rollout restart deployment -n <app-namespace>
Confirm that workloads have connected to the new control plane:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
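As an extra check that the images were actually swapped, you can read the istiod deployment's image reference. A minimal sketch:
kubectl get deployment istiod -n istio-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
# the tag portion should match ${NEW_TAG}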
Rollback Procedures
Identify the Previous Helm Revision
Check the Helm release history to find the appropriate revision number:
helm history istio-base -n istio-system
helm history istiod -n istio-system
helm history istio-ingress -n istio-ingress
Perform the Helm Rollback
Rollback each component to its previous revision:
helm rollback istio-base <REVISION_NUMBER> -n istio-system
helm rollback istiod <REVISION_NUMBER> -n istio-system
helm rollback istio-ingress <REVISION_NUMBER> -n istio-ingress
Restart Workloads
Restart your application deployments to ensure they use the previous control plane:
kubectl rollout restart deployment -n <app-namespace>
Validate the Rollback
Verify that the pods are running the previous version and check proxy status:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
(Optional) Reapply Backup Configuration
If necessary, reapply your backed-up configuration:
kubectl apply -f istio-resources-<BACKUP_DATE>.yaml # replace <BACKUP_DATE> with the timestamp of the backup taken before the upgrade
Using Istioctl
Upgrade Procedures
Execute the In-Place Upgrade
Run the upgrade command:
istioctl upgrade \
  --set "values.global.imagePullSecrets[-1]=tetrate-tis-creds" \
  --set tag=${NEW_TAG} \
  --set hub="addon-containers.istio.tetratelabs.com"
Verify that the control plane has been updated:
kubectl get pods -n istio-system -o wide
istioctl version
Migrate Workloads to the New Version
Restart your deployments to pick up the new configuration:
kubectl rollout restart deployment -n <app-namespace>
Confirm that workloads have connected to the updated control plane:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
Rollback Procedures
If issues occur after an in-place upgrade using istioctl, follow these steps to roll back:
Reinstall the Previous Version
Istioctl Version Compatibility: ensure you use the istioctl binary matching the version you want to roll back to.
Reinstall the previous version with istioctl:
istioctl install --set profile=default \
--set "values.global.imagePullSecrets[-1]=tetrate-tis-creds" \
--set tag=${OLD_TAG} \
--set hub="addon-containers.istio.tetratelabs.com"Verify that the control plane has been reverted:
kubectl get pods -n istio-system -o wide
istioctl version
Migrate Workloads to the Previous Version
Restart your application deployments:
kubectl rollout restart deployment -n <app-namespace>
Confirm that workloads have reconnected to the old control plane:
kubectl get pods -n <app-namespace> -o wide
istioctl proxy-status
(Optional) Reapply Backup Configuration
If needed, reapply any custom configurations from your backup:
kubectl apply -f istio-resources-<BACKUP_DATE>.yaml # replace <BACKUP_DATE> with the timestamp of the backup taken before the upgrade
By following these instructions for Canary and In-Place upgrades, using either Helm or istioctl, you can confidently manage version transitions in your Istio deployment. Always keep reliable backups and monitor the upgrade process closely to catch potential issues early.