
Elasticsearch wipe procedure

In some situations, data model changes in the Elasticsearch indexes require you to wipe the existing indexes and templates so that the new version of OAP and/or Zipkin can function properly.

The procedure below describes how to wipe such data from Elasticsearch and ensure that the OAP and Zipkin components will start up correctly.

  1. Scale the oap deployment in the management namespace down to 0 replicas.
kubectl -n ${MANAGEMENT_NAMESPACE} scale deployment oap --replicas=0
  2. Scale the zipkin deployment in the management namespace down to 0 replicas.
kubectl -n ${MANAGEMENT_NAMESPACE} scale deployment zipkin --replicas=0
  3. Scale the oap-deployment deployment in the control plane namespace down to 0 replicas. This needs to be done in all clusters onboarded in TSB.
kubectl -n ${CONTROL_NAMESPACE} scale deployment oap-deployment --replicas=0
  4. Scale the zipkin deployment in the control plane namespace down to 0 replicas. This needs to be done in all clusters onboarded in TSB.
kubectl -n ${CONTROL_NAMESPACE} scale deployment zipkin --replicas=0
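
Before wiping the data, you may want to confirm that both components have fully scaled down (a quick check, assuming the deployment names used above):

kubectl -n ${MANAGEMENT_NAMESPACE} get deployments oap zipkin
kubectl -n ${CONTROL_NAMESPACE} get deployments oap-deployment zipkin

Each deployment should report 0/0 ready replicas before you proceed.
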
  5. Execute the following commands to delete templates and indexes in Elasticsearch.
for tmpl in $(curl http://<es_host>:<es_port>/_cat/templates | \
egrep "alarm_record|endpoint_|envoy_|http_access_log|profile_|security_audit_|service_|register_lock|instance_traffic|segment|network_address|top_n|zipkin" | \
awk '{print $1}'); do curl http://<es_host>:<es_port>/_template/${tmpl} -XDELETE ; done
for idx in $(curl http://<es_host>:<es_port>/_cat/indices | \
egrep "alarm_record|endpoint_|envoy_|http_access_log|profile_|security_audit_|service_|register_lock|instance_traffic|segment|network_address|top_n|zipkin" | \
awk '{print $3}'); do curl http://<es_host>:<es_port>/${idx} -XDELETE ; done
Elasticsearch options

The commands above assume a plain HTTP Elasticsearch instance with no authentication. Besides setting <es_host> and <es_port> appropriately, add basic auth if your instance requires it by supplying -u <es_user>:<es_pass> to the curl commands above, and change the scheme to https if your instance uses TLS.
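
As a sanity check, you can re-run the same _cat queries and confirm that nothing matching the filter remains (same placeholders and pattern as above):

curl http://<es_host>:<es_port>/_cat/templates | egrep "alarm_record|endpoint_|envoy_|http_access_log|profile_|security_audit_|service_|register_lock|instance_traffic|segment|network_address|top_n|zipkin"
curl http://<es_host>:<es_port>/_cat/indices | egrep "alarm_record|endpoint_|envoy_|http_access_log|profile_|security_audit_|service_|register_lock|instance_traffic|segment|network_address|top_n|zipkin"

Both commands should produce no output once the wipe is complete.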

  6. Scale the oap deployment in the management namespace back up to 1 replica.
kubectl -n ${MANAGEMENT_NAMESPACE} scale deployment oap --replicas=1

Keep an eye on the logs of the new OAP pod for a line similar to:

2020-05-13 10:32:45,919 - org.eclipse.jetty.server.Server -10833 [main] INFO  [] - Started @10860ms
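
One way to follow the logs until this line appears is to tail the deployment directly (a minimal sketch, assuming the deployment name used above):

kubectl -n ${MANAGEMENT_NAMESPACE} logs -f deployment/oap
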
  7. Scale the zipkin deployment in the management namespace back up to 1 replica.
kubectl -n ${MANAGEMENT_NAMESPACE} scale deployment zipkin --replicas=1
OAP and Zipkin availability

Ensure OAP and Zipkin start correctly in the management plane before continuing with this procedure. The management plane pods for these components create the index templates and indices required by the system, so they must be up and running before you scale the control plane components back up.
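
If you prefer a one-shot check over watching logs, kubectl can wait for the rollouts to complete (a sketch, assuming the management plane deployment names used above):

kubectl -n ${MANAGEMENT_NAMESPACE} rollout status deployment/oap
kubectl -n ${MANAGEMENT_NAMESPACE} rollout status deployment/zipkin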

  8. Scale the oap-deployment deployment in the control plane namespace back up to 1 replica. This needs to be done in all clusters onboarded in TSB.
kubectl -n ${CONTROL_NAMESPACE} scale deployment oap-deployment --replicas=1
  9. Scale the zipkin deployment in the control plane namespace back up to 1 replica. This needs to be done in all clusters onboarded in TSB.
kubectl -n ${CONTROL_NAMESPACE} scale deployment zipkin --replicas=1
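
Since the last two steps must be repeated in every onboarded cluster, you can loop over your kubeconfig contexts; the sketch below assumes a hypothetical CLUSTER_CONTEXTS variable listing the context names and that ${CONTROL_NAMESPACE} is the same in each cluster:

for ctx in ${CLUSTER_CONTEXTS}; do
  kubectl --context "${ctx}" -n ${CONTROL_NAMESPACE} scale deployment oap-deployment --replicas=1
  kubectl --context "${ctx}" -n ${CONTROL_NAMESPACE} scale deployment zipkin --replicas=1
done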