Tetrate Service Bridge, Version 1.6.x

Resource Consumption and Capacity Planning

This document describes conservative guidelines for capacity planning of the Tetrate Service Bridge (TSB) Management and Control Planes.

These parameters apply to production installations; in a demo-like environment, TSB will run with minimal resources.

disclaimer

The resource provisioning guidelines described in this document are very conservative.

Also, please be aware that the resource provisioning described in this document applies to vertical resource scaling. Multiple replicas of the same TSB component do not share the load with each other, so you cannot expect the combined resources of multiple replicas to have the same effect. Replicas of TSB components should be used for high-availability purposes only.

For a baseline installation of TSB with 1 registered cluster and 1 deployed service within that cluster, the following resources are recommended.

To reiterate, the amounts of memory described below are very conservative. Also, the actual performance delivered by a given number of vCPUs tends to fluctuate depending on your underlying infrastructure. You are advised to verify the results in your environment.

| Component | vCPU # | Memory (MiB) |
|---|---|---|
| TSB server (Management Plane)¹ | 2 | 512 |
| XCP Central Components² | 2 | 128 |
| XCP Edge | 1 | 128 |
| Front Envoy | 1 | 50 |
| IAM | 1 | 128 |
| TSB UI | 1 | 256 |
| OAP | 4 | 5192 |
| OTEL-collector | 2 | 1024 |

¹ Including the Kubernetes operator and persistent data reconciliation processes.
² Including the Kubernetes operator.
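For convenience, the per-component recommendations above can be tallied programmatically. The sketch below copies the vCPU and memory figures from the table; the totals themselves are computed here and are not part of the official guideline.

```python
# Tally of the baseline per-component recommendations (values copied from
# the table above). The totals are computed for convenience only.
baseline = {
    "TSB server (Management Plane)": (2, 512),
    "XCP Central Components":        (2, 128),
    "XCP Edge":                      (1, 128),
    "Front Envoy":                   (1, 50),
    "IAM":                           (1, 128),
    "TSB UI":                        (1, 256),
    "OAP":                           (4, 5192),
    "OTEL-collector":                (2, 1024),
}

total_vcpu = sum(cpu for cpu, _ in baseline.values())
total_mem_mib = sum(mem for _, mem in baseline.values())
print(total_vcpu, total_mem_mib)  # 14 7418
```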

The TSB stack is mostly CPU-bound. Additional clusters registered with TSB via XCP increase the CPU utilization by ~4%.
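The ~4% per-cluster figure lends itself to a simple linear estimate. The sketch below is only an illustration of that rule: the function name is ours, and the 2 vCPU baseline in the usage example is illustrative, not a measured value.

```python
# Linear CPU estimate under the stated assumption that each additional
# registered cluster increases CPU utilization by ~4% of the baseline.
def estimated_cpu(baseline_vcpu: float, extra_clusters: int,
                  per_cluster_growth: float = 0.04) -> float:
    """Baseline plus ~4% of baseline per additional registered cluster."""
    return baseline_vcpu * (1 + per_cluster_growth * extra_clusters)

# e.g. a 2 vCPU baseline with 5 additional registered clusters
print(round(estimated_cpu(2.0, 5), 2))  # 2.4
```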

The effect of additional registered clusters or additional deployed workload services on memory utilization is almost negligible. Likewise, the effect of additional clusters or workloads on the resource consumption of most TSB components is mostly negligible, with the notable exceptions of TSB, the XCP Central component, TSB UI, and IAM.

note

Components that are part of the visibility stack (e.g. OTel and OAP) have their resource utilization driven by requests, so their resource scaling should follow your user request rate statistics. As a general rule of thumb, more than 1 vCPU is preferred. Note also that visibility stack performance is largely bound by Elasticsearch performance.

Thus, we recommend vertically scaling these components by 1 vCPU as the number of deployed workloads grows:

Management Plane

Apart from OAP, the components do not require any resource adjustment: they are architected and tested to support very large clusters.

OAP in the Management Plane requires extra CPU and memory: approximately 100 millicores of CPU and 1024 MiB of RAM per 1000 services. For example, 4000 services aggregated in the TSB Management Plane from all TSB clusters would require approximately 400 millicores of CPU and 4096 MiB of RAM in total.
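The OAP sizing rule above can be expressed as a small helper. This is a sketch: the function name is ours, and it assumes resource needs scale linearly with the aggregated service count, as the rule states.

```python
# The Management Plane OAP sizing rule as code: ~100 millicores of CPU and
# 1024 MiB of RAM per 1000 aggregated services (linear scaling assumed).
def oap_mp_resources(services: int) -> tuple:
    """Return (cpu_millicores, memory_mib) for a given service count."""
    units = services / 1000
    return (round(units * 100), round(units * 1024))

print(oap_mp_resources(4000))  # (400, 4096), matching the example above
```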

Control Plane Resource Requirements

The following table shows typical peak resource utilization for the TSB Control Plane under the following assumptions:

  • 50 services with sidecars
  • Traffic on the entire cluster is 500 requests per second
  • OAP trace sampling rate is 1% of the traffic
  • Metrics are captured for every request at every workload

Note that average CPU utilization would be a fraction of the typical peak value.

| Component | Typical Peak CPU (m) | Typical Peak Memory (Mi) |
|---|---|---|
| Istiod | 300m | 250Mi |
| OAP | 2500m | 2500Mi |
| XCP Edge | 100m | 100Mi |
| Istio Operator - Control Plane | 50m | 100Mi |
| Istio Operator - Data Plane | 150m | 100Mi |
| TSB Control Plane Operator | 100m | 100Mi |
| TSB Data Plane Operator | 150m | 100Mi |
| OTEL Collector | 50m | 100Mi |

TSB/Istio Operator resource usage per Ingress Gateway

The following table shows the resources used by the TSB Operator and Istio Operator for a given number of Ingress Gateways.

| Ingress Gateways | TSB Operator CPU (m) | TSB Operator Mem (Mi) | Istio Operator CPU (m) | Istio Operator Mem (Mi) |
|---|---|---|---|---|
| 0 | 100m | 50Mi | 10m | 45Mi |
| 50 | 2600m | 125Mi | 1100m | 120Mi |
| 100 | 3500m | 200Mi | 1300m | 175Mi |
| 150 | 3800m | 250Mi | 1400m | 200Mi |
| 200 | 4000m | 325Mi | 1400m | 250Mi |
| 250 | 4700m | 325Mi | 1750m | 300Mi |
| 300 | 5000m | 475Mi | 1750m | 400Mi |
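For gateway counts that fall between the measured points, a linear interpolation over the table gives a rough estimate. This helper is hypothetical: it copies the TSB Operator CPU figures from the table and assumes usage grows roughly linearly between adjacent measurements.

```python
# Linear interpolation of TSB Operator CPU between measured gateway counts
# (values copied from the table above; intermediate points are estimates).
gateway_counts = [0, 50, 100, 150, 200, 250, 300]
tsb_operator_cpu_m = [100, 2600, 3500, 3800, 4000, 4700, 5000]

def interp_cpu(gateways: int) -> float:
    """Estimate TSB Operator CPU (millicores) for a given gateway count."""
    for i in range(1, len(gateway_counts)):
        if gateways <= gateway_counts[i]:
            x0, x1 = gateway_counts[i - 1], gateway_counts[i]
            y0, y1 = tsb_operator_cpu_m[i - 1], tsb_operator_cpu_m[i]
            return y0 + (y1 - y0) * (gateways - x0) / (x1 - x0)
    return float(tsb_operator_cpu_m[-1])  # beyond the measured range

print(interp_cpu(75))  # 3050.0
```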

Component resource utilization

The following tables show how the different components of TSB scale, up to 4000 services and 20 gateways with traffic peaking at 60 rpm. The measurements are divided into Management Plane and Control Plane.

Management Plane

| Services | Gateways | Traffic (rpm) | Central CPU (m) | Central Mem (Mi) | MPC CPU (m) | MPC Mem (Mi) | OAP CPU (m) | OAP Mem (Mi) | Otel CPU (m) | Otel Mem (Mi) | TSB CPU (m) | TSB Mem (Mi) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 3m | 39Mi | 5m | 30Mi | 37m | 408Mi | 22m | 108Mi | 14m | 57Mi |
| 400 | 2 | 60 | 4m | 42Mi | 15m | 31Mi | 116m | 736Mi | 24m | 123Mi | 50m | 63Mi |
| 800 | 4 | 60 | 4m | 54Mi | 24m | 34Mi | 43m | 909Mi | 26m | 127Mi | 85m | 75Mi |
| 1200 | 6 | 60 | 4m | 59Mi | 32m | 41Mi | 28m | 1141Mi | 27m | 210Mi | 213m | 78Mi |
| 1600 | 8 | 60 | 5m | 63Mi | 44m | 48Mi | 209m | 1475Mi | 29m | 249Mi | 113m | 86Mi |
| 2000 | 10 | 60 | 5m | 73Mi | 41m | 51Mi | 51m | 1655Mi | 24m | 319Mi | 211m | 91Mi |
| 2400 | 12 | 60 | 4m | 84Mi | 72m | 62Mi | 57m | 1910Mi | 29m | 381Mi | 227m | 97Mi |
| 2800 | 14 | 60 | 5m | 90Mi | 73m | 65Mi | 43m | 2136Mi | 16m | 466Mi | 275m | 104Mi |
| 3200 | 16 | 60 | 5m | 106Mi | 85m | 78Mi | 89m | 2600Mi | 43m | 574Mi | 382m | 108Mi |
| 3600 | 18 | 60 | 5m | 123Mi | 94m | 71Mi | 245m | 2772Mi | 37m | 578Mi | 625m | 115Mi |
| 4000 | 20 | 60 | 5m | 147Mi | 90m | 81Mi | 521m | 3224Mi | 15m | 704Mi | 508m | 122Mi |
note

IAM will peak at 5m/32Mi, LDAP at 1m/12Mi, and the XCP Operator at 3m/23Mi.

Control Plane

| Services | Gateways | Traffic (rpm) | Edge CPU (m) | Edge Mem (Mi) | Istiod CPU (m) | Istiod Mem (Mi) | OAP CPU (m) | OAP Mem (Mi) | Otel CPU (m) | Otel Mem (Mi) |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 0 | 0 | 3m | 67Mi | 6m | 110Mi | 55m | 439Mi | 16m | 74Mi |
| 400 | 2 | 60 | 2m | 97Mi | 33m | 182Mi | 334m | 1138Mi | 18m | 75Mi |
| 800 | 4 | 60 | 3m | 153Mi | 35m | 249Mi | 653m | 1640Mi | 21m | 85Mi |
| 1200 | 6 | 60 | 3m | 192Mi | 68m | 286Mi | 815m | 2238Mi | 23m | 164Mi |
| 1600 | 8 | 60 | 3m | 238Mi | 84m | 324Mi | 1217m | 2766Mi | 20m | 202Mi |
| 2000 | 10 | 60 | 3m | 280Mi | 84m | 357Mi | 1364m | 3351Mi | 17m | 267Mi |
| 2400 | 12 | 60 | 15m | 270Mi | 98m | 370Mi | 1658m | 3921Mi | 19m | 331Mi |
| 2800 | 14 | 60 | 5m | 310Mi | 334m | 450Mi | 2062m | 4493Mi | 19m | 406Mi |
| 3200 | 16 | 60 | 6m | 352Mi | 243m | 470Mi | 2406m | 4866Mi | 20m | 506Mi |
| 3600 | 18 | 60 | 22m | 386Mi | 130m | 489Mi | 2606m | 5346Mi | 20m | 512Mi |
| 4000 | 20 | 60 | 5m | 501Mi | 138m | 523Mi | 2904m | 6128Mi | 20m | 620Mi |
note

The Metric Server will peak at 4m/24Mi, the Onboarding Operator at 4m/24Mi, and the XCP Operator at 3m/22Mi.