Tetrate Service Bridge, Version 1.11.x

Requirements and Download

This page gives you an overview of everything you need to get started with a Tetrate Service Bridge (TSB) installation.

Operating a TSB service mesh requires a good understanding of working with Kubernetes and Docker repositories. For additional guidance, we recommend reading their supporting documentation.

FIPS-validated TSB Build

Tetrate Service Bridge (TSB) now offers a FIPS-validated build, designed to meet U.S. federal government cryptographic standards. This build enhances security posture for organizations requiring strict compliance. For implementation details and best practices, see our TSB FIPS Guide.

Requirements

You can install TSB for production use, or you can install the demo profile to get a quick feel for TSB. Please check the requirements for each in the following table:

| Requirement | Production TSB | Demo/Quickstart TSB |
|---|---|---|
| Kubernetes cluster (see Supported Kubernetes versions) | required | required |
| Private Docker registry (HTTPS) | required | required |
| Tetrate repository account and API key (if you don't have this yet, please contact Tetrate) | required | required |
| Docker Engine 18.03.01 or above, with push access to your private Docker registry | required | required |
| PostgreSQL 11.1 or above | packaged (v14.12) | packaged (v14.12) |
| Elasticsearch 6.x, 7.x, or 8.x, or AWS OpenSearch 1.x or 2.x | packaged (v8.14.3) | packaged (v8.14.3) |
| Redis 6.2 or above | required | packaged (v7.0.15) |
| Identity Provider | required | Local IdP |
| cert-manager v1.7.2 or above | required | packaged (cert-manager v1.14.6) |
cert-manager usage

cert-manager is used to issue and manage certificates for the TSB webhooks, for TSB internal communication, and for integration with an external CA for the Istio control plane.

cert-manager version

cert-manager 1.4.0 is the minimum version required for use with TSB 1.5: it introduced the feature flag for signing Kubernetes CSR requests, which supports Kubernetes 1.16-1.21. Go to cert-manager Supported Releases for more information on supported Kubernetes and OpenShift versions.
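If you are unsure which cert-manager version is running in a cluster, one way to check is to inspect the controller's image tag. This is a minimal sketch, assuming the default cert-manager namespace and deployment name:

# Print the cert-manager controller image (the tag carries the version)
kubectl -n cert-manager get deployment cert-manager \
  -o jsonpath='{.spec.template.spec.containers[0].image}'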

Production installation note

The size of your Kubernetes clusters depends on your platform deployment requirements. A base TSB install does not consume many additional resources. Storage sizing depends greatly on the size of your application clusters, the number of workloads (and their request rates), and your observability configuration (sampling rate, data retention period, etc.). For more information see our capacity planning guide.

When running self-managed, your organization might impose additional security, availability, and disaster recovery requirements on top of the environments and applications mentioned above. For detailed information on how to adjust the TSB installation and configuration, please refer to the operator reference guides as well as the how-to section of our documentation, where you can find descriptions of the configuration options, common deployment scenarios, and solutions.

Identity Provider

Tetrate Service Bridge (TSB) requires an Identity Provider (IdP) as the source of users. This identity provider is used for user authentication and to periodically synchronize information about existing users and groups into the platform. TSB now offers multiple options for identity providers:

Local Identity Provider

TSB now supports a local identity provider, which can be used for environments where external identity providers are not required or not available. This option is particularly useful for:

  • Small-scale deployments
  • Development and testing environments
  • Scenarios where external identity management is not necessary

The local identity provider allows you to manage users and groups directly within TSB, simplifying the setup process and reducing external dependencies. For more information on setting up the local identity provider, see Local Identity Provider.

External Identity Providers

For environments that require integration with existing identity management systems, TSB continues to support external identity providers:

LDAP

TSB can integrate with LDAP (Lightweight Directory Access Protocol) servers. To use LDAP:

  1. Configure your LDAP queries for authentication and synchronization of users and groups.
  2. Set up TSB to use your LDAP server as the identity provider.

For more details on LDAP configuration, see LDAP as Identity Provider.
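Conceptually, the LDAP configuration combines connection details with the queries used to look up users and groups. The sketch below is purely illustrative: the field names are hypothetical, not the authoritative TSB configuration schema, so consult LDAP as Identity Provider for the actual fields.

# Hypothetical field names, for illustration only; see the LDAP guide for the real schema.
ldap:
  host: ldap.example.com                       # LDAP server address
  port: 636                                    # 636 for LDAPS, 389 for plain LDAP
  search:
    baseDN: dc=example,dc=com                  # subtree to search
    usersFilter: "(objectClass=person)"        # query used to authenticate and sync users
    groupsFilter: "(objectClass=groupOfNames)" # query used to sync groups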

OIDC-Compliant Identity Providers

TSB supports any OIDC (OpenID Connect) compliant Identity Provider. To use OIDC:

  1. Create an OIDC client in your IdP.
  2. Enable Authorization Code Flow for login with UI.
  3. Enable Device Authorization for login with tctl using device code.

For more information and examples, see how to set up Azure AD as a TSB Identity Provider.
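The values you will typically take from the OIDC client you create are sketched below. The field names are illustrative, not the exact TSB configuration schema; see the Azure AD guide linked above for a worked example.

# Hypothetical field names, for illustration only.
oidc:
  issuer: https://idp.example.com   # your IdP's issuer URL (used for OIDC discovery)
  clientId: tsb
  clientSecret: <client-secret>
  # The client must allow Authorization Code Flow (login with the UI) and
  # Device Authorization (login with tctl using a device code).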

OIDC IdP synchronization

TSB supports Azure AD for synchronization of users and groups. If you use another IdP, you need to create a sync job that will get users and teams from your IdP and sync them into TSB using the sync API. See User Synchronization for more details.

Data and Telemetry Storage

Tetrate Service Bridge (TSB) requires data and telemetry storage. TSB uses PostgreSQL for data storage and Elasticsearch for telemetry storage. The current version of TSB includes embedded PostgreSQL and Elasticsearch options.

Embedded PostgreSQL and Elasticsearch are suitable for various environments, including production scenarios with specific considerations:

  • Demo and Development: Ideal for testing, development, and demonstration purposes.
  • Production: Can be used in production environments where high availability (HA) and extensive observability are not critical requirements.

Considerations for Embedded Storage in Production

  1. Performance and Scalability: Evaluate the performance and scalability needs of your specific use case.
  2. Uptime: Uptime guarantees depend on the installation environment; Kubernetes has limitations around stateful data and storage management.
  3. Data Management:
    • PostgreSQL:
      • Scheduled backups are supported.
      • Best-effort support for repair and restoration.
    • Elasticsearch:
      • Used for observability data on a best-effort basis.
      • In case of problems, purging data to return to a clean install may be necessary.
      • Observability data retention is limited to a maximum of 7 days.
  4. Storage Sizing:
    • Proper sizing of Persistent Volume Claims (PVCs) is crucial.
    • Data growth rates vary per installation.
    • If storage overflows, options include purging data, increasing PVC size (see the sketch after this list), or reducing the retention period.
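If a PVC does overflow, increasing its size is a standard Kubernetes operation, provided the underlying StorageClass supports volume expansion. A minimal sketch with illustrative names:

# Illustrative sketch: replace the namespace, PVC name, and size with your installation's values.
# Expansion only works if the StorageClass has allowVolumeExpansion: true.
kubectl -n <tsb-namespace> patch pvc <postgres-pvc-name> \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'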

External Storage for Production

For production environments with specific requirements, external storage solutions should be considered:

  1. High Availability (HA): If you need strong SLAs, large-scale operations, or HA guarantees, consider moving to external Elasticsearch and/or PostgreSQL.
  2. Heavy Observability Usage: Environments with high sampling rates or extensive tracing may benefit from external solutions.
  3. Clustered Setups: For scenarios requiring clustered databases or hot standbys.

Production Storage Evaluation

TSB's operator-managed storage provides lifecycle management for the databases. However, you should still evaluate the performance and scalability of the storage solution for your specific production environment.

While embedded storage can be suitable for many production use cases, you need to carefully evaluate whether it meets your specific requirements, especially regarding:

  • High availability needs
  • Observability data volume and retention
  • Performance under your expected load
  • Backup and recovery processes

If your use case involves critical HA requirements or heavy observability usage (particularly with high sampling rates for tracing), consider externalizing these tools for optimal performance and reliability.

The Postgres user configured in TSB must have full ownership of the TSB schema. This is configured automatically on first installation, when the schema is created. If the user or its permissions need to be changed afterward, take care to ensure that full ownership of the TSB schema is still in place before making any changes to the Management Plane settings.
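As a minimal sketch of restoring ownership after a user change, assuming illustrative names throughout (the database, the public schema, and the old_tsb_user/new_tsb_user roles are all placeholders to adjust to your installation):

# Illustrative names throughout; run as a Postgres superuser.
psql -h <postgres-host> -U <admin-user> -d <tsb-database> <<'SQL'
ALTER SCHEMA public OWNER TO new_tsb_user;       -- the schema itself
REASSIGN OWNED BY old_tsb_user TO new_tsb_user;  -- tables and other objects in it
SQL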

Certificate Provider

TSB requires a certificate provider to support certificate provisioning for internal TSB components, for purposes such as webhook certificates. This certificate provider must be available in the management plane cluster and all control plane clusters.

cert-manager is one of the supported providers, and TSB can manage the lifecycle of the cert-manager installation for you. To configure the installation of cert-manager in your cluster, add the following section to the ManagementPlane or ControlPlane CR:

components:
  internalCertProvider:
    certManager:
      managed: INTERNAL

You can also use any certificate provider that supports the kube-CSR API. To use custom providers, please refer to the Internal Cert Provider section.

Existing cert-manager installation

If you are already using cert-manager as part of your cluster, you can set the managed field in the ManagementPlane or ControlPlane CR to EXTERNAL. This lets TSB utilize the existing cert-manager installation. The TSB operator will fail if it finds an already installed cert-manager while the managed field is set to INTERNAL, to ensure that it does not override the existing cert-manager installation.
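For example, reusing the CR structure shown above:

components:
  internalCertProvider:
    certManager:
      managed: EXTERNAL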

cert-manager Kube-CSR

TSB uses the Kubernetes CSR resource to provision certificates for various webhooks. If your configuration uses an EXTERNAL cert-manager installation, please ensure cert-manager can sign Kubernetes CSR requests. For example, in cert-manager 1.7.2 this is enabled by setting the feature gate ExperimentalCertificateSigningRequestControllers=true. For TSB-managed installations using INTERNAL cert-manager, this configuration is already set as part of the installation.
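For an EXTERNAL cert-manager installed with Helm, the feature gate can be enabled at install or upgrade time. A minimal sketch, assuming the standard jetstack chart and the default cert-manager namespace:

# Assumes the jetstack Helm repo has already been added.
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true \
  --set featureGates="ExperimentalCertificateSigningRequestControllers=true"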

Download

The first step to get TSB up and running is to install our TSB CLI tool, tctl. With tctl you can install (or upgrade) TSB. It also allows you to interact with the TSB APIs using YAML objects; if you have operated Kubernetes deployments, this will be familiar to you. This also makes it easy to integrate TSB with GitOps workflows.

Follow the instructions in the CLI reference pages to download and install tctl.
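Once installed, you can verify that tctl is available on your PATH by checking its version:

tctl version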

Sync Tetrate Service Bridge images

Now that you have tctl installed, you can download the required container images and push them into your private Docker registry. The tctl tool makes this easy by providing the image-sync command, which downloads the image versions matching the current version of tctl from the Tetrate repository and pushes them into your private Docker registry. The username and apikey arguments must hold the Tetrate repository account details provided to you by Tetrate to enable the download of the container images. The registry argument must point to your private Docker registry.

tctl install image-sync --username <user-name> \
--apikey <api-key> --registry <registry-location>

The first time you run this command you will be presented with a EULA that needs to be accepted. If you run the TSB installation from CI/CD or another environment where you will not have an interactive terminal at your disposal, you can add the --accept-eula flag to the above command.
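For example, in a non-interactive CI/CD pipeline:

tctl install image-sync --username <user-name> \
    --apikey <api-key> --registry <registry-location> \
    --accept-eula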

Demo installations on a Kind cluster

If you are installing the demo profile in a local kind cluster, you can load the images directly into the kind node as follows:

# Log in to the Docker repository using your `username` and `apikey`
docker login containers.dl.tetrate.io

# Pull all the Docker images
for i in `tctl install image-sync --just-print --raw` ; do docker pull $i ; done

# Load the images into the kind node
for i in `tctl install image-sync --just-print --raw` ; do kind load docker-image $i ; done

Installation

cluster profiles

Operating a multi-cluster TSB environment typically involves communicating with multiple Kubernetes clusters. In the documentation we do not make explicit use of kubectl config contexts and tctl config profiles, as they are specific to your environment. Make sure that you have selected the right kubectl context and tctl profile as the default, or use explicit arguments to select the correct clusters when executing commands with these tools.
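For example, on the kubectl side (tctl profiles are managed through its config; see the tctl reference for the exact commands):

# Make a cluster the default for subsequent kubectl commands
kubectl config use-context <management-cluster-context>

# Or select the cluster explicitly per command
kubectl --context <control-plane-cluster-context> get pods -n istio-system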

For installation using the Helm chart, please proceed to the helm installation guide.

For installation using tctl, please proceed to the tctl installation guide.

For the demo installation procedure, please proceed to the demo installation guide.