Tetrate Service Bridge (version 1.11.x)

Management Plane Installation

This page will show you how to install the Tetrate Service Bridge management plane in a production environment.

Before you start, make sure that you've:

✓ Checked the requirements
✓ Checked TSB management plane components
✓ Checked types of certificates and internal certificate requirements
✓ Checked firewall information
✓ If you are upgrading from a previous version, also checked PostgreSQL backup and restore
✓ Downloaded the Tetrate Service Bridge CLI (tctl)
✓ Synced the Tetrate Service Bridge images

FIPS installation

To install a FIPS-validated build, you need to download the FIPS-validated tctl binary from the TSB CLI downloads page. Using the FIPS-validated tctl for these steps ensures that the installed management plane operator and its components are FIPS-validated.

The FIPS-validated tctl is only available for Linux amd64 platforms.

Management Plane Operator

To keep installation simple while still allowing extensive custom configuration, we have created a management plane operator. The operator runs in the cluster and bootstraps the management plane as described in a ManagementPlane custom resource (CR); it watches for changes and enacts them. To help you create the right custom resource, the tctl client can generate base manifests which you can then modify according to your required set-up. After this you can either apply the manifests directly to the appropriate clusters or use them in your source-control-operated clusters.

Operators

If you would like to know more about the inner workings of Operators and the Operator Pattern, review the Kubernetes documentation.

Create the manifest allowing you to install the management plane operator from your private Docker registry:

tctl install manifest management-plane-operator \
--registry <registry-location> > managementplaneoperator.yaml

The managementplaneoperator.yaml file created by the install manifest command can be applied directly to the appropriate cluster by using the kubectl client:

kubectl apply -f managementplaneoperator.yaml

After applying the manifest you will see the operator running in the tsb namespace:

kubectl get pod -n tsb

Example output:

NAME                                            READY   STATUS    RESTARTS   AGE
tsb-operator-management-plane-d4c86f5c8-b2zb5   1/1     Running   0          8s
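If you script the installation, you can wait for the operator to become ready instead of polling manually. A minimal sketch, assuming the deployment name matches the pod name shown above:

```shell
# Wait up to two minutes for the operator deployment to report Available.
# The deployment name here is assumed from the pod name in the example output.
kubectl wait -n tsb deployment/tsb-operator-management-plane \
  --for=condition=Available --timeout=120s
```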

Configuring Secrets

The management plane components need some secrets for both internal and external communication purposes. Below is a list of secrets that can be created based on your needs:

- admin-credentials: TSB creates a default admin user named admin; this secret holds the one-way hash of that account's password. These credentials are kept outside of your IdP, while all other credentials must be stored in your IdP.
- tsb-certs: TLS certificate of type kubernetes.io/tls. Must have tls.key and tls.crt values. The TLS certificate can be self-signed or issued by a public CA. Go to Internal certificate requirements for more details.
- postgres-credentials: Contains:
  1. The Postgres username and password.
  2. The CA certificate used to verify Postgres connections when Postgres is configured to present a self-signed certificate. TLS verification only happens if you set sslMode in the Postgres settings to verify-ca or verify-full. See PostgresSettings for more details.
  3. The client certificate and private key if Postgres is configured with mutual TLS.
- elastic-credentials: Elasticsearch username and password.
- es-certs: The CA certificate used to validate Elasticsearch connections when Elasticsearch is configured to present a self-signed certificate.
- ldap-credentials: Only set if using LDAP as the Identity Provider (IdP). Contains the LDAP binddn and bindpassword.
- custom-host-ca: Only set if using LDAP as IdP. The CA certificate used to validate LDAP connections when LDAP is configured to present a self-signed certificate.
- iam-oidc-client-secret: Only set if using OIDC with any IdP. Contains the OIDC client-secret and device-client-secret.
- azure-credentials: Only set if using OIDC with Azure AD as IdP. The client secret used to connect to Azure AD for team and user synchronization.
- xcp-central-cert: The XCP central TLS certificate. Go to Internal certificate requirements for more details.
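For reference, a secret such as tsb-certs is an ordinary Kubernetes TLS secret. A sketch of its shape, only relevant if you manage certificates yourself rather than through automated certificate management (the placeholder values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: tsb-certs
  namespace: tsb
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```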

Using tctl to Generate Secrets

note

Since 1.7, TSB supports automated certificate management for TSB management plane TLS certificates, internal certificates and intermediate Istio CA certificates. Go to Automated Certificate Management for more details. This means you don't need to create tsb-certs and xcp-central-cert secrets anymore. The following example will assume that you are using automated certificate management.

These secrets can be generated in the correct format by passing them as command-line flags to the tctl install manifest management-plane-secrets command.

The following command generates a managementplane-secrets.yaml file that contains Elasticsearch, Postgres, OIDC, and admin credentials.

tctl install manifest management-plane-secrets \
--elastic-username <elastic-username> \
--elastic-password <elastic-password> \
--oidc-client-secret "<oidc-client-secret>" \
--postgres-username <postgres-username> \
--postgres-password <postgres-password> \
--tsb-admin-password <tsb-admin-password> > managementplane-secrets.yaml

If you intend to use the embedded Postgres and Elasticsearch, you can exclude the --elastic-username, --elastic-password, --postgres-username, and --postgres-password flags. The usernames for Elasticsearch and PostgreSQL will be tsb, and the passwords will be randomly generated with a length of 16 characters.
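The values in the generated manifest are stored base64-encoded under each Secret's data field, as Kubernetes requires. As a quick illustration (the encoded string here is just the literal word tsb, not a real credential), any value can be decoded with base64:

```shell
# Secret values in managementplane-secrets.yaml are base64-encoded.
# Decode one with base64 -d; "dHNi" is the encoding of "tsb".
encoded="dHNi"
printf '%s' "$encoded" | base64 -d
# prints: tsb
```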


See the CLI reference documentation for all available options such as providing CA certificates for Elasticsearch, PostgreSQL and LDAP. You can also check the bundled explanation from tctl by running this help command:

tctl install manifest management-plane-secrets --help

Applying secrets

Once you've created your secrets manifest, you can add it to source control or apply it to your cluster.

Vault Injection

If you're using Vault injection for certain components, remove the applicable secrets from the manifest that you've created before applying it to your cluster.

kubectl apply -f managementplane-secrets.yaml

Management Plane Installation

Now you're ready to deploy the management plane.

To deploy the management plane you need to create a ManagementPlane custom resource in the Kubernetes cluster that describes the management plane.

Below is a ManagementPlane custom resource (CR) that describes a basic management plane. Save it as managementplane.yaml and adjust it according to your needs:

Organization name

An organization is the root of the TSB object hierarchy. A TSB management plane can have only one organization.

To log in with tctl, you will need to specify the organization name, and it must match the <organization-name> that you set in the ManagementPlane CR below. The organization name must be lowercase to comply with RFC standards.

If not specified, the default value is tetrate and it cannot be changed after creation.
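Since a bad organization name only surfaces later at login, it can be worth checking the name up front. A small sketch, assuming RFC 1123-style naming (lowercase alphanumerics and hyphens):

```shell
# Check that a proposed organization name is lowercase alphanumeric,
# with hyphens allowed only in the middle (RFC 1123-style).
name="tetrate"
if printf '%s' "$name" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'; then
  echo "ok"
else
  echo "invalid"
fi
# prints: ok
```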

Embedded Storage

If you omit dataStore and telemetryStore, the default embedded PostgreSQL and Elasticsearch will be installed in the management plane namespace.

Refer to the Data and Telemetry Storage section for considerations on using embedded PostgreSQL and Elasticsearch in production.

Local Identity Provider

If you omit identityProvider, the default local identity provider will be used. The local identity provider can be used for testing purposes, small deployments, or when you don't have an external identity provider.

Refer to the Identity Provider section for more information.

Postgres Permissions

The Postgres user configured in TSB must have full ownership of the TSB schema. This is configured automatically on first installation, when the schema is created. If the user or permissions need to be changed afterward, take care to ensure full ownership of the TSB schema is still in place before making any changes to the Management Plane settings.
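Before changing Management Plane settings, you could confirm ownership from psql. A hypothetical check, assuming the connection details from your dataStore configuration and that the TSB tables live in an assumed schema named public:

```shell
# Hypothetical ownership check; host, database, and user are the placeholder
# values from your dataStore configuration, "public" is an assumed schema name.
psql "host=<postgres-hostname-or-ip> dbname=<database-name> user=<postgres-username>" \
  -c '\dn+'
# If ownership drifted, it could be restored with (assumed names):
#   ALTER SCHEMA public OWNER TO <postgres-username>;
```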

The following example uses OIDC as identity provider.

apiVersion: install.tetrate.io/v1alpha1
kind: ManagementPlane
metadata:
  name: managementplane
  namespace: tsb
spec:
  hub: <registry-location>
  organization: <organization-name>
  dataStore:
    postgres:
      host: <postgres-hostname-or-ip>
      port: <postgres-port>
      name: <database-name>
  telemetryStore:
    elastic:
      host: <elastic-hostname-or-ip>
      port: <elastic-port>
      version: <elastic-version>
      selfSigned: <is-elastic-use-self-signed-certificate>
      protocol: <http or https. default to https if not set>

  # Enable automatic certificate management.
  # You can remove this field if you want to manage certificates using other methods.
  # Note that you will need to provide certificates as secrets in that case.
  certIssuer:
    selfSigned: {}
    tsbCerts: {}
    clusterIntermediateCAs: {}

  identityProvider:
    oidc:
      clientId: <oidc-client-id>
      # authorization code flow for TSB UI login
      providerConfig:
        dynamic:
          configurationUri: <oidc-well-known-openid-configuration>
      redirectUri: <oidc-callback>
      scopes:
        - email
        - profile
        - offline_access

  components:
    internalCertProvider:
      certManager:
        managed: INTERNAL
If you are not using Azure AD as the OIDC Identity Provider, follow the steps in Users Synchronization to see how you can create organizations and sync your users and teams into TSB.

For more information on what each of these sections describes and how to configure them, please check out the following links:

Edit the relevant sections, save your configured custom resource to a file and apply it to your Kubernetes cluster.

kubectl apply -f managementplane.yaml

Note: TSB will automatically do this every hour, so this command only needs to be run once after the initial installation.

Verifying Installation

To verify that your installation succeeded, log in as the admin user: try to connect to the TSB UI, or log in with the tctl CLI tool.

The TSB UI is reachable on port 8443 of the external IP as returned by the following command:

kubectl get svc -n tsb envoy
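As a quick reachability check before logging in, you can probe that port directly. A sketch, assuming your load balancer has already assigned an external IP to the envoy service:

```shell
# Fetch only the HTTP status code from the TSB UI endpoint.
# -k skips TLS verification, which is useful with self-signed certificates.
IP=$(kubectl get svc -n tsb envoy -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -ks -o /dev/null -w '%{http_code}\n' "https://${IP}:8443"
```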

To configure tctl's default config profile to point to your new TSB cluster, run the following:

tctl config clusters set default --bridge-address $(kubectl get svc -n tsb envoy --output jsonpath='{.status.loadBalancer.ingress[0].ip}'):8443

Now you can log in with tctl and provide the organization name and admin account credentials. The tenant field is optional and can be left blank at this point and configured later, when tenants are added to the platform.

tctl login
Organization: tetrate
Tenant:
Username: admin
Password: *****
Login Successful!

Go to Connect to TSB with tctl for more details on how to configure tctl.