
Manage Embedded PostgreSQL

If a postgres configuration is not specified in the dataStore section of the ManagementPlane spec, TSB installs and manages an embedded Postgres database via the kubegres operator. This page describes how to observe and manage the embedded Postgres database.

When to use this information

The TSB embedded database is backed up daily. In normal operation, you may wish to take an additional manual backup as a precaution before upgrading TSB.

The remainder of this information is provided for reference only. You should not need to interact directly with the embedded Postgres database or with the kubegres operator. Please contact Tetrate Technical Support if you need assistance.

Observe Database Management Activity

Kubegres manages the Postgres instances. It pre-configures the databases for high availability, with data replication and fast failover in the event that a single Postgres instance fails. It also manages periodic and manual database backups.
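
kubegres represents the cluster as a Kubegres custom resource; the TSB instance is named tsb-postgres, as seen in the backup logs below. If you need to see the cluster definition, you should be able to inspect it with:

kubectl get kubegres -n tsb tsb-postgres -o yaml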

Kubegres is installed by the tsb-operator:

kubectl get pod -n tsb -l platform.tsb.tetrate.io/application=kubegres
NAME                                           READY   STATUS    RESTARTS   AGE
kubegres-controller-manager-74899654db-dgdlb   2/2     Running   0          37m

Kubegres deploys a primary Postgres instance and a set of replicas:

kubectl get pod -n tsb -l app=tsb-postgres --label-columns replicationRole
NAME               READY   STATUS    RESTARTS   AGE   REPLICATIONROLE
tsb-postgres-1-0   1/1     Running   0          44m   primary
tsb-postgres-2-0   1/1     Running   0          42m   replica
tsb-postgres-3-0   1/1     Running   0          41m   replica

You can monitor kubegres activity by following the pod logs:

kubectl logs -n tsb -l platform.tsb.tetrate.io/application=kubegres --tail=-1

Database Backups

By default, kubegres takes a daily database backup at 02:00. The backup schedule and storage are configured in the ManagementPlane spec, for example:

spec:
  dataStore:
    embeddedPostgres:
      backupCronSchedule: 0 2 * * */1
      backupStorageSize: 10Gi
      connectionIdleLifetime: 8500s
      connectionIdleMaxOpen: 2
      connectionLifetime: 8500s
      connectionMaxOpen: 100
      name: tsb
      storageSize: 50Gi
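
To confirm the schedule in effect, you can inspect the CronJob that kubegres creates for backups (the same backup-tsb-postgres CronJob referenced in the manual backup command below):

kubectl get cronjob -n tsb backup-tsb-postgres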

Run a backup

You can manually trigger a backup as follows:

kubectl create job -n tsb --from cronjob/backup-tsb-postgres my-manual-backup

Watch the backup logs as follows:

kubectl logs -n tsb -l job-name=my-manual-backup

Wait for completion:

kubectl wait job -n tsb my-manual-backup --for=condition=complete
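
Note that kubectl wait gives up after 30 seconds by default; for a larger database you may want to allow more time, for example:

kubectl wait job -n tsb my-manual-backup --for=condition=complete --timeout=10m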

Inspect the logs to determine the location of the backup file:

12/03/2024 11:36:49 - DB backup completed for Kubegres resource tsb-postgres into file: /var/lib/backup/tsb-postgres-backup-12_03_2024_11_36_49.gz

Copy a Backup Offline

Backups are stored in a Kubernetes Persistent Volume Claim (PVC) named tsb-backup in the tsb namespace, using the path /var/lib/backup/tsb-postgres-backup-$date.gz:

kubectl describe pvc -n tsb tsb-backup

To access a PVC backup, deploy a pod to mount that PVC, and use it to copy the backup files offline:

cat <<EOF > read-backup.yaml
apiVersion: v1
kind: Pod
metadata:
  name: read-backup
  namespace: tsb
spec:
  containers:
  - name: alpine
    image: alpine:latest
    command:
    - sleep
    - infinity
    volumeMounts:
    - name: backup-data
      mountPath: /var/lib/backup
      readOnly: false
  volumes:
  - name: backup-data
    persistentVolumeClaim:
      claimName: tsb-backup
EOF

kubectl apply -f read-backup.yaml

List the backups available in the PVC storage:

kubectl exec -n tsb -it read-backup -- ls -l /var/lib/backup

You will see a list of available backup files:

total 1844
drwx------ 2 root root  16384 Aug  1 02:00 lost+found
-rw-r--r-- 1 root root 362772 Aug  1 02:00 tsb-postgres-backup-01_08_2023_02_00_07.gz
-rw-r--r-- 1 root root 418508 Aug  2 02:00 tsb-postgres-backup-02_08_2023_02_00_09.gz
-rw-r--r-- 1 root root 541727 Mar 12 11:36 tsb-postgres-backup-12_03_2024_11_36_49.gz

Copy the desired backup file(s):

kubectl cp -n tsb read-backup:/var/lib/backup/tsb-postgres-backup-12_03_2024_11_36_49.gz ./tsb-postgres-backup-12_03_2024_11_36_49.gz
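
Optionally, verify that the file was copied intact by comparing checksums inside the pod and locally (sha256sum is provided by busybox in the alpine image):

kubectl exec -n tsb read-backup -- sha256sum /var/lib/backup/tsb-postgres-backup-12_03_2024_11_36_49.gz

sha256sum ./tsb-postgres-backup-12_03_2024_11_36_49.gz

The two checksums should match.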

Restore a Backup

Before restoring a backup, the TSB Management Plane must be paused temporarily. This does not affect the services that TSB manages.

Pause the Management Plane

Scale down the TSB and IAM components to prevent write activity on the database:

kubectl scale deploy -n tsb tsb iam --replicas 0
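
Confirm that both deployments report zero ready replicas before proceeding:

kubectl get deploy -n tsb tsb iam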

Restore a backup

If necessary, copy your offline backup file into the PVC:

kubectl cp -n tsb my-offline-backup.gz read-backup:/var/lib/backup/my-offline-backup.gz

Create a job to run the restore. Make sure to:

  • replace HUB in the image field with the registry from which you originally installed TSB.
  • edit the value of the BACKUPFILE variable to match the name of the backup file you wish to restore.

cat <<'EOF' > restore-backup.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: restore-backup
  namespace: tsb
spec:
  backoffLimit: 1
  completions: 1
  parallelism: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: restore
        image: HUB/postgres:14.8-alpine3.18
        command:
        - sh
        args:
        - -c
        - |
          set -ex
          echo "copying the backup to restore"
          cp /var/lib/backup/$BACKUPFILE ./$BACKUPFILE
          gzip -d $BACKUPFILE

          echo "restoring the backup"
          # psql reads the decompressed dump, i.e. BACKUPFILE without its .gz suffix
          psql "sslmode=verify-ca \
            sslcert=/var/lib/postgresql/data/tls/tls.crt \
            sslkey=/var/lib/postgresql/data/tls/tls.key \
            sslrootcert=/var/lib/postgresql/data/tls/ca.crt \
            host=$DBHOST user=$DBUSER dbname=$DBNAME password=$DBPASSWORD" \
            -f ${BACKUPFILE%.gz}
        env:
        - name: BACKUPFILE
          value: my-offline-backup.gz
        - name: DBHOST
          value: tsb-postgres
        - name: DBUSER
          value: tsb
        - name: DBNAME
          value: postgres
        - name: DBPASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-credentials
              key: embedded-postgres-password
        volumeMounts:
        - mountPath: /var/lib/postgresql/data/tls
          name: tsb-postgres-certs
          readOnly: true
        - mountPath: /var/lib/backup
          name: backup-volume
          readOnly: true
      volumes:
      - name: tsb-postgres-certs
        secret:
          defaultMode: 0600
          secretName: tsb-postgres-certs
      - name: backup-volume
        persistentVolumeClaim:
          claimName: tsb-backup
          readOnly: true
EOF

kubectl apply -f restore-backup.yaml

Follow the logs:

kubectl logs -n tsb -l job-name=restore-backup

Wait for completion, then restart the TSB and IAM components:

kubectl wait -n tsb job restore-backup --for=condition=complete

kubectl scale deploy -n tsb tsb iam --replicas 1
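
Once TSB is healthy again, you can clean up the helper resources used during the restore:

kubectl delete job -n tsb restore-backup

kubectl delete pod -n tsb read-backup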

Local access to the database

The embedded Postgres database is secured with TLS, and connections are verified, so any client must present the correct TLS certificates. The TSB installation creates these certificates and saves them in the Kubernetes secret tsb-postgres-certs.

You can download them by running something similar to:

SECRET=tsb-postgres-certs
DEST=./tsb-postgres

kubectl get -n tsb secret ${SECRET} -o go-template='{{ index .data "ca.crt" | base64decode}}' > $DEST-ca.crt
kubectl get -n tsb secret ${SECRET} -o go-template='{{ index .data "tls.crt" | base64decode}}' > $DEST-client.crt
kubectl get -n tsb secret ${SECRET} -o go-template='{{ index .data "tls.key" | base64decode}}' > $DEST-client.key

chmod 600 $DEST-ca.crt $DEST-client.crt $DEST-client.key
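
You can sanity-check the downloaded client certificate, for example its subject and validity period, with openssl:

openssl x509 -in $DEST-client.crt -noout -subject -dates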

Once the TLS certificates are available on your local machine, they can be used to authenticate a database connection.

The installation provides two Kubernetes services, one for the primary instance and one for the replicas:

  • tsb-postgres is the service exposing the primary instance.
  • tsb-postgres-replica is the service exposing the replica instances.

One way to connect to the database is to choose the appropriate service and create a tunnel with kubectl port-forward.

This example is for the replica instances:

kubectl port-forward -n tsb svc/tsb-postgres-replica 5432
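
For write operations, such as the pruning queries described below, forward the tsb-postgres service instead, since writes must go to the primary instance:

kubectl port-forward -n tsb svc/tsb-postgres 5432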

You can then connect to the database using, for example, the psql command-line client:

CA=./tsb-postgres-ca.crt
CERT=./tsb-postgres-client.crt
KEY=./tsb-postgres-client.key
HOST=localhost

psql "sslmode=verify-ca sslcert=$CERT sslkey=$KEY sslrootcert=$CA host=$HOST user=tsb dbname=tsb"

psql will prompt for the password, which can be found in the postgres-credentials secret.

Extract it with:

kubectl get -n tsb secret postgres-credentials -o go-template='{{ index .data "embedded-postgres-password" | base64decode}}'
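
Alternatively, psql reads the standard PGPASSWORD environment variable, so you can combine the two steps and skip the interactive prompt (reusing the variables defined above):

PGPASSWORD=$(kubectl get -n tsb secret postgres-credentials -o go-template='{{ index .data "embedded-postgres-password" | base64decode}}') \
psql "sslmode=verify-ca sslcert=$CERT sslkey=$KEY sslrootcert=$CA host=$HOST user=tsb dbname=tsb"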

Pruning the Database

The audit_log database table can grow over time. To keep its size under control, you can prune the audit logs periodically. Enable this feature in the ManagementPlane spec by setting the auditLogsCleanupRetention field, and optionally the cleanupCronSchedule field to control when the cleanup runs (by default, daily at 04:00).

For example:

spec:
  dataStore:
    cleanupCronSchedule: 0 4 * * */1
    auditLogsCleanupRetention: 720h
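
With these settings, the cleanup runs daily at 04:00 and removes audit log entries older than 720 hours (30 days).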

Manually pruning the Database

The audit logs can also be pruned manually if desired. Note that write operations on the database must be performed against the tsb-postgres primary instance.

The timestamps of audit log entries can be checked in the TSB UI or directly in the database. To examine the database directly:

select kind,message,triggered_by,operation,to_timestamp(time) from audit_log;
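
To preview how many entries a prune would remove, you can first run a count with the cutoff you plan to use, for example '2023-07-28':

select count(*) from audit_log where time <= cast(extract(epoch from to_timestamp('2023-07-28','YYYY-MM-DD')) as integer);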

If you wish to prune older audit log entries:

  • Delete items older than a given day, such as '2023-07-28':

    delete from audit_log where time <= cast(extract(epoch from to_timestamp('2023-07-28','YYYY-MM-DD')) as integer);
  • Delete items older than a given time, such as '2023-07-28 14:22:00':

    delete from audit_log where time <= cast(extract(epoch from to_timestamp('2023-07-28 14:22:00','YYYY-MM-DD HH24:MI:SS')) as integer);
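
Deleting rows marks them as dead but does not immediately shrink the table on disk; Postgres normally reclaims the space through autovacuum. After a large prune, you can trigger this by hand if desired:

vacuum audit_log;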