Tetrate Service Bridge, version: next

Configuring Flux CD for GitOps

This document explains how you can configure Flux CD with Helm and GitHub integration to deploy a TSB application to a target cluster.

note

This document assumes that:

  • Flux version 2 CLI is installed.
  • Helm CLI is installed.
  • TSB is up and running, and GitOps has been enabled for the target cluster.

Cluster setup

First, install Flux on the target cluster with a GitHub integration. To do that, run the following command with your kubectl context set to the target cluster.

note

You will need a GitHub Personal Access Token (PAT) as input for the following command.

flux bootstrap github \
--owner=your-org \
--repository=git-ops \
--path=./clusters/cluster-01
note

Add the --personal and --private flags if you are using your personal GitHub account for testing purposes.

This sets up Flux with the needed configurations for a cluster called cluster-01 in a GitHub repository named git-ops under the clusters/cluster-01/ directory.
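After the bootstrap completes, the git-ops repository will contain a layout along the following lines. The gotk-* manifests are the components Flux commits for its own operation; exact file names may vary between Flux versions.

```
git-ops
\-- clusters
    \-- cluster-01
        \-- flux-system
            +-- gotk-components.yaml
            +-- gotk-sync.yaml
            \-- kustomization.yaml
```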

note

Run the flux logs -A --follow command in a different shell for debugging purposes.

You can run this command to do a generic status check:

flux check
Output
► checking prerequisites
✔ Kubernetes 1.20.15-gke.2500 >=1.20.6-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.20.1
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.24.2
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.23.4
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.24.0
✔ all checks passed

Under the hood, Flux uses a Kustomization with a GitRepository source to store its own resources.
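These bootstrap resources are committed by Flux to clusters/cluster-01/flux-system/gotk-sync.yaml and look roughly like the following sketch. The URL, branch, and intervals shown here are illustrative bootstrap defaults, not values taken from this document:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m0s
  ref:
    branch: main
  secretRef:
    name: flux-system
  url: ssh://git@github.com/your-org/git-ops
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m0s
  path: ./clusters/cluster-01
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```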

You can query its status:

flux get all
Output
NAME                        REVISION       SUSPENDED   READY   MESSAGE
gitrepository/flux-system   main/36dff73   False       True    stored artifact for revision 'main/36dff739b5ae411a7b4a64010d42937bd3ae4d25'

NAME                        REVISION       SUSPENDED   READY   MESSAGE
kustomization/flux-system   main/36dff73   False       True    Applied revision: main/36dff73

Meanwhile, you will see something like this in the logs:

2022-04-24T20:42:06.921Z info Kustomization/flux-system.flux-system - server-side apply completed
2022-04-24T22:51:30.431Z info GitRepository/flux-system.flux-system - artifact up-to-date with remote revision: 'main/36dff739b5ae411a7b4a64010d42937bd3ae4d25'

Since Flux is up and running now, the next step is to push new configurations to the git-ops repository for the cluster-01 cluster. You can clone the repository and cd into clusters/cluster-01 for the next steps.

Application setup

There are several ways to structure your GitOps repositories. In this example, for simplicity, the same repository is used for both cluster and application configuration.

The goal for this section is to deploy the Helm chart of a Bookinfo application and its TSB resources.

First, create the bookinfo namespace with sidecar injection:

kubectl create namespace bookinfo
kubectl label namespace bookinfo istio-injection=enabled
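If you prefer to keep the namespace itself under GitOps control, an equivalent manifest can be committed to the cluster directory instead; a sketch, using a hypothetical clusters/cluster-01/bookinfo-namespace.yaml:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bookinfo
  labels:
    istio-injection: enabled
```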

Then, create a HelmRelease Flux resource with a GitRepository source for Bookinfo.

note

The alternative to GitRepository is a HelmRepository, which is not covered in this document.
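For reference only, a HelmRepository source is declared roughly like this; the name and URL below are illustrative:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: example-charts
  namespace: flux-system
spec:
  interval: 10m0s
  url: https://example.github.io/charts
```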

If the bookinfo TSB helm chart definition is stored in the apps/bookinfo directory, create the HelmRelease resource in clusters/cluster-01/bookinfo.yaml.

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: bookinfo
  namespace: flux-system
spec:
  chart:
    spec:
      chart: ./apps/bookinfo
      sourceRef:
        kind: GitRepository
        name: flux-system
  interval: 1m0s
  install:
    createNamespace: true
  targetNamespace: bookinfo

Note that:

  • The HelmRelease itself is created in the flux-system namespace, while the resources defined by the apps/bookinfo chart are deployed to the bookinfo target namespace.
  • Since spec.chart.spec.version is not specified, Flux will use the latest chart version found in the repository.
  • GitRepository.name is flux-system since that's the name Flux uses internally for bootstrapping.

Next, add and push the file into git and watch the flux logs. You will see something like this:

2022-04-25T08:02:37.233Z info HelmRelease/bookinfo.flux-system - reconcilation finished in 49.382555ms, next run in 1m0s
2022-04-25T08:02:37.980Z info HelmChart/flux-system-bookinfo.flux-system - Discarding event, no alerts found for the involved object
2022-04-25T08:02:45.784Z error HelmChart/flux-system-bookinfo.flux-system - reconciliation stalled invalid chart reference: stat /tmp/helmchart-flux-system-flux-system-bookinfo-4167124062/source/apps/bookinfo: no such file or directory

This is because the helm chart has not been pushed to the apps/bookinfo directory yet.

Note that instead of parsing the flux logs, you can also query the resources with kubectl:

  • kubectl get helmreleases -A
  • kubectl get helmcharts -A

Next, create the helm chart. Create the apps/ directory, enter it and run:

helm create bookinfo

This creates the following file tree:

tree
Output
.
+-- bookinfo
+-- Chart.yaml
+-- charts
+-- templates
| +-- NOTES.txt
| +-- _helpers.tpl
| +-- deployment.yaml
| +-- hpa.yaml
| +-- ingress.yaml
| +-- service.yaml
| +-- serviceaccount.yaml
| \-- tests
| \-- test-connection.yaml
\-- values.yaml

4 directories, 10 files

Then, cd into bookinfo/.

For simplicity, remove the content that is not needed:

rm -rf values.yaml charts templates/NOTES.txt templates/*.yaml templates/tests/

Next, edit the Chart.yaml. A minimal version looks like this:

apiVersion: v2
name: bookinfo
description: TSB bookinfo Helm Chart.
type: application
version: 0.1.0
appVersion: "0.1.0"

Next, add the Bookinfo definitions to the templates/ directory, gathering them from Istio's repository:

curl https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml -o templates/bookinfo.yaml

Once we have the bookinfo deployment, we'll add the TSB configuration resources in a templates/tsb.yaml file. When creating TSB configurations, a best practice is to put them all inside a single List resource. A List enforces a strict order when applying them to the cluster, guaranteeing that resources higher in the TSB resource hierarchy are applied first and avoiding issues caused by Helm's resource-ordering limitations.

note

For this example, an ingress gateway is used for the application; it is configured by the first resource in the configuration below. You can read more about this here.

Also make sure you replace your-org and your-tenant with actual values.

apiVersion: v1
kind: List
items:
  # Create an ingress gateway deployment that will be the entry point to
  # the bookinfo application
  - apiVersion: install.tetrate.io/v1alpha1
    kind: IngressGateway
    metadata:
      namespace: bookinfo
      name: tsb-gateway-bookinfo
    spec: {}
  # Create the workspace and gateway group that capture the namespaces where
  # the bookinfo application will run
  - apiVersion: tsb.tetrate.io/v2
    kind: Workspace
    metadata:
      name: bookinfo
      annotations:
        tsb.tetrate.io/organization: your-org
        tsb.tetrate.io/tenant: your-tenant
    spec:
      namespaceSelector:
        names:
          - "*/bookinfo"
  - apiVersion: gateway.tsb.tetrate.io/v2
    kind: Group
    metadata:
      name: bookinfo-gg
      annotations:
        tsb.tetrate.io/organization: your-org
        tsb.tetrate.io/tenant: your-tenant
        tsb.tetrate.io/workspace: bookinfo
    spec:
      namespaceSelector:
        names:
          - "*/*"
      configMode: BRIDGED
  # Expose the productpage service in the application ingress
  - apiVersion: gateway.tsb.tetrate.io/v2
    kind: IngressGateway
    metadata:
      name: bookinfo-gateway
      annotations:
        tsb.tetrate.io/organization: your-org
        tsb.tetrate.io/tenant: your-tenant
        tsb.tetrate.io/workspace: bookinfo
        tsb.tetrate.io/gatewayGroup: bookinfo-gg
    spec:
      workloadSelector:
        namespace: bookinfo
        labels:
          app: tsb-gateway-bookinfo
      http:
        - name: productpage
          port: 80
          hostname: "bookinfo.example.com"
          routing:
            rules:
              - route:
                  host: "bookinfo/productpage.bookinfo.svc.cluster.local"
                  port: 9080

Before pushing, test that the chart is well constructed:

helm install bookinfo --dry-run .

It should print the rendered resources as YAML.

Now it's time to push them and check the flux logs.

If GitOps is properly configured in the cluster, pushing this chart will create the corresponding Kubernetes and TSB resources:

kubectl get pods -n bookinfo
Output
NAME                                    READY   STATUS    RESTARTS   AGE
details-v1-79f774bdb9-8fr6d             1/1     Running   0          4m17s
productpage-v1-6b746f74dc-mvl9n         1/1     Running   0          4m17s
ratings-v1-b6994bb9-zxq8n               1/1     Running   0          4m17s
reviews-v1-545db77b95-c99dk             1/1     Running   0          4m17s
reviews-v2-7bf8c9648f-rsndb             1/1     Running   0          4m17s
reviews-v3-84779c7bbc-kzhwl             1/1     Running   0          4m17s
tsb-gateway-bookinfo-73668b6aab-jygvk   1/1     Running   0          4m18s
kubectl get workspaces -A
Output
NAMESPACE   NAME       PRIVILEGED   TENANT    AGE
bookinfo    bookinfo                tetrate   4m20s

Use tctl to check that the workspace status is correct (you can also use the TSB UI instead):

tctl x status ws bookinfo
Output
NAME       STATUS     LAST EVENT   MESSAGE
bookinfo   ACCEPTED

This means everything is up and running. The bookinfo service can now be accessed on the configured hostname through the ingress gateway.

If DNS is not configured in the cluster, or you want to test from your local environment, you can curl the productpage service via the ingress gateway's public IP like this:

export IP=$(kubectl -n bookinfo get service tsb-gateway-bookinfo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -H "Host: bookinfo.example.com" -H "X-B3-Sampled: 1" http://$IP/productpage

Application setup - GitRepository with Kustomization

The goal for this section is to deploy a sample application using Flux GitRepository and Kustomization.

In this example we'll be using your own fork of the podinfo application. This is an application created by the Flux CD maintainers and used throughout the Flux CD documentation.

Since we will need to push changes to this repository, fork it first. Once you have forked podinfo into your GitHub account, clone the repo:

GITHUB_ACCOUNT=<your github account>
git clone https://github.com/$GITHUB_ACCOUNT/podinfo.git

Add podinfo repository to Flux

In this example you will create a GitRepository that watches your fork of the podinfo application. Go to the git-ops repository directory created in the Cluster setup step.

cd $GITOPS_REPO
flux create source git podinfo \
--url=https://github.com/$GITHUB_ACCOUNT/podinfo \
--branch=master \
--interval=30s \
--export > ./clusters/cluster-01/podinfo-source.yaml

The file podinfo-source.yaml should look as follows:

---
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 30s
  ref:
    branch: master
  url: https://github.com/<your-github-account>/podinfo

Now we can commit and push to your git-ops Flux repository.

git add -A && git commit -m "add podinfo GitRepository" && git push

Add your TSB configuration to expose the podinfo application

We would like to expose the podinfo application using service mesh gateways. The podinfo repository already has a directory, podinfo/kustomize, that stores the kustomize manifests that deploy the application, so we can reuse it. Add the following TSB configuration to the existing kustomize manifests:

cd $PODINFO_REPO
cat <<EOF > kustomize/tsb.yaml
apiVersion: install.tetrate.io/v1alpha1
kind: IngressGateway
metadata:
  namespace: podinfo
  name: tsb-gateway-podinfo
spec: {}
---
apiVersion: tsb.tetrate.io/v2
kind: Tenant
metadata:
  name: "tetrate"
  annotations:
    tsb.tetrate.io/organization: "tetrate"
spec:
  description: Test tenant
---
apiVersion: tsb.tetrate.io/v2
kind: Workspace
metadata:
  name: podinfo
  annotations:
    tsb.tetrate.io/organization: "tetrate"
    tsb.tetrate.io/tenant: "tetrate"
spec:
  namespaceSelector:
    names:
      - "*/podinfo"
---
apiVersion: gateway.tsb.tetrate.io/v2
kind: Group
metadata:
  name: podinfo-gws
  annotations:
    tsb.tetrate.io/organization: "tetrate"
    tsb.tetrate.io/tenant: "tetrate"
    tsb.tetrate.io/workspace: podinfo
spec:
  namespaceSelector:
    names:
      - "*/podinfo"
  configMode: BRIDGED
---
apiVersion: gateway.tsb.tetrate.io/v2
kind: IngressGateway
metadata:
  name: podinfo-gw
  annotations:
    tsb.tetrate.io/organization: "tetrate"
    tsb.tetrate.io/tenant: "tetrate"
    tsb.tetrate.io/workspace: podinfo
    tsb.tetrate.io/gatewayGroup: podinfo-gws
spec:
  workloadSelector:
    namespace: podinfo
    labels:
      app: tsb-gateway-podinfo
  http:
    - name: podinfo
      port: 80
      hostname: "podinfo.tetrate.io"
      routing:
        rules:
          - route:
              host: "podinfo/podinfo.podinfo.svc.cluster.local"
              port: 9898
EOF

Add this file to the git repository and push the changes.

git add -A && git commit -m "add TSB configuration" && git push

Now we create a Kustomization resource that will watch the podinfo repository and apply the kustomize manifests.

flux create kustomization podinfo \
--target-namespace=podinfo \
--source=podinfo \
--path="./kustomize" \
--prune=true \
--wait=true \
--interval=30m \
--retry-interval=2m \
--health-check-timeout=3m \
--export > ./clusters/cluster-01/podinfo-kustomization.yaml

The file podinfo-kustomization.yaml should look as follows:

---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 30m0s
  path: ./kustomize
  prune: true
  retryInterval: 2m0s
  sourceRef:
    kind: GitRepository
    name: podinfo
  targetNamespace: podinfo
  timeout: 3m0s
  wait: true
  # Since the TSB resources are important for the application to work,
  # we can add health checks to make sure the resources are applied correctly.
  healthChecks:
    - apiVersion: tsb.tetrate.io/v2
      kind: Tenant
      name: tetrate
    - apiVersion: tsb.tetrate.io/v2
      kind: Workspace
      name: podinfo
    - apiVersion: gateway.tsb.tetrate.io/v2
      kind: Group
      name: podinfo-gws
    - apiVersion: gateway.tsb.tetrate.io/v2
      kind: IngressGateway
      name: podinfo-gw

Now we can commit and push to your git-ops Flux repository.

git add -A && git commit -m "add podinfo Kustomization" && git push

Now you can check the status of the resources created by the Kustomization resource.

flux get kustomizations --watch
Output
NAME          REVISION               SUSPENDED   READY   MESSAGE
flux-system   main@sha1:98b0a683     False       True    Applied revision: main@sha1:98b0a683
podinfo       master@sha1:5c80d130   False       True    Applied revision: master@sha1:5c80d130

As you can see, the podinfo Kustomization has been applied and the resources are ready. Let's verify that everything works using kubectl and curl.

$ kubectl get -n podinfo pods,svc,tenant,workspaces,groups,gateways
NAME                                       READY   STATUS    RESTARTS   AGE
pod/podinfo-664f9748d8-gw6zs               2/2     Running   0          12h
pod/podinfo-664f9748d8-hk82j               2/2     Running   0          12h
pod/tsb-gateway-podinfo-778f448c96-pdlg4   1/1     Running   0          10m

NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                      AGE
service/podinfo               ClusterIP      10.3.248.28    <none>        9898/TCP,9999/TCP                            5d9h
service/tsb-gateway-podinfo   LoadBalancer   10.3.247.238   34.83.20.78   15443:32211/TCP,80:30653/TCP,443:32434/TCP   10m

NAME                            DELETION PROTECTION   AGE
tenant.tsb.tetrate.io/tetrate                         5d9h

NAME                               PRIVILEGED   TENANT    DELETION PROTECTION   AGE
workspace.tsb.tetrate.io/podinfo                tetrate                         5d9h

NAME                                       MODE      WORKSPACE   TENANT    DELETION PROTECTION   AGE
group.gateway.tsb.tetrate.io/podinfo-gws   BRIDGED   podinfo     tetrate                         5d9h

NAME                                     AGE
gateway.networking.istio.io/podinfo-gw   5d9h

$ IP=$(kubectl -n podinfo get svc tsb-gateway-podinfo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl -H "Host: podinfo.tetrate.io" http://$IP
{
"hostname": "podinfo-664f9748d8-gw6zs",
"version": "6.5.4",
"revision": "33dac1ba40f73555725fbf620bf3b4f6f1a5ad89",
"color": "#34577c",
"logo": "https://raw.githubusercontent.com/stefanprodan/podinfo/gh-pages/cuddle_clap.gif",
"message": "greetings from podinfo v6.5.4",
"goos": "linux",
"goarch": "amd64",
"runtime": "go1.21.5",
"num_goroutine": "8",
"num_cpu": "4"
}

Troubleshooting

Remember to bump the chart version when publishing new changes to the Chart.
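For example, a version bump in the minimal Chart.yaml created earlier would look like this:

```yaml
apiVersion: v2
name: bookinfo
description: TSB bookinfo Helm Chart.
type: application
version: 0.1.1       # bumped from 0.1.0 so Flux detects a new release
appVersion: "0.1.0"
```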

If there are no changes and you want to force flux to re-run, do:

flux reconcile helmrelease bookinfo

You can also check for issues in the Flux Kubernetes resources:

flux get helmreleases -A
Output
NAMESPACE     NAME       REVISION   SUSPENDED   READY   MESSAGE
flux-system   bookinfo              False       False   install retries exhausted
kubectl get helmreleases -A -o yaml
Output
...

If you see an upgrade retries exhausted message, you may have hit a known Flux regression. The workaround is to suspend and resume the HelmRelease:

flux suspend helmrelease bookinfo
flux resume helmrelease bookinfo