
Configuring Flux CD

This document explains how you can configure Flux CD with Helm and GitHub integration to deploy a TSE application to a target cluster.

This document assumes that:

  • Flux version 2 CLI is installed.
  • Helm CLI is installed.

Cluster setup

Install Flux on the target cluster with a GitHub integration.

tip

You will need to provide a GitHub Personal Access Token (PAT) when bootstrapping Flux.

Add the --personal --private flags if you use your personal GitHub account for testing purposes.

Set up Flux with the needed configurations for a cluster called cluster-1 in a GitHub repository named git-ops under the clusters/cluster-1/ directory:

flux bootstrap github \
--owner=your-org \
--repository=git-ops \
--path=./clusters/cluster-1
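For a personal-account test setup, the invocation might look like the following sketch. The token value and owner are placeholders: `flux bootstrap github` reads the PAT from the `GITHUB_TOKEN` environment variable, and the PAT needs repository access.

```shell
# Placeholder token; Flux reads the PAT from the GITHUB_TOKEN environment variable
export GITHUB_TOKEN=<your-pat>

# --personal --private: bootstrap against a private repo in a personal account
flux bootstrap github \
  --owner=your-github-user \
  --repository=git-ops \
  --path=./clusters/cluster-1 \
  --personal --private
```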

Checking the Installation

You can run the flux logs -A --follow command in a separate shell while installing Flux to watch its progress.

You can run flux check to perform a general status check:

flux check
► checking prerequisites
✔ Kubernetes 1.20.15-gke.2500 >=1.20.6-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.20.1
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.24.2
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.23.4
✔ source-controller: deployment ready
► ghcr.io/fluxcd/source-controller:v0.24.0
✔ all checks passed

Under the hood, Flux uses a Kustomization with a GitRepository source to store its own resources.

You can query its status:

flux get all
# NAME                        REVISION       SUSPENDED   READY   MESSAGE
# gitrepository/flux-system   main/36dff73   False       True    stored artifact for revision 'main/36dff739b5ae411a7b4a64010d42937bd3ae4d25'
#
# NAME                        REVISION       SUSPENDED   READY   MESSAGE
# kustomization/flux-system   main/36dff73   False       True    Applied revision: main/36dff73

Meanwhile, you will see something like this in the logs:

2022-04-24T20:42:06.921Z info Kustomization/flux-system.flux-system - server-side apply completed
2022-04-24T22:51:30.431Z info GitRepository/flux-system.flux-system - artifact up-to-date with remote revision: 'main/36dff739b5ae411a7b4a64010d42937bd3ae4d25'

Once Flux is up and running, the next step is to push new configuration to the git-ops repository for the cluster-1 cluster. You can clone the repository and cd into the clusters/cluster-1 directory for the next steps.

Application setup

The goal for this section is to deploy the Helm chart of a Bookinfo application and its TSE resources.

There are several ways to structure your GitOps repositories. In this example, and for simplicity reasons, the same repository is used for both cluster and application configurations.

  1. Create the bookinfo namespace

    First, create the bookinfo namespace with sidecar injection:

    kubectl create namespace bookinfo
    kubectl label namespace bookinfo istio-injection=enabled
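    Alternatively, to keep the namespace itself under GitOps control, you could commit an equivalent manifest to the cluster directory instead of creating it imperatively (the file name clusters/cluster-1/bookinfo-namespace.yaml is an assumption, not prescribed by TSE):

    # clusters/cluster-1/bookinfo-namespace.yaml (hypothetical path)
    apiVersion: v1
    kind: Namespace
    metadata:
      name: bookinfo
      labels:
        istio-injection: enabled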
  2. Create the Flux resource

    Create a HelmRelease Flux resource with a GitRepository source for Bookinfo.

    For example, if the bookinfo helm chart definition is stored in the apps/bookinfo directory, create the following HelmRelease resource in clusters/cluster-1/bookinfo.yaml:

    apiVersion: helm.toolkit.fluxcd.io/v2beta1
    kind: HelmRelease
    metadata:
      name: bookinfo
      namespace: flux-system
    spec:
      chart:
        spec:
          chart: ./apps/bookinfo
          sourceRef:
            kind: GitRepository
            name: flux-system
      interval: 1m0s
      install:
        createNamespace: true
      targetNamespace: bookinfo

    Note that:

    • The HelmRelease itself is created in the flux-system namespace, while the resources defined by the apps/bookinfo Helm chart are deployed into the bookinfo target namespace.
    • Since spec.chart.spec.version is not specified, Flux will use the latest chart version.
    • The GitRepository.name is flux-system; this is the name that Flux uses by default when bootstrapping.

    The alternative to GitRepository is a HelmRepository, which is not covered in this document.
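    For reference, a HelmRepository source would look like the following sketch (the name and the https://charts.example.com URL are hypothetical); the HelmRelease's sourceRef.kind would then be HelmRepository instead of GitRepository:

    apiVersion: source.toolkit.fluxcd.io/v1beta2
    kind: HelmRepository
    metadata:
      name: example-charts            # hypothetical name
      namespace: flux-system
    spec:
      interval: 5m0s
      url: https://charts.example.com # hypothetical chart repository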

  3. Commit the Resource Configuration

    Next, commit and push the change to Git and watch the Flux logs (flux logs -A --follow). You should see something like this:

    2022-04-25T08:02:37.233Z info HelmRelease/bookinfo.flux-system - reconcilation finished in 49.382555ms, next run in 1m0s
    2022-04-25T08:02:37.980Z info HelmChart/flux-system-bookinfo.flux-system - Discarding event, no alerts found for the involved object
    2022-04-25T08:02:45.784Z error HelmChart/flux-system-bookinfo.flux-system - reconciliation stalled invalid chart reference: stat /tmp/helmchart-flux-system-flux-system-bookinfo-4167124062/source/apps/bookinfo: no such file or directory

    The error arises because the Helm chart has not been pushed to the apps/bookinfo directory yet.

    Query resources with kubectl

    Note that instead of parsing the flux logs, you can also query the resources with kubectl:

    • kubectl get helmreleases -A
    • kubectl get helmcharts -A

    Create the Helm Chart

    Create the apps/ directory, enter it and run:

    helm create bookinfo

    Display the file structure using tree or similar:

    File Structure: output from tree command
    .
    +-- bookinfo
        +-- Chart.yaml
        +-- charts
        +-- templates
        |   +-- NOTES.txt
        |   +-- _helpers.tpl
        |   +-- deployment.yaml
        |   +-- hpa.yaml
        |   +-- ingress.yaml
        |   +-- service.yaml
        |   +-- serviceaccount.yaml
        |   \-- tests
        |       \-- test-connection.yaml
        \-- values.yaml

    4 directories, 10 files

    Then, cd into bookinfo/.

    For simplicity, remove the following content which is not required:

    rm -rf values.yaml charts templates/NOTES.txt templates/*.yaml templates/tests/

    Edit Chart.yaml. A minimal definition looks like this:

    Chart.yaml
    apiVersion: v2
    name: bookinfo
    description: TSE bookinfo Helm Chart.
    type: application
    version: 0.1.0
    appVersion: "0.1.0"

    Next, add the Bookinfo definitions to the templates/ directory, gathering them from Istio's repository:

    curl https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml -o templates/bookinfo.yaml

    Once you have the Bookinfo deployment, add the TSE configuration resources in a templates/tse.yaml file.

    When creating the TSE configurations, the best practice is to put them all inside a single List resource. This enforces a strict apply order in the cluster, guaranteeing that resources higher in the TSE resource hierarchy are applied first and avoiding issues caused by Helm's resource ordering limitations.

    templates/tse.yaml
    apiVersion: v1
    kind: List
    items:
    # Create an ingress gateway deployment that will be the entry point to
    # the bookinfo application
    - apiVersion: install.tetrate.io/v1alpha1
      kind: IngressGateway
      metadata:
        namespace: bookinfo
        name: gateway-bookinfo
      spec: {}
    # Create the workspace and gateway group that capture the namespaces where
    # the bookinfo application will run
    - apiVersion: tsb.tetrate.io/v2
      kind: Workspace
      metadata:
        name: bookinfo
        annotations:
          tsb.tetrate.io/organization: tse
          tsb.tetrate.io/tenant: tse
      spec:
        namespaceSelector:
          names:
          - "*/bookinfo"
    - apiVersion: gateway.tsb.tetrate.io/v2
      kind: Group
      metadata:
        name: bookinfo-gg
        annotations:
          tsb.tetrate.io/organization: tse
          tsb.tetrate.io/tenant: tse
          tsb.tetrate.io/workspace: bookinfo
      spec:
        namespaceSelector:
          names:
          - "*/*"
        configMode: BRIDGED
    # Expose the productpage service in the application ingress
    - apiVersion: gateway.tsb.tetrate.io/v2
      kind: Gateway
      metadata:
        name: bookinfo-gateway
        annotations:
          tsb.tetrate.io/organization: tse
          tsb.tetrate.io/tenant: tse
          tsb.tetrate.io/workspace: bookinfo
          tsb.tetrate.io/gatewayGroup: bookinfo-gg
      spec:
        workloadSelector:
          namespace: bookinfo
          labels:
            app: gateway-bookinfo
        http:
        - name: productpage
          port: 80
          hostname: "bookinfo.example.com"
          routing:
            rules:
            - route:
                serviceDestination:
                  host: "bookinfo/productpage.bookinfo.svc.cluster.local"
                  port: 9080

    Before pushing, test that the chart is well constructed:

    helm install bookinfo --dry-run .

    This should output the rendered resources as YAML.
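    In addition to the dry-run, you can lint and render the chart entirely locally, with no cluster connection needed (assuming you are still in the bookinfo/ chart directory):

    ```shell
    # Static analysis of the chart structure and metadata
    helm lint .
    # Render all templates locally to stdout
    helm template bookinfo .
    ```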

  4. Push the Helm Chart

    Now it's time to push the changes and check the flux logs.

    If GitOps is properly configured in the cluster, pushing this chart will create the corresponding Kubernetes and TSE resources:

    kubectl get pods -n bookinfo
    NAME                                    READY   STATUS    RESTARTS   AGE
    details-v1-79f774bdb9-8fr6d             1/1     Running   0          4m17s
    productpage-v1-6b746f74dc-mvl9n         1/1     Running   0          4m17s
    ratings-v1-b6994bb9-zxq8n               1/1     Running   0          4m17s
    reviews-v1-545db77b95-c99dk             1/1     Running   0          4m17s
    reviews-v2-7bf8c9648f-rsndb             1/1     Running   0          4m17s
    reviews-v3-84779c7bbc-kzhwl             1/1     Running   0          4m17s
    tsb-gateway-bookinfo-73668b6aab-jygvk   1/1     Running   0          4m18s

    kubectl get workspaces -A
    NAMESPACE   NAME       PRIVILEGED   TENANT    AGE
    bookinfo    bookinfo                tetrate   4m20s

    tctl x status ws bookinfo
    NAME        STATUS      LAST EVENT    MESSAGE
    bookinfo    ACCEPTED

Test the Deployment

Once the deployment has completed, you should be able to access the Bookinfo application using the hostname in the Gateway.

If DNS is not configured in the cluster, or if you want to test it from your local environment, you can run curl against the productpage service via its ingress gateway public IP as follows:

export IP=$(kubectl -n bookinfo get service tsb-gateway-bookinfo -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -H "Host: bookinfo.example.com" http://$IP/productpage
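Alternatively, curl's --resolve flag maps the hostname to the gateway IP for the request, which also works for HTTPS listeners where a Host header override would not:

```shell
# Resolve bookinfo.example.com to the gateway IP for this request only
curl --resolve "bookinfo.example.com:80:$IP" http://bookinfo.example.com/productpage
```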

Troubleshooting

Remember to bump the chart version (the version field in Chart.yaml) when publishing new changes to the chart.
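For example, assuming the minimal Chart.yaml from this guide, a patch-version bump can be scripted as follows; the sed pattern is just a sketch, and a YAML-aware tool such as yq would be more robust:

```shell
# Recreate the minimal Chart.yaml from this guide (for illustration only)
cat > Chart.yaml <<'EOF'
apiVersion: v2
name: bookinfo
description: TSE bookinfo Helm Chart.
type: application
version: 0.1.0
appVersion: "0.1.0"
EOF

# Bump the chart version from 0.1.0 to 0.1.1 on the version line only
sed -i.bak 's/^version: 0\.1\.0$/version: 0.1.1/' Chart.yaml
grep '^version:' Chart.yaml
```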

If there are no changes and you want to force flux to re-run, do:

flux reconcile helmrelease bookinfo

You can also check for issues in the Flux Kubernetes resources:

flux get helmreleases -A

kubectl get helmreleases -A -o yaml

If you see an upgrade retries exhausted message, you may be hitting a known Flux regression. The workaround is to suspend and resume the HelmRelease:

flux suspend helmrelease bookinfo
flux resume helmrelease bookinfo