Extend Helm Deployment Variables
Environment variables
IBM Industry Solutions Workbench provides the possibility to define custom environment variables (such as database connections or URLs) for your service projects. When a new service project is created, you will find an extension-values.yaml file in your Git repository. If the file is missing, you can add it yourself, but you must use the file name extension-values.yaml and the file structure described below.
Add new environment variables
By default, the extension-values.yaml file contains only comments that explain how the file can be used. To use this feature, remove the comments and add the environment variables you need, as described below. The service project pipeline adds these additional variables to the Helm chart that it creates.
Within an Application Composition Project, these additional values (of the values.yaml) can be viewed and even overridden via the "Configure Component" functionality.
The built Helm chart of the service project reacts to these values and adds the additional environment variables to the deployment of your service project. Supported types of environment variables:
- secretKeyRef
- configMapKeyRef
- keyValue
- fieldRef
Structure of extension-values.yaml
env:
  variables:
    secretKeyRef:
      - variableName: VARIABLE1
        secretName: k5-service1-variable1-secret
        secretKey: key1
        optional: false
    configMapKeyRef:
      - variableName: VARIABLE2
        configMapName: k5-service1-variable2-cm
        configMapKey: key2
        optional: false
    keyValue:
      - variableName: VARIABLE3
        value: myString
    fieldRef:
      - variableName: VARIABLE4
        apiVersion: v1
        fieldPath: metadata.namespace
Supported type secretKeyRef
| Key | Description | Example |
|---|---|---|
| variableName | Name of the environment variable | VARIABLE1 |
| secretName | Name of the Kubernetes secret | k5-service1-variable1-secret |
| secretKey | Name of the key used in the Kubernetes secret | key1 |
| optional | Defines whether the Pod starts even if the secret is missing | false |
Supported type configMapKeyRef
| Key | Description | Example |
|---|---|---|
| variableName | Name of the environment variable | VARIABLE2 |
| configMapName | Name of the Kubernetes ConfigMap | k5-service1-variable2-cm |
| configMapKey | Name of the key used in the Kubernetes ConfigMap | key2 |
| optional | Defines whether the Pod starts even if the ConfigMap is missing | false |
Supported type keyValue
| Key | Description | Example |
|---|---|---|
| variableName | Name of the environment variable | VARIABLE3 |
| value | Value of the environment variable | myString |
Supported type fieldRef
| Key | Description | Example |
|---|---|---|
| variableName | Name of the environment variable | VARIABLE4 |
| apiVersion | API version of the Kubernetes resource | v1 |
| fieldPath | Path to the value | metadata.namespace |
See also Kubernetes environment variables.
Deployment of new environment variables
The environment variables are added to the deployment of the service project so that they can be used in the implementation, e.g.:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: k5-service1
  ...
spec:
  containers:
    - env:
        - name: VARIABLE1
          valueFrom:
            secretKeyRef:
              name: k5-service1-variable1-secret
              key: key1
              optional: false
        - name: VARIABLE2
          valueFrom:
            configMapKeyRef:
              name: k5-service1-variable2-cm
              key: key2
              optional: false
        - name: VARIABLE3
          value: myString
        - name: VARIABLE4
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
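Inside the running container, these entries behave like any other process environment variables, so the implementation can read them directly. As a minimal sketch (simulating locally the VARIABLE3 entry from the example above, since Kubernetes injects the variables before the process starts):

```shell
# Simulate what Kubernetes does: the variable is set in the process
# environment before the application starts, so a plain lookup suffices.
export VARIABLE3=myString
printenv VARIABLE3
# myString
```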
Using secrets or ConfigMaps
If you are using secrets or ConfigMaps for your environment variables, you need to create the defined secrets and ConfigMaps manually in your OpenShift namespaces (k5-projects).
Example to create a required secret:
cat <<EOF | oc apply -f -
kind: Secret
apiVersion: v1
metadata:
  name: k5-service1-variable1-secret
data:
  key1: bXlWYWx1ZTE=
type: Opaque
EOF
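Note that the values under data in a Secret must be base64-encoded. The encoded value bXlWYWx1ZTE= above is the plain string myValue1; you can produce such a value in a POSIX shell like this:

```shell
# Encode the plain secret value; -n prevents a trailing newline
# from being included in the encoded output.
echo -n 'myValue1' | base64
# bXlWYWx1ZTE=
```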
Example to create a required ConfigMap:
cat <<EOF | oc apply -f -
kind: ConfigMap
apiVersion: v1
metadata:
  name: k5-service1-variable2-cm
data:
  key2: myValue2
EOF
Avoid naming collisions of ConfigMaps and Secrets by using unique names (consider applications, services and namespaces).
Override deployment values
As an alternative to fully customized Helm charts, IBM Industry Solutions Workbench allows you to override specific values of the values.yaml of the Helm charts via the extension-values.yaml.
IBM Industry Solutions Workbench uses pre-defined Helm chart templates for the build and deployment of Service Projects. These Helm charts can be adjusted or completely overridden for specific Service Projects, depending on your needs and requirements.
TechPreview Feature: Please note that this feature is a tech preview. This means the feature may not be fully supported or functionally complete, and may introduce breaking changes with the next version.
It's not possible to override all values of the values.yaml, only the values documented below.
Possible values via extension-values.yaml
You can add overrideValues to the extension-values.yaml. These values are then added to the values.yaml of the built Helm chart for your project.
The following example shows what values can be overridden for the service project deployment:
overrideValues:
  # override pdb configuration
  poddisruptionbudget:
    enabled: false
  # override hpa configuration
  autoscaling:
    enabled: false
  # override replica count
  replicaCount: 1
  # override readiness, liveness and startup probes
  probes:
    readinessProbe:
      httpGet:
        path: /actuator/health
        port: 8443
        scheme: HTTPS
      timeoutSeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 5
    livenessProbe:
      httpGet:
        path: /actuator/health
        port: 8443
        scheme: HTTPS
      timeoutSeconds: 5
      periodSeconds: 5
      successThreshold: 1
      failureThreshold: 5
  # add extra init containers
  extraInitContainers:
    - name: init-myservice
      image: my_image
      command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  # add extra containers
  container:
    extraContainer:
      - name: my-sidecar
        image: my_image
        args:
          - /sidecar-controller
        env:
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
        volumeMounts:
          - name: copy-files
            mountPath: /srv/var/lib/files