HiveBrain v1.2.0
pattern · kubernetes · Minor

How best to delay startup of a kubernetes container until another container has done something?

Submitted by: @import:stackexchange-devops
Tags: kubernetes, container, startup, delay

Problem

I'm migrating a number of applications to Kubernetes. Some of them have large sets of config files which are best kept in Git, since their size exceeds the maximum size for ConfigMaps. I have a simple git-sync image which I can configure to keep a persistent volume in sync with a Git repository, and I had hoped to use it as a sidecar in some deployments.

Here's the crux. Some applications (like vendor apps that I can't control) require the configuration files to be there before the application starts. This means I can't just run the git-sync container as a sidecar, as there's no guarantee it will have cloned the Git repo before the main app starts. I've worked around this by having a separate deployment for the git sync, plus an initContainer in my main application's deployment which checks for the existence of the cloned repo before starting.

This works but it feels a little messy. Any thoughts on a cleaner approach to this?

Here's a yaml snippet of my deployments:

#main-deployment
...
initContainers:
- name: wait-for-git-sync
  image: my-git-sync:1.0
  command: ["/bin/bash"]
  args: [ "-c", "until [ -d /myapp-config/stuff ] ; do echo \"config not present yet\"; sleep 1; done; exit;" ]
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config
containers:
- name: myapp
  image: myapp:1.0
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config

volumes:
- name: myapp-config
  persistentVolumeClaim:
    claimName: myapp-config
...
---
#git-sync-deployment
...
containers:
- name: myapp-git-sync
  image: my-git-sync:1.0
  env:
    - name: GIT_REPO
      value: ssh://mygitrepo
    - name: SYNC_DIR
      value: /myapp-config/stuff
  volumeMounts:
  - mountPath: /myapp-config
    name: myapp-config
volumes:
- name: myapp-config
  persistentVolumeClaim:
    claimName: myapp-config
...

Solution

Maybe a readiness probe will help. The kubelet will call your pod on /health; an HTTP error status code means not ready, anything else means ready. As long as the pod is not ready, the Service will not route traffic to it.

- name: name
  image: "docker.io/app:1.0"
  imagePullPolicy: Always
  readinessProbe:
    httpGet:
      path: /health
      port: 5000
    initialDelaySeconds: 5


And in your code:

import os
from flask import Flask
app = Flask(__name__)

@app.route("/health")
def health():
    # not ready until the synced config exists on disk
    if not os.path.exists('gitfile'):
        return "not ok", 500
    return "OK", 200
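The readiness condition itself can be exercised in isolation, without the HTTP layer. Below is a minimal sketch (the `config_present` helper and the directory name `stuff` are stand-ins mirroring the question's volume layout, not part of the original answer):

```python
import os
import tempfile

def config_present(path):
    # Mirror the probe's condition: the pod should report
    # ready only once the synced configuration exists on disk.
    return os.path.exists(path)

# Simulate the git-sync container finishing its clone.
with tempfile.TemporaryDirectory() as base:
    synced = os.path.join(base, "stuff")
    assert not config_present(synced)   # before the clone: not ready
    os.makedirs(synced)                 # git-sync populates the volume
    assert config_present(synced)       # after the clone: ready
```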


Or else a liveness probe which checks the exit code of the command it runs: zero means success, anything else is a failure. (Note that a failing liveness probe restarts the container, whereas a failing readiness probe only withholds traffic.)

livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
  initialDelaySeconds: 5
  periodSeconds: 5
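For the original question, the exec form could also serve as the readiness check itself, probing for the synced directory directly so no application code change is needed. A sketch, reusing the /myapp-config/stuff path from the question:

```yaml
readinessProbe:
  exec:
    command: ["test", "-d", "/myapp-config/stuff"]
  initialDelaySeconds: 5
  periodSeconds: 5
```

This keeps the check in the pod spec rather than in application code, which suits vendor apps that can't be modified.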


Context

StackExchange DevOps Q#16483, answer score: 1
