Start only one pod at a time in Kubernetes
Problem
Is it possible to tell Kubernetes that, when deploying multiple pods, it should start them one at a time and only create the next pod once the previous one has completely started up?
It would be useful in our use case:
Our application automatically migrates/upgrades the database when starting up. But when multiple pods start up at the same time, several of them may try to upgrade the database simultaneously, which could corrupt it.
If Kubernetes waited to start the second pod until the first one had fully started up, the database would already be correctly upgraded by then.
If Kubernetes does not have such functionality, how would you handle such a case?
Solution
Use a combination of Readiness Probes and a proper deployment strategy.
- Configure your Deployment spec like so:

    spec:
      replicas: 4 # or as many as you need to have
      strategy:
        rollingUpdate:
          maxUnavailable: 0
          maxSurge: 1

  maxUnavailable: 0 means that there should always be a number of available pods equal to the replicas count. maxSurge: 1 means that Kubernetes will create the new pods one at a time, and destroy the old ones one at a time as well.
- Implement a readinessProbe which should return a successful response only once the database migration has completed. You'd have to write the logic for that yourself, but if you do it right, Kubernetes will create a pod with the new Deployment, wait for it to become ready, and only after that evict one of the old ones.
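As a sketch of the probe side, assuming the application exposes a hypothetical HTTP endpoint (here /ready on port 8080) that only returns 200 once the migration has finished, the container spec could wire it up like this:

```yaml
containers:
  - name: app              # hypothetical container name
    image: my-app:latest   # hypothetical image
    readinessProbe:
      httpGet:
        path: /ready       # assumed endpoint; 200 only after migration completes
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      failureThreshold: 30 # tolerate slow migrations before marking the pod unready
```

Tune periodSeconds and failureThreshold to the longest migration you expect, so a slow upgrade is not mistaken for a failed pod.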
PS: It's worth noting that this approach will cause your node to temporarily contain more pods than your replicas config implies: replicas + 1 for a short period while deploying. You need to make sure that your node can handle the little extra load until the rollout is complete.
Code Snippets
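The application-side readiness logic is left to you; as a minimal sketch (assuming a Python application and a hypothetical migration_done flag that the startup code sets once the migration finishes), the endpoint could look like:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical flag: the application's startup code sets this
# once the database migration has completed.
migration_done = threading.Event()

class ReadinessHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/ready" and migration_done.is_set():
            # Migration finished: report ready so the rollout can proceed.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            # 503 keeps this pod unready, so Kubernetes waits before
            # starting/evicting the next pod in the rollout.
            self.send_response(503)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep probe traffic out of the application logs
```

Run this server on the port referenced by the readinessProbe; the probe then flips from failing to passing exactly when the migration flag is set.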
spec:
  replicas: 4 # or as many as you need to have
  strategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
Context
StackExchange DevOps Q#12597, answer score: 5
Revisions (0)
No revisions yet.