What's better: 2 pods (on the same server) or 1 standalone application?
Problem
For a client we're going to roll out a certain type of software installation.
(I can give more details in the comments if you like.)
The question is: would you go for 2 pods (on the same server)?
Or would a single installation be more stable?
The software is stable as such, but my concern is that pods, by nature, are meant to be replaceable...
There is also the option of a single pod on the machine, but I have deep concerns about this approach...
My guess is that once 1 pod gets into too much trouble (memory pressure, to name one thing),
Kubernetes is going to restart it, as pods are meant to be replaceable.
(Bringing down the client's application in the process.)
So I guess 2 pods is a solution here.
1 pod should be able to handle the load (its own load + the extra load of the restarting pod).
What I don't know is whether 2 pods would be as stable as running the software on 1 node,
without Kubernetes.
In the same process we're also going to roll out 2 pods on 2 virtual machines, with other software,
so the choice whether we do or don't use Kubernetes doesn't depend on this part of the implementation alone.
Solution
Is the app stateless?
The software is stable as such, but my concern is that pods, by nature, are meant to be replaceable
This is your primary concern. Is the app stateless (i.e., can requests be served by any of multiple instances without problems)? If not, be very careful about running it on Kubernetes (e.g. don't use rolling deployments), and don't run the app in the cloud :)
Apps running on Kubernetes or in the cloud should follow the Twelve-Factor App principles.
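As a sketch of the two-pod setup under discussion (the names and image are placeholders, not from the question), a Deployment for a stateless app could look like this. Note that `RollingUpdate` is only safe if the app really is stateless:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app            # hypothetical name
spec:
  replicas: 2                  # two pods, so one can serve while the other restarts
  selector:
    matchLabels:
      app: example-app
  strategy:
    type: RollingUpdate        # safe only for a stateless app
    rollingUpdate:
      maxUnavailable: 0        # never drop below two ready pods during updates
      maxSurge: 1
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0 # placeholder image
        ports:
        - containerPort: 8080
```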
Number of Servers
It would be good to be able to use two or more servers. This reduces the single point of failure, and you can e.g. take down one server for maintenance while still serving your users.
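With two or more nodes available, a pod anti-affinity rule can spread the replicas so that one node's failure doesn't take down both pods at once. A sketch (the label value is a placeholder), added under the Deployment's pod template spec:

```yaml
# In the Deployment's pod template spec
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: example-app               # placeholder label
        topologyKey: kubernetes.io/hostname  # prefer scheduling on different nodes
```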
App directly on Server or in Container
pods (on the same server) or 1 standalone application?
An application runs as a process on the operating system, whether you run it in a container or as a non-container process. There is almost no runtime overhead; what you get with containers is repeatability and maintainability.
A container is designed to be hermetically sealed, so it is not sensitive to your server's configuration, e.g. language settings or character encoding. This was a bigger problem before containers were used. With containers, the app runs in an identical environment every time, except for the kernel. In addition, it is easier to maintain and manage processes when they run as containers: you can e.g. set CPU and memory limits, and each container gets its own network, so multiple containers on the same server can listen on port 8080 (from the application's standpoint).
If your apps run as containers on a server cluster, you can also easily scale out or in by adding or removing servers.
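The CPU and memory limits mentioned above are declared per container. A minimal sketch (the values are illustrative, not a recommendation):

```yaml
# In the container spec of the pod template
resources:
  requests:
    cpu: "500m"      # scheduler reserves half a CPU core for this container
    memory: "256Mi"
  limits:
    cpu: "1"         # container is throttled above one core
    memory: "512Mi"  # container is OOM-killed if it exceeds this
```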
Control of resources
My guess is that once 1 pod gets into too much trouble (memory pressure, to name one thing) Kubernetes is going to restart it, as pods are meant to be replaceable.
Yeah, but this is a good thing. Kubernetes is mostly self-healing. If you don't have such "control loops", you will need manual intervention - since it is the same app that e.g. may have memory leaks.
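That self-healing is driven by probes you define on the container. A sketch (the `/healthz` path is an assumption, not something from the question):

```yaml
# In the container spec of the pod template
livenessProbe:             # restart the container if this keeps failing
  httpGet:
    path: /healthz         # assumed health endpoint
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:            # stop routing traffic to the pod until this passes
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 5
```

With two replicas and a readiness probe, a restarting pod is taken out of the Service's rotation while the healthy pod keeps serving, which addresses the "bringing down the client's application" worry.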
Run app on Kubernetes or not
whether we do or don't use Kubernetes doesn't depend on this part of the implementation alone
The first thing to check: Is the app stateless?
and also, you should use two or more servers; otherwise you get little advantage from running it in Kubernetes.
There is a cost to maintaining Kubernetes. You need skills, and you need to maintain an additional system. Make sure it adds enough value for you.
Context
StackExchange DevOps Q#13304, answer score: 2