HiveBrain v1.2.0

Why is it recommended to run only one process in a container?

Submitted by: @import:stackexchange-devops

Problem

In many blog posts, and in general opinion, there is a saying that goes "one process per container".

Why does this rule exist?
Why not run ntp, nginx, uwsgi, and other processes in a single container that needs all of those processes in order to work?
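For concreteness, the setup being asked about might look like the following hypothetical Dockerfile, which packs several services into one image behind a process supervisor (supervisord is just an illustrative choice here; the package names, config path, and the supervisord.conf file itself are assumptions, not from the question):

```dockerfile
# Hypothetical multi-process container: ntp, nginx and uwsgi in one image
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y ntp nginx uwsgi supervisor
# An assumed supervisord config that declares all three daemons as programs
COPY supervisord.conf /etc/supervisor/conf.d/app.conf
# Run the supervisor in the foreground as PID 1 so the container stays up
CMD ["supervisord", "-n"]
```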

Blog posts mentioning this rule:

  • "Single-process-per-container is a recommended design pattern for Docker applications."



  • "Docker is only for creating single-process or single-service containers."



  • "better to use one process per container"



  • "Run a single service as a container"



  • "One process per container"



  • "one process per container"

Solution

Let's forget the high-level architectural and philosophical arguments for a moment. While there may be some edge cases where multiple functions in a single container make sense, there are very practical reasons why you may want to follow "one function per container" as a rule of thumb:

  • Scaling containers horizontally is much easier if the container is isolated to a single function. Need another Apache container? Spin one up somewhere else. However, if the Apache container also has the DB, cron, and other pieces shoehorned in, this complicates things.



  • Having a single function per container allows the container to be easily re-used for other projects or purposes.



  • It also makes things more portable and predictable: devs can pull down a single component from production to troubleshoot locally, rather than an entire application environment.



  • Patching/upgrades (both the OS and the application) can be done in a more isolated and controlled manner. Juggling multiple bits-and-bobs in your container not only makes for larger images, but also ties these components together. Why have to shut down application X and Y just to upgrade Z?



  • The above also holds true for code deployments and rollbacks.



  • Splitting functions out to multiple containers allows more flexibility from a security and isolation perspective. You may want (or require) services to be isolated on the network level -- either physically or within overlay networks -- to maintain a strong security posture or comply with things like PCI.



  • Other, more minor factors, such as dealing with stdout/stderr, sending logs to the container log, keeping containers as ephemeral as possible, etc.
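As a concrete sketch of the split-out approach (the service names, images, and ports are illustrative assumptions, not part of the original answer), a Compose file keeps each function in its own container:

```yaml
# docker-compose.yml (hypothetical): one function per container
services:
  web:                       # nginx only; reverse-proxies to the app
    image: nginx:1.25
    ports: ["80:80"]
    depends_on: [app]
  app:                       # uwsgi application server only
    image: myorg/myapp:1.0   # assumed image name
  db:                        # database, isolated as its own service
    image: postgres:16
```

With this layout, the scaling point above becomes a one-liner: `docker compose up --scale app=3` replicates only the app container, while the web and db containers stay at one instance each.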



Note that I'm saying function, not process. The "one process" language is outdated: the official Docker documentation has moved away from saying "one process" and instead recommends "one concern" per container.
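On the stdout/stderr point: when a container has a single concern, its process can simply log to stdout/stderr and Docker's logging driver collects everything, so `docker logs` just works. The official nginx image, for example, does roughly this in its Dockerfile (paraphrased sketch, not a verbatim copy):

```dockerfile
# Forward nginx's file-based logs to the container's stdout/stderr so the
# container runtime captures them (pattern used by the official nginx image)
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log
```

With several unrelated processes in one container, their output would either interleave on a single stream or require an extra logging agent inside the container, which is exactly the kind of added complication the answer warns about.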

Context

StackExchange DevOps Q#447, answer score: 123
