HiveBrain v1.2.0
pattern · docker · Minor

One Jenkinsfile or multiple?

Submitted by: @import:stackexchange-devops
jenkinsfile · one · multiple

Problem

So I have a Jenkinsfile defining a build pipeline and then a Jenkins job (not pipeline) with a very simple deployment script for our Docker stacks.

Given that Jenkinsfiles can become as complex and powerful as one's coding skills allow (assuming one can write Groovy fairly well), should I have everything in one Jenkinsfile, or have multiple Jenkinsfiles?

Consider this theoretical process:

Development
> Build artefact > Build Docker image
> Deploy Docker container to testing > tests & QA
> deploy to staging > have client check it
> deploy to production


Would all of that go into one Jenkinsfile, running different steps of the pipeline according to user input, like

"which stages should be performed?
Please specify:
Build artefact,
build Docker Image, ..."


This would result in a complex Jenkinsfile, but you'd only have a single file.
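As a rough illustration of that first option, a single parameterized Declarative Pipeline could look something like the sketch below; the parameter names, stage names, and shell commands are placeholders for the example, not anything from the original question.

    pipeline {
        agent any
        parameters {
            // One checkbox per stage the user may want to run
            booleanParam(name: 'BUILD_ARTEFACT', defaultValue: true,  description: 'Build the artefact')
            booleanParam(name: 'BUILD_IMAGE',    defaultValue: true,  description: 'Build the Docker image')
            booleanParam(name: 'DEPLOY_TESTING', defaultValue: false, description: 'Deploy to testing')
        }
        stages {
            stage('Build artefact') {
                when { expression { params.BUILD_ARTEFACT } }   // skipped unless ticked
                steps { sh './gradlew build' }                  // placeholder build command
            }
            stage('Build Docker image') {
                when { expression { params.BUILD_IMAGE } }
                steps { sh 'docker build -t myapp:${BUILD_NUMBER} .' }
            }
            stage('Deploy to testing') {
                when { expression { params.DEPLOY_TESTING } }
                steps { sh 'docker stack deploy -c docker-compose.test.yml myapp-test' }
            }
        }
    }

Each further stage in the process (staging, client sign-off, production) would add another parameter and another stage block, which is how a single Jenkinsfile like this tends to grow complex over time.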

Or should this process be split into multiple Jenkinsfiles with multiple Jenkins jobs/pipelines for each Jenkinsfile to do a part of the process?

Solution

It really comes down to personal preference.

One additional tool you might not be aware of is shared libraries for Pipeline. These let you write custom Pipeline steps or factor out common Pipeline code without having to write a Jenkins plugin in Java.
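As a minimal sketch of how that fits together (the library name my-shared-lib and the deployStack step are hypothetical), a custom step is a file under vars/ in the library repository:

    // vars/deployStack.groovy in the shared library repository
    // Defines a custom "deployStack" step usable from any Jenkinsfile that loads the library
    def call(String stackName, String composeFile = 'docker-compose.yml') {
        sh "docker stack deploy -c ${composeFile} ${stackName}"
    }

    // In any Jenkinsfile, once the library is configured in Jenkins:
    @Library('my-shared-lib') _
    pipeline {
        agent any
        stages {
            stage('Deploy') {
                steps { deployStack('myapp-staging') }   // the custom step from the library
            }
        }
    }

Every Jenkinsfile that loads the library can then call deployStack instead of carrying its own copy of the deployment script.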

Between multiple Pipeline jobs and shared libraries, there are many ways to split up your job's code, and there's no one right answer on how to do it. One suggestion I have is to determine which steps in your process are "atomic" - that is, what is the smallest group of steps that you envision admins/ops/devs/etc. might need to run on their own? Each "atomic unit" should then become its own job.

For instance, let's say you're automating a deploy process, and your process in a general sense looks like this:

  • build
  • deploy to dev
  • deploy to prod
Then I would create three jobs, one for each bullet point (a rough sketch of what that could look like follows the list below). This gives you some nice advantages:

- You can re-run each job individually if something fails, instead of having to restart the entire process from the beginning. Deploy to prod failed? You only need to re-run the deploy-to-prod job; you don't need to rebuild or re-deploy to dev.

- Each stage in the process is separated out, so you can see how often it succeeds or fails. Some stages in your process may be more robust or fragile than others; this kind of separation gives you insight into that.

- This level of abstraction makes it easy for non-ops people to perform ops tasks (if that is something you want/need at your org). When everything is monolithic, the only thing non-ops people can do is run the entire process from the beginning. When everything is split up into tiny jobs, you need intimate ops knowledge to know which pieces to run in what order. By separating your jobs into independent, easy-to-understand, appropriately bite-sized chunks, a non-ops person only needs to press a single button to kick off an automated ops process.
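To make the three-job split above concrete, here is a minimal sketch; the job layout, registry address, file names, and commands are assumptions for illustration, not from the original answer. The build job only produces and pushes the image, the deploy-to-dev job is a single parameterized button, and the deploy-to-prod job would mirror it with a different compose file and stack name:

    // Jenkinsfile for the "build" job: produce the artefact and image, nothing else
    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    sh './gradlew build'
                    sh 'docker build -t registry.example.com/myapp:${BUILD_NUMBER} .'
                    sh 'docker push registry.example.com/myapp:${BUILD_NUMBER}'
                }
            }
        }
    }

    // Jenkinsfile for the "deploy to dev" job: a single parameterized button anyone can press
    pipeline {
        agent any
        parameters {
            string(name: 'IMAGE_TAG', defaultValue: 'latest', description: 'Image tag to deploy')
        }
        stages {
            stage('Deploy to dev') {
                environment {
                    IMAGE_TAG = "${params.IMAGE_TAG}"   // exported so the compose file can reference ${IMAGE_TAG}
                }
                steps {
                    sh 'docker stack deploy -c docker-compose.dev.yml myapp-dev'
                }
            }
        }
    }

With a layout like this, a failed production deploy is re-run by triggering just the deploy-to-prod job with the same IMAGE_TAG, which is exactly the first advantage listed above.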

Context

StackExchange DevOps Q#6591, answer score: 8

Revisions (0)

No revisions yet.