Best way to create a CI/CD pipeline to reduce bugs and facilitate refactoring
Problem
I started a project with a small team (3 people) and we had to hack a lot; we built a big application with no unit tests, relying only on manual testing. Now we have huge technical debt and we have started implementing CI/CD. Our goal is to start writing Cypress tests everywhere, then some unit tests, refactoring the code, and adding static analysis tools to the CI/CD pipeline.
Our idea is to have no local environments, only server-based ones. A developer pushes code to GitLab, a GitLab runner on DigitalOcean pushes the repository to the staging server, and the developer debugs on DigitalOcean using a local Cypress runner and his own modifications. If he breaks something or forgets to run a full regression test, a command-line regression runner is triggered BEFORE the second deployment, so the code is not allowed to pass to the deployment/production server.
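The flow described above could be sketched as a GitLab pipeline roughly like this (a minimal illustration only; the stage names and the `deploy.sh` script are placeholders, not a working configuration):

```yaml
# .gitlab-ci.yml (sketch): deploy to staging, gate on Cypress,
# then deploy to production only if the regression run passed.
stages:
  - deploy-staging
  - regression
  - deploy-production

deploy_staging:
  stage: deploy-staging
  script:
    - ./deploy.sh staging        # placeholder deploy script

full_regression:
  stage: regression
  script:
    - npx cypress run            # command-line regression runner

deploy_production:
  stage: deploy-production
  script:
    - ./deploy.sh production     # only runs if regression passed
```

Because GitLab runs stages sequentially and stops on failure, a failed regression stage automatically blocks the production deploy, which is the gate described above.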
Questions
1: Do we need to separate the deployment environment from staging into two droplets, or can we achieve this configuration without major headaches on the same droplet?
2: Do we need to use Docker? If not, how should it be done?
3: What setup can we use to reduce manual work? Our problem is not needing approvals but automating as much as we can, since we have neither an IT/DevOps person nor QA.
Note: the first stage is to implement full regression testing; the second stage is to implement code quality tools (static analyzers) such as SonarQube, PHPStan, etc.
Stack: LAMP with PHP CodeIgniter and vanilla JS.
Solution
It might not be obvious now, but as projects advance, full regression costs (resources/time) usually grow much faster than static analysis costs.
You'll also find that static analysis alone isn't a sufficiently solid bug gate; it must be complemented with regression testing.
So I'd place the full regression after the static analysis in the CI/CD pipeline - I wouldn't want to waste full regression costs on code that doesn't even pass static analysis.
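In GitLab CI terms, that ordering might look something like this (the job names and tool invocations are illustrative; adapt the commands and paths to your actual setup):

```yaml
# Sketch: cheap static analysis gates the expensive regression run.
stages:
  - static-analysis   # fast, fails the pipeline early
  - regression        # expensive, runs only if static analysis passed

phpstan:
  stage: static-analysis
  script:
    - vendor/bin/phpstan analyse app/   # assumed source directory

full_regression:
  stage: regression
  script:
    - npx cypress run
```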
Down the road you'll probably find that full regression and/or full static analysis runs in the earlier pipeline stages can easily become bottlenecks:
- if you block other commits while waiting for the run results, you slow down the commit rate
- if you allow other commits while a run is in progress, they change the repository context and can invalidate the earlier run's results (for example, you can't immediately fix a runtime bug uncovered by the run if a subsequent commit made in the meantime causes a build-time breakage).
Addressing these bottlenecks usually means splitting such runs across 2 major functional stages (each one can actually consist of multiple sub-stages and/or steps):
- the (pre-)integration stage(s), which execute subsets of these runs in the earlier pipeline stages:
- shorter - to maintain a sufficiently high commit rate and avoid bottlenecks
- the target quality level is below deployment, but high enough to keep development flowing
- fixing regressions detected in these stages has a high priority.
- it's possible to use such runs to gate commits/merges in order to prevent rather than detect and fix regressions, in which case these would actually be pre-integration stages
- the post-integration stages, executing the longer/full regression and/or static analysis runs:
- the goal here would simply be increasing the quality to deployment levels (i.e. blocking the actual deployment if not met)
- regressions detected at this stage aren't blocking for development, fixing them is a lower priority
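One way to express this split in GitLab CI is to run the short subset on merge requests and the full run only on the default branch, for example (the `rules` conditions use GitLab's predefined variables, but the spec path is an assumption about your test layout):

```yaml
# Pre-integration: quick smoke subset, gates merge requests.
smoke_tests:
  stage: test
  script:
    - npx cypress run --spec "cypress/e2e/smoke/**"   # assumed subset path
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

# Post-integration: full regression on the default branch,
# blocking the production deploy if it fails.
full_regression:
  stage: test
  script:
    - npx cypress run
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
```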
Context
StackExchange DevOps Q#9167, answer score: 3