Continuous Integration (CI) is a software development practice in which team members frequently merge their code into a shared repository. A fundamental part of this process is running automated tests, covering both functionality and integration.
At Kiwi.com, we use continuous integration mostly for testing, but it also helps us keep our code style consistent. It checks import order, and we count on it for several other checks.
The result of the combination and automation of all the jobs is what we call a CI pipeline.
It usually consists of a varying number of stages, depending on each team’s and project’s needs. The general core, however, is build-test-deploy, and each of these stages consists of several jobs running in parallel. Any job that fails during a stage should prevent the next stage from starting, to save resources.
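As a sketch, a minimal GitLab CI configuration with these three stages might look like the following. The job names, image tags, and scripts are illustrative, not our actual setup; note that GitLab stops the pipeline automatically when a job in a stage fails, which gives you the resource-saving behavior described above.

```yaml
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    - docker build -t registry.example.com/app:$CI_COMMIT_SHA .
    - docker push registry.example.com/app:$CI_COMMIT_SHA

unit-tests:
  stage: test
  script:
    - pytest tests/unit

integration-tests:
  stage: test
  script:
    - pytest tests/integration

deploy:
  stage: deploy
  script:
    - ./deploy.sh
```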
Below, I describe each of the stages and suggest which tools might be useful for you in each of them.
The first stage, build, is named after the job that builds, for example, a Docker image and pushes it to the Docker registry. It can also include jobs that don’t need the built image to run; typically, these are linters.
Check out the tools that might come in handy in this stage:
Black is our code formatter of choice that works really well with the CI approach. It helps us make sure that every member of the team who contributes to the same repository follows the same code style. In large teams, it makes a significant difference to the code readability as a whole and it eliminates long debates about style decisions.
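To illustrate, with a made-up function rather than code from our repositories, Black only normalizes formatting; it never changes what the code does:

```python
# Before running Black, a contributor might commit:
#
#     def order_total( items ) :
#         return sum( [ item [ "price" ] for item in items ] )
#
# After `black .`, the same function is normalized to:
def order_total(items):
    return sum([item["price"] for item in items])


print(order_total([{"price": 10}, {"price": 5}]))  # 15
```

Because the formatter makes the style decision, code review can focus on what the code does instead of how it looks.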
Coala is the ultimate red flags checker and it can be extensively customized to fit your needs. We use it, for example, to check imports sorting, docstrings style, spacing consistency, file lines count, or forbidden keywords like “TODO” or “print()”.
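A `.coafile` along these lines could encode such checks. The bear names come from the coala-bears collection, but treat the exact settings as an assumption to verify against the coala documentation:

```
[python]
files = **/*.py
bears = PycodestyleBear, PyImportSortBear, KeywordBear
max_line_length = 100
keywords = TODO, print(
```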
Similarly to the build stage, the primary job of the test stage is, surprisingly, to test. However, with a large number of tests, it can be split into two or more jobs running in parallel, e.g. unit-tests and integration-tests. In our case, each of these jobs runs pytest on a target path.
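For instance, a unit-tests job might run `pytest tests/unit` against files like this one. The path and the function under test are hypothetical, purely for illustration:

```python
# tests/unit/test_pricing.py (hypothetical path)

def add_vat(price, rate=0.21):
    """Return the price including VAT, rounded to two decimals."""
    return round(price * (1 + rate), 2)


def test_add_vat():
    assert add_vat(100) == 121.0


def test_add_vat_zero_rate():
    assert add_vat(50, rate=0) == 50
```

A parallel integration-tests job would simply point pytest at a different path, such as `tests/integration`.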
This stage also includes all other checks and tests that need a build from the first step to run.
One of the tools we use in the test stage is Mypy. It is a great static type checker and, among other things, it checks whether a function is called with the correct number and types of parameters. It is very helpful when managing and refactoring a large codebase.
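As a small illustration with a hypothetical function, mypy catches mismatched arguments before the code ever runs:

```python
def book_seats(flight_id: str, count: int) -> str:
    return f"Booked {count} seat(s) on flight {flight_id}"


confirmation = book_seats("OK1234", 2)  # fine, passes mypy

# book_seats("OK1234", "two")
# mypy would reject the line above with an error along the lines of:
#   Argument 2 to "book_seats" has incompatible type "str"; expected "int"

print(confirmation)
```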
Pylint is a Python linter that can detect code smells, meaning it can prevent hard-to-find bugs in the code. It even checks things like unused imports or variables.
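For example, given a contrived module like this, Pylint reports both the unused import and the unused variable:

```python
import json  # Pylint: W0611 (unused-import), json is never used below


def greeting(name):
    shouted = name.upper()  # Pylint: W0612 (unused-variable)
    return "Hello, " + name
```

Either line is harmless on its own, but in a large codebase an unused variable is often the symptom of a real bug, such as computing a value and then returning the wrong one.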
We use Sphinx (sphinx-apidoc) to autogenerate documentation from docstrings.
It also serves as a check to find out if the docstrings are written correctly. If they are not, it fails the job.
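For sphinx-apidoc to render a function cleanly, its docstring has to follow the expected structure. A sketch in reStructuredText field-list style follows; the docstring style itself is a team choice, and the function is hypothetical:

```python
def convert(amount, rate):
    """Convert an amount of money using the given exchange rate.

    :param amount: amount in the source currency
    :param rate: exchange rate to the target currency
    :returns: amount in the target currency, rounded to two decimals
    """
    return round(amount * rate, 2)


print(convert(100, 25.5))  # 2550.0
```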
The last stage depends on the release strategy you use. For example, if we used canary deployment, we would have two manually triggered jobs here: deploy-canary and deploy-production. A similar setup would work for blue-green deployment, the common element being the manual trigger.
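In GitLab CI terms, the two manually triggered canary jobs could be sketched like this; the job names and deploy script are illustrative:

```yaml
deploy-canary:
  stage: deploy
  when: manual
  script:
    - ./deploy.sh canary

deploy-production:
  stage: deploy
  when: manual
  script:
    - ./deploy.sh production
```

The `when: manual` setting is what keeps a human in the loop: the pipeline pauses at the deploy stage until someone clicks the job in the UI.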
As individual requirements might differ significantly here, there is probably no one-size-fits-all tool. However, I’d like to mention Crane, our open-source tool that we use for Rancher deploys.
To learn more about it, check the blog post by Bence Nagy.
Based on your custom needs, your pipeline might consist of more (or fewer) stages and you can even use your own tools.
That’s what we do with another open-source project — The Zoo. It is a service catalogue that provides an overview of the services’ development and operations and we use it for additional checks and features (for example, Dockerfile checks).
That’d be all from my side. Good luck with building your CI pipeline and if you have any other tips on which tools to use, let me know in the comments. Or you can join us in building ours!