Writing Gitlab CI templates, Part 3/3: pipeline configuration

Alex Lundberg
5 min read · Jun 14, 2021


Writing CI templates that build, test, and deploy your project is challenging to do in a way that prioritizes pipeline speed, safety, and easy maintenance. In the first part of this series, I discussed best practices for developing changes to CI templates. In the second part, I went over how to set up your templates to reduce code duplication and make changes easy to maintain. In this last part, I discuss some common pitfalls and give general tips to improve your CI. The examples below use GitlabCI but can easily be extended to other engines.

Understand build, test, and dependency management for your language.

Languages and frameworks differ a lot in how they handle these steps. Although you can partially push the build step down to each project's Dockerfile (if you are building docker-images), you still need to be concerned with dependency management and testing. Best practices vary by language, so it is important to have some familiarity with yours.

You can always check what templates Gitlab publishes for your language, as these are often a good starting point.

Leverage Gitlab artifacts

Gitlab artifacts let certain job file outputs surface information within the merge request. A common use case is exporting code quality metrics through Gitlab. Artifacts can also push binaries or folders to Gitlab so that later stages can pull them down, but caching is better suited to that and more flexible. Keep artifacts for pushing files that Gitlab uses to generate reports.
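
For example, a job can declare its output as a Code Quality report so Gitlab renders it in the merge request widget. A minimal sketch, where the lint job name and the lint:report npm task are placeholders, and the task is assumed to emit a CodeClimate-format JSON file:

lint:
  stage: test
  script:
    # assumed project task that writes a CodeClimate-format JSON report
    - npm run lint:report
  artifacts:
    reports:
      codequality: gl-code-quality-report.json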

Leverage caching

Leverage Gitlab caching to reduce the amount of time spent downloading files. This is usually done to cache dependencies such as node_modules. Ensure that the pull/push policy of every job that uses the cache is set up correctly and that no extra work, such as re-uploading an unchanged cache, is being performed.

Be warned that setting up caching so that it is fast, runs only when needed, and performs no unnecessary steps is very challenging. For one, consider when you want the cache upload step to run. A common pattern is to have a separate setup stage and job that 1) pulls down existing dependencies, 2) updates them, and 3) pushes them back up. It is a good idea to run this stage only when the project's dependency manifest, such as package.json, changes. All later jobs should set their cache policy to pull so they only download the cache that the setup job created.

setup:
  stage: setup
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules
    policy: push
  script:
    - npm install
  only:
    changes:
      - package-lock.json
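
Jobs in later stages can then reuse that cache without re-uploading it. A minimal sketch of the consuming side, where the test job and its script are placeholders:

test:
  stage: test
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules
    policy: pull  # download only; never re-upload the cache
  script:
    - npm test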

Also be aware of the tradeoffs that come with your caching infrastructure. You can configure Gitlab runner to store the cache in S3, but be careful that this does not slow down your pipelines: it may be no faster than pulling your dependencies normally from the web or from your own registry mirror. You could configure the gitlab-executors to cache on their host node, but then you have to ensure that future executors run on the same node. Alternatively, you could mount an EFS volume on each node that services your gitlab-executors and use node-selectors or taints/tolerations (if on Kubernetes) to ensure your executors run on those nodes. You also want your CI steps to be robust enough to still function on a cache miss, for example by adding a command to your before_script section that checks for the existence of dependencies and downloads them if missing.
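
As a sketch of that fallback, assuming a Node project where npm ci is an acceptable cold-start install:

before_script:
  # if the cache was missed, node_modules will not exist; install from scratch
  - '[ -d node_modules ] || npm ci'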

Compress stages and run jobs in parallel

Stages such as build and test can often run in parallel. Other times you want to block later stages, such as deploy, if earlier ones fail, or you need artifacts from a previous stage to build your docker image. Consider whether you really need the ordering of your stages or whether some of them can run in parallel. You can additionally use the needs keyword to run jobs out of order, for example starting the image build once the artifact from a prior stage exists but before testing has fully completed.

Within GitlabCI, you can use the needs and dependencies keywords to increase your pipeline speed. needs runs a job as soon as the jobs it lists have completed, rather than waiting for every job in the prior stages to finish. dependencies selects which artifacts from previous jobs are pulled for the current job; an example would be an image build job that pulls only the binary artifact instead of the test-result artifacts.
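
A minimal sketch of both keywords together; the job names and stage layout are hypothetical:

stages:
  - build
  - test
  - package

build-binary:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - bin/app

unit-tests:
  stage: test
  script:
    - make test
  artifacts:
    paths:
      - reports/

build-image:
  stage: package
  # needs: start as soon as build-binary succeeds, without waiting
  # for every job in prior stages (unit-tests may still be running)
  needs: ["build-binary"]
  # dependencies: fetch only the binary artifact, not the test reports
  dependencies: ["build-binary"]
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .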

Use a sensible base image for running your jobs.

If you are running apk, yum, or apt-get commands in your CI script, consider instead searching for, or even building, a docker image that comes bundled with the dependencies you need.
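
For example (the image name below is an assumption, standing in for whatever prebuilt image fits your stack):

# rather than installing tools on every run:
#   script:
#     - apt-get update && apt-get install -y curl jq
# run the job in an image that already bundles them:
smoke-test:
  image: registry.example.com/ci/base-tools:latest  # assumed prebuilt image containing curl and jq
  script:
    - curl --version
    - jq --version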

Use a sensible Dockerfile.

If you notice your docker build takes a long time, you may be able to rearrange your Dockerfile layers so that operations that are more likely to change sit later in the Dockerfile. An example of this would be to move the lines that copy dependencies to the beginning of your Dockerfile, as these change less often than the application code. If you notice that your container is large or takes a long time to start up, consider using a scratch container or a lightweight base image to reduce resource usage and boot time.

Remember that caching does not only apply to managing your dependencies. If you are building and deploying docker-images, you should also make use of docker's layer caching by passing the --cache-from flag to docker build. This can greatly reduce build time if you have properly set up the project's Dockerfile with layering.
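
A minimal sketch of --cache-from in a build job, assuming a docker-in-docker runner setup and Gitlab's predefined registry variables:

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # seed the layer cache from the last published image; tolerate a miss
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"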

Skip old jobs if new commits are added, and have jobs automatically retry.

The interruptible keyword lets Gitlab cancel jobs still running in an old pipeline when a new pipeline for the same branch begins. This reduces the load on your executors.

The retry keyword automatically retries a job on failure, and you can specify the failure conditions under which a retry should happen. This reduces the manual toil when a flaky test fails and needs to be restarted.
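
A minimal sketch combining the two keywords; the retry conditions shown are illustrative, and interruptible relies on the project's auto-cancel redundant pipelines setting being enabled:

default:
  interruptible: true  # allow newer pipelines to cancel still-running jobs

integration-tests:
  stage: test
  retry:
    max: 2
    when:
      # retry only on infrastructure flakiness, not genuine test failures
      - runner_system_failure
      - stuck_or_timeout_failure
  script:
    - npm test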

Check for new Gitlab features.

Gitlab constantly upgrades its CI platform to add new features and remediate existing issues. These can often be used to simplify CI pipelines, reduce pipeline time, or add useful metrics for developers. Check what other Gitlab users are doing for your use case and what Gitlab recommends.

Conclusion

  • Understand the software lifecycle stages for your language and framework
  • Leverage caching
  • Make use of Gitlab artifacts for reporting
  • Compress stages and parallelize jobs when possible
  • Use a sensible base image for running jobs
  • Use a Dockerfile with sensible cache layers
  • Skip old jobs and have failed jobs automatically retry
  • Check for new Gitlab features
