It's been ten years since the first commit to GitLab, so we are sharing our ten favorite GitLab hacks to help you get the most out of our DevOps Platform. These are tips for all stages of the development lifecycle, so roll up your sleeves and let's get started.
Manage faster with quick actions
You might already have adopted keyboard shortcuts for faster navigation and workflows - if not, check out the GitLab documentation for platform-specific shortcuts. Shortcuts such as pressing r to jump into the reply-to-comment text field can be combined with other quick actions, including:
/assign_reviewer @ <search username>
/label ~ <search label>
/label ~enhancement ~workflow::indev
/due Oct 8
/rebase
/approve
/merge
Quick actions are also helpful if you have to manage many issues, merge requests, and epics at the same time. There are specific actions for common chores - for example, marking an issue as a duplicate of an existing one.
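For instance, a single triage comment can combine a reply with several quick actions at once (the issue reference #123 is just an example):

```
Thanks for the report - this is already tracked elsewhere.
/duplicate #123
/label ~"type::bug"
```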
Take a deeper dive into Quick Actions.
Plan instructions with templates
Don’t fall into the trap of back-and-forth caused by empty issue descriptions that leave out the details your development team needs to reproduce an error.
GitLab lets you define so-called description templates for issues and merge requests. Besides providing a structured template with headings, you can add task lists that the assignee can later tick off. Almost anything is possible, with support for GitLab-flavored Markdown and HTML.
You can also combine static description templates with quick actions. This allows you to automatically set labels and assignees, define due dates, and more to level up your productivity with GitLab.
<!--
This is a comment, it will not be rendered by the Markdown engine. You can use it to provide instructions how to fill in the template.
-->
### Summary
<!-- Summarize the bug encountered concisely. -->
### Steps to reproduce
<!-- Describe how one can reproduce the issue - this is very important. -->
### Output of checks
<!-- If you are reporting a bug on GitLab.com, write: This bug happens on GitLab.com -->
#### Results of GitLab environment info
<!-- Input any relevant GitLab environment information if needed. -->
<details>
<summary>Expand for output related to app info</summary>
<pre>
(Paste the version details of your app here)
</pre>
</details>
### Possible fixes
<!-- If you can, link to the line of code and suggest actions. -->
## Maintainer tasks
- [ ] Problem reproduced
- [ ] Weight added
- [ ] Fix in test
- [ ] Docs update needed
/label ~"type::bug"
When you manage different types of templates, you can pass along the name of the template in the issuable_template parameter, for example https://gitlab.com/gitlab-org/gitlab/-/issues/new?issuable_template=Feature%20proposal%20%23%20lean.
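A small shell sketch for building such a URL - the project path and template name below are placeholder values, and spaces in the template name must be URL-encoded:

```shell
# Build a "new issue" URL that preselects a description template.
# project_url and template are example values - replace with your own.
project_url="https://gitlab.com/namespace/project"
template="Feature proposal"
# URL-encode the spaces in the template name
encoded=$(printf '%s' "$template" | sed 's/ /%20/g')
echo "${project_url}/-/issues/new?issuable_template=${encoded}"
```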
At GitLab, we use description and merge request templates in many ways: GitLab the project, GitLab Corporate Marketing team, GitLab team member onboarding and GitLab product team are just a few examples.
Create with confidence
When reading GitLab issues and merge requests, you may see the abbreviation MWPS, which means Merge When Pipeline Succeeds. This is an efficient way to merge MRs once the pipeline passes all jobs and stages - you can even combine this workflow with automatically closing issues via keywords in the MR description.
Merge When Pipeline Succeeds also works on the CLI with the git command and push options. That way you can create a merge request from a local Git branch and set it to merge when the pipeline succeeds.
# mwps BRANCHNAME
alias mwps='git push -u origin -o merge_request.create -o merge_request.target=main -o merge_request.merge_when_pipeline_succeeds'
Check out this ZSH alias example in our CEO Sid Sijbrandij’s dotfiles repository. There are more push options available, and even more Git CLI tips in our tools & tips handbook. One last tip: delete all local branches whose remote branch was deleted, for example after merging an MR.
# Delete all remote tracking Git branches where the upstream branch has been deleted
alias git_prune="git fetch --prune && git branch -vv | grep 'origin/.*: gone]' | awk '{print \$1}' | xargs git branch -d"
You are not bound to your local CLI environment; take it to the cloud with Gitpod and either work in VS Code or the pod terminal.
Verify your CI/CD pipeline
Remember the old workflow of committing a change to .gitlab-ci.yml just to see if it was valid, or whether the job template really inherits all the attributes? This has gotten a whole lot easier with our new pipeline editor. Navigate into the CI/CD menu and start building CI/CD pipelines right away.
But the editor is more than just another YAML editor. You’ll get live linting, letting you know about a missing dash in an array list or a wrong keyword before you commit. You can also preview jobs and stages, or use asynchronous dependencies with needs to make your pipelines more efficient.
The pipeline editor also uses the /ci/lint API endpoint, and fetches the merged YAML configuration I described earlier in this blog post about jq and CI/CD linting. That way you can quickly verify that job templates with extends and !reference tags work the way you designed them. It also lets you unfold included files and possible job overrides (for example, changing the stage of an included SAST security template).
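The same merged view is available from the API, which is handy for scripting. A minimal sketch, assuming a response shape with valid and merged_yaml fields - in practice you would fetch the real response with an authenticated request against the project's lint endpoint:

```shell
# Simulated /ci/lint response; in practice you would fetch it with
# curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "$CI_API_V4_URL/projects/<id>/ci/lint"
response='{"valid": true, "merged_yaml": "stages:\n- build\n- secure\n"}'
# Extract the fully merged CI/CD configuration with jq
echo "$response" | jq -r '.merged_yaml'
```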
Let’s try a quick example – create a new project and a new file called server.c with the following content:
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    size_t pagesize = getpagesize();
    /* Insecure on purpose: a writable AND executable memory region */
    char *region = mmap(
        (void *) (pagesize * (1 << 20)),
        pagesize,
        PROT_READ|PROT_WRITE|PROT_EXEC,
        MAP_ANON|MAP_PRIVATE, -1, 0);
    /* Insecure on purpose: unbounded strcpy into the region */
    strcpy(region, "Hello GitLab SAST!");
    printf("Contents of region: %s\n", region);

    FILE *fp = fopen("devops.platform", "w");
    fprintf(fp, "10 years of GitLab 🦊 🥳");
    fclose(fp);
    /* Insecure on purpose: world-readable, -writable and -executable file */
    chmod("devops.platform", S_IRWXU|S_IRWXG|S_IRWXO);
    return 0;
}
Open the CI/CD pipeline editor and add the following configuration, with an extra secure stage assigned to the semgrep-sast job for SAST and the C code.
stages:
  - build
  - secure
  - test
  - deploy

include:
  - template: Security/SAST.gitlab-ci.yml

semgrep-sast:
  stage: secure
Inspect the Merged YAML tab to see the fully compiled CI/CD configuration. You can commit the changes and check the found vulnerabilities as an async exercise :). The examples are available in this project.
Verify the stage attribute for the job by opening the Merged YAML tab in the CI/CD pipeline editor.
Package your applications
The package registry possibilities are huge, and more languages and package managers are on the way. Describing why Terraform, Helm, and containers (for infrastructure) and Maven, npm, NuGet, PyPI, Composer, Conan, Debian, Go, and RubyGems (for applications) are so awesome would take too long, but it's clear there are plenty of choices.
One of my favorite workflows is using existing CI/CD templates to publish container images to the GitLab container registry. This makes continuous delivery much more efficient, for example when deploying the application into your Kubernetes cluster or AWS instances.
include:
  - template: 'Docker.gitlab-ci.yml'
In addition to including the CI/CD template, you can also override the job attributes and define a specific stage and manual non-blocking rules.
stages:
  - build
  - docker-build
  - test

include:
  - template: 'Docker.gitlab-ci.yml'

# Change Docker build to manual non-blocking
docker-build:
  stage: docker-build
  rules:
    - if: '$CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH'
      when: manual
      allow_failure: true
To celebrate #10YearsOfGitLab, we created a C++ example with an Easter egg around time calculations. This project also uses a Docker builder image to showcase a more efficient pipeline. Our recommendation is to learn to use the templates in a test repository, and then create a dedicated group/project for managing all required container images. Think of builder images which include the compiler toolchain, specific scripts to run end-to-end tests, etc.
Secure your secrets
It is easy to leak a secret by taking shortcuts - say, simplifying a unit test by running it directly against the production database. The secret persists in Git history, and someone with bad intentions gains access to private data, or finds ways to exploit your supply chain even further.
To help prevent that, include the CI/CD template for secret detection.
stages:
  - test

include:
  - template: Security/Secret-Detection.gitlab-ci.yml
A known way to leak secrets is committing the .env file, which stores settings and secrets, into the repository. Try the following snippet by adding a new file .env and creating a merge request.
export AWS_KEY="AKIA1318109798ABCDEF"
Inspect the report JSON to see the raw report structure. GitLab Ultimate provides an MR integration, a security dashboard overview, and more features to take immediate action. The example can be found in this project.
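As a complementary quick check on your own machine, you can grep for the AWS access key ID pattern before pushing - a rough sketch only, since GitLab's secret detection covers far more credential patterns:

```shell
# Create a demo .env file containing a fake AWS access key ID
mkdir -p /tmp/secret-demo && cd /tmp/secret-demo
echo 'export AWS_KEY="AKIA1318109798ABCDEF"' > .env
# AWS access key IDs start with AKIA followed by 16 alphanumeric characters
grep -rnE 'AKIA[0-9A-Z]{16}' .
```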
MR detail view with detected AWS secret from security scanning
Release and continuously deliver (CD)
GitLab’s release stage provides many features, including canary deployments and GitLab Pages. There are also infrastructure deployments with Terraform and cloud-native (protected) environments.
While working on a CI/CD pipeline efficiency workshop, I got enthusiastic about parent-child pipelines, which allow non-blocking child pipelines into production - microservices in Kubernetes being one example.
Let’s try it! Create a new project and add two child pipeline configuration files: child-deploy-staging.yml and child-deploy-prod.yml. The naming is important as the files will be referenced in the main .gitlab-ci.yml configuration file later. The jobs in the child pipelines sleep for 60 seconds to simulate a deployment.
child-deploy-staging.yml:
deploy-staging:
  stage: deploy
  script:
    - echo "Deploying microservices to staging" && sleep 60
child-deploy-prod.yml:
deploy-prod:
  stage: deploy
  script:
    - echo "Deploying microservices to prod" && sleep 60

monitor-prod:
  stage: deploy
  script:
    - echo "Monitoring production SLOs" && sleep 60
Now edit the .gitlab-ci.yml configuration file and create a build-test-deploy stage workflow.
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script: echo "Build"

test:
  stage: test
  script: echo "Test"

deploy-staging-trigger:
  stage: deploy
  trigger:
    include: child-deploy-staging.yml
  #rules:
  #  - if: $CI_MERGE_REQUEST_ID

deploy-prod-trigger:
  stage: deploy
  trigger:
    include: child-deploy-prod.yml
    #strategy: depend
  #rules:
  #  - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
Commit the changes and inspect the CI/CD pipelines.
View parent-child pipelines in GitLab
strategy: depend allows you to make the child pipelines blocking again, so the parent pipeline waits for them to finish. Try uncommenting it for the prod job and verify the behavior by inspecting the pipeline view. Rules let you refine when jobs run - for example, the staging child pipeline only runs in merge requests, while the prod child pipeline is only triggered on the default main branch. The full example can be found in this project.
Tip: You can use resource_group to prevent concurrent child pipelines from deploying to production at the same time.
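As a sketch of that tip, adding resource_group to the production deployment job from the child pipeline above ensures only one deployment runs at a time (the group name production is arbitrary):

```yaml
deploy-prod:
  stage: deploy
  # Only one job in the "production" resource group runs at a time
  resource_group: production
  script:
    - echo "Deploying microservices to prod" && sleep 60
```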
Configure your infrastructure
Terraform allows you to describe, plan, and apply the provisioning of infrastructure resources. The workflow requires a state file that persists across steps; GitLab's managed Terraform state, served as an HTTP backend, is a great help here, together with predefined container images and CI/CD templates to make Infrastructure as Code as smooth as possible.
You can customize the template, or copy the CI/CD configuration into .gitlab-ci.yml and modify the steps yourself. Let’s try a quick example - all you need is an AWS account and an IAM user key pair. Configure them as CI/CD variables in Settings > CI/CD > Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
Next, create the backend.tf file and specify the http backend and the AWS provider dependency.
terraform {
  backend "http" {
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
Create provider.tf to specify the AWS region.
provider "aws" {
  region = "us-east-1"
}
The main.tf file describes the S3 bucket resources.
resource "aws_s3_bucket_public_access_block" "publicaccess" {
  bucket              = aws_s3_bucket.demobucket.id
  block_public_acls   = false
  block_public_policy = false
}

resource "aws_s3_bucket" "demobucket" {
  bucket = "terraformdemobucket"
  acl    = "private"
}
Tip: You can verify the configuration locally on your CLI by commenting out the HTTP backend above.
For GitLab CI/CD, open the pipeline editor and use the following configuration. (Note that it is important to specify the TF_ROOT and TF_ADDRESS variables, since you can manage multiple Terraform state files.)
variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}

include:
  - template: Terraform.latest.gitlab-ci.yml

stages:
  - init
  - validate
  - build
  - deploy
  - cleanup

destroy:
  stage: cleanup
  extends: .terraform:destroy
  when: manual
  allow_failure: true
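To see where the state ends up, here is a small sketch expanding TF_ADDRESS with hypothetical values for the predefined CI/CD variables (project ID 12345 and project name infra are made-up examples):

```shell
# Hypothetical values for the predefined CI/CD variables
CI_API_V4_URL="https://gitlab.com/api/v4"
CI_PROJECT_ID="12345"
CI_PROJECT_NAME="infra"
# The managed Terraform state is addressed per project and state name
echo "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}"
```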
Commit the configuration and inspect the pipeline jobs.
AWS S3 bucket provisioned with Terraform in GitLab CI/CD
The destroy job is not created by the template and is therefore explicitly added as a manual job. It is recommended to review the opinionated Terraform CI/CD template and copy the jobs into your own configuration to allow for further modifications or style adjustments. The full example is located in this project.
View the Terraform states in GitLab
A hat tip to our Package stage - you can manage and publish Terraform modules in the registry too, using all of the DevOps Platform's advantages. And hot off the press: the GitLab Kubernetes Operator is generally available.
Monitor GitLab and dive into Prometheus
Prometheus is a monitoring solution which collects metrics from /metrics HTTP endpoints made available by applications, as well as from so-called exporters that serve service and host information in the specified metrics format. One example is CI/CD pipeline insights to analyze bottlenecks and make your pipelines more efficient. The GitLab CI Pipeline Exporter project has a great quick start that takes under 5 minutes, bringing up a demo setup with Docker Compose, Prometheus, and Grafana. From there, it is not far to your production monitoring environment, and to monitoring more of GitLab.
Example dashboard for the GitLab CI Pipeline Exporter
The Prometheus exporter uses the Go client libraries. They can be used to write your own exporter, or to instrument your application code to expose /metrics. Once deployed, you can use Prometheus again to monitor the performance of your applications in Kubernetes, as one example. Find more monitoring ideas in my talk “From Monitoring to Observability: Left Shift your SLOs”.
Protect
You can enable security features in GitLab by including the CI/CD templates one by one. An easier way is to enable Auto DevOps and use the default best practices for security scans. This includes container scanning, ensuring that application deployments are not vulnerable at the container OS level.
Let’s try a quick example with a potentially vulnerable image, using the Docker template tip from the Package stage above. Create a new Dockerfile in a new project:
FROM debian:10.0
Open the pipeline editor and add the following CI/CD configuration:
# 1. Automatically build the Docker image
# 2. Run container scanning. https://docs.gitlab.com/ee/user/application_security/container_scanning/index.html
# 3. Inspect `Security & Compliance > Security Dashboard`
# For demo purposes, scan the latest tagged image from 'main'
variables:
  DOCKER_IMAGE: $CI_REGISTRY_IMAGE:latest

include:
  - template: Docker.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
The full example is located in this project.
Tip: Learn more about scanning container images in a deployed Kubernetes cluster to stay even safer.
View the container scanning vulnerability report
What’s next?
We have tried to find a great “hack” for each stage of the DevOps lifecycle. There are more hacks and hidden gems inside GitLab - share yours, and get ready to explore more stages of the DevOps Platform.
Cover image by Alin Andersen on Unsplash