
How to Set Up GitLab CI/CD from Scratch (2026 Complete Tutorial)

A practical step-by-step guide to setting up GitLab CI/CD pipelines from zero — covering runners, pipeline stages, Docker builds, deployment to Kubernetes, and best practices.

DevOpsBoys · Mar 15, 2026 · 7 min read

GitLab CI/CD is one of the most complete CI/CD systems available — and it's built into GitLab, so there's no third-party integration to manage. Everything lives in one place: your code, your pipelines, your container registry, and your deployments.

This guide takes you from zero to a working pipeline that lints, tests, builds a Docker image, and deploys to Kubernetes.


What You'll Build

By the end of this guide, you'll have a GitLab CI/CD pipeline with these stages:

lint → test → build → push → deploy
  • lint: Check code quality
  • test: Run unit tests
  • build: Build a Docker image
  • push: Push to GitLab Container Registry
  • deploy: Deploy to Kubernetes using kubectl

Prerequisites

  • A GitLab account (gitlab.com or self-hosted)
  • A project with some code (we'll use a simple Node.js app)
  • A Kubernetes cluster (optional — only the deploy stage needs one)
  • Docker installed on your machine for local testing

Step 1: Understand the Pipeline File

GitLab CI/CD is configured entirely in a single file: .gitlab-ci.yml at the root of your repository.

Every time you push code, GitLab reads this file and runs the pipeline defined in it.

The minimal structure:

yaml
# .gitlab-ci.yml
 
stages:
  - lint
  - test
  - build
 
my-first-job:
  stage: lint
  script:
    - echo "Running lint"
    - npm run lint

Three concepts to understand:

  • stages: Define the order of execution. All jobs in a stage run in parallel. Stages run sequentially.
  • jobs: The actual work. Each job has a stage and a script.
  • script: The shell commands to run.
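Because jobs in the same stage run in parallel, splitting independent checks into separate jobs shortens the pipeline. A sketch (the job names and npm scripts here are illustrative):

```yaml
stages:
  - lint

# Both jobs belong to the lint stage, so they run at the same time
# (given enough runner capacity).
eslint:
  stage: lint
  script:
    - npm run lint

prettier-check:
  stage: lint
  script:
    - npm run format:check
```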

Step 2: Set Up a GitLab Runner

A GitLab Runner is the machine that actually runs your pipeline jobs.

Option A: Use GitLab's Shared Runners (Easiest)

On gitlab.com, shared runners are available by default. Your pipelines will run on GitLab's managed infrastructure — nothing to configure. (New accounts may need to complete identity verification before shared runners are unlocked.)

Go to Settings → CI/CD → Runners in your project to verify shared runners are enabled.

Option B: Register Your Own Runner (More Control)

If you want to use your own server or run Docker-in-Docker:

Install the runner on your server:

bash
# On Ubuntu/Debian
curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh" | sudo bash
sudo apt-get install gitlab-runner
 
# Start the runner service
sudo systemctl enable gitlab-runner
sudo systemctl start gitlab-runner

Register the runner with your GitLab project:

bash
sudo gitlab-runner register \
  --url "https://gitlab.com" \
  --token "<your-project-runner-token>" \
  --executor "docker" \
  --docker-image "ubuntu:22.04" \
  --description "my-project-runner"

Get the runner authentication token from: Settings → CI/CD → Runners → New project runner.


Step 3: Write Your First Real Pipeline

Here's a complete pipeline for a Node.js application:

yaml
# .gitlab-ci.yml
 
image: node:20-alpine    # default Docker image for all jobs
 
stages:
  - lint
  - test
  - build
  - push
  - deploy
 
variables:
  REGISTRY: $CI_REGISTRY_IMAGE
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA
 
# ─── LINT ───────────────────────────────────────────────────
lint:
  stage: lint
  script:
    - npm ci
    - npm run lint
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
 
# ─── TEST ───────────────────────────────────────────────────
test:
  stage: test
  script:
    - npm ci
    - npm test
  cache:
    key:
      files:
        - package-lock.json
    paths:
      - .npm/
  coverage: '/All files[^|]*\|[^|]*\s+([\d\.]+)/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
 
# ─── BUILD ──────────────────────────────────────────────────
build-image:
  stage: build
  image: docker:24.0
  services:
    - docker:24.0-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t $REGISTRY:$IMAGE_TAG -t $REGISTRY:latest .
    - docker push $REGISTRY:$IMAGE_TAG
    - docker push $REGISTRY:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
 
# ─── DEPLOY ─────────────────────────────────────────────────
deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  before_script:
    - echo "$KUBECONFIG_DATA" | base64 -d > /tmp/kubeconfig
    - export KUBECONFIG=/tmp/kubeconfig
  script:
    - kubectl set image deployment/my-app app=$REGISTRY:$IMAGE_TAG -n production
    - kubectl rollout status deployment/my-app -n production
  environment:
    name: production
    url: https://myapp.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

Push this file and GitLab will run the pipeline automatically.


Step 4: Configure Variables and Secrets

GitLab CI uses variables for secrets. Never hardcode credentials in .gitlab-ci.yml.

Set variables in GitLab UI

Go to Settings → CI/CD → Variables and add:

Variable           Value                        Protected  Masked
KUBECONFIG_DATA    base64 of your kubeconfig    ✓          ✓
DOCKER_HUB_TOKEN   Docker Hub token             ✓          ✓
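To produce the KUBECONFIG_DATA value, base64-encode your kubeconfig into a single line. A sketch using a stand-in file — against your real cluster the command is simply `base64 -w0 ~/.kube/config`:

```shell
# Create a stand-in kubeconfig (use your real ~/.kube/config in practice)
printf 'apiVersion: v1\nkind: Config\n' > /tmp/demo-kubeconfig

# Encode to one unwrapped line — this is the value to paste into the variable
# (-w0 is GNU coreutils; macOS's base64 doesn't wrap by default)
KUBECONFIG_DATA=$(base64 -w0 /tmp/demo-kubeconfig)

# The deploy job's before_script reverses it exactly:
echo "$KUBECONFIG_DATA" | base64 -d
```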

Built-in GitLab variables (always available)

CI_REGISTRY              = registry.gitlab.com
CI_REGISTRY_IMAGE        = registry.gitlab.com/your-group/your-project
CI_REGISTRY_USER         = gitlab-ci-token (for login)
CI_REGISTRY_PASSWORD     = automatically set
CI_COMMIT_SHORT_SHA      = abc123de (short git hash)
CI_COMMIT_BRANCH         = main (branch name)
CI_PIPELINE_ID           = 12345
CI_PROJECT_NAME          = my-project

You don't need to set these — they're injected automatically.
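If you want to confirm what a real run sees, a throwaway job (sketch — the job name is arbitrary) can echo a few of them:

```yaml
debug-vars:
  stage: .pre
  script:
    - echo "Registry:  $CI_REGISTRY_IMAGE"
    - echo "Commit:    $CI_COMMIT_SHORT_SHA"
    - echo "Branch:    $CI_COMMIT_BRANCH"
    - echo "Pipeline:  $CI_PIPELINE_ID"
```

Delete the job once you've verified the values — it adds a stage to every pipeline.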


Step 5: Use the GitLab Container Registry

GitLab has a built-in container registry — no Docker Hub needed.

On gitlab.com, your image URL is: registry.gitlab.com/your-group/your-project:tag (self-hosted instances use their own registry domain).

The login in before_script uses built-in variables:

yaml
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"

Pull from the registry in Kubernetes:

bash
# Create an imagePullSecret from GitLab deploy token
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<deploy-token-username> \
  --docker-password=<deploy-token-password> \
  -n production

Get a deploy token from: Settings → Repository → Deploy tokens.

Then use it in your Kubernetes deployment:

yaml
spec:
  template:
    spec:
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: app
          image: registry.gitlab.com/your-group/your-project:latest

Step 6: Add Environments and Manual Approvals

GitLab has first-class environment support with manual gates:

yaml
deploy-staging:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - ./scripts/deploy.sh staging
  environment:
    name: staging
    url: https://staging.myapp.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
 
deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - ./scripts/deploy.sh production
  environment:
    name: production
    url: https://myapp.com
  when: manual           # requires a human to click "deploy"
  allow_failure: false
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

With when: manual, the job appears in the pipeline but won't run until someone clicks the play button in the GitLab UI. This gives you staging → production with a human gate.
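Both jobs sit in the same deploy stage, so by default they are eligible to start independently. If production should additionally wait for a successful staging deploy, `needs` can chain them — a sketch using the job names above:

```yaml
deploy-production:
  needs:
    - deploy-staging
  # ...rest of the job as defined above
```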


Step 7: Caching and Artifacts

Caching (speed up dependency installation)

yaml
# Cache node_modules based on lockfile
cache:
  key:
    files:
      - package-lock.json
  paths:
    - .npm/
 
# Use the cache
install:
  stage: .pre
  script:
    - npm ci --cache .npm --prefer-offline
  artifacts:
    paths:
      - node_modules/    # pass to subsequent jobs
    expire_in: 1 hour

Artifacts (pass files between jobs)

yaml
build:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour
 
deploy:
  stage: deploy
  needs:
    - job: build
      artifacts: true    # download build artifacts
  script:
    - ls dist/           # files from build job are here
    - ./deploy.sh

Step 8: Pipeline Rules (Control When Jobs Run)

yaml
# Only run tests on MRs and main branch
test:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
 
# Only deploy on main branch, not on MRs
deploy:
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success
    - when: never
 
# Skip if commit message contains [skip ci] (GitLab honors [skip ci] natively —
# this is shown to illustrate the reusable-rules pattern)
.default-rules: &default-rules
  rules:
    - if: '$CI_COMMIT_MESSAGE =~ /\[skip ci\]/'
      when: never
    - when: on_success
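The hidden `.default-rules` job above only defines the anchor; to apply it, merge it into a job with a YAML merge key — a sketch:

```yaml
my-job:
  <<: *default-rules
  stage: test
  script:
    - npm test
```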

Control which pipeline runs at all

yaml
workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'    # MR pipelines
    - if: '$CI_COMMIT_BRANCH == "main"'                      # main branch pushes
    - if: '$CI_COMMIT_TAG'                                    # version tags

Step 9: Notifications

Send Slack notifications when the pipeline fails:

yaml
notify-failure:
  stage: .post
  image: curlimages/curl:latest
  script:
    - |
      curl -X POST "$SLACK_WEBHOOK" \
        -H 'Content-type: application/json' \
        --data "{
          \"text\": \":x: Pipeline failed on <$CI_PIPELINE_URL|#$CI_PIPELINE_ID>\",
          \"channel\": \"#deployments\"
        }"
  when: on_failure
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

Complete Working Example

Here's a minimal but real pipeline you can use immediately:

yaml
# .gitlab-ci.yml — production-ready template
 
image: node:20-alpine
 
stages:
  - test
  - build
  - deploy
 
variables:
  REGISTRY: $CI_REGISTRY_IMAGE
  TAG: $CI_COMMIT_SHORT_SHA
 
test:
  stage: test
  script:
    - npm ci
    - npm test
  cache:
    key: { files: [package-lock.json] }
    paths: [.npm/]
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
 
build:
  stage: build
  image: docker:24.0
  services: [docker:24.0-dind]
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
  before_script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
  script:
    - docker build -t $REGISTRY:$TAG -t $REGISTRY:latest .
    - docker push $REGISTRY:$TAG
    - docker push $REGISTRY:latest
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
 
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  before_script:
    - echo "$KUBECONFIG_DATA" | base64 -d > /tmp/kubeconfig
    - export KUBECONFIG=/tmp/kubeconfig
  script:
    - kubectl set image deployment/my-app app=$REGISTRY:$TAG -n production
    - kubectl rollout status deployment/my-app -n production --timeout=120s
  environment:
    name: production
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: manual

Learn More

Want to master GitLab CI/CD with real hands-on environments? KodeKloud's CI/CD courses cover GitLab, GitHub Actions, Jenkins, and ArgoCD with real pipeline exercises — not just documentation walkthroughs.

If you're looking for a cloud environment to deploy to, DigitalOcean's App Platform connects directly to GitLab repos and handles deployment automatically — great for getting started without managing your own Kubernetes cluster.


Summary

GitLab CI/CD gives you a complete pipeline platform in one tool:

  1. .gitlab-ci.yml — single file, defines everything
  2. Runners — shared (free) or self-hosted
  3. Stages — lint → test → build → push → deploy
  4. Variables — secrets in the UI, built-in vars everywhere
  5. Container Registry — built-in, free, no Docker Hub needed
  6. Environments — staging and production with manual gates
  7. Cache + Artifacts — fast pipelines, files between stages
  8. Rules — control when every job runs

Start with the minimal template above and add stages as your project grows.
