CI/CD Refresher
Continuous Integration & Delivery with GitHub Actions and GitLab CI
Setup & Environment
Neither platform requires a separate install to get started — both are driven by config files committed to your repo.
GitHub Actions
Create a .github/workflows/ directory in any GitHub repository. Every .yml file in that directory is a workflow. GitHub detects and registers them automatically on push.
```bash
# No CLI install needed — just create the directory
mkdir -p .github/workflows

# Your first workflow
cat > .github/workflows/ci.yml <<'EOF'
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello, Actions!"
EOF

git add .github/workflows/ci.yml
git commit -m "ci: add initial workflow"
git push
```
GitLab CI
Create a .gitlab-ci.yml file at the repo root. GitLab picks it up on every push automatically — no extra configuration needed for shared runners.
```bash
# Create the pipeline config at repo root
cat > .gitlab-ci.yml <<'EOF'
stages:
  - build

hello:
  stage: build
  script:
    - echo "Hello, GitLab CI!"
EOF

git add .gitlab-ci.yml
git commit -m "ci: add initial pipeline"
git push
```
Local Testing
Testing CI changes by pushing commits is slow. Both platforms have tools to run pipelines locally.
GitHub Actions — act
```bash
# Install (macOS)
brew install act

# List all workflows and jobs
act -l

# Simulate a push event (runs push-triggered workflows)
act push

# Run a specific job
act -j build

# Use a smaller runner image for speed
act -P ubuntu-latest=catthehacker/ubuntu:act-latest
```
GitLab CI — gitlab-runner
```bash
# Install (macOS)
brew install gitlab-runner

# Run a specific job locally using the Docker executor
gitlab-runner exec docker test

# Run with variables
gitlab-runner exec docker build \
  --env CI_COMMIT_REF_NAME=main

# See available options
gitlab-runner exec --help
```

Note: `gitlab-runner exec` was deprecated and removed in GitLab Runner 16.0 — on newer runner versions you will need to pin an older release, or test by pushing to a throwaway branch.
act uses Docker to pull runner images and execute workflow steps much as GitHub would — the images approximate, but don't exactly match, GitHub-hosted runners. This lets you catch YAML syntax errors, missing secrets, and broken shell commands without a single push.
Core Concepts
CI/CD is a practice, not a tool. Understanding the concepts makes platform-specific syntax easier to learn.
CI vs CD (Continuous Delivery) vs CD (Continuous Deployment)
| Term | Goal | Human gate? |
|---|---|---|
| CI — Continuous Integration | Merge small changes frequently; detect breakage fast via automated build + test on every commit | No — runs automatically |
| Continuous Delivery | Every passing commit is releasable — artifact is built, tested, staged, and ready to deploy at any time | Yes — human approves production deploy |
| Continuous Deployment | Every passing commit automatically ships to production with no manual step | No — fully automated end-to-end |
Pipeline Anatomy: Build → Test → Deploy
A typical pipeline has three stages, though real pipelines add more (lint, security scan, performance test, smoke test):
- Build — Compile, package, or containerize the artifact. Fails fast on syntax/compile errors.
- Test — Unit tests, integration tests, coverage enforcement. The biggest ROI stage.
- Deploy — Ship the artifact to staging or production. May require approval or environment locks.
Stages run sequentially by default; jobs within the same stage run in parallel. A failure in one stage prevents later stages from running — this is the fail-fast principle that keeps feedback tight.
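The three-stage shape above maps directly onto pipeline config. A minimal GitLab-style skeleton (job names and `make` targets are illustrative):

```yaml
stages: [build, test, deploy]   # run in this order; a failed stage stops the pipeline

build_app:
  stage: build
  script: [make build]

unit_tests:          # these two jobs share a stage, so they run in parallel
  stage: test
  script: [make test-unit]

integration_tests:
  stage: test
  script: [make test-integration]

deploy_app:
  stage: deploy      # only reached if every test job passed
  script: [make deploy]
```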
Trunk-Based vs Feature Branch Workflows
Trunk-Based Development
- All developers commit directly to main (or to short-lived branches merged within a day)
- Feature flags control rollout, not long-lived branches
- CI runs on every commit to main — breakage is everyone's problem immediately
- Scales well, reduces merge hell
Feature Branch / GitFlow
- Work happens on branches; CI runs on each branch
- PR/MR triggers a full pipeline before merge
- Branch protection rules enforce CI must pass
- Longer-lived branches increase merge conflict risk
The Feedback Loop
The primary value of CI/CD is compressing the feedback loop. The longer the gap between writing code and learning it's broken, the more expensive the fix. Target: under 10 minutes for the CI signal on every commit. If your pipeline takes 30+ minutes, parallelize and split.
GitHub Actions Fundamentals
Workflows live in .github/workflows/*.yml. Each file is a self-contained automation unit.
Workflow Anatomy
```yaml
# .github/workflows/ci.yml

# Workflow name — shown in the GitHub Actions UI
name: CI Pipeline

# What triggers this workflow
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

# Environment variables available to all jobs
env:
  NODE_VERSION: "20"

# One or more jobs — run in parallel by default
jobs:
  # Job ID (used for dependencies and references)
  test:
    # Runner OS — GitHub-hosted or self-hosted
    runs-on: ubuntu-latest

    # Job-level env vars
    env:
      DATABASE_URL: postgres://localhost/testdb

    # Ordered list of steps
    steps:
      # Checkout source code (almost always first)
      - name: Checkout
        uses: actions/checkout@v4 # action reference

      # Use a pre-built action from the marketplace
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm" # built-in caching

      # Run a shell command
      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      # Conditional step — only runs on push to main
      - name: Upload coverage
        if: github.ref == 'refs/heads/main'
        uses: codecov/codecov-action@v4
```
Runners
| Runner label | OS | Notes |
|---|---|---|
| `ubuntu-latest` | Ubuntu 22.04 | Fastest, cheapest, most common |
| `ubuntu-24.04` | Ubuntu 24.04 | Pin to avoid surprise upgrades |
| `macos-latest` | macOS 14 | ~10x more expensive minutes; needed for iOS |
| `windows-latest` | Windows Server 2022 | For .NET/Windows-specific testing |
| `self-hosted` | Your machine | Free minutes; you manage the runner |
Common Events
```yaml
on:
  # Push to any branch
  push:
    branches: ["**"]

  # Any PR targeting main
  pull_request:
    branches: [main]
    types: [opened, synchronize, reopened]

  # Scheduled (cron) — UTC timezone
  schedule:
    - cron: "0 6 * * 1-5" # 6 AM UTC, weekdays

  # Manual trigger with optional inputs
  workflow_dispatch:
    inputs:
      environment:
        description: "Target environment"
        required: true
        default: staging
        type: choice
        options: [staging, production]

  # Called by another workflow
  workflow_call:
    inputs:
      version:
        required: true
        type: string

  # Triggered via GitHub API
  repository_dispatch:
    types: [deploy-requested]
```
GitLab CI Fundamentals
The entire pipeline lives in .gitlab-ci.yml at the repo root. GitLab's model of stages + jobs maps closely to GitHub's jobs + steps.
Pipeline Anatomy
```yaml
# .gitlab-ci.yml

# Define the ordered stages — jobs within a stage run in parallel
stages:
  - build
  - test
  - deploy

# Global variables available to all jobs
variables:
  NODE_VERSION: "20"
  DOCKER_DRIVER: overlay2

# Global default settings (can be overridden per job)
default:
  image: node:20-alpine
  before_script:
    - npm ci --cache .npm --prefer-offline
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - .npm/

# A job — name is arbitrary, stage assigns it to a pipeline stage
build:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour

test:
  stage: test
  script:
    - npm test -- --coverage
  coverage: '/Lines\s*:\s*(\d+\.?\d*)%/' # regex to parse coverage %
  artifacts:
    reports:
      junit: junit.xml
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml

deploy_staging:
  stage: deploy
  script:
    - ./scripts/deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```
GitLab Runners
Runners execute jobs. GitLab.com provides shared runners; self-managed instances need runners registered.
```bash
# Register a project runner (Docker executor)
gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token $REGISTRATION_TOKEN \
  --executor docker \
  --docker-image alpine:latest \
  --description "my-docker-runner" \
  --tag-list "docker,linux" \
  --run-untagged true
```

```yaml
# Specify runner by tag in a job
deploy:
  tags:
    - docker
    - linux
```
GitHub: workflow → job → step. GitLab: pipeline → stage → job → script commands. The key difference is that GitLab stages provide explicit sequential ordering, whereas GitHub requires needs: to express job dependencies.
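A minimal sketch of what that opt-in ordering looks like on the GitHub side (job names and `make` targets are illustrative):

```yaml
# GitHub Actions — jobs run in parallel unless you add needs:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - run: make build
  test:
    needs: build   # without this line, test would start in parallel with build
    runs-on: ubuntu-latest
    steps:
      - run: make test
```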
Triggers & Events
GitHub: Push, PR, Schedule, Manual
```yaml
on:
  # Only push to main or release/* branches
  push:
    branches:
      - main
      - "release/**"
    # Path filters — only trigger when these files change.
    # Note: paths and paths-ignore are mutually exclusive on the same event;
    # use negated "!" patterns inside paths to exclude files instead.
    paths:
      - "src/**"
      - "package.json"
      - "!docs/**"
      - "!*.md"

  # PRs targeting main — run on open, push, reopen
  pull_request:
    branches: [main]
    paths:
      - "src/**"

  # Release published (e.g. for deployment workflows)
  release:
    types: [published]

  # Cron — daily at midnight UTC
  schedule:
    - cron: "0 0 * * *"

  # Manually triggered from GitHub UI or API
  workflow_dispatch:
    inputs:
      dry_run:
        type: boolean
        default: false
        description: "Skip actual deployment"
```
GitLab: Rules, Only/Except, Schedules
```yaml
# Modern approach: rules (replaces only/except)
test:
  stage: test
  script: npm test
  rules:
    # Run on merge requests
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    # Run on pushes to main
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
    # Run when specific files change (requires merge request context)
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - src/**/*
        - package.json

# Path filtering — changes keyword
lint:
  stage: build
  script: npm run lint
  rules:
    - changes:
        - "**/*.js"
        - "**/*.ts"

# Legacy approach: only/except (still works but rules is preferred)
deploy:
  stage: deploy
  only:
    - main
  except:
    - schedules

# Scheduled pipeline — configure in GitLab UI (Project > CI/CD > Schedules)
# Inside the job, detect it with:
nightly_report:
  script: ./generate-report.sh
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
```
Variables & Secrets
GitHub: Secrets, Variables, GITHUB_TOKEN
```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Required to access environment-level secrets and vars
    environment: production
    steps:
      - name: Deploy
        env:
          # Repository secret — set in Settings > Secrets > Actions
          API_KEY: ${{ secrets.API_KEY }}
          # Repository variable (non-secret) — Settings > Variables
          APP_URL: ${{ vars.APP_URL }}
          # Environment-level secret (scoped to the "production" environment)
          PROD_DB_URL: ${{ secrets.PROD_DB_URL }}
          # Built-in token — auto-generated per workflow run
          # Has permissions to push to the repo, create PRs, etc.
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: ./scripts/deploy.sh

      - name: Print context (safe — no secret values)
        run: |
          echo "Repo: ${{ github.repository }}"
          echo "Branch: ${{ github.ref_name }}"
          echo "SHA: ${{ github.sha }}"
          echo "Actor: ${{ github.actor }}"
          echo "Event: ${{ github.event_name }}"
```

```yaml
# Fine-grained GITHUB_TOKEN permissions (workflow- or job-level)
permissions:
  contents: read
  packages: write
  pull-requests: write
  id-token: write # required for OIDC (keyless cloud auth)
```
Gotcha: if you forget environment: production on the job, ${{ secrets.MY_SECRET }} will be empty at runtime — no error, just silently blank. This is a very common pitfall.
GitLab: CI/CD Variables, Protected, Masked
```yaml
# Variables defined in GitLab UI:
#   Project > Settings > CI/CD > Variables
# Can be: file type, masked (hidden in logs), protected (only on protected branches)

deploy:
  script:
    # Predefined GitLab CI variables
    - echo "Branch: $CI_COMMIT_BRANCH"
    - echo "SHA: $CI_COMMIT_SHA"
    - echo "Short SHA: $CI_COMMIT_SHORT_SHA"
    - echo "Pipeline ID: $CI_PIPELINE_ID"
    - echo "Project: $CI_PROJECT_PATH"
    - echo "Registry: $CI_REGISTRY_IMAGE"
    # User-defined variables (set in UI or .gitlab-ci.yml)
    - echo "App URL: $APP_URL"
    # Masked secret — value hidden in job logs
    - deploy --token "$DEPLOY_TOKEN"

# Define variables inline (non-sensitive only)
variables:
  APP_ENV: production
  RETRY_COUNT: "3"

# Override per-job
test_staging:
  variables:
    APP_ENV: staging
  script: ./run-tests.sh

# Variable inheritance: global vars < group vars < project vars < job vars
# Later (more specific) definitions win — same as environment variable precedence
```
GitLab predefined variable reference (most useful ones)
| Variable | Value |
|---|---|
| `CI_COMMIT_BRANCH` | Branch name (empty for tag pipelines) |
| `CI_COMMIT_TAG` | Tag name (empty for branch pipelines) |
| `CI_COMMIT_SHA` | Full commit SHA |
| `CI_COMMIT_SHORT_SHA` | First 8 chars of the SHA |
| `CI_PIPELINE_SOURCE` | `push`, `merge_request_event`, `schedule`, `api`, `trigger` |
| `CI_PROJECT_PATH` | `group/project` |
| `CI_REGISTRY_IMAGE` | Built-in registry URL for this project |
| `CI_DEFAULT_BRANCH` | The default branch (usually `main`) |
| `CI_ENVIRONMENT_NAME` | Current environment name |
| `GITLAB_USER_LOGIN` | Username who triggered the pipeline |
Caching & Artifacts
Caching and artifacts both persist data across pipeline steps, but they serve different purposes:
- Cache — speeds up builds by reusing downloaded dependencies (non-critical; OK if invalidated)
- Artifact — passes build outputs between jobs (critical; must be reliable)
GitHub Actions: actions/cache
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Node.js — caches npm downloads, keyed on the package-lock.json hash
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm" # shorthand — setup-node handles cache/restore

      # Manual cache control
      - name: Cache npm downloads
        uses: actions/cache@v4
        with:
          path: ~/.npm
          key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
          restore-keys: |
            ${{ runner.os }}-node-

      # Python — cache pip
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
          cache: "pip"

      # Go — cache module downloads
      - uses: actions/setup-go@v5
        with:
          go-version: "1.22"
          cache: true # caches $GOPATH/pkg/mod

      # Upload build output as artifact (for later jobs or download)
      - name: Upload dist
        uses: actions/upload-artifact@v4
        with:
          name: dist-${{ github.sha }}
          path: dist/
          retention-days: 7

  deploy:
    needs: build # runs after build
    runs-on: ubuntu-latest
    steps:
      - name: Download dist
        uses: actions/download-artifact@v4
        with:
          name: dist-${{ github.sha }}
          path: dist/
      - name: Deploy
        run: ./scripts/deploy.sh
```
GitLab CI: cache and artifacts
```yaml
stages:
  - install
  - build
  - test
  - deploy

install:
  stage: install
  image: node:20-alpine
  script:
    - npm ci
  cache:
    # Cache key per branch — prevents cross-branch contamination
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
      - .npm/
    # pull-push (default): restore, then update the cache after the job
    # pull: restore only (read-only)
    # push: update only (write-only, never restore)
    policy: pull-push

build:
  stage: build
  image: node:20-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull # only restore, don't update
  script:
    - npm run build
  # Artifacts are passed to downstream jobs — always reliable
  artifacts:
    paths:
      - dist/
    expire_in: 1 hour # auto-deleted after this duration
    when: on_success # only on success (default)

test:
  stage: test
  image: node:20-alpine
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
    policy: pull
  script:
    - npm test
  artifacts:
    reports:
      junit: junit.xml # GitLab parses and shows in MR UI
      coverage_report:
        coverage_format: cobertura
        path: coverage/cobertura-coverage.xml
    when: always # upload even on failure — keep test reports
```
Testing Patterns
GitHub: Matrix Builds
Matrix builds run the same job configuration across multiple parameter combinations in parallel — great for cross-platform and multi-version testing.
```yaml
jobs:
  test:
    runs-on: ${{ matrix.os }}
    strategy:
      # Don't cancel other matrix jobs if one fails
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        python-version: ["3.10", "3.11", "3.12"]
        # Exclude combinations that don't make sense
        exclude:
          - os: macos-latest
            python-version: "3.10"
        # Add extra one-off combinations
        include:
          - os: ubuntu-latest
            python-version: "3.12"
            experimental: true
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements.txt
      - run: pytest -v

  # Parallel test splitting — split tests across N workers
  test-split:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        shard: [1, 2, 3, 4] # 4 parallel shards
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: "20"
          cache: "npm"
      - run: npm ci
      # Jest sharding — each worker runs 1/4 of the tests
      - run: npx jest --shard=${{ matrix.shard }}/4 --coverage
```
GitLab: Parallel and Matrix
```yaml
# Simple parallel split — GitLab creates N identical jobs
test:
  stage: test
  parallel: 4
  script:
    # CI_NODE_INDEX (1-based) and CI_NODE_TOTAL are injected by GitLab;
    # --splits/--group come from the pytest-split plugin
    - pytest tests/ --splits $CI_NODE_TOTAL --group $CI_NODE_INDEX

# Matrix builds — explicit parameter combinations
test_matrix:
  stage: test
  parallel:
    matrix:
      - PYTHON_VERSION: ["3.10", "3.11", "3.12"]
        VARIANT: ["slim", "alpine"]
  image: python:${PYTHON_VERSION}-${VARIANT}
  script:
    - pip install -r requirements.txt
    - pytest -v

# Coverage reporting — regex parsed from the job log
test_coverage:
  stage: test
  script:
    - pytest --cov=src --cov-report=term
  coverage: '/TOTAL.+?(\d+%)$/' # regex to extract coverage % for the GitLab badge
```
Docker in CI
GitHub: Build and Push to GHCR
```yaml
jobs:
  docker:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write # required to push to GHCR
    steps:
      - uses: actions/checkout@v4

      # Set up QEMU for multi-platform builds (optional)
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      # Set up Docker Buildx (enables layer caching, multi-platform)
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      # Login to GitHub Container Registry
      - name: Login to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      # Extract metadata (tags and labels) from Git context
      - name: Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=sha,prefix=sha-

      # Build and push with layer caching
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          # GitHub Actions cache backend — very fast
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
GitLab: Build and Push to GitLab Registry
```yaml
variables:
  DOCKER_DRIVER: overlay2
  # Disable TLS for the Docker-in-Docker service
  DOCKER_TLS_CERTDIR: ""
  IMAGE_NAME: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA

build_image:
  stage: build
  image: docker:24
  # Docker-in-Docker service — needed to run Docker commands
  services:
    - docker:24-dind
  before_script:
    # Login to the GitLab Container Registry
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    # Build with layer cache from the registry
    - |
      docker build \
        --cache-from $CI_REGISTRY_IMAGE:latest \
        --tag $IMAGE_NAME \
        --tag $CI_REGISTRY_IMAGE:latest \
        .
    - docker push $IMAGE_NAME
    - docker push $CI_REGISTRY_IMAGE:latest
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH

# Alternative: Kaniko — builds without a Docker daemon (no privileged mode)
build_kaniko:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:v1.23.0-debug
    entrypoint: [""]
  script:
    - |
      /kaniko/executor \
        --context $CI_PROJECT_DIR \
        --dockerfile $CI_PROJECT_DIR/Dockerfile \
        --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA \
        --cache=true \
        --cache-repo $CI_REGISTRY_IMAGE/cache
```
Deployment Strategies
Blue-Green, Canary, Rolling
| Strategy | How it works | Rollback speed | Risk |
|---|---|---|---|
| Direct deploy | Replace running version in-place | Re-deploy old version | Downtime possible |
| Blue-Green | Two identical envs; flip traffic from Blue to Green | Instant — flip back | Double infra cost |
| Canary | Route small % of traffic to new version; ramp up | Reduce % to 0 | Low — small blast radius |
| Rolling | Replace instances one-by-one | Stop rollout, redeploy old | Both versions live briefly |
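To make the blue-green flip concrete, here is a toy shell sketch that stands an atomic symlink swap in for the load balancer — the directory names and version strings are invented for illustration; a real setup flips traffic at the LB, DNS, or service-mesh layer instead:

```shell
# Two identical "environments"; whichever one the `current` symlink
# points at is live.
mkdir -p releases/blue releases/green
echo "v1" > releases/blue/VERSION
echo "v2" > releases/green/VERSION

ln -sfn releases/blue current   # blue is live
cat current/VERSION             # prints: v1

# Cut over to green atomically; rollback is the same command pointed back at blue
ln -sfn releases/green current
cat current/VERSION             # prints: v2
```

The point of the pattern is that the cutover (and the rollback) is a single atomic operation, not a redeploy.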
GitHub: Deployment Environments
```yaml
jobs:
  deploy_staging:
    runs-on: ubuntu-latest
    environment:
      name: staging
      url: https://staging.example.com # shown in GitHub UI
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to staging
        run: ./scripts/deploy.sh staging
        env:
          DEPLOY_KEY: ${{ secrets.STAGING_DEPLOY_KEY }}

  deploy_production:
    runs-on: ubuntu-latest
    needs: deploy_staging # waits for staging to succeed
    environment:
      # Configure in Settings > Environments:
      #   - required reviewers
      #   - deployment branch policy
      #   - wait timer (e.g. 10 min soak)
      name: production
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to production
        run: ./scripts/deploy.sh production
        env:
          DEPLOY_KEY: ${{ secrets.PROD_DEPLOY_KEY }}

  # Canary deployment pattern
  canary:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy canary (10% traffic)
        run: kubectl set image deployment/app app=image:${{ github.sha }}
      - name: Wait and check error rate
        run: ./scripts/check-metrics.sh --threshold 0.01 --duration 5m
      - name: Wait for full rollout
        run: kubectl rollout status deployment/app
```
GitLab: Environments, Review Apps, Manual Gates
```yaml
stages:
  - build
  - test
  - staging
  - production

deploy_staging:
  stage: staging
  script:
    - ./deploy.sh staging
  environment:
    name: staging
    url: https://staging.example.com
    # Auto-stop the environment after 1 day
    auto_stop_in: 1 day
  rules:
    - if: $CI_COMMIT_BRANCH == "main"

# Review apps — one env per MR
deploy_review:
  stage: staging
  script:
    - ./deploy.sh review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_ENVIRONMENT_SLUG.review.example.com
    on_stop: stop_review # cleanup job
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"

stop_review:
  stage: staging
  script:
    - ./teardown.sh review-$CI_MERGE_REQUEST_IID
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    action: stop
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      when: manual

# Manual gate — the job doesn't run until a human clicks "play" in the UI
deploy_production:
  stage: production
  script:
    - ./deploy.sh production
  environment:
    name: production
    url: https://example.com
  allow_failure: false # blocks the pipeline until approved
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      when: manual # requires human approval
```
Reusable Workflows
Avoid copy-pasting pipeline configuration across repos. Both platforms have mechanisms for sharing pipeline logic.
GitHub: Reusable Workflows and Composite Actions
```yaml
# .github/workflows/reusable-deploy.yml
# A reusable workflow — called from other workflows via workflow_call
name: Reusable Deploy

on:
  workflow_call:
    inputs:
      environment:
        required: true
        type: string
      image_tag:
        required: true
        type: string
    secrets:
      deploy_key:
        required: true
    outputs:
      deploy_url:
        description: "The deployed URL"
        value: ${{ jobs.deploy.outputs.url }}

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: ${{ inputs.environment }}
    outputs:
      url: ${{ steps.deploy.outputs.url }}
    steps:
      - uses: actions/checkout@v4
      - id: deploy
        run: |
          URL=$(./deploy.sh ${{ inputs.environment }} ${{ inputs.image_tag }})
          echo "url=$URL" >> $GITHUB_OUTPUT
        env:
          DEPLOY_KEY: ${{ secrets.deploy_key }}
```

```yaml
# .github/workflows/ci.yml — caller workflow
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.build.outputs.tag }}
    steps:
      - id: build
        run: echo "tag=${{ github.sha }}" >> $GITHUB_OUTPUT

  deploy_staging:
    needs: build
    # Call the reusable workflow
    uses: ./.github/workflows/reusable-deploy.yml
    with:
      environment: staging
      image_tag: ${{ needs.build.outputs.image_tag }}
    secrets:
      deploy_key: ${{ secrets.STAGING_KEY }}
```

```yaml
# .github/actions/setup-node-cache/action.yml
# Composite action — like a reusable set of steps (not a full workflow)
name: "Setup Node with cache"
description: "Setup Node.js and restore npm cache"

inputs:
  node-version:
    description: "Node.js version"
    default: "20"

runs:
  using: "composite"
  steps:
    - uses: actions/setup-node@v4
      with:
        node-version: ${{ inputs.node-version }}
        cache: "npm"
    - run: npm ci
      shell: bash
```
GitLab: include, extends, YAML Anchors
```yaml
# .gitlab-ci.yml — include external templates and local files
include:
  # GitLab-provided templates
  - template: Security/SAST.gitlab-ci.yml
  - template: Code-Quality.gitlab-ci.yml
  # Include from another file in the same repo
  - local: ".gitlab/ci/deploy.yml"
  # Include from another project (shared CI library)
  - project: "my-org/ci-templates"
    ref: main
    file: "/templates/docker-build.yml"
  # Include from a remote URL
  - remote: "https://example.com/ci-template.yml"
```

```yaml
# .gitlab/ci/base.yml — shared job definitions
# A leading dot makes a job a hidden template — it never runs directly
.test_base:
  stage: test
  image: python:3.12-alpine
  before_script:
    - pip install -r requirements.txt
  cache:
    key: ${CI_COMMIT_REF_SLUG}-pip
    paths:
      - .cache/pip/
```

```yaml
# .gitlab-ci.yml — using extends to inherit from base
test_unit:
  extends: .test_base # inherits all keys, can override
  script:
    - pytest tests/unit/

test_integration:
  extends: .test_base
  script:
    - pytest tests/integration/
  variables:
    DATABASE_URL: postgres://localhost/testdb
```

```yaml
# YAML anchors — DRY within a single file (not composable across files)
.deploy_script: &deploy_script
  - echo "Deploying to $TARGET_ENV"
  - ./deploy.sh $TARGET_ENV

deploy_staging:
  stage: staging
  variables:
    TARGET_ENV: staging
  script:
    - *deploy_script

deploy_production:
  stage: production
  variables:
    TARGET_ENV: production
  script:
    - *deploy_script
```
Monorepo Strategies
Monorepos contain multiple services or packages. Running full CI on every commit regardless of what changed wastes time and compute. Both platforms support path-based conditional execution.
GitHub: paths filter + dorny/paths-filter
```yaml
# Simple approach: paths filter on the trigger
# This runs the entire workflow only when matching files change
on:
  push:
    paths:
      - "services/api/**"
      - "packages/shared/**"
```

```yaml
# Better approach: dorny/paths-filter — detect changes per service
# and use outputs to conditionally enable jobs
name: Monorepo CI
on: [push, pull_request]

jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      api: ${{ steps.filter.outputs.api }}
      web: ${{ steps.filter.outputs.web }}
      infra: ${{ steps.filter.outputs.infra }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v3
        id: filter
        with:
          filters: |
            api:
              - 'services/api/**'
              - 'packages/shared/**'
            web:
              - 'services/web/**'
              - 'packages/shared/**'
            infra:
              - 'infra/**'
              - '.github/workflows/**'

  test_api:
    needs: changes
    if: needs.changes.outputs.api == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Test API service
        working-directory: services/api
        run: go test ./...

  test_web:
    needs: changes
    if: needs.changes.outputs.web == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Test web service
        working-directory: services/web
        run: npm ci && npm test

  deploy:
    needs: [test_api, test_web]
    # always() allows deploy to run even if some tests were skipped
    if: always() && !failure() && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - run: echo "Deploying changed services"
```
GitLab: rules:changes
```yaml
stages:
  - test
  - deploy

# rules:changes triggers the job only when matched paths change
test_api:
  stage: test
  script:
    - cd services/api && go test ./...
  rules:
    - changes:
        - services/api/**/*
        - packages/shared/**/*
      when: on_success

test_web:
  stage: test
  script:
    - cd services/web && npm ci && npm test
  rules:
    - changes:
        - services/web/**/*
        - packages/shared/**/*
      when: on_success

# Conditional deploy — only if the relevant paths changed
deploy_api:
  stage: deploy
  script:
    - ./deploy.sh api
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - services/api/**/*

# The needs keyword creates a DAG — deploy_web waits only for test_web
deploy_web:
  stage: deploy
  needs:
    - job: test_web
      optional: true # skip if test_web was skipped (no changes)
  script:
    - ./deploy.sh web
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
      changes:
        - services/web/**/*
```
Best Practices & Security
GitHub Actions Security
```yaml
# 1. Pin actions to a full commit SHA, not a tag
# Tags are mutable — an attacker could push a malicious version to a tag
# BAD:
- uses: actions/checkout@v4
# GOOD:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

# 2. Minimal permissions — principle of least privilege
permissions:
  contents: read # don't give write unless needed
  # Don't include packages, deployments, etc. unless the job needs them

# 3. Avoid untrusted input in run: steps — injection risk
# BAD — PR title could contain shell metacharacters
- run: echo "${{ github.event.pull_request.title }}"
# GOOD — pass via env var; the shell treats it as data, not code
- run: echo "$PR_TITLE"
  env:
    PR_TITLE: ${{ github.event.pull_request.title }}

# 4. Don't expose secrets to forked PRs
# The pull_request trigger does NOT expose secrets by default;
# pull_request_target DOES, but runs in the base repo context — use with care

# 5. Dependency scanning
- name: Dependency audit
  run: npm audit --audit-level=high
- name: SAST with CodeQL
  uses: github/codeql-action/analyze@v3

# 6. OIDC for keyless cloud authentication (no long-lived credentials)
# Instead of storing AWS keys as secrets, use OIDC to get temporary tokens
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write # required for OIDC
      contents: read
    steps:
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789:role/github-actions-role
          aws-region: us-east-1
      # No AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY needed
      - run: aws s3 sync dist/ s3://my-bucket/
```
GitLab CI Security
```yaml
# 1. Protected branches + protected variables
# Only pipelines on protected branches can access "protected" variables
# Set in Project > Settings > CI/CD > Variables > "Protected" checkbox

# 2. Include security scanning templates
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml

# 3. Restrict a job to specific runners
deploy_production:
  tags:
    - production-runner # only runs on runners tagged "production-runner"
  script:
    - ./deploy.sh production

# 4. Approval rules for production deployments
# Configure in Project > Settings > CI/CD > Protected environments
# Require N approvers before a manual job can run

# 5. Secret detection — scans for accidentally committed secrets
secret_detection:
  stage: test
  variables:
    SECRET_DETECTION_HISTORIC_SCAN: "true" # scan the full git history
```
Branch Protection Rules
GitHub Branch Protection
- Settings > Branches > Branch protection rules
- Require status checks to pass (select CI jobs)
- Require branches to be up-to-date before merge
- Require at least 1 (or N) approving reviews
- Restrict who can push to the branch
- Require signed commits
- Do not allow bypassing the above settings (even admins)
GitLab Protected Branches
- Settings > Repository > Protected Branches
- Control who can merge (Developers, Maintainers, No one)
- Control who can push directly
- Require approval rules (Merge Request Approvals)
- Code owner approvals per CODEOWNERS file
- Pipeline must succeed before merge (merge request settings)
Common Pitfalls
Missing needs: — jobs run in wrong order
GitHub Actions jobs run in parallel by default. Without needs:, your deploy job may start before the build job finishes. Always express dependencies explicitly:
```yaml
deploy:
  needs: [build, test] # explicit dependency
  runs-on: ubuntu-latest
```
Secrets silently empty — missing environment: key
If you define secrets at the environment level (e.g., "production") in GitHub, the job must declare environment: production. Without it, ${{ secrets.MY_SECRET }} evaluates to an empty string with no error. This is one of the hardest bugs to diagnose in a pipeline.
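A sketch of the fix — the environment and secret names here are illustrative:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: production   # without this line, the env-scoped secret resolves to ""
    steps:
      - run: ./deploy.sh
        env:
          MY_SECRET: ${{ secrets.MY_SECRET }}
```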
Cache misses on every run — wrong cache key
A cache key that includes a timestamp or commit SHA changes every run, defeating the purpose. Cache keys should be based on the content that determines the cache validity — typically a lockfile hash:
```yaml
# BAD — new key every commit
key: ${{ github.sha }}-node

# GOOD — only changes when the lockfile changes
key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```
Flaky tests destabilizing CI
A test that fails intermittently blocks PRs and destroys trust in CI. Strategies to deal with flaky tests:
- Use retries as a short-term mitigation (e.g. `jest.retryTimes(2)` in Jest, or `--reruns 2` with pytest-rerunfailures)
- Tag flaky tests and run them in a separate job that doesn't block merge
- Track flaky test history in your CI platform's test analytics
- Root-cause and fix — don't let the list grow indefinitely
```yaml
# GitHub: allow a job to fail without failing the workflow
flaky-tests:
  runs-on: ubuntu-latest
  continue-on-error: true # report failure but don't block
  steps:
    - run: pytest tests/flaky/ --reruns=3
```
Long pipelines — strategies to speed up
- Parallelize — split test suite across matrix shards
- Cache aggressively — dependencies, Docker layers, build outputs
- Fail fast — put linting and fast unit tests first; defer slow integration tests
- Path filtering — don't run full CI for README changes
- Smaller, focused jobs — avoid one mega-job that does everything sequentially
- Self-hosted runners — faster machines, local Docker layer cache, no cold start
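One more lever worth knowing on GitHub Actions: a concurrency group cancels superseded runs, so pushing a fixup to a PR doesn't leave the old pipeline burning minutes. A minimal sketch (the group name is illustrative):

```yaml
# Top level of the workflow file
concurrency:
  group: ci-${{ github.ref }}   # one active run per branch/PR ref
  cancel-in-progress: true      # cancel the older run when a new commit arrives
```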
Quick Reference Cheatsheet
| Concept | GitHub Actions | GitLab CI |
|---|---|---|
| Config file | .github/workflows/*.yml | .gitlab-ci.yml |
| Pipeline unit | Workflow → Job → Step | Pipeline → Stage → Job |
| Parallel jobs | Default (no deps) | Same stage |
| Sequential jobs | needs: | Different stages (or needs:) |
| Conditional run | if: | rules: / only:/except: |
| Secrets | secrets.*, vars.* | CI/CD Variables (masked/protected) |
| Built-in token | secrets.GITHUB_TOKEN | $CI_JOB_TOKEN |
| Artifacts | upload-artifact / download-artifact | artifacts: paths: |
| Dependency cache | actions/cache | cache: paths: |
| Matrix | strategy: matrix: | parallel: matrix: |
| Reusable logic | Reusable workflow, composite action | include:, extends: |
| Manual approval | Environment protection rules | when: manual |
| Path filter | on.push.paths + dorny/paths-filter | rules: changes: |
| Scheduled run | on.schedule.cron | Schedules UI + $CI_PIPELINE_SOURCE == "schedule" |
| Environment URL | environment: url: | environment: url: |