How to Set Up Argo Workflows on Kubernetes from Scratch in 2026
Step-by-step guide to installing Argo Workflows, creating your first workflow, building CI/CD pipelines, and running DAG-based tasks on Kubernetes.
Argo Workflows is a Kubernetes-native workflow engine for orchestrating parallel jobs, CI/CD pipelines, and complex DAG-based tasks — all defined as custom resources. Unlike Argo CD (which handles GitOps deployments), Argo Workflows focuses on running multi-step jobs directly on your cluster. Let's set it up from scratch.
Prerequisites
Before you start, make sure you have:
- A running Kubernetes cluster (v1.25+) — a managed cluster on DigitalOcean Kubernetes works great for this
- `kubectl` installed and configured to talk to your cluster
- `helm` v3 installed
- Basic understanding of Kubernetes pods, namespaces, and YAML manifests
Verify your cluster is reachable:
```bash
kubectl cluster-info
kubectl get nodes
```

If you need to brush up on Kubernetes fundamentals before diving in, KodeKloud's Kubernetes courses are hands-down the best way to get up to speed quickly.
Step 1 — Install Argo Workflows with Helm
Create a dedicated namespace and install Argo Workflows using the official Helm chart:
```bash
kubectl create namespace argo

helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

helm install argo-workflows argo/argo-workflows \
  --namespace argo \
  --set server.extraArgs="{--auth-mode=server}" \
  --set controller.workflowNamespaces="{argo}"
```

The `--auth-mode=server` flag tells the Argo server to handle requests with its own service account, so you can access the UI without logging in during setup. In production, you would configure SSO (OIDC) or client token auth instead.
Verify everything is running:
```bash
kubectl get pods -n argo
```

You should see the `argo-workflows-server` and `argo-workflows-workflow-controller` pods in a `Running` state:

```
NAME                                                  READY   STATUS    RESTARTS   AGE
argo-workflows-server-6b8d9c5f47-xk2rm                1/1     Running   0          45s
argo-workflows-workflow-controller-7c4d8b6f99-m4tlp   1/1     Running   0          45s
```
Step 2 — Install the Argo CLI
The Argo CLI lets you submit, watch, and manage workflows from your terminal. Install it based on your OS:
```bash
# Linux
curl -sLO https://github.com/argoproj/argo-workflows/releases/latest/download/argo-linux-amd64.gz
gunzip argo-linux-amd64.gz
chmod +x argo-linux-amd64
sudo mv argo-linux-amd64 /usr/local/bin/argo

# macOS (Homebrew)
brew install argo

# Windows (Scoop)
scoop install argo
```

Verify the installation:

```bash
argo version
```

Step 3 — Access the Argo UI
Port-forward the Argo server to access the web dashboard:
```bash
kubectl port-forward svc/argo-workflows-server -n argo 2746:2746
```

Open your browser and go to https://localhost:2746. Your browser may warn about a self-signed certificate; that is expected for a local port-forward. You will see the Argo Workflows dashboard where you can visualize, submit, and monitor workflows.
Step 4 — Create Your First Workflow
Argo Workflows uses Kubernetes Custom Resources. A Workflow is the most basic resource. Create a file called `hello-world.yaml`:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
  namespace: argo
spec:
  entrypoint: say-hello
  templates:
    - name: say-hello
      container:
        image: alpine:3.19
        command: [echo]
        args: ["Hello from Argo Workflows!"]
```

Submit it:

```bash
argo submit hello-world.yaml -n argo --watch
```

The `--watch` flag streams the workflow's status in real time. You should see the pod spin up, print the message, and complete successfully.
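After submitting, the Argo CLI can also list recent runs and tail their logs; the `@latest` shorthand refers to the most recently submitted workflow:

```shell
# List workflows in the argo namespace
argo list -n argo

# Print the logs of the most recent workflow
argo logs @latest -n argo
```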
Understanding the Structure
Every Argo Workflow has three key parts:
- `entrypoint` — the template that runs first (like `main()` in a program)
- `templates` — a list of reusable task definitions (containers, scripts, steps, or DAGs)
- `spec` — the overall workflow configuration, including parameters, volumes, and timeouts
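Besides container templates, a template can also be a `script`, which is convenient for short inline logic. A minimal sketch (the names `script-demo-` and `count-to-three` are just illustrations):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: script-demo-
  namespace: argo
spec:
  entrypoint: count-to-three
  templates:
    - name: count-to-three
      script:
        image: python:3.12-alpine
        command: [python]
        # Argo writes the source block to a file and runs it with the command above
        source: |
          for i in range(1, 4):
              print(f"count: {i}")
```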
Step 5 — Multi-Step Workflows
Real workflows have multiple steps that run in sequence or parallel. Here is a workflow with sequential and parallel steps:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: multi-step-
  namespace: argo
spec:
  entrypoint: pipeline
  templates:
    - name: pipeline
      steps:
        - - name: step-1
            template: run-task
            arguments:
              parameters:
                - name: message
                  value: "Step 1 — sequential"
        - - name: step-2a
            template: run-task
            arguments:
              parameters:
                - name: message
                  value: "Step 2a — parallel"
          - name: step-2b
            template: run-task
            arguments:
              parameters:
                - name: message
                  value: "Step 2b — parallel"
        - - name: step-3
            template: run-task
            arguments:
              parameters:
                - name: message
                  value: "Step 3 — sequential"
    - name: run-task
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo '{{inputs.parameters.message}}' && sleep 2"]
```

Each inner list (the `- -` pairs) is a step group. Steps within the same group run in parallel; step groups run sequentially. So Step 1 runs first, then 2a and 2b run at the same time, then Step 3 runs last.
Submit it:
```bash
argo submit multi-step.yaml -n argo --watch
```

Step 6 — DAG-Based Workflows
For more complex dependency graphs, use DAGs instead of steps. DAGs let you define explicit dependencies between tasks:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-pipeline-
  namespace: argo
spec:
  entrypoint: build-pipeline
  templates:
    - name: build-pipeline
      dag:
        tasks:
          - name: clone-repo
            template: task
            arguments:
              parameters:
                - name: message
                  value: "Cloning repository"
          - name: run-lint
            dependencies: [clone-repo]
            template: task
            arguments:
              parameters:
                - name: message
                  value: "Running linter"
          - name: run-tests
            dependencies: [clone-repo]
            template: task
            arguments:
              parameters:
                - name: message
                  value: "Running tests"
          - name: build-image
            dependencies: [run-lint, run-tests]
            template: task
            arguments:
              parameters:
                - name: message
                  value: "Building Docker image"
          - name: push-image
            dependencies: [build-image]
            template: task
            arguments:
              parameters:
                - name: message
                  value: "Pushing to registry"
    - name: task
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["echo '{{inputs.parameters.message}}' && sleep 3"]
```

In this DAG, clone-repo runs first. Then run-lint and run-tests run in parallel (both depend only on clone-repo). Once both finish, build-image runs, followed by push-image. This is exactly how a real CI/CD pipeline works.
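To follow the DAG as it executes, you can submit it and then inspect the task tree from the CLI (assuming the manifest is saved as `dag-pipeline.yaml`):

```shell
# Submit the DAG workflow and follow its progress
argo submit dag-pipeline.yaml -n argo --watch

# Show the task tree and per-node status of the most recent workflow
argo get @latest -n argo
```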
Step 7 — Artifact Passing Between Steps
Workflows often need to pass files between steps. Argo supports artifact passing out of the box. Here is an example that generates a file in one step and reads it in the next:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-pass-
  namespace: argo
spec:
  entrypoint: artifact-pipeline
  templates:
    - name: artifact-pipeline
      steps:
        - - name: generate
            template: generate-file
        - - name: consume
            template: read-file
            arguments:
              artifacts:
                - name: input-file
                  from: "{{steps.generate.outputs.artifacts.output-file}}"
    - name: generate-file
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["printf 'build-id: abc123\\nstatus: success\\n' > /tmp/result.txt"]
      outputs:
        artifacts:
          - name: output-file
            path: /tmp/result.txt
    - name: read-file
      inputs:
        artifacts:
          - name: input-file
            path: /tmp/input.txt
      container:
        image: alpine:3.19
        command: [sh, -c]
        args: ["cat /tmp/input.txt"]
```

For production use, configure an artifact repository (S3, GCS, or MinIO) in the Argo config so artifacts persist beyond the workflow's lifetime.
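As a sketch of what that configuration looks like, the workflow controller reads its default artifact store from the `workflow-controller-configmap` in the installation namespace. The bucket, endpoint, and secret names below are placeholders you would replace with your own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  artifactRepository: |
    s3:
      bucket: my-artifact-bucket     # placeholder bucket name
      endpoint: s3.amazonaws.com
      accessKeySecret:
        name: s3-credentials         # placeholder Kubernetes secret
        key: accessKey
      secretKeySecret:
        name: s3-credentials
        key: secretKey
```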
Step 8 — Real CI/CD Pipeline Example
Now let's build a realistic CI/CD pipeline that clones a repo, runs tests, builds a Docker image, and pushes it to a container registry:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: cicd-pipeline-
  namespace: argo
spec:
  entrypoint: ci-pipeline
  arguments:
    parameters:
      - name: repo-url
        value: "https://github.com/your-org/your-app.git"
      - name: branch
        value: "main"
      - name: image
        value: "ghcr.io/your-org/your-app"
      - name: tag
        value: "latest"
  volumeClaimTemplates:
    - metadata:
        name: workspace
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Gi
  templates:
    - name: ci-pipeline
      dag:
        tasks:
          - name: clone
            template: git-clone
          - name: test
            dependencies: [clone]
            template: run-tests
          - name: build-push
            dependencies: [test]
            template: build-and-push
    - name: git-clone
      container:
        image: alpine/git:2.43.0
        command: [sh, -c]
        args:
          - |
            git clone --branch {{workflow.parameters.branch}} \
              {{workflow.parameters.repo-url}} /workspace/src
            cd /workspace/src && git log --oneline -3
        volumeMounts:
          - name: workspace
            mountPath: /workspace
    - name: run-tests
      container:
        image: node:20-alpine
        command: [sh, -c]
        args:
          - |
            cd /workspace/src
            npm ci
            npm test
        volumeMounts:
          - name: workspace
            mountPath: /workspace
    - name: build-and-push
      container:
        image: gcr.io/kaniko-project/executor:latest
        args:
          - --dockerfile=/workspace/src/Dockerfile
          - --context=/workspace/src
          - --destination={{workflow.parameters.image}}:{{workflow.parameters.tag}}
          - --cache=true
        volumeMounts:
          - name: workspace
            mountPath: /workspace
```

This pipeline uses a shared PVC (`volumeClaimTemplates`) so all steps access the same workspace. Kaniko builds and pushes the Docker image without needing Docker-in-Docker or privileged containers — this is the recommended approach on Kubernetes.
To authenticate Kaniko with your registry, create a Kubernetes secret with your credentials and mount it at /kaniko/.docker/config.json.
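For example, assuming you created a registry secret with `kubectl create secret docker-registry regcred --docker-server=ghcr.io --docker-username=<user> --docker-password=<token> -n argo` (the name `regcred` is an arbitrary choice), you could wire it into the `build-and-push` template like this:

```yaml
    - name: build-and-push
      container:
        image: gcr.io/kaniko-project/executor:latest
        args:
          - --dockerfile=/workspace/src/Dockerfile
          - --context=/workspace/src
          - --destination={{workflow.parameters.image}}:{{workflow.parameters.tag}}
        volumeMounts:
          - name: workspace
            mountPath: /workspace
          - name: docker-config
            mountPath: /kaniko/.docker
      volumes:
        - name: docker-config
          secret:
            secretName: regcred
            items:
              # Project the dockerconfigjson key to the filename Kaniko expects
              - key: .dockerconfigjson
                path: config.json
```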
Step 9 — Workflow Templates for Reuse
If you reuse the same workflow structure across projects, define a WorkflowTemplate:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: ci-template
  namespace: argo
spec:
  arguments:
    parameters:
      - name: repo-url
      - name: image
      - name: tag
        value: "latest"
  entrypoint: ci-pipeline
  templates:
    - name: ci-pipeline
      dag:
        tasks:
          - name: clone
            template: git-clone
          - name: test
            dependencies: [clone]
            template: run-tests
    # ... same templates as above
```

Then trigger it from any workflow:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: use-template-
  namespace: argo
spec:
  workflowTemplateRef:
    name: ci-template
  arguments:
    parameters:
      - name: repo-url
        value: "https://github.com/your-org/app.git"
      - name: image
        value: "ghcr.io/your-org/app"
```

This keeps your pipeline definitions DRY and consistent across teams.
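You can also launch a run straight from a WorkflowTemplate with the CLI, passing parameters with `-p`:

```shell
argo submit --from workflowtemplate/ci-template -n argo \
  -p repo-url=https://github.com/your-org/app.git \
  -p image=ghcr.io/your-org/app
```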
Step 10 — Cron Workflows
To run workflows on a schedule (nightly builds, periodic tests, cleanup jobs), use CronWorkflow:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-build
  namespace: argo
spec:
  schedule: "0 2 * * *"
  timezone: "Asia/Kolkata"
  concurrencyPolicy: "Replace"
  startingDeadlineSeconds: 0
  workflowSpec:
    entrypoint: build
    templates:
      - name: build
        container:
          image: alpine:3.19
          command: [sh, -c]
          args: ["echo \"Running nightly build at $(date)\""]
```

This runs every day at 2:00 AM IST. The `concurrencyPolicy: Replace` setting ensures that if a previous run is still going, it gets replaced by the new one.
List your cron workflows:
```bash
argo cron list -n argo
```

Production Tips
Before running Argo Workflows in production, keep these in mind:
- RBAC — Create a dedicated service account with minimal permissions. Never run workflows as cluster-admin.
- Resource limits — Always set CPU and memory limits on your workflow containers to prevent runaway pods from eating your cluster.
- Artifact repository — Configure S3 or GCS as your default artifact store. Without it, artifacts are lost when pods terminate.
- Garbage collection — Set the workflow `ttlStrategy` to auto-delete completed workflows after a retention period. Old workflows pile up fast.
- Retry strategies — Add a `retryStrategy` to critical steps so transient failures do not kill your entire pipeline.
```yaml
retryStrategy:
  limit: 3
  retryPolicy: "Always"
  backoff:
    duration: "30s"
    factor: 2
    maxDuration: "5m"
```

Wrapping Up
You now have Argo Workflows installed on Kubernetes with the CLI, UI access, and a solid understanding of steps, DAGs, artifacts, workflow templates, and cron workflows. The real CI/CD pipeline example gives you a production-ready starting point — just swap in your repo URL and registry.
Argo Workflows pairs perfectly with Argo CD for a full GitOps CI/CD stack: Argo Workflows handles the build and test pipeline, and Argo CD handles the deployment. If you want to go deeper into Kubernetes orchestration and CI/CD patterns, check out KodeKloud's CI/CD learning paths — they cover Argo, Jenkins, GitHub Actions, and more with hands-on labs.
Need a Kubernetes cluster to practice on? DigitalOcean's managed Kubernetes spins up in minutes and costs a fraction of what the big cloud providers charge. It is what I recommend for anyone learning.