ArgoCD Application Stuck OutOfSync or Progressing: Complete Fix Guide
ArgoCD app won't sync? Stuck in OutOfSync or Progressing state forever? Here's every cause and how to fix each one step by step.
You push a change to Git. ArgoCD detects it. The application shows "OutOfSync." You hit sync. Nothing happens. Or worse — it shows "Progressing" and stays there for 30 minutes until it times out.
ArgoCD sync issues are among the most common pain points for GitOps teams. The UI shows something is wrong but doesn't always tell you why. Let me walk through every cause and fix.
Understanding ArgoCD Sync States
Before debugging, know what the states mean:
| State | Meaning |
|---|---|
| Synced | Live state matches Git — everything is good |
| OutOfSync | Live state differs from Git — needs sync |
| Progressing | Sync is running, resources are being deployed |
| Degraded | Resources exist but aren't healthy |
| Missing | Resources in Git don't exist in the cluster |
| Unknown | ArgoCD can't determine the state |
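These states are also readable outside the UI: they live in the Application resource's status. A minimal Python sketch, assuming you have already fetched the Application as JSON (for example with kubectl get application my-app -n argocd -o json):

```python
import json

# Example Application status as returned by the Kubernetes API,
# trimmed to the fields relevant for sync debugging (hypothetical values).
app = json.loads("""
{
  "status": {
    "sync":   {"status": "OutOfSync", "revision": "abc123"},
    "health": {"status": "Progressing"}
  }
}
""")

sync_state = app["status"]["sync"]["status"]
health_state = app["status"]["health"]["status"]

# An app is only truly done when the sync state AND the health state are good
needs_attention = sync_state != "Synced" or health_state != "Healthy"
print(sync_state, health_state, needs_attention)
```

The same two fields drive everything the UI shows: sync status answers "does live match Git?", health status answers "is what's running actually working?".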
Cause 1 — Server-Side Field Differences (Most Common)
ArgoCD compares your Git manifests against the live cluster state. But Kubernetes adds fields to resources — metadata.managedFields, status, default values for omitted fields. These differences make ArgoCD think the resource is out of sync even when it's fine.
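Conceptually, ignoring differences means dropping the listed paths from both the desired and live objects before comparing. This toy Python sketch (not ArgoCD's actual implementation) shows why a server-modified field stops triggering OutOfSync once its path is ignored:

```python
import copy

def strip_paths(obj, json_pointers):
    """Remove each JSON-pointer path (e.g. '/spec/replicas') before comparing."""
    out = copy.deepcopy(obj)
    for pointer in json_pointers:
        node = out
        parts = pointer.strip("/").split("/")
        for key in parts[:-1]:
            node = node.get(key, {})
        node.pop(parts[-1], None)
    return out

desired = {"spec": {"replicas": 3, "image": "my-app:v2"}}
live = {"spec": {"replicas": 5, "image": "my-app:v2"}}  # an HPA changed replicas

print(strip_paths(desired, []) == strip_paths(live, []))    # False: raw diff says OutOfSync
print(strip_paths(desired, ["/spec/replicas"])
      == strip_paths(live, ["/spec/replicas"]))             # True: ignored path, in sync
```

Once /spec/replicas is stripped from both sides, the objects compare equal, which is exactly the effect ignoreDifferences has on ArgoCD's diff.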
Symptoms
Application shows OutOfSync but the diff shows only fields you didn't set — like metadata.creationTimestamp, status, or default spec values.
Fix — Ignore Known Differences
Add ignoreDifferences to your Application spec:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  ignoreDifferences:
    - group: apps
      kind: Deployment
      jsonPointers:
        - /spec/replicas  # Ignore if an HPA manages replicas
    - group: ""
      kind: Service
      jqPathExpressions:
        - .metadata.annotations."kubectl.kubernetes.io/last-applied-configuration"
    - group: admissionregistration.k8s.io
      kind: MutatingWebhookConfiguration
      jqPathExpressions:
        - .webhooks[]?.clientConfig.caBundle
```

For cluster-wide settings, configure the argocd-cm ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.ignoreDifferences.all: |
    managedFieldsManagers:
      - kube-controller-manager
    jsonPointers:
      - /metadata/managedFields
```

Cause 2 — Sync Hooks Failing Silently
ArgoCD resource hooks (PreSync, Sync, PostSync) can fail without obvious indication. A failed PreSync hook blocks the entire sync.
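The blocking behavior is easy to picture: phases run strictly in order, and a failure in any phase stops everything downstream. A toy simulation of that ordering (assumed behavior per ArgoCD's hook semantics, not real ArgoCD code):

```python
def run_sync(hooks):
    """Run PreSync -> Sync -> PostSync; a failed hook blocks everything after it."""
    completed = []
    for phase in ["PreSync", "Sync", "PostSync"]:
        for name, succeeded in hooks.get(phase, []):
            if not succeeded:
                return completed, f"{phase} hook '{name}' failed - sync blocked"
            completed.append(name)
    return completed, "Synced"

# A failing migration job in PreSync means the Deployment in Sync never runs
hooks = {
    "PreSync": [("db-migrate", False)],
    "Sync": [("deploy-app", True)],
}
done, status = run_sync(hooks)
print(done, status)
```

This is why a stuck "Progressing" state so often traces back to a PreSync Job: nothing else even gets a chance to start.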
Symptoms
Sync starts but never completes. Status shows "Progressing" indefinitely.
Diagnose
```bash
# Check hook resources (hook Jobs are marked with the
# argocd.argoproj.io/hook annotation, so list Jobs and inspect them)
kubectl get jobs -n your-namespace

# Check the logs of a specific hook Job
kubectl logs -n your-namespace job/db-migrate --tail=50
```

Fix
- Check your hook Job definition:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  backoffLimit: 3
  template:
    spec:
      containers:
        - name: migrate
          image: my-app:latest
          command: ["./migrate.sh"]
      restartPolicy: Never
```

- Always set hook-delete-policy to clean up completed hooks:
  - HookSucceeded — delete after success
  - HookFailed — delete after failure
  - BeforeHookCreation — delete the previous hook before creating a new one (safest)
- Add a timeout to your hook:

```yaml
activeDeadlineSeconds: 300  # 5 minute timeout
```

Cause 3 — Sync Waves Misconfigured
Sync waves control the order resources are deployed. If wave dependencies are wrong, resources deploy before their dependencies are ready.
Symptoms
Some resources deploy successfully, others fail because they depend on resources from a later wave.
Fix — Order Waves Correctly
```yaml
# Wave 0: Namespace and ConfigMaps (deploy first)
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  annotations:
    argocd.argoproj.io/sync-wave: "0"
---
# Wave 1: Secrets and PVCs
apiVersion: v1
kind: Secret
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
---
# Wave 2: Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
---
# Wave 3: Services and Ingress
apiVersion: v1
kind: Service
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "3"
```

Resources in the same wave deploy in parallel. ArgoCD waits for all resources in a wave to be healthy before moving to the next wave.
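The ordering itself is just a numeric sort on the sync-wave annotation, with a default of 0 when the annotation is absent. A quick Python sketch of the grouping (illustrative only, not ArgoCD's code):

```python
from itertools import groupby

WAVE_ANNOTATION = "argocd.argoproj.io/sync-wave"

def wave(manifest):
    """Read the sync-wave annotation; a missing annotation means wave 0."""
    annotations = manifest.get("metadata", {}).get("annotations", {})
    return int(annotations.get(WAVE_ANNOTATION, "0"))

manifests = [
    {"kind": "Service",    "metadata": {"annotations": {WAVE_ANNOTATION: "3"}}},
    {"kind": "Namespace",  "metadata": {"annotations": {WAVE_ANNOTATION: "0"}}},
    {"kind": "Deployment", "metadata": {"annotations": {WAVE_ANNOTATION: "2"}}},
    {"kind": "ConfigMap",  "metadata": {}},  # no annotation -> wave 0
]

ordered = sorted(manifests, key=wave)
waves = [(w, [m["kind"] for m in grp]) for w, grp in groupby(ordered, key=wave)]
print(waves)
```

Note that the un-annotated ConfigMap lands in wave 0 alongside the Namespace, which is exactly why forgetting an annotation on a dependent resource causes it to deploy too early.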
Cause 4 — Custom Health Checks Missing
ArgoCD has built-in health checks for standard resources (Deployments, StatefulSets, Services). But for CRDs and custom resources, it doesn't know what "healthy" means — so it stays in "Progressing."
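What a health check has to decide usually boils down to inspecting status.conditions. Here is that decision expressed in Python for readability (ArgoCD itself executes these checks as Lua scripts, like the one in the fix for this cause):

```python
def evaluate_health(obj):
    """Condition-based health evaluation for a custom resource (illustrative)."""
    for condition in obj.get("status", {}).get("conditions", []):
        if condition.get("type") == "Ready":
            if condition.get("status") == "True":
                return "Healthy", condition.get("message", "")
            if condition.get("status") == "False":
                return "Degraded", condition.get("message", "")
    # No Ready condition yet: the resource is still coming up
    return "Progressing", "Waiting for certificate"

cert = {"status": {"conditions": [
    {"type": "Ready", "status": "False", "message": "issuer not found"}]}}
print(evaluate_health(cert))
```

Without a check like this registered, ArgoCD has no Ready condition to consult and reports "Progressing" forever.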
Symptoms
Application stuck in "Progressing" for a custom resource (CertificateRequest, VirtualService, etc.).
Fix — Add Custom Health Check
In the argocd-cm ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.customizations.health.cert-manager.io_Certificate: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.conditions ~= nil then
        for _, condition in ipairs(obj.status.conditions) do
          if condition.type == "Ready" and condition.status == "True" then
            hs.status = "Healthy"
            hs.message = condition.message
            return hs
          end
          if condition.type == "Ready" and condition.status == "False" then
            hs.status = "Degraded"
            hs.message = condition.message
            return hs
          end
        end
      end
    end
    hs.status = "Progressing"
    hs.message = "Waiting for certificate"
    return hs
```

Cause 5 — Git Repository Access Issues
ArgoCD can't read the latest commit from your Git repo.
Symptoms
Application shows OutOfSync but the "Last Synced" timestamp is old. Or sync fails with "repository not accessible" errors.
Diagnose
```bash
# Check ArgoCD repo server logs
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-repo-server --tail=50

# Test repo connectivity
argocd repo list
```

Fix
- Refresh the repo connection:
```bash
argocd app get my-app --refresh
```

- Update Git credentials:

```bash
argocd repo add https://github.com/org/repo.git \
  --username git \
  --password ghp_your_token
```

- For SSH:

```bash
argocd repo add git@github.com:org/repo.git \
  --ssh-private-key-path ~/.ssh/id_ed25519
```

Cause 6 — Resource Pruning Blocked
When you remove a resource from Git, ArgoCD should delete it from the cluster (pruning). But if pruning is disabled or the resource has a finalizer, it hangs.
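Pruning is a set difference: anything ArgoCD tracks in the live cluster that no longer appears in Git is a prune candidate. A minimal sketch of that logic (hypothetical resource keys, not ArgoCD's code):

```python
def prune_candidates(git_resources, live_resources):
    """Resources tracked in the cluster but no longer present in Git."""
    return sorted(set(live_resources) - set(git_resources))

# Keyed by (group/kind, name); the ConfigMap was deleted from Git
git = {("apps/Deployment", "web"), ("v1/Service", "web")}
live = {("apps/Deployment", "web"), ("v1/Service", "web"), ("v1/ConfigMap", "old-config")}
print(prune_candidates(git, live))
```

With prune disabled, that leftover ConfigMap stays in the cluster and the app reports OutOfSync indefinitely.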
Fix — Enable Pruning
In your Application spec:
```yaml
spec:
  syncPolicy:
    automated:
      prune: true     # Delete resources removed from Git
      selfHeal: true  # Fix manual changes automatically
```

Or sync with pruning manually:

```bash
argocd app sync my-app --prune
```

If a resource has a stuck finalizer:

```bash
kubectl patch <resource-type> <name> -n <namespace> \
  --type merge -p '{"metadata":{"finalizers":null}}'
```

Cause 7 — Application Size Exceeds Limits
Large applications with 500+ resources can hit ArgoCD's default size limits.
Symptoms
```
ComparisonError: Too many resources: 632 (max: 500)
```
Fix
Increase the controller's processing limits in argocd-cmd-params-cm:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  controller.status.processors: "50"
  controller.operation.processors: "25"
  reposerver.parallelism.limit: "0"
```

Better fix — split your application using ApplicationSets or the app-of-apps pattern:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: microservices
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/org/manifests.git
        revision: main
        directories:
          - path: services/*
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      source:
        repoURL: https://github.com/org/manifests.git
        targetRevision: main
        path: '{{path}}'
```

Quick Troubleshooting Commands
```bash
# Force refresh application state
argocd app get my-app --refresh --hard-refresh

# Preview the sync without applying anything
argocd app sync my-app --dry-run

# Check diff between Git and live
argocd app diff my-app

# View sync history
argocd app history my-app

# Check ArgoCD controller logs
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-application-controller --tail=100

# Restart ArgoCD components
kubectl rollout restart deployment argocd-repo-server -n argocd
kubectl rollout restart statefulset argocd-application-controller -n argocd
```

Wrapping Up
ArgoCD sync issues almost always fall into one of these seven categories. Start by checking the diff (Cause 1), then hooks (Cause 2), then health checks (Cause 4) — those three cover roughly 80% of cases.
The key principle: ArgoCD is comparing Git against the live cluster. When they don't match — for any reason — it shows OutOfSync. Your job is figuring out which specific difference is causing it.
Want to master ArgoCD and GitOps from scratch? KodeKloud's ArgoCD course has hands-on labs covering sync strategies, hooks, ApplicationSets, and production troubleshooting.