Kyverno Complete Guide: Kubernetes Policy Engine for Security & Compliance in 2026
Learn how to use Kyverno to enforce security policies, validate resources, mutate configurations, and generate defaults in your Kubernetes clusters.
Kubernetes gives you enormous flexibility — and that is exactly the problem. Without guardrails, developers can deploy containers running as root, skip resource limits, use the latest tag, and expose services without network policies. Kyverno is the policy engine that closes these gaps without requiring you to learn a new language.
What Is Kyverno?
Kyverno is a policy engine designed specifically for Kubernetes. It runs as a dynamic admission controller, intercepting requests to the Kubernetes API server and enforcing policies written in plain YAML — no Rego, no new DSL, no steep learning curve.
With Kyverno, you can:
- Validate resources against rules (block non-compliant deployments)
- Mutate resources on the fly (inject sidecars, add labels automatically)
- Generate resources when certain conditions are met (auto-create NetworkPolicies, ResourceQuotas)
- Verify images against signatures and attestations
Kyverno was accepted as a CNCF Graduated project in 2024, putting it at the same maturity level as Kubernetes itself, Prometheus, and Envoy.
Kyverno vs OPA/Gatekeeper — Which One Should You Use?
Before Kyverno came along, the dominant option was Open Policy Agent (OPA) with its Kubernetes-specific component, Gatekeeper. Here is how they compare:
| Feature | Kyverno | OPA/Gatekeeper |
|---|---|---|
| Policy language | YAML (Kubernetes-native) | Rego (custom language) |
| Learning curve | Low — if you know K8s YAML, you know Kyverno | High — Rego is powerful but unfamiliar |
| Mutation support | Built-in | Limited (requires separate webhook) |
| Resource generation | Built-in | Not supported |
| Image verification | Built-in (Cosign, Notary) | Requires external tooling |
| CNCF status | Graduated | Graduated (OPA), Sandbox (Gatekeeper) |
| Community adoption | Fast-growing, especially in GitOps setups | Established, widely adopted in enterprises |
The verdict: If your team already knows Rego and has OPA policies in production, stick with Gatekeeper. For everyone else — especially teams adopting GitOps — Kyverno is the better choice in 2026. The YAML-native approach means your policies live alongside your manifests and follow the same review process.
If you want to build deep Kubernetes security skills including policy engines, KodeKloud's Kubernetes security courses cover both Kyverno and OPA hands-on.
How Kyverno Works — Architecture
Kyverno runs as a set of controllers inside your cluster:
- Admission Controller — A webhook that intercepts CREATE, UPDATE, and DELETE requests to the API server
- Background Controller — Scans existing resources against policies (not just new ones)
- Reports Controller — Generates PolicyReports (a Kubernetes-native reporting CRD)
The flow is straightforward:
kubectl apply → API Server → Kyverno Webhook → Validate/Mutate → Admit or Deny
When a resource hits the API server, Kyverno evaluates it against all matching policies. If a validate rule fails and the policy is set to enforce, the request is denied with a clear error message. If it is set to audit, the violation is logged but the resource is admitted.
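As a minimal sketch, the two modes differ only in a single field of the policy spec:

```yaml
# Enforce: non-compliant requests are denied at admission time
spec:
  validationFailureAction: Enforce

# Audit: requests are admitted, and violations are recorded in PolicyReports
spec:
  validationFailureAction: Audit
```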
Installing Kyverno
The recommended installation method is Helm:
# Add the Kyverno Helm repo
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
# Install Kyverno in its own namespace
helm install kyverno kyverno/kyverno \
  --namespace kyverno \
  --create-namespace \
  --set replicaCount=3

For production, run at least three replicas for high availability. Kyverno processes admission requests synchronously — if the webhook is down, API requests that match it will fail (unless you configure the webhook's failurePolicy to Ignore, which is not recommended for security-critical policies).
Verify the installation:
kubectl get pods -n kyverno
# NAME READY STATUS RESTARTS AGE
# kyverno-admission-controller-0 1/1 Running 0 60s
# kyverno-background-controller-0 1/1 Running 0 60s
# kyverno-cleanup-controller-0 1/1 Running 0 60s
# kyverno-reports-controller-0 1/1 Running 0 60s

Need a Kubernetes cluster to practice on? DigitalOcean's managed Kubernetes (DOKS) gives you a production-grade cluster in minutes with a free trial — great for testing policy engines without burning through your AWS budget.
ClusterPolicy vs Policy
Kyverno has two scopes:
- ClusterPolicy — Applies across all namespaces (cluster-wide)
- Policy — Applies to a specific namespace only
In practice, you will use ClusterPolicy for 90% of your rules since security policies should be consistent across the cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy  # Cluster-wide
metadata:
  name: require-labels
---
apiVersion: kyverno.io/v1
kind: Policy  # Namespace-scoped
metadata:
  name: require-labels
  namespace: production

Rule Types — Validate, Mutate, Generate
Validate Rules
Validate rules check incoming resources against conditions. If the check fails, the request is blocked (enforce mode) or logged (audit mode).
Example 1: Require Resource Limits on All Containers
This is the most common policy. Without resource limits, a single pod can consume an entire node's resources and starve every other workload scheduled on it.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-resource-limits
  annotations:
    policies.kyverno.io/title: Require Resource Limits
    policies.kyverno.io/description: >-
      All containers must specify CPU and memory limits.
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: check-resource-limits
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "All containers must have CPU and memory limits defined."
      pattern:
        spec:
          containers:
          - resources:
              limits:
                memory: "?*"
                cpu: "?*"

The ?* pattern means "any non-empty value" — the container must have the field set to something.
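To make the pattern concrete, here is a hypothetical container spec that would pass this policy, next to one that would be denied:

```yaml
# Admitted: both limits are non-empty
containers:
- name: web            # hypothetical container name
  image: nginx:1.25.4
  resources:
    limits:
      memory: "256Mi"
      cpu: "250m"

# Denied: the limits block is missing entirely
containers:
- name: web
  image: nginx:1.25.4
```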
Example 2: Block the latest Tag
Using latest in production is a deployment anti-pattern. You never know exactly which version is running, rollbacks become impossible, and debugging is a nightmare.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: block-latest-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: >-
        Using the 'latest' tag is not allowed.
        Specify an explicit image tag (e.g., nginx:1.25.4).
      pattern:
        spec:
          containers:
          - image: "!*:latest & !*:*latest*"
          =(initContainers):
          - image: "!*:latest & !*:*latest*"

Example 3: Require Specific Labels
Labels are critical for cost allocation, ownership tracking, and service discovery. This policy ensures every Deployment has app.kubernetes.io/name and team labels.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-labels
spec:
  validationFailureAction: Enforce
  background: true
  rules:
  - name: check-required-labels
    match:
      any:
      - resources:
          kinds:
          - Deployment
          - StatefulSet
          - DaemonSet
    validate:
      message: >-
        The labels 'app.kubernetes.io/name' and 'team' are required
        on all Deployments, StatefulSets, and DaemonSets.
      pattern:
        metadata:
          labels:
            app.kubernetes.io/name: "?*"
            team: "?*"

Mutate Rules
Mutate rules automatically modify resources as they are created or updated. This is powerful for injecting defaults without burdening developers.
Example: Auto-Add Default Security Context
Instead of rejecting pods that lack a security context, you can automatically add one:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-security-context
spec:
  rules:
  - name: add-run-as-nonroot
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        spec:
          containers:
          - (name): "*"
            securityContext:
              +(runAsNonRoot): true
              +(readOnlyRootFilesystem): true
              +(allowPrivilegeEscalation): false

The +() syntax means "add if not present" — it will not overwrite values that developers have explicitly set.
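To illustrate with a hypothetical pod spec, a container submitted without any securityContext would be stored with the defaults merged in, roughly like this:

```yaml
# Submitted by the developer (image name is illustrative)
spec:
  containers:
  - name: app
    image: ghcr.io/my-org/app:1.2.0

# Stored after Kyverno's mutation
spec:
  containers:
  - name: app
    image: ghcr.io/my-org/app:1.2.0
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
```

If the developer had explicitly set any of these fields, say runAsNonRoot: false, the +() anchor would leave that value untouched.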
Generate Rules
Generate rules automatically create companion resources when a triggering resource appears. This is unique to Kyverno and incredibly useful.
Example: Auto-Create NetworkPolicy for Every New Namespace
Every namespace should have a default-deny NetworkPolicy. Instead of relying on humans to remember this, Kyverno can generate it automatically:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-default-network-policy
spec:
  rules:
  - name: default-deny-ingress
    match:
      any:
      - resources:
          kinds:
          - Namespace
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - kyverno
    generate:
      synchronize: true
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny-ingress
      namespace: "{{request.object.metadata.name}}"
      data:
        spec:
          podSelector: {}
          policyTypes:
          - Ingress

When synchronize: true is set, Kyverno will recreate the NetworkPolicy if someone deletes it — enforcing the policy continuously, not just at creation time.
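You can watch the rule in action with a throwaway namespace (the namespace name here is illustrative):

```shell
# Creating a namespace should trigger generation of the NetworkPolicy
kubectl create namespace demo-app
kubectl get networkpolicy -n demo-app

# Because synchronize: true is set, Kyverno recreates the policy after a delete
kubectl delete networkpolicy default-deny-ingress -n demo-app
kubectl get networkpolicy -n demo-app
```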
Image Verification
Kyverno can verify container image signatures using Cosign and Notary. This is critical for software supply chain security:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-cosign-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "ghcr.io/my-org/*"
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
              -----END PUBLIC KEY-----

Policy Reports — Auditing Existing Resources
Kyverno does not just evaluate new resources. The background controller continuously scans existing resources and generates PolicyReport and ClusterPolicyReport objects:
# View cluster-wide policy violations
kubectl get clusterpolicyreport -o wide
# Get detailed results
kubectl get policyreport -n production -o yaml

This is essential for understanding your current compliance posture — not just catching new violations.
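The reports are ordinary Kubernetes objects, so scripts and dashboards can consume them directly. A trimmed, illustrative sketch of what a PolicyReport looks like (names and counts are made up):

```yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: cpol-require-resource-limits   # illustrative name
  namespace: production
summary:
  pass: 12
  fail: 2
results:
- policy: require-resource-limits
  rule: check-resource-limits
  result: fail
  message: "All containers must have CPU and memory limits defined."
```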
Best Practices for Running Kyverno in Production
1. Start in audit mode, then switch to enforce.
Set validationFailureAction: Audit when rolling out new policies. Review the PolicyReports to understand impact before switching to Enforce. Breaking existing workloads on day one is a fast way to lose developer trust.
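A useful middle ground is validationFailureActionOverrides, which lets a single policy enforce in selected namespaces while auditing everywhere else. For example, you can enforce in a canary namespace first (the namespace name below is illustrative):

```yaml
spec:
  validationFailureAction: Audit
  validationFailureActionOverrides:
  - action: Enforce
    namespaces:
    - staging
```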
2. Exclude system namespaces.
Always exclude kube-system, kyverno, and other infrastructure namespaces from your policies. A policy that blocks Kyverno's own pods from deploying will lock you out.
spec:
  rules:
  - name: my-rule
    exclude:
      any:
      - resources:
          namespaces:
          - kube-system
          - kyverno
          - cert-manager

3. Use annotations for documentation. Kyverno supports standard annotations that show up in policy reports and error messages:
metadata:
  annotations:
    policies.kyverno.io/title: Require Resource Limits
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/severity: high
    policies.kyverno.io/description: >-
      Detailed explanation of why this policy exists.

4. Store policies in Git alongside your manifests. Kyverno policies are Kubernetes YAML — they belong in your GitOps repo. Use ArgoCD or Flux to deploy them. This gives you version control, pull request reviews, and audit trails for every policy change.
5. Monitor Kyverno itself. Kyverno exposes Prometheus metrics on port 8000. Monitor admission latency, policy evaluation counts, and webhook failures. A slow or failing Kyverno webhook directly impacts your cluster's API responsiveness.
6. Set resource limits on Kyverno pods. Kyverno's memory usage scales with the number of policies and cluster size. Start with 512Mi memory and 200m CPU per controller, then adjust based on actual usage.
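As a starting point with the Helm chart, that might look like the values fragment below. The exact key paths differ between chart versions, so confirm them with helm show values kyverno/kyverno:

```yaml
# values.yaml (key paths are chart-version dependent)
admissionController:
  container:
    resources:
      requests:
        cpu: 100m
        memory: 256Mi
      limits:
        cpu: 200m
        memory: 512Mi
```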
7. Use the Kyverno CLI for testing in CI.
The kyverno CLI lets you test policies against manifests locally before deploying:
# Install the CLI
brew install kyverno
# Test a policy against a resource
kyverno apply policy.yaml --resource deployment.yaml
# Run all policies against all resources in a directory
kyverno apply ./policies/ --resource ./manifests/

This integrates perfectly into CI pipelines — catch policy violations in pull requests before they ever reach the cluster.
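A sketch of wiring this into GitHub Actions, assuming policies and manifests live in ./policies/ and ./manifests/ (the workflow file and job names are illustrative; GitHub-hosted Ubuntu runners ship with Homebrew preinstalled, so the same install command from above works in CI):

```yaml
# .github/workflows/policy-check.yaml (illustrative)
name: policy-check
on: [pull_request]
jobs:
  kyverno:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Install Kyverno CLI
      run: brew install kyverno
    - name: Apply policies to manifests
      run: kyverno apply ./policies/ --resource ./manifests/
```

If any manifest violates a policy, kyverno apply exits non-zero and the pull request check fails.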
Wrapping Up
Kyverno solves one of Kubernetes' biggest operational gaps: enforcing consistent standards across teams and namespaces without writing code. Its YAML-native approach means the same engineers who write Deployments and Services can write policies. Validate rules catch misconfigurations, mutate rules inject sane defaults, and generate rules automate boilerplate — all through the same admission control mechanism that Kubernetes already provides.
If you are running Kubernetes in production without a policy engine, you are relying on documentation and discipline to prevent misconfigurations. That works until it does not. Start with three or four high-impact policies (resource limits, no latest tag, required labels, default network policies), run them in audit mode for a week, then switch to enforce.
For hands-on practice with Kyverno and Kubernetes security, check out KodeKloud's lab-based courses — they let you break things in real clusters without consequences.
Related Articles
ArgoCD vs Flux vs Jenkins — GitOps Comparison 2026
A deep-dive comparison of the three most popular GitOps and CI/CD tools — ArgoCD, Flux CD, and Jenkins. Learn which one fits your team, use case, and Kubernetes setup.
Build a Complete CI/CD Pipeline with GitHub Actions + ArgoCD + EKS (2026)
A full project walkthrough — from a simple app to a production-grade GitOps pipeline with automated builds, image scanning, and deployments to AWS EKS using ArgoCD.
Build a DevSecOps Pipeline with Trivy, SonarQube, and OPA from Scratch (2026)
Step-by-step project walkthrough: add security scanning, code quality gates, and policy enforcement to a GitHub Actions pipeline. Real configs, production-ready.