AWS RDS Connection Timeout from EKS Pods — How to Fix It
EKS pods can't connect to RDS? Fix RDS connection timeouts from Kubernetes — covers security groups, VPC peering, subnet routing, and IAM auth issues.
Your app is running on EKS. Your RDS database is in the same VPC. But pods can't connect — you get Connection timed out or could not connect to server: Connection refused. Here's the full diagnostic playbook.
Most Common Causes
- Security group not allowing inbound from EKS nodes/pods
- RDS is in a private subnet with no route from pod subnet
- Wrong endpoint in connection string
- IAM authentication misconfigured (for RDS IAM auth)
- RDS in a different VPC without peering
Step 1: Verify the Basics
# Get your pod's IP
kubectl get pod <pod-name> -o wide
# NAME READY STATUS RESTARTS IP NODE
# my-app-xyz 1/1 Running 0 10.0.1.145 ip-10-0-1-10
# Get RDS endpoint
aws rds describe-db-instances \
--query 'DBInstances[*].[DBInstanceIdentifier,Endpoint.Address,Endpoint.Port]' \
--output table
Step 2: Test Connectivity from Inside the Pod
# Install netcat in a debug container, or use a test pod
kubectl run debug --image=nicolaka/netshoot --rm -it -- bash
# Test TCP connectivity to RDS
nc -zv your-rds-endpoint.rds.amazonaws.com 5432
# Connection to your-rds-endpoint ... 5432 port [tcp/postgresql] succeeded!
# OR
# nc: connect to your-rds-endpoint port 5432 (tcp) failed: Connection timed out
If you get Connection timed out → it's a network/security group issue.
If you get Connection refused → RDS is reachable but rejecting (wrong port, DB not running).
If you get Could not resolve host → DNS issue.
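The interpretation above can be sketched as a small triage helper. This is a hypothetical convenience function, not part of any tool; it just maps nc's error text to the likely cause:

```shell
# Hypothetical helper: map nc's output to the likely cause.
diagnose_nc() {
  case "$1" in
    *"Connection timed out"*) echo "network/security group issue" ;;
    *"Connection refused"*)   echo "reachable but rejecting: check port and DB status" ;;
    *"resolve"*)              echo "DNS issue" ;;
    *"succeeded"*)            echo "connectivity OK" ;;
    *)                        echo "unknown" ;;
  esac
}

diagnose_nc "nc: connect to rds-host port 5432 (tcp) failed: Connection timed out"
# → network/security group issue
```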
Problem 1: Security Group Not Allowing EKS Traffic
This is the most common cause.
Find your RDS security group:
aws rds describe-db-instances \
--db-instance-identifier your-db \
--query 'DBInstances[0].VpcSecurityGroups[*].VpcSecurityGroupId'
Find your EKS node security group:
aws eks describe-cluster --name your-cluster \
--query 'cluster.resourcesVpcConfig.clusterSecurityGroupId'
Check RDS security group inbound rules:
aws ec2 describe-security-groups \
--group-ids sg-rds-id \
--query 'SecurityGroups[0].IpPermissions'
Fix: Add inbound rule to RDS security group:
# Allow EKS node security group to reach RDS on port 5432 (PostgreSQL)
aws ec2 authorize-security-group-ingress \
--group-id sg-rds-id \
--protocol tcp \
--port 5432 \
--source-group sg-eks-nodes-id
Or in Terraform:
resource "aws_security_group_rule" "eks_to_rds" {
type = "ingress"
from_port = 5432
to_port = 5432
protocol = "tcp"
security_group_id = aws_security_group.rds.id
source_security_group_id = aws_security_group.eks_nodes.id
description = "Allow EKS nodes to connect to RDS"
}
Problem 2: Pods Use Different Security Group (VPC CNI)
When using AWS VPC CNI, pods get IPs from your VPC — but their security group might be different from the node's security group.
Check if Security Groups for Pods is enabled:
kubectl describe daemonset aws-node -n kube-system | grep ENABLE_POD_ENI
If pods have their own security group, you need to allow that security group in RDS, not the node security group.
Fix with SecurityGroupPolicy:
apiVersion: vpcresources.k8s.aws/v1beta1
kind: SecurityGroupPolicy
metadata:
name: my-app-sgp
namespace: production
spec:
podSelector:
matchLabels:
app: my-app
securityGroups:
groupIds:
- sg-pod-security-group-id # must include RDS access
Problem 3: RDS in Wrong Subnet / No Route
If your RDS is in a private subnet and EKS pods are in a different subnet, they need a route.
# Check RDS subnets
aws rds describe-db-subnet-groups \
--db-subnet-group-name your-subnet-group \
--query 'DBSubnetGroups[0].Subnets[*].SubnetIdentifier'
# Check route table for EKS pod subnet
aws ec2 describe-route-tables \
--filters "Name=association.subnet-id,Values=subnet-eks-id"
Both should be in the same VPC. Different availability zones are fine; the VPC's local route covers every subnet in the VPC.
If RDS is in a different VPC, you need VPC peering or AWS Transit Gateway. No amount of security group rules will fix cross-VPC access without routing.
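The same-VPC check boils down to comparing two VPC IDs. A sketch, with illustrative values you'd paste in from `aws rds describe-db-instances` (the `DBSubnetGroup.VpcId` field) and `aws eks describe-cluster` (the `cluster.resourcesVpcConfig.vpcId` field):

```shell
# Illustrative VPC IDs: replace with the real values from the describe calls.
RDS_VPC="vpc-0123456789abcdef0"
EKS_VPC="vpc-0123456789abcdef0"

if [ "$RDS_VPC" = "$EKS_VPC" ]; then
  echo "same VPC: check security groups and route tables"
else
  echo "different VPCs: peering or Transit Gateway required"
fi
```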
Problem 4: Wrong RDS Endpoint
Don't use the IP address of RDS — always use the DNS endpoint. The IP can change after a failover.
# Wrong - using IP directly
DB_HOST=10.0.2.55 # ← this can change
# Correct - using DNS endpoint
DB_HOST=mydb.abc123def456.us-east-1.rds.amazonaws.com
Check your connection string in the pod:
kubectl exec -it <pod-name> -- env | grep -i db_host
kubectl exec -it <pod-name> -- env | grep -i database_urlProblem 5: RDS IAM Authentication Issues
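A quick sanity check on whatever value comes back: make sure it's an RDS DNS endpoint, not a raw IP. A sketch (the hostname here is illustrative):

```shell
# Paste in the DB_HOST value pulled from the pod environment.
DB_HOST="mydb.abc123def456.us-east-1.rds.amazonaws.com"

# Warn unless the host ends in .rds.amazonaws.com; raw IPs can change after failover.
case "$DB_HOST" in
  *.rds.amazonaws.com)
    echo "DB_HOST is an RDS DNS endpoint" ;;
  *)
    echo "WARNING: DB_HOST is not an RDS DNS endpoint; use the endpoint, not an IP" ;;
esac
```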
If you're using IAM database authentication (no passwords), you need the correct setup.
Generate a token:
aws rds generate-db-auth-token \
--hostname mydb.abc123.us-east-1.rds.amazonaws.com \
--port 5432 \
--region us-east-1 \
--username mydbuser
If this fails, the IAM role doesn't have the rds-db:connect permission:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "rds-db:connect",
"Resource": "arn:aws:rds-db:us-east-1:123456789:dbuser:db-ABCDEFG/mydbuser"
}
]
}
For EKS pods, attach this policy to the IAM role associated with the pod's service account (IRSA).
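For reference, an IRSA-wired service account looks like this; the role name and account ID are illustrative, and the role must carry the rds-db:connect policy shown above:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: production
  annotations:
    # Illustrative role ARN: the role trusted by your cluster's OIDC provider.
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/my-app-rds-access
```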
Problem 6: RDS Not Publicly Accessible (Expected)
RDS should not be publicly accessible. Confirm:
aws rds describe-db-instances \
--db-instance-identifier your-db \
--query 'DBInstances[0].PubliclyAccessible'
# false ← correct for production
If it says true and you're in a corporate environment, flag it for security review.
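If you fold this into a compliance script, the decision is a plain string compare on the CLI output (illustrative):

```shell
# Paste in the output of the describe-db-instances query above.
PUBLICLY_ACCESSIBLE="false"

if [ "$PUBLICLY_ACCESSIBLE" = "true" ]; then
  echo "FLAG: instance is publicly accessible; escalate for security review"
else
  echo "OK: instance is not publicly accessible"
fi
```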
Full Diagnostic Checklist
# 1. Pod can resolve RDS DNS?
kubectl exec -it <pod> -- nslookup your-rds-endpoint.rds.amazonaws.com
# 2. Pod can reach RDS port?
kubectl run debug --image=nicolaka/netshoot --rm -it -- \
nc -zv your-rds-endpoint.rds.amazonaws.com 5432
# 3. RDS security group allows EKS?
aws ec2 describe-security-groups --group-ids sg-rds-id \
--query 'SecurityGroups[0].IpPermissions'
# 4. RDS is in available state?
aws rds describe-db-instances --db-instance-identifier your-db \
--query 'DBInstances[0].DBInstanceStatus'
# "available" is good
# 5. Check RDS logs for refused connections
aws rds download-db-log-file-portion \
--db-instance-identifier your-db \
--log-file-name error/postgresql.log \
--output text
Prevention: Infrastructure as Code
Define your RDS and EKS security group rules in Terraform so misconfigurations are caught in code review rather than in production:
resource "aws_db_instance" "main" {
identifier = "production-db"
engine = "postgres"
instance_class = "db.t3.medium"
vpc_security_group_ids = [aws_security_group.rds.id]
db_subnet_group_name = aws_db_subnet_group.main.name
# never set publicly_accessible = true in production
}
resource "aws_security_group" "rds" {
name = "rds-sg"
vpc_id = aws_vpc.main.id
ingress {
from_port = 5432
to_port = 5432
protocol = "tcp"
security_groups = [aws_security_group.eks_nodes.id]
}
}
Resources
- AWS VPC Networking Guide — understand VPC routing
- EKS Best Practices Guide — EKS subnet design
- AWS RDS Security Groups Docs
- Ultimate AWS Certified DevOps Engineer on Udemy — covers RDS + EKS networking in depth
RDS connectivity issues almost always come down to one of these six problems. Run the diagnostic checklist top to bottom and you'll typically find the culprit within 10 minutes.