
AWS Lambda Timeout Error — Every Fix (2026)

Lambda function hitting the 15-minute limit or timing out before that? Here's how to find what's slow, increase timeout properly, and redesign for async patterns.

DevOpsBoys · Apr 29, 2026 · 4 min read

Your Lambda function is throwing Task timed out after X seconds. Here's exactly what to check and how to fix it.


Understanding Lambda Timeout Limits

Lambda has a hard maximum of 15 minutes (900 seconds). If your function needs longer, it's the wrong tool — use ECS Fargate, EC2, or Step Functions instead.

Common timeout errors:

  • Task timed out after 3.00 seconds — default timeout too low
  • Endpoint request timed out — API Gateway has its own 29-second limit
  • Function runs fine locally but times out in Lambda — cold start or network issue

Fix 1: Increase the Timeout

Via Console:

  1. Lambda → Your function → Configuration → General configuration → Edit
  2. Set Timeout to what you actually need (max 900s)

Via AWS CLI:

bash
aws lambda update-function-configuration \
  --function-name my-function \
  --timeout 60

Via Terraform:

hcl
resource "aws_lambda_function" "my_function" {
  function_name = "my-function"
  timeout       = 60  # seconds
  # ... other config
}

Rule of thumb: Set timeout to 2–3× your P99 execution time, not the maximum. If you're setting it to 900s, your architecture is wrong.
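Once you've pulled recent durations from CloudWatch (the REPORT log lines record Duration per invocation), turning them into a timeout value is simple arithmetic. A minimal sketch — `recommended_timeout` is a hypothetical helper, not an AWS API; it computes a nearest-rank P99 and applies the 2–3× multiplier:

```python
import math

def recommended_timeout(durations_ms, multiplier=2.5, cap_s=900):
    """Suggest a Lambda timeout: multiplier x P99 of observed durations."""
    data = sorted(durations_ms)
    idx = max(0, math.ceil(0.99 * len(data)) - 1)  # nearest-rank P99
    p99_ms = data[idx]
    timeout_s = math.ceil(p99_ms * multiplier / 1000)
    return min(max(timeout_s, 1), cap_s)  # clamp to Lambda's 1-900s range

# e.g. a mostly-fast function with a slow tail
print(recommended_timeout([500] * 50 + [3000] * 50))  # → 8
```

If the result is anywhere near 900, that's your signal to rethink the architecture rather than bump the number.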


Fix 2: Find What's Actually Slow

Before increasing timeout blindly, find the bottleneck.

Enable X-Ray tracing:

bash
aws lambda update-function-configuration \
  --function-name my-function \
  --tracing-config Mode=Active

Add manual timing in your code:

python
import time
import logging

import requests  # assuming the external call uses requests

logger = logging.getLogger()

def handler(event, context):
    start = time.time()

    # DB query
    t1 = time.time()
    result = db.query(...)  # db: your database client, initialized elsewhere
    logger.info(f"DB query: {time.time() - t1:.2f}s")

    # External API
    t2 = time.time()
    response = requests.get(external_api)
    logger.info(f"API call: {time.time() - t2:.2f}s")

    logger.info(f"Total: {time.time() - start:.2f}s")
    return result

Check remaining time in context:

python
def handler(event, context):
    # context.get_remaining_time_in_millis() tells you how much time is left
    if context.get_remaining_time_in_millis() < 5000:  # less than 5s left
        logger.warning("Running out of time!")
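The remaining-time check pays off most in batch-style handlers: stop before the deadline and hand leftovers back to the queue instead of dying mid-item. A sketch, assuming a hypothetical `process` function and SQS-style requeueing by the caller:

```python
def process_until_deadline(items, process, context, buffer_ms=5000):
    """Process items until less than buffer_ms remains; return unprocessed leftovers."""
    for i, item in enumerate(items):
        if context.get_remaining_time_in_millis() < buffer_ms:
            return items[i:]  # caller requeues these (e.g. back to SQS)
        process(item)
    return []
```

With 5s of headroom the function never hits Task timed out mid-item — it exits cleanly and the leftover work gets retried.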

Fix 3: Connection Pool Reuse (Common Culprit)

Initializing DB connections inside the handler = new connection every invocation = slow.

Wrong — connection inside handler:

python
def handler(event, context):
    conn = psycopg2.connect(host=..., database=...)  # new connection every time — slow
    cur = conn.cursor()
    cur.execute("SELECT ...")
    return cur.fetchall()

Right — connection outside handler:

python
import psycopg2

# This runs once per container, not per invocation
conn = psycopg2.connect(host=..., database=...)

def handler(event, context):
    cur = conn.cursor()  # reuses the existing connection
    cur.execute("SELECT ...")
    return cur.fetchall()

Same pattern for Redis, HTTP sessions, and SDK clients.


Fix 4: Cold Start Optimization

First invocation after idle period = cold start = slow.

bash
# Use Provisioned Concurrency for latency-sensitive functions
aws lambda put-provisioned-concurrency-config \
  --function-name my-function \
  --qualifier PROD \
  --provisioned-concurrent-executions 5

Or reduce cold start time:

  • Use smaller deployment packages (zip only what you need)
  • Avoid heavy imports inside handler
  • Use Lambda Layers for shared dependencies
  • Consider arm64 architecture — faster cold starts, 20% cheaper
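Before paying for Provisioned Concurrency, measure how often cold starts actually hit you. A module-level flag is enough — module scope runs once per container, so only the first invocation in each container sees the flag still set. A minimal sketch (the log line is illustrative, not a Lambda-provided metric):

```python
import time

_container_started = time.time()
_cold = True

def handler(event, context):
    global _cold
    if _cold:
        _cold = False
        # Only the first invocation in this container lands here
        print(f"cold start (container age {time.time() - _container_started:.2f}s)")
        return {"cold": True}
    return {"cold": False}
```

Counting the cold-start lines in CloudWatch Logs tells you whether Provisioned Concurrency is worth the money.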

Fix 5: API Gateway 29-Second Limit

If Lambda is behind API Gateway, the integration timeout caps your response time at 29 seconds no matter how high the Lambda timeout is set. (AWS now lets Regional REST APIs raise this limit via a service quota increase, but 29s remains the default.)

For long-running operations behind API Gateway:

python
import json
import uuid

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "..."  # your queue URL

# Pattern: return immediately, process async
def handler(event, context):
    job_id = uuid.uuid4().hex
    # Trigger async processing
    sqs.send_message(
        QueueUrl=QUEUE_URL,
        MessageBody=json.dumps({"id": job_id, "event": event})
    )
    # Return immediately to API Gateway
    return {
        "statusCode": 202,
        "body": json.dumps({"status": "processing", "id": job_id})
    }

Client polls for result or uses WebSocket via API Gateway WebSocket API.
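The polling side can be a second, fast Lambda that reads job status from a store the worker updates. A sketch with an in-memory dict standing in for DynamoDB — `submit_handler`, `status_handler`, and the `JOBS` store are all illustrative names, not AWS APIs:

```python
import json
import uuid

# Stand-in job store; in practice the worker and this handler share DynamoDB
JOBS = {}

def submit_handler(event, context):
    job_id = uuid.uuid4().hex
    JOBS[job_id] = {"status": "processing"}
    # ...enqueue the real work to SQS here...
    return {"statusCode": 202, "body": json.dumps({"id": job_id})}

def status_handler(event, context):
    job = JOBS.get(event["pathParameters"]["id"])
    if job is None:
        return {"statusCode": 404, "body": json.dumps({"error": "unknown job"})}
    return {"statusCode": 200, "body": json.dumps(job)}
```

The worker flips the status to done (and stores the result) when it finishes; the client polls GET /jobs/{id} until then.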


Fix 6: VPC Cold Starts

Lambda in a VPC needs an ENI for networking. Since AWS moved to shared Hyperplane ENIs in 2019 the cold-start penalty is far smaller than it used to be, but VPC attachment still adds setup time and operational constraints.

bash
# Check if your function is in a VPC unnecessarily
aws lambda get-function-configuration \
  --function-name my-function \
  --query 'VpcConfig'

Only put Lambda in VPC if it needs to access RDS, ElastiCache, or other VPC resources. If it's just calling external APIs, remove it from VPC — it'll be faster and cheaper.


When Lambda Is the Wrong Tool

If you consistently need more than 5 minutes, move to:

  • AWS Fargate — containerized tasks, no timeout limit
  • AWS Batch — large-scale batch jobs
  • Step Functions — orchestrate multiple Lambda functions with state
  • ECS on EC2 — full control over runtime

Lambda is designed for short, event-driven functions. Respect that constraint.


Quick checklist:

  • Timeout set appropriately (not default 3s)
  • DB/HTTP connections initialized outside handler
  • X-Ray enabled to find the slow part
  • Not behind API Gateway if > 29s needed
  • Not in VPC unless required
  • Consider async pattern for heavy work