
What is gRPC? Explained Simply for DevOps Engineers

gRPC is replacing REST in microservices — but what is it and why should DevOps engineers care? Here's a plain-English explanation with Kubernetes examples.

DevOpsBoys · May 11, 2026 · 4 min read

You keep seeing gRPC in job descriptions, Kubernetes configs, and service mesh docs. But nobody explains it simply.

Here's what gRPC is, why it exists, and what it means for you as a DevOps engineer.


What is gRPC?

gRPC is a way for services to communicate. Like REST, but faster, stricter, and designed for microservices.

The "g" stands for Google, which built gRPC and open-sourced it in 2015. It's now used by Netflix, Dropbox, Cisco, CoreOS, and many other cloud-native companies.


REST vs gRPC — The Simple Difference

REST sends JSON over HTTP/1.1:

POST /api/users HTTP/1.1
Content-Type: application/json

{"name": "Shubham", "role": "devops"}

Human-readable. Flexible. Slow for high-volume internal services.

gRPC sends binary data over HTTP/2:

protobuf
message CreateUserRequest {
  string name = 1;
  string role = 2;
}

Not human-readable. Strict schema. Typically several times faster for high-volume internal traffic, since binary payloads are smaller and cheaper to parse than JSON.
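You can see the size difference without any tooling. Here is a minimal sketch in Python that hand-encodes the message above using protobuf's wire format (a tag byte, a length byte, then the UTF-8 bytes for each string field) and compares it to the equivalent JSON payload. This is a hand-rolled illustration, not the official protobuf library:

```python
import json

def encode_len_delimited(field_number: int, value: str) -> bytes:
    """Encode a string field in protobuf wire format:
    tag = (field_number << 3) | wire_type 2 (length-delimited),
    then a length byte, then the UTF-8 payload.
    (Single-byte varints only, which is enough for short strings.)"""
    data = value.encode("utf-8")
    return bytes([(field_number << 3) | 2, len(data)]) + data

# CreateUserRequest { string name = 1; string role = 2; }
pb = encode_len_delimited(1, "Shubham") + encode_len_delimited(2, "devops")
js = json.dumps({"name": "Shubham", "role": "devops"}).encode("utf-8")

print(len(pb), "bytes as protobuf vs", len(js), "bytes as JSON")
```

The binary form carries no field names and no quoting or punctuation, which is where most of the savings come from.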


How gRPC Works

Step 1: Define the Service Contract (Proto file)

You write a .proto file that defines what functions exist and what data they accept:

protobuf
// user.proto
syntax = "proto3";
 
service UserService {
  rpc CreateUser (CreateUserRequest) returns (User);
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListUsers (ListUsersRequest) returns (stream User);  // streaming!
}
 
message CreateUserRequest {
  string name = 1;
  string role = 2;
}
 
message User {
  string id = 1;
  string name = 2;
  string role = 3;
  int64 created_at = 4;
}

Step 2: Generate Code Automatically

The protoc compiler generates client and server code in your language of choice from the proto file:

bash
# Generate Go code
protoc --go_out=. --go-grpc_out=. user.proto
 
# Generate Python code
python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. user.proto

The generated code handles all serialization, networking, and type checking automatically.

Step 3: Implement and Call

go
// Server (Go)
func (s *server) CreateUser(ctx context.Context, req *pb.CreateUserRequest) (*pb.User, error) {
    user := &pb.User{
        Id:   uuid.New().String(),
        Name: req.Name,
        Role: req.Role,
    }
    return user, nil
}

python
# Client (Python) calling Go server
import grpc
import user_pb2_grpc, user_pb2
 
channel = grpc.insecure_channel('user-service:50051')  # plaintext; use secure_channel with TLS in production
stub = user_pb2_grpc.UserServiceStub(channel)
 
response = stub.CreateUser(user_pb2.CreateUserRequest(name="Shubham", role="devops"))
print(response.id)

Why DevOps Engineers Need to Know gRPC

1. Kubernetes Health Checks Use gRPC

Kubernetes has supported native gRPC liveness and readiness probes since 1.24 (beta, enabled by default; GA in 1.27):

yaml
livenessProbe:
  grpc:
    port: 50051
    service: ""   # name registered with the gRPC health server; "" checks overall server health
  initialDelaySeconds: 10
  periodSeconds: 5

This is now the standard for gRPC services, with no more HTTP health endpoint workarounds. Note that the container must implement the standard grpc.health.v1.Health service for the probe to succeed.
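On clusters older than 1.24 (or with the feature gate disabled), the common workaround is an exec probe running the grpc-health-probe binary baked into the image. A sketch, assuming the binary is installed at /bin/grpc_health_probe:

```yaml
livenessProbe:
  exec:
    command: ["/bin/grpc_health_probe", "-addr=:50051"]
  initialDelaySeconds: 10
  periodSeconds: 5
```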

2. Service Mesh (Istio, Linkerd) Has Special gRPC Support

gRPC uses long-lived HTTP/2 connections, and kube-proxy balances at the connection level: every request sent over an established connection lands on the same pod, no matter how many replicas you run.
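A toy model makes the failure mode obvious. This is not real networking, just the balancing logic: L4 (kube-proxy style) picks a pod once when the connection is opened, while an L7 gRPC-aware proxy picks per request (the pod names are made up):

```python
import itertools

pods = ["pod-a", "pod-b", "pod-c"]

# L4 (kube-proxy style): the pod is chosen once at dial time,
# and every request reuses that single HTTP/2 connection.
rr = itertools.cycle(pods)
connection_pod = next(rr)          # chosen when the connection opens
l4_hits = [connection_pod for _ in range(9)]

# L7 (gRPC-aware proxy): the pod is chosen per request.
rr = itertools.cycle(pods)
l7_hits = [next(rr) for _ in range(9)]

print(set(l4_hits))  # one pod receives everything
print(set(l7_hits))  # requests spread across all pods
```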

Service meshes fix this with L7 gRPC-aware load balancing:

yaml
# Istio VirtualService for gRPC load balancing
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: user-service
spec:
  hosts:
    - user-service
  http:
    - route:
        - destination:
            host: user-service
            port:
              number: 50051

Without a service mesh or client-side load balancing, a plain ClusterIP Service won't spread gRPC traffic across pods.
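If a mesh is overkill, another common pattern is a headless Service plus client-side load balancing: with clusterIP set to None, DNS returns individual pod IPs instead of a single virtual IP, and a gRPC client using a dns:/// target with the round_robin policy can spread requests itself. A sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
spec:
  clusterIP: None   # headless: DNS resolves to pod IPs, not one VIP
  selector:
    app: user-service
  ports:
    - port: 50051
```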

3. Nginx Ingress Needs Special Config for gRPC

gRPC requires HTTP/2. Nginx needs explicit config:

yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
    - hosts: [user-api.example.com]
      secretName: tls-secret
  rules:
    - host: user-api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 50051

In practice gRPC runs over TLS in production, and ingress-nginx only proxies gRPC on a TLS-enabled listener, hence the tls block above. The GRPC backend protocol annotation tells Nginx to speak HTTP/2 to the backend.

4. Observability Is Different for gRPC

REST has HTTP status codes (200, 404, 500). gRPC has its own status codes:

| gRPC Code | HTTP Equivalent | Meaning |
| --- | --- | --- |
| OK (0) | 200 | Success |
| NOT_FOUND (5) | 404 | Resource not found |
| PERMISSION_DENIED (7) | 403 | Caller lacks permission |
| UNAVAILABLE (14) | 503 | Service down |
| DEADLINE_EXCEEDED (4) | 504 | Timeout |
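When wiring dashboards or alerts, it helps to normalize both worlds. A small sketch mapping the gRPC codes above to the HTTP statuses your existing REST alerts already understand (this is the conventional mapping, not pulled from any particular library):

```python
# Conventional gRPC-code to HTTP-status mapping for unified alerting.
GRPC_TO_HTTP = {
    0: 200,    # OK
    4: 504,    # DEADLINE_EXCEEDED
    5: 404,    # NOT_FOUND
    7: 403,    # PERMISSION_DENIED
    14: 503,   # UNAVAILABLE
}

def is_server_error(grpc_code: int) -> bool:
    """Treat anything mapping to HTTP 5xx as a server-side failure.
    Unknown codes default to 500 so they are never silently ignored."""
    return GRPC_TO_HTTP.get(grpc_code, 500) >= 500

print(is_server_error(14))  # UNAVAILABLE -> 503 -> True
print(is_server_error(5))   # NOT_FOUND -> 404 -> False
```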

For Prometheus metrics, you need gRPC-specific exporters:

yaml
# Prometheus scrape config for gRPC metrics
- job_name: 'grpc-services'
  static_configs:
    - targets: ['user-service:9090']  # metrics port, not gRPC port

gRPC Streaming — The Killer Feature

Unlike REST, gRPC supports streaming in both directions:

protobuf
// Server streaming — server sends multiple responses
rpc WatchPodEvents (WatchRequest) returns (stream PodEvent);
 
// Client streaming — client sends multiple requests
rpc UploadLogs (stream LogEntry) returns (UploadResponse);
 
// Bidirectional streaming
rpc Chat (stream Message) returns (stream Message);

This pattern is core to Kubernetes itself: etcd's watch API is a gRPC stream, and the kubelet talks to container runtimes over gRPC via the CRI.
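In Python's generated gRPC code, a server-streaming handler is just a generator: you yield each response and the framework pushes it over the open HTTP/2 stream. A sketch with the protobuf messages faked as plain dicts, so the shape is visible without the generated pb2 modules:

```python
import time

def WatchPodEvents(request, context=None):
    """Server-streaming handler shape: yield one response at a time.
    In real generated code, `request` is a pb2 message and each yielded
    value is a pb2 PodEvent; dicts stand in for both here."""
    for i in range(3):
        yield {"pod": f"user-service-{i}", "event": "Started", "ts": time.time()}

# The client side of a server stream is plain iteration:
for event in WatchPodEvents({"namespace": "default"}):
    print(event["pod"], event["event"])
```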


When to Use gRPC vs REST

| Use gRPC when | Use REST when |
| --- | --- |
| Internal microservice-to-microservice calls | Public APIs consumed by browsers |
| High throughput, low latency required | Simple CRUD with JSON |
| Multiple language teams sharing a contract | Team doesn't want schema management overhead |
| Streaming data (logs, events, telemetry) | Webhooks and callbacks |
| Kubernetes-native tools (kubelet, etcd) | External integrations |

Key Takeaway for DevOps Engineers

You don't need to write gRPC services. But you do need to:

  1. Configure Kubernetes health probes correctly for gRPC services
  2. Understand why gRPC load balancing breaks without a service mesh
  3. Configure Nginx/Envoy ingress with GRPC backend protocol
  4. Know the difference between gRPC status codes in your monitoring

When a developer says "the gRPC service isn't load balancing," you'll know exactly what to check.

For deep-dive networking and Kubernetes configuration labs, KodeKloud covers service mesh and networking in their CKA and CKAD courses.
