# Distributed Locks with Redis & Go
In a distributed system, multiple instances of a service might try to access a shared resource simultaneously. Regular mutexes only work within a single process. For distributed safety, we need Distributed Locks.
## The Race Condition Problem
Imagine a promotional event where only 10 users can claim a limited-time coupon. If two server instances check the count at the same millisecond, they might both see count=9 and each issue a coupon, exceeding the limit. This is a classic race condition: a lost update in a check-then-act sequence.
## Implementing with Redis (SETNX)
The most common way to implement a lock in Redis is using the SET command with NX (Set if Not Exists) and PX (Expiration in milliseconds).
```go
// Acquire the lock: SET key value NX PX is an atomic check-and-set.
lockKey := "lock:coupon_claim_123"
requestID := uuid.New().String() // unique owner token for safe release
ttl := 5 * time.Second

ok, err := rdb.SetNX(ctx, lockKey, requestID, ttl).Result()
if err != nil || !ok {
	return fmt.Errorf("could not acquire lock")
}

// Ensure release with Lua: GET and DEL must happen atomically,
// and only the owner (the value matching requestID) may delete the key.
defer func() {
	luaScript := `
if redis.call("get", KEYS[1]) == ARGV[1] then
    return redis.call("del", KEYS[1])
else
    return 0
end`
	rdb.Eval(ctx, luaScript, []string{lockKey}, requestID)
}()
```

## High-Level Lock Flow
A distributed lock ensures that only one worker at a time can enter a critical section across multiple nodes.
## Distributed Failure Analysis
| Failure | Impact | Mitigation Strategy |
|---|---|---|
| Node crash before `DEL` | Lock stays held until the TTL expires. | Short TTL + watchdog pattern: renew the lock every 10% of the TTL. |
| Redis master fails | The new master might not have the lock. | Redlock pattern: acquire the lock on a majority (e.g., 3 of 5) of independent nodes. |
| Network partition | Multiple nodes might believe they hold the lock. | Fencing token: a monotonically increasing ID issued with every lock acquisition. |
> [!CAUTION]
> **Clock Drift Problem:** In a distributed system, clocks are never perfectly synchronized. Never rely on absolute system time for lock logic; always use relative TTLs.
## Performance Considerations
- Network Overhead: Every lock/unlock is a network roundtrip. Use Pipeline/Lua scripts to batch operations.
- Contention: If 1000 nodes try to acquire the same lock, Redis might become a bottleneck. Sharding the lock keys is a common industry tactic.
- Wait Strategy: Instead of busy-waiting, use Redis Pub/Sub to notify waiting nodes when a lock is released.
> [!TIP]
> **Expert Tip:** For 99.9% of use cases, a single-instance Redis lock is enough. For the remaining 0.1% (banking settlements, core state changes), use the Redlock algorithm or etcd/Consul for strict consistency.
Using distributed locks correctly is essential for maintaining data integrity in modern backend architectures.