# circuit-breaker capability
circuit-breaker adds a circuit breaker wrapper to your service that stops cascading failures when external dependencies degrade.
## Why You Need This

Every service that calls an external dependency — an HTTP API, a database, a message broker — inherits that dependency’s failure modes. When the external service slows down, your requests pile up waiting for responses. Your connection pool fills. New requests start failing. Your service is now slow too, and your callers are inheriting the failure. That’s a cascade.
A circuit breaker solves this by tracking failure rates and tripping open when they exceed a threshold. Once open, calls fail fast without attempting the network hop. After a configurable timeout, the breaker moves to half-open and allows a probe request through. If it succeeds, the breaker closes and normal traffic resumes. If it fails, the breaker stays open.
The difference between a circuit breaker and a timeout is scope. A timeout protects a single call. A circuit breaker protects the system — it accumulates failure evidence across calls and stops sending traffic to a dependency that’s clearly unhealthy. Without one, every caller in your system will queue up against the failing service until thread or goroutine pools are exhausted.
## What You Get

- Circuit breaker wrapper with open/half-open/closed state machine
- Configurable failure threshold and reset timeout
- Event logging when the breaker changes state
- Typed error for fast-fail responses (`ErrCircuitOpen`)
## Go Implementation

The Go capability uses sony/gobreaker — a clean, dependency-light circuit breaker library with a well-tested state machine. The scaffold wraps it in a thin function that takes any operation and returns an error with clear semantics.
Configure the breaker through the `gobreaker.Settings` struct. The key parameters are `MaxRequests` (how many requests pass through in half-open state), `Interval` (the rolling window for counting failures), and `Timeout` (how long to stay open before probing).
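As a sketch, a settings block wiring those parameters together might look like the following. The name and threshold values are illustrative, not the scaffold's defaults; `ReadyToTrip` and `OnStateChange` are gobreaker's hooks for the trip condition and state-change logging:

```go
import (
	"log"
	"time"

	"github.com/sony/gobreaker"
)

// Illustrative thresholds, not necessarily the scaffold's defaults.
var breaker = gobreaker.NewCircuitBreaker(gobreaker.Settings{
	Name:        "orders-api",
	MaxRequests: 1,                // probes allowed through in half-open
	Interval:    60 * time.Second, // rolling window for failure counts
	Timeout:     10 * time.Second, // how long to stay open before probing
	ReadyToTrip: func(c gobreaker.Counts) bool {
		return c.ConsecutiveFailures >= 5 // trip after 5 straight failures
	},
	OnStateChange: func(name string, from, to gobreaker.State) {
		log.Printf("circuit breaker %q: %s -> %s", name, from, to)
	},
})
```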
```go
// Wrap any fallible call — the breaker handles state transitions.
result, err := breaker.Execute(func() (interface{}, error) {
	return client.GetOrder(ctx, orderID)
})
if err == gobreaker.ErrOpenState {
	// Fast fail — dependency is known unhealthy, skip the network hop.
	return nil, ErrServiceUnavailable
}
```

The breaker’s `OnStateChange` callback is wired to structured logging so state transitions are visible in your observability stack. When a breaker trips in production, you want a log entry with the name, new state, and timestamp — not silence.
## TypeScript Implementation

The TypeScript capability uses opossum — a mature Node.js circuit breaker with a rich event API. The scaffold creates a typed wrapper factory function and wires the open/halfOpen/close events to structured logging.

Opossum wraps an async function and returns a `CircuitBreaker` instance. Call `breaker.fire(...args)` instead of the raw function.
```ts
import CircuitBreaker from 'opossum';

const breaker = new CircuitBreaker(fetchOrder, {
  timeout: 3000,                // fail the call if it takes longer than 3s
  errorThresholdPercentage: 50, // trip open at 50% failure rate
  resetTimeout: 10000,          // wait 10s before probing in half-open
});

breaker.on('open', () => logger.warn('circuit breaker opened', { name: 'fetchOrder' }));
breaker.on('halfOpen', () => logger.info('circuit breaker probing', { name: 'fetchOrder' }));
breaker.on('close', () => logger.info('circuit breaker closed', { name: 'fetchOrder' }));

const order = await breaker.fire(orderID);
```

The event callbacks are the critical part. A circuit breaker that silently trips provides no operational signal — you find out it tripped when users complain, not when the breaker opens.
## Pairing with Retry

Circuit breaker and retry serve different purposes and work well together, but the order matters. Retry handles transient failures — a single request that fails due to a momentary network blip. The circuit breaker handles sustained failures — a dependency that’s been degraded for multiple requests.
The correct composition: retry wraps the individual call, circuit breaker wraps the retry. If a call fails and retry exhausts its attempts, that counts as one failure against the circuit breaker. The breaker accumulates evidence across multiple callers over time.
## Add to Your Service

```sh
verikt new my-service --cap circuit-breaker
# or add to an existing service:
verikt add circuit-breaker
```

## Related Capabilities

- http-client
- timeout
- bulkhead