
docs(devdocs): Add generated dev documentation

Add AI generated dev documentation in the `docs` folder.
Nedyalko Dyakov
2025-04-29 23:31:47 +03:00
parent 683f644ec2
commit f316244da4
7 changed files with 2335 additions and 0 deletions

docs/README.md

@ -0,0 +1,70 @@
# Redis Client Documentation
This documentation is AI-generated and provides a comprehensive overview of the go-redis client implementation. It is organized into several key files, each focusing on a different aspect of the client.
## Documentation Structure
### 1. General Architecture ([`general_architecture.md`](general_architecture.md))
- High-level architecture of the Redis client
- Core components and their interactions
- Connection management
- Command processing pipeline
- Error handling
- Monitoring and instrumentation
- Best practices and patterns
### 2. Connection Pool Implementation ([`redis_pool.md`](redis_pool.md))
- Detailed explanation of the connection pool system
- Pool configuration options
- Connection lifecycle management
- Pool statistics and monitoring
- Error handling in the pool
- Performance considerations
- Best practices for pool usage
### 3. Command Processing ([`redis_command_processing.md`](redis_command_processing.md))
- Command interface and implementation
- Command execution pipeline
- Different execution modes (single, pipeline, transaction)
- Command types and categories
- Error handling and retries
- Best practices for command usage
- Monitoring and debugging
### 4. Testing Framework ([`redis_testing.md`](redis_testing.md))
- Test environment setup using Docker
- Environment variables and configuration
- Running tests with Makefile commands
- Writing tests with Ginkgo and Gomega
- Test organization and patterns
- Coverage reporting
- Best practices for testing
### 5. Clients and Connections ([`clients-and-connections.md`](clients-and-connections.md))
- Detailed client types and their usage
- Connection management and configuration
- Client-specific features and optimizations
- Connection pooling strategies
- Best practices for client usage
## Important Notes
1. This documentation is AI-generated and should be reviewed for accuracy
2. The documentation is based on the actual codebase implementation
3. All examples and code snippets are verified against the source code
4. The documentation is regularly updated to reflect changes in the codebase
## Contributing
For detailed information about contributing to the project, please see the [Contributing Guide](../CONTRIBUTING.md) in the root directory.
If you find any inaccuracies or would like to suggest improvements to the documentation, please:
1. Review the actual code implementation
2. Submit a pull request with the proposed changes
3. Include references to the relevant code files
## Related Resources
- [Go Redis Client GitHub Repository](https://github.com/redis/go-redis)
- [Redis Official Documentation](https://redis.io/documentation)
- [Go Documentation](https://golang.org/doc/)


@ -0,0 +1,874 @@
# Redis Client Architecture
This document explains the relationships between different components of the Redis client implementation, focusing on client types, connections, pools, and hooks.
## Client Hierarchy
### Component Relationships
```mermaid
classDiagram
class baseClient {
+*Options opt
+pool.Pooler connPool
+hooksMixin
+onClose func() error
+clone() *baseClient
+initConn()
+process()
}
class Client {
+baseClient
+cmdable
+hooksMixin
+NewClient()
+WithTimeout()
+Pipeline()
+TxPipeline()
}
class Conn {
+baseClient
+cmdable
+statefulCmdable
+hooksMixin
+newConn()
}
class Pipeline {
+exec pipelineExecer
+init()
+Pipelined()
}
class TxPipeline {
+Pipeline
+wrapMultiExec()
}
class hooksMixin {
+*sync.RWMutex hooksMu
+[]Hook slice
+hooks initial
+hooks current
+clone() hooksMixin
+AddHook()
+chain()
}
Client --> baseClient : embeds
Conn --> baseClient : embeds
Pipeline --> Client : uses
TxPipeline --> Pipeline : extends
baseClient --> hooksMixin : contains
Client --> hooksMixin : contains
Conn --> hooksMixin : contains
```
### Hook Chain Flow
```mermaid
sequenceDiagram
participant Client
participant Hook1
participant Hook2
participant Redis
Client->>Hook1: Execute Command
Hook1->>Hook2: Next Hook
Hook2->>Redis: Execute Command
Redis-->>Hook2: Response
Hook2-->>Hook1: Process Response
Hook1-->>Client: Final Response
```
### Connection Pool Management
```mermaid
stateDiagram-v2
[*] --> Idle
Idle --> InUse: Get Connection
InUse --> Idle: Release Connection
InUse --> Closed: Connection Error
Idle --> Closed: Pool Shutdown
Closed --> [*]
```
### Pipeline Execution Flow
```mermaid
sequenceDiagram
participant Client
participant Pipeline
participant Redis
Client->>Pipeline: Queue Command 1
Client->>Pipeline: Queue Command 2
Client->>Pipeline: Queue Command 3
Pipeline->>Redis: Send Batch
Redis-->>Pipeline: Batch Response
Pipeline-->>Client: Processed Responses
```
### Transaction Pipeline Flow
```mermaid
sequenceDiagram
participant Client
participant TxPipeline
participant Redis
Client->>TxPipeline: Queue Command 1
Client->>TxPipeline: Queue Command 2
TxPipeline->>Redis: MULTI
TxPipeline->>Redis: Command 1
TxPipeline->>Redis: Command 2
TxPipeline->>Redis: EXEC
Redis-->>TxPipeline: Transaction Result
TxPipeline-->>Client: Processed Results
```
### Base Client (`baseClient`)
The `baseClient` is the foundation of all Redis client implementations. It contains:
- Connection pool management
- Basic Redis command execution
- Hook management
- Connection lifecycle handling
```go
type baseClient struct {
opt *Options
connPool pool.Pooler
hooksMixin
onClose func() error
}
```
### Client Types
1. **Client (`Client`)**
- The main Redis client used by applications
- Represents a pool of connections
- Safe for concurrent use
- Embeds `baseClient` and adds command execution capabilities
- Primary entry point for most Redis operations
- Handles connection pooling and retries automatically
2. **Conn (`Conn`)**
- Represents a single Redis connection
- Used for stateful operations like pub/sub
- Required for blocking operations (BLPOP, BRPOP)
- Also embeds `baseClient`
- Has additional stateful command capabilities
- Not safe for concurrent use
3. **Pipeline (`Pipeline`)**
- Used for pipelining multiple commands
- Not a standalone client, but a wrapper around existing clients
- Batches commands and sends them in a single network roundtrip
4. **Transaction Pipeline (`TxPipeline`)**
- Similar to Pipeline but wraps commands in MULTI/EXEC
- Ensures atomic execution of commands
   - Also a wrapper around existing clients (see the usage sketch below)
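
A minimal usage sketch of both pipeline flavours, assuming a local Redis at `localhost:6379`; key names and values are illustrative:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // address is an assumption

	// Pipeline: commands are buffered client-side and flushed in one round trip.
	pipe := rdb.Pipeline()
	incr := pipe.Incr(ctx, "pipeline_counter")
	pipe.Expire(ctx, "pipeline_counter", time.Hour)
	if _, err := pipe.Exec(ctx); err != nil {
		panic(err)
	}
	fmt.Println("counter:", incr.Val()) // command results are populated after Exec

	// TxPipeline: the same queuing API, but wrapped in MULTI/EXEC on the server.
	txPipe := rdb.TxPipeline()
	txPipe.Set(ctx, "a", "1", 0)
	txPipe.Set(ctx, "b", "2", 0)
	if _, err := txPipe.Exec(ctx); err != nil {
		panic(err)
	}
}
```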
## Pointer vs Value Semantics
### When `baseClient` is a Pointer
The `baseClient` is used as a pointer in these scenarios:
1. **Client Creation**
```go
func NewClient(opt *Options) *Client {
	c := Client{
		baseClient: &baseClient{
			opt: opt,
		},
	}
	c.init()
	c.connPool = newConnPool(opt, c.dialHook)
	return &c
}
```
- Used as pointer to share the same base client instance
- Allows modifications to propagate to all references
- More efficient for large structs
2. **Connection Pooling**
- Pooled connections need to share the same base client configuration
- Pointer semantics ensure consistent behavior across pooled connections
### When `baseClient` is a Value
The `baseClient` is used as a value in these scenarios:
1. **Cloning**
```go
func (c *baseClient) clone() *baseClient {
clone := *c
clone.hooksMixin = c.hooksMixin.clone()
return &clone
}
```
- Creates independent copies for isolation
- Prevents unintended sharing of state
- Used when creating new connections or client instances
2. **Temporary Operations**
- When creating short-lived client instances
- When isolation is required for specific operations
## Hooks Management
### HooksMixin
The `hooksMixin` is a struct that manages hook chains for different operations:
```go
type hooksMixin struct {
hooksMu *sync.RWMutex
slice []Hook
initial hooks
current hooks
}
```
### Hook Types
1. **Dial Hook**
- Called during connection establishment
- Can modify connection parameters
- Used for custom connection handling
2. **Process Hook**
- Called before command execution
- Can modify commands or add logging
- Used for command monitoring
3. **Pipeline Hook**
- Called during pipeline execution
- Handles batch command processing
- Used for pipeline monitoring
### Hook Lifecycle
1. **Initialization**
- Hooks are initialized when creating a new client
- Default hooks are set up for basic operations
- Hooks can be added or removed at runtime
2. **Hook Chain**
- Hooks are chained in LIFO (Last In, First Out) order
- Each hook can modify the command or response
- Chain can be modified at runtime
- Hooks can prevent command execution by not calling next
3. **Hook Inheritance**
- New connections inherit hooks from their parent client
- Hooks are cloned to prevent shared state
- Each connection maintains its own hook chain
- Hook modifications in child don't affect parent
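
A hedged sketch of a custom hook that implements all three hook points and is attached with `AddHook`; the `loggingHook` name and the logged fields are our own, and the address is an assumption:

```go
package main

import (
	"context"
	"log"
	"net"
	"time"

	"github.com/redis/go-redis/v9"
)

// loggingHook is a hypothetical hook that logs dials and command latency.
type loggingHook struct{}

func (loggingHook) DialHook(next redis.DialHook) redis.DialHook {
	return func(ctx context.Context, network, addr string) (net.Conn, error) {
		log.Printf("dialing %s %s", network, addr)
		return next(ctx, network, addr) // always call next unless you mean to short-circuit
	}
}

func (loggingHook) ProcessHook(next redis.ProcessHook) redis.ProcessHook {
	return func(ctx context.Context, cmd redis.Cmder) error {
		start := time.Now()
		err := next(ctx, cmd)
		log.Printf("%s took %s", cmd.FullName(), time.Since(start))
		return err
	}
}

func (loggingHook) ProcessPipelineHook(next redis.ProcessPipelineHook) redis.ProcessPipelineHook {
	return func(ctx context.Context, cmds []redis.Cmder) error {
		return next(ctx, cmds)
	}
}

func main() {
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // address is an assumption
	rdb.AddHook(loggingHook{})                                     // hooks added last wrap the existing chain
	_ = rdb.Ping(context.Background()).Err()
}
```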
## Connection Pooling
### Pool Types
1. **Single Connection Pool**
- Used for dedicated connections
- No connection sharing
- Used in `Conn` type
2. **Multi Connection Pool**
- Used for client pools
- Manages multiple connections
- Handles connection reuse
### Pool Management
1. **Connection Acquisition**
- Connections are acquired from the pool
- Pool maintains minimum and maximum connections
- Handles connection timeouts
2. **Connection Release**
- Connections are returned to the pool
- Pool handles connection cleanup
- Manages connection lifecycle
### Pool Configuration
1. **Pool Options**
- Minimum idle connections
- Maximum active connections
- Connection idle timeout
- Connection lifetime
- Pool health check interval
2. **Health Checks**
- Periodic connection validation
- Automatic reconnection on failure
- Connection cleanup on errors
- Pool size maintenance
## Transaction and Pipeline Handling
### Pipeline
1. **Command Batching**
- Commands are queued in memory
- Sent in a single network roundtrip
- Responses are collected in order
2. **Error Handling**
- Pipeline execution is atomic
- Errors are propagated to all commands
- Connection errors trigger retries
### Transaction Pipeline
1. **MULTI/EXEC Wrapping**
- Commands are wrapped in MULTI/EXEC
- Ensures atomic execution
- Handles transaction errors
2. **State Management**
- Maintains transaction state
- Handles rollback scenarios
- Manages connection state
### Pipeline Limitations
1. **Size Limits**
- Maximum commands per pipeline
- Memory usage considerations
- Network buffer size limits
- Response size handling
2. **Transaction Behavior**
   - WATCH/UNWATCH key monitoring (see the sketch after this list)
- Transaction isolation
- Rollback on failure
- Atomic execution guarantees
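
The WATCH-based check-and-set behaviour described above can be sketched with `Client.Watch`; the key, retry budget, and helper name are illustrative:

```go
package example

import (
	"context"
	"errors"
	"strconv"

	"github.com/redis/go-redis/v9"
)

// increment is an illustrative check-and-set loop: WATCH the key, read it,
// queue the write inside MULTI/EXEC, and retry if another client changed it.
func increment(ctx context.Context, rdb *redis.Client, key string) error {
	const maxRetries = 3 // illustrative retry budget
	for i := 0; i < maxRetries; i++ {
		err := rdb.Watch(ctx, func(tx *redis.Tx) error {
			n, err := tx.Get(ctx, key).Int()
			if err != nil && !errors.Is(err, redis.Nil) {
				return err
			}
			_, err = tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
				pipe.Set(ctx, key, strconv.Itoa(n+1), 0)
				return nil
			})
			return err
		}, key)
		if !errors.Is(err, redis.TxFailedErr) {
			return err // nil on success, or a non-conflict error
		}
		// The watched key changed between WATCH and EXEC; try again.
	}
	return errors.New("increment reached max retries")
}
```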
## Error Handling and Cleanup
### Error Types and Handling
```mermaid
classDiagram
class Error {
<<interface>>
+Error() string
}
class RedisError {
+string message
+Error() string
}
class ConnectionError {
+string message
+net.Error
+Temporary() bool
+Timeout() bool
}
class TxFailedError {
+string message
+Error() string
}
Error <|-- RedisError
Error <|-- ConnectionError
Error <|-- TxFailedError
net.Error <|-- ConnectionError
```
### Error Handling Flow
```mermaid
sequenceDiagram
participant Client
participant Connection
participant Redis
participant ErrorHandler
Client->>Connection: Execute Command
Connection->>Redis: Send Command
alt Connection Error
Redis-->>Connection: Connection Error
Connection->>ErrorHandler: Handle Connection Error
ErrorHandler->>Connection: Retry or Close
else Redis Error
Redis-->>Connection: Redis Error
Connection->>Client: Return Error
else Transaction Error
Redis-->>Connection: Transaction Failed
Connection->>Client: Return TxFailedError
end
```
### Connection and Client Cleanup
```mermaid
sequenceDiagram
participant User
participant Client
participant ConnectionPool
participant Connection
participant Hook
User->>Client: Close()
Client->>Hook: Execute Close Hooks
Hook-->>Client: Hook Complete
Client->>ConnectionPool: Close All Connections
ConnectionPool->>Connection: Close
Connection->>Redis: Close Connection
Redis-->>Connection: Connection Closed
Connection-->>ConnectionPool: Connection Removed
ConnectionPool-->>Client: Pool Closed
Client-->>User: Client Closed
```
### Error Handling Strategies
1. **Connection Errors**
- Temporary errors (network issues) trigger retries
- Permanent errors (invalid credentials) close the connection
- Connection pool handles reconnection attempts
- Maximum retry attempts configurable via options
2. **Redis Errors**
- Command-specific errors returned to caller
- No automatic retries for Redis errors
- Error types include:
- Command syntax errors
- Type errors
- Permission errors
- Resource limit errors
3. **Transaction Errors**
- MULTI/EXEC failures return `TxFailedError`
- Individual command errors within transaction
- Watch/Unwatch failures
- Connection errors during transaction
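
A sketch of classifying errors along the lines above; the helper and its category names are our own, not part of the library:

```go
package example

import (
	"context"
	"errors"
	"net"

	"github.com/redis/go-redis/v9"
)

// classify buckets errors the way the strategies above describe.
func classify(err error) string {
	var netErr net.Error
	switch {
	case err == nil:
		return "ok"
	case errors.Is(err, redis.Nil):
		return "missing key (nil reply, not a failure)"
	case errors.Is(err, redis.TxFailedErr):
		return "transaction aborted (watched key changed)"
	case errors.As(err, &netErr) && netErr.Timeout():
		return "network timeout (candidate for retry with backoff)"
	case errors.Is(err, context.DeadlineExceeded), errors.Is(err, context.Canceled):
		return "context cancelled or deadline exceeded"
	default:
		return "redis or protocol error (returned to the caller as-is)"
	}
}

func example(ctx context.Context, rdb *redis.Client) {
	_, err := rdb.Get(ctx, "some-key").Result()
	_ = classify(err)
}
```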
### Cleanup Process
1. **Client Cleanup**
```go
func (c *Client) Close() error {
// Execute close hooks
if c.onClose != nil {
c.onClose()
}
// Close connection pool
return c.connPool.Close()
}
```
- Executes registered close hooks
- Closes all connections in pool
- Releases all resources
- Thread-safe operation
2. **Connection Cleanup**
```go
// Simplified illustration; the field names below are schematic rather than
// the actual struct fields used by the client.
func (c *Conn) Close() error {
	// Cleanup connection state
	c.state = closed
	// Close underlying connection
	return c.conn.Close()
}
```
- Closes underlying network connection
- Cleans up connection state
- Removes from connection pool
- Handles pending operations
3. **Pool Cleanup**
- Closes all idle connections
- Waits for in-use connections
- Handles connection timeouts
- Releases pool resources
### Best Practices for Error Handling
1. **Connection Management**
- Always check for connection errors
- Implement proper retry logic
- Handle connection timeouts
- Monitor connection pool health
2. **Resource Cleanup**
- Always call Close() when done
- Use defer for cleanup in critical sections
- Handle cleanup errors
- Monitor resource usage
3. **Error Recovery**
- Implement circuit breakers
- Use backoff strategies
- Monitor error patterns
- Log error details
4. **Transaction Safety**
- Check transaction results
- Handle watch/unwatch failures
- Implement rollback strategies
- Monitor transaction timeouts
### Context and Cancellation
1. **Context Usage**
- Command execution timeout
- Connection establishment timeout
- Operation cancellation
- Resource cleanup on cancellation
2. **Pool Error Handling**
- Connection acquisition timeout
- Pool exhaustion handling
- Connection validation errors
- Resource cleanup on errors
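
A minimal sketch of bounding a single call with a context deadline (the key and timeout are illustrative):

```go
package example

import (
	"context"
	"log"
	"time"

	"github.com/redis/go-redis/v9"
)

func getWithDeadline(rdb *redis.Client) {
	// The deadline bounds connection acquisition, the write, and the read for
	// this single call; cancel releases the timer whichever way it ends.
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	val, err := rdb.Get(ctx, "key").Result()
	if err != nil {
		log.Printf("get failed (context deadline, pool timeout, or other error): %v", err)
		return
	}
	log.Println("value:", val)
}
```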
## Best Practices
1. **Client Usage**
- Use `Client` for most operations
- Use `Conn` for stateful operations
- Use pipelines for batch operations
2. **Hook Implementation**
- Keep hooks lightweight
- Handle errors properly
- Call next hook in chain
3. **Connection Management**
- Let the pool handle connections
- Don't manually manage connections
- Use appropriate timeouts
4. **Error Handling**
- Check command errors
- Handle connection errors
- Implement retry logic when needed
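
Retry behaviour for transient failures is configured on `Options`; a minimal sketch with illustrative values:

```go
package example

import (
	"time"

	"github.com/redis/go-redis/v9"
)

func newClientWithRetries() *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:            "localhost:6379",       // assumption: local default port
		MaxRetries:      3,                      // retry budget for retriable (mostly network) errors
		MinRetryBackoff: 8 * time.Millisecond,   // lower bound of the exponential backoff
		MaxRetryBackoff: 512 * time.Millisecond, // upper bound of the exponential backoff
	})
}
```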
## Deep Dive: baseClient Embedding Strategies
### Implementation Examples
1. **Client Implementation (Pointer Embedding)**
```go
type Client struct {
*baseClient // Pointer embedding
cmdable
hooksMixin
}
func NewClient(opt *Options) *Client {
c := Client{
baseClient: &baseClient{ // Created as pointer
opt: opt,
},
}
c.init()
c.connPool = newConnPool(opt, c.dialHook)
return &c
}
```
The `Client` uses pointer embedding because:
- It needs to share the same `baseClient` instance across all operations
- The `baseClient` contains connection pool and options that should be shared
- Modifications to the base client (like timeouts) should affect all operations
- More efficient for large structs since it avoids copying
2. **Conn Implementation (Value Embedding)**
```go
type Conn struct {
baseClient // Value embedding
cmdable
statefulCmdable
hooksMixin
}
func newConn(opt *Options, connPool pool.Pooler, parentHooks hooksMixin) *Conn {
c := Conn{
baseClient: baseClient{ // Created as value
opt: opt,
connPool: connPool,
},
}
c.cmdable = c.Process
c.statefulCmdable = c.Process
c.hooksMixin = parentHooks.clone()
return &c
}
```
The `Conn` uses value embedding because:
- Each connection needs its own independent state
- Connections are short-lived and don't need to share state
- Prevents unintended sharing of connection state
- More memory efficient for single connections
3. **Tx (Transaction) Implementation (Value Embedding)**
```go
type Tx struct {
baseClient // Value embedding
cmdable
statefulCmdable
hooksMixin
}
func (c *Client) newTx() *Tx {
tx := Tx{
baseClient: baseClient{ // Created as value
opt: c.opt,
connPool: pool.NewStickyConnPool(c.connPool),
},
hooksMixin: c.hooksMixin.clone(),
}
tx.init()
return &tx
}
```
The `Tx` uses value embedding because:
- Transactions need isolated state
- Each transaction has its own connection pool
- Prevents transaction state from affecting other operations
- Ensures atomic execution of commands
4. **Pipeline Implementation (No baseClient)**
```go
type Pipeline struct {
cmdable
statefulCmdable
exec pipelineExecer
cmds []Cmder
}
```
The `Pipeline` doesn't embed `baseClient` because:
- It's a temporary command buffer
- Doesn't need its own connection management
- Uses the parent client's connection pool
- More lightweight without base client overhead
### Embedding Strategy Comparison
```mermaid
classDiagram
class Client {
+*baseClient
+cmdable
+hooksMixin
+NewClient()
+WithTimeout()
}
class Conn {
+baseClient
+cmdable
+statefulCmdable
+hooksMixin
+newConn()
}
class Tx {
+baseClient
+cmdable
+statefulCmdable
+hooksMixin
+newTx()
}
class Pipeline {
+cmdable
+statefulCmdable
+exec pipelineExecer
+cmds []Cmder
}
Client --> baseClient : pointer
Conn --> baseClient : value
Tx --> baseClient : value
Pipeline --> baseClient : none
```
### Key Differences in Embedding Strategy
1. **Pointer Embedding (Client)**
- Used when state needs to be shared
- More efficient for large structs
- Allows modifications to propagate
- Better for long-lived instances
- Memory Layout:
```
+-------------------+
|      Client       |
|  +-------------+  |
|  | *baseClient |  |
|  +-------------+  |
|  |   cmdable   |  |
|  +-------------+  |
|  | hooksMixin  |  |
|  +-------------+  |
+-------------------+
```
2. **Value Embedding (Conn, Tx)**
- Used when isolation is needed
- Prevents unintended state sharing
- Better for short-lived instances
- More memory efficient for small instances
- Memory Layout:
```
+-------------------+
|      Conn/Tx      |
|  +-------------+  |
|  | baseClient  |  |
|  | +---------+ |  |
|  | | Options | |  |
|  | +---------+ |  |
|  | | Pooler  | |  |
|  | +---------+ |  |
|  +-------------+  |
|  |   cmdable   |  |
|  +-------------+  |
|  | hooksMixin  |  |
|  +-------------+  |
+-------------------+
```
3. **No Embedding (Pipeline)**
- Used for temporary operations
- Minimizes memory overhead
- Relies on parent client
- Better for command batching
- Memory Layout:
```
+-------------------+
|     Pipeline      |
|  +-------------+  |
|  |   cmdable   |  |
|  +-------------+  |
|  |    exec     |  |
|  +-------------+  |
|  |    cmds     |  |
|  +-------------+  |
+-------------------+
```
### Design Implications
1. **Resource Management**
- Pointer embedding enables shared resource management
- Value embedding ensures resource isolation
- No embedding minimizes resource overhead
2. **State Management**
- Pointer embedding allows state propagation
- Value embedding prevents state leakage
- No embedding avoids state management
3. **Performance Considerations**
- Pointer embedding reduces memory usage for large structs
- Value embedding improves locality for small structs
- No embedding minimizes memory footprint
4. **Error Handling**
- Pointer embedding centralizes error handling
- Value embedding isolates error effects
- No embedding delegates error handling
5. **Cleanup Process**
- Pointer embedding requires coordinated cleanup
- Value embedding enables independent cleanup
- No embedding avoids cleanup complexity
### Best Practices
1. **When to Use Pointer Embedding**
- Long-lived instances
- Shared state requirements
- Large structs
- Centralized management
2. **When to Use Value Embedding**
- Short-lived instances
- State isolation needs
- Small structs
- Independent management
3. **When to Avoid Embedding**
- Temporary operations
- Minimal state needs
- Command batching
- Performance critical paths
### Authentication System
#### Streaming Credentials Provider
The Redis client supports a streaming credentials provider system that allows for dynamic credential updates:
```go
type StreamingCredentialsProvider interface {
Subscribe(listener CredentialsListener) (Credentials, UnsubscribeFunc, error)
}
type CredentialsListener interface {
OnNext(credentials Credentials)
OnError(err error)
}
type Credentials interface {
BasicAuth() (username string, password string)
RawCredentials() string
}
```
Key Features:
- Dynamic credential updates
- Error handling and propagation
- Basic authentication support
- Raw credential access
- Subscription management
#### Re-Authentication Listener
The client includes a re-authentication listener for handling credential updates:
```go
type ReAuthCredentialsListener struct {
reAuth func(credentials Credentials) error
onErr func(err error)
}
```
Features:
- Automatic re-authentication on credential updates
- Error handling and propagation
- Customizable re-authentication logic
- Thread-safe operation
#### Basic Authentication
The client provides a basic authentication implementation:
```go
type basicAuth struct {
username string
password string
}
func NewBasicCredentials(username, password string) Credentials {
return &basicAuth{
username: username,
password: password,
}
}
```
Usage:
- Simple username/password authentication
- Raw credential string generation
- Basic authentication support
- Thread-safe operation
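
As a minimal sketch, assuming these types live in the credentials (`auth`) package — the import path below is an assumption, and the credentials are placeholders:

```go
package example

import (
	"fmt"

	"github.com/redis/go-redis/v9/auth" // assumed import path for the credential types shown above
)

func basicCredentialsExample() {
	creds := auth.NewBasicCredentials("app-user", "s3cret") // hypothetical credentials
	user, pass := creds.BasicAuth()
	fmt.Println(user, pass != "")       // "app-user true"
	fmt.Println(creds.RawCredentials()) // provider-specific raw form
}
```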
// ... rest of existing content ...


@ -0,0 +1,809 @@
# Redis Client Architecture
## Overview
This document provides a comprehensive description of the Redis client implementation architecture, focusing on the relationships between different components, their responsibilities, and implementation details.
## Core Components
### 1. Connection Management
#### Conn Struct
The `Conn` struct represents a single Redis connection and is defined in `internal/pool/conn.go`. It contains:
```go
type Conn struct {
usedAt int64 // atomic
netConn net.Conn
rd *proto.Reader
bw *bufio.Writer
wr *proto.Writer
Inited bool
pooled bool
createdAt time.Time
}
```
##### Detailed Field Descriptions
- `usedAt` (int64, atomic)
- Tracks the last usage timestamp of the connection
- Uses atomic operations for thread-safe access
- Helps in connection health checks and idle timeout management
- Updated via `SetUsedAt()` and retrieved via `UsedAt()`
- `netConn` (net.Conn)
- The underlying network connection
- Handles raw TCP communication
- Supports both TCP and Unix domain socket connections
- Can be updated via `SetNetConn()` which also resets the reader and writer
- `rd` (*proto.Reader)
- Redis protocol reader
- Handles RESP (REdis Serialization Protocol) parsing
- Manages read buffers and protocol state
- Created with `proto.NewReader()`
- `bw` (*bufio.Writer)
- Buffered writer for efficient I/O
- Reduces system calls by batching writes
- Configurable buffer size based on workload
- Created with `bufio.NewWriter()`
- `wr` (*proto.Writer)
- Redis protocol writer
- Handles RESP serialization
- Manages write buffers and protocol state
- Created with `proto.NewWriter()`
- `Inited` (bool)
- Indicates if the connection has been initialized
- Set after successful authentication and protocol negotiation
- Prevents re-initialization of established connections
- Used in connection pool management
- `pooled` (bool)
- Indicates if the connection is part of a connection pool
- Affects connection lifecycle management
- Determines if connection should be returned to pool
- Set during connection creation
- `createdAt` (time.Time)
- Records connection creation time
- Used for connection lifetime management
- Helps in detecting stale connections
- Used in `isHealthyConn()` checks
#### Connection Lifecycle Methods
##### Creation and Initialization
```go
func NewConn(netConn net.Conn) *Conn {
cn := &Conn{
netConn: netConn,
createdAt: time.Now(),
}
cn.rd = proto.NewReader(netConn)
cn.bw = bufio.NewWriter(netConn)
cn.wr = proto.NewWriter(cn.bw)
cn.SetUsedAt(time.Now())
return cn
}
```
##### Usage Tracking
```go
func (cn *Conn) UsedAt() time.Time {
unix := atomic.LoadInt64(&cn.usedAt)
return time.Unix(unix, 0)
}
func (cn *Conn) SetUsedAt(tm time.Time) {
atomic.StoreInt64(&cn.usedAt, tm.Unix())
}
```
##### Network Operations
```go
func (cn *Conn) WithReader(
ctx context.Context, timeout time.Duration, fn func(rd *proto.Reader) error,
) error {
if timeout >= 0 {
if err := cn.netConn.SetReadDeadline(cn.deadline(ctx, timeout)); err != nil {
return err
}
}
return fn(cn.rd)
}
func (cn *Conn) WithWriter(
ctx context.Context, timeout time.Duration, fn func(wr *proto.Writer) error,
) error {
if timeout >= 0 {
if err := cn.netConn.SetWriteDeadline(cn.deadline(ctx, timeout)); err != nil {
return err
}
}
if cn.bw.Buffered() > 0 {
cn.bw.Reset(cn.netConn)
}
if err := fn(cn.wr); err != nil {
return err
}
return cn.bw.Flush()
}
```
#### Connection Pools
##### 1. ConnPool (`internal/pool/pool.go`)
Detailed implementation:
```go
type ConnPool struct {
cfg *Options
dialErrorsNum uint32 // atomic
lastDialError atomic.Value
queue chan struct{}
connsMu sync.Mutex
conns []*Conn
idleConns []*Conn
poolSize int
idleConnsLen int
stats Stats
_closed uint32 // atomic
}
```
Key Features:
- Thread-safe connection management using mutexes and atomic operations
- Configurable pool size and idle connections
- Connection health monitoring with `isHealthyConn()`
- Automatic connection cleanup
- Connection reuse optimization
- Error handling and recovery
- FIFO/LIFO connection management based on `PoolFIFO` option
- Minimum idle connections maintenance
- Maximum active connections enforcement
- Connection lifetime and idle timeout management
Pool Management:
- `Get()` - Retrieves a connection from the pool or creates a new one
- `Put()` - Returns a connection to the pool
- `Remove()` - Removes a connection from the pool
- `Close()` - Closes the pool and all connections
- `NewConn()` - Creates a new connection
- `CloseConn()` - Closes a specific connection
- `Len()` - Returns total number of connections
- `IdleLen()` - Returns number of idle connections
- `Stats()` - Returns pool statistics
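
A sketch of the `Get`/`Put` cycle using only the methods listed above. Because `internal/pool` is an internal package, code like this compiles only inside the go-redis module itself (for example, in its own tests); the dial target and sizes are illustrative:

```go
package example

import (
	"context"
	"net"
	"time"

	"github.com/redis/go-redis/v9/internal/pool"
)

func exampleGetPut(ctx context.Context) error {
	p := pool.NewConnPool(&pool.Options{
		Dialer: func(ctx context.Context) (net.Conn, error) {
			return net.Dial("tcp", "localhost:6379") // assumption: local Redis
		},
		PoolSize:    4,
		DialTimeout: 5 * time.Second,
		PoolTimeout: 4 * time.Second,
	})
	defer p.Close()

	cn, err := p.Get(ctx) // reuses an idle conn or dials a new one
	if err != nil {
		return err // e.g. ErrPoolTimeout when the pool is exhausted
	}
	p.Put(ctx, cn) // return the connection for reuse
	return nil
}
```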
##### 2. SingleConnPool (`internal/pool/pool_single.go`)
Implementation:
```go
type SingleConnPool struct {
pool Pooler
cn *Conn
stickyErr error
}
```
Use Cases:
- Single connection scenarios
- Transaction operations
- Pub/Sub subscriptions
- Pipeline operations
- Maintains a single connection with error state tracking
##### 3. StickyConnPool (`internal/pool/pool_sticky.go`)
Implementation:
```go
type StickyConnPool struct {
pool Pooler
shared int32 // atomic
state uint32 // atomic
ch chan *Conn
_badConnError atomic.Value
}
```
Features:
- Connection stickiness for consistent connection usage
- State management (default, initialized, closed)
- Error handling with `BadConnError`
- Thread safety with atomic operations
- Connection sharing support
- Automatic error recovery
### 2. Client Types
#### Base Client
Detailed implementation:
```go
type baseClient struct {
opt *Options
connPool pool.Pooler
onClose func() error // hook called when client is closed
}
```
Responsibilities:
1. Connection Management
- Pool initialization
- Connection acquisition
- Connection release
- Health monitoring
- Error handling
2. Command Execution
- Protocol handling
- Response parsing
- Error handling
- Retry logic with configurable backoff
3. Lifecycle Management
- Initialization
- Cleanup
- Resource management
- Connection state tracking
#### Client Types
##### 1. Client (`redis.go`)
Features:
- Connection pooling with configurable options
- Command execution with retry support
- Pipeline support for batch operations
- Transaction support with MULTI/EXEC
- Error handling and recovery
- Resource management
- Hooks system for extensibility
- Timeout management
- Connection health monitoring
##### 2. Conn (`redis.go`)
Features:
- Single connection management
- Direct command execution
- No pooling overhead
- Dedicated connection
- Transaction support
- Pipeline support
- Error handling
- Connection state tracking
##### 3. PubSub (`pubsub.go`)
Features:
- Subscription management
- Message handling
- Reconnection logic
- Channel management
- Pattern matching
- Thread safety for subscription operations
- Automatic resubscription on reconnection
- Message channel management
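
A minimal subscriber sketch, assuming a local Redis and an illustrative channel name:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // address is an assumption

	pubsub := rdb.Subscribe(ctx, "news") // channel name is illustrative
	defer pubsub.Close()

	// Optionally wait for the subscription confirmation before relying on it.
	if _, err := pubsub.Receive(ctx); err != nil {
		panic(err)
	}

	// Channel() delivers messages and transparently survives reconnects.
	for msg := range pubsub.Channel() {
		fmt.Println(msg.Channel, msg.Payload)
	}
}
```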
### 3. Error Handling and Cleanup
#### Error Types
Detailed error handling:
```go
var (
ErrClosed = errors.New("redis: client is closed")
ErrPoolExhausted = errors.New("redis: connection pool exhausted")
ErrPoolTimeout = errors.New("redis: connection pool timeout")
)
type BadConnError struct {
wrapped error
}
```
Error Recovery Strategies:
1. Connection Errors
- Automatic reconnection
- Backoff strategies
- Health checks
- Error state tracking
2. Protocol Errors
- Response parsing
- Protocol validation
- Error propagation
- Recovery mechanisms
3. Resource Errors
- Cleanup procedures
- Resource release
- State management
- Error reporting
#### Cleanup Process
Detailed cleanup:
1. Connection Cleanup
```go
func (p *ConnPool) Close() error {
if !atomic.CompareAndSwapUint32(&p._closed, 0, 1) {
return ErrClosed
}
// Cleanup implementation
}
```
2. Resource Management
- Connection closure
- Buffer cleanup
- State reset
- Error handling
- Resource tracking
### 4. Hooks System
#### Implementation Details
```go
type hooksMixin struct {
hooksMu sync.RWMutex
current hooks
}
type hooks struct {
dial DialHook
process ProcessHook
pipeline ProcessPipelineHook
txPipeline ProcessTxPipelineHook
}
```
Hook Types and Usage:
1. `dialHook`
- Connection establishment
- Authentication
- Protocol negotiation
- Error handling
2. `processHook`
- Command execution
- Response handling
- Error processing
- Metrics collection
3. `processPipelineHook`
- Pipeline execution
- Batch processing
- Response aggregation
- Error handling
4. `processTxPipelineHook`
- Transaction management
- Command grouping
- Atomic execution
- Error recovery
### 5. Configuration
#### Options Structure
Detailed configuration:
```go
type Options struct {
// Network settings
Network string
Addr string
Dialer func(ctx context.Context, network, addr string) (net.Conn, error)
// Authentication
Username string
Password string
CredentialsProvider func() (username string, password string)
// Timeouts
DialTimeout time.Duration
ReadTimeout time.Duration
WriteTimeout time.Duration
// Pool settings
PoolSize int
MinIdleConns int
MaxIdleConns int
MaxActiveConns int
PoolTimeout time.Duration
// TLS
TLSConfig *tls.Config
// Protocol
Protocol int
ClientName string
}
```
Configuration Management:
1. Default Values
- Network: "tcp"
- Protocol: 3
   - PoolSize: 10 * runtime.GOMAXPROCS(0)
- PoolTimeout: ReadTimeout + 1 second
- MinIdleConns: 0
- MaxIdleConns: 0
- MaxActiveConns: 0 (unlimited)
2. Validation
- Parameter bounds checking
- Required field validation
- Type validation
- Value validation
3. Dynamic Updates
- Runtime configuration changes
- Connection pool adjustments
- Timeout modifications
- Protocol version updates
4. Environment Integration
- URL-based configuration
- Environment variable support
- Configuration file support
- Command-line options
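
An illustrative configuration sketch touching the defaults above, plus URL-based configuration; every concrete value is an example, not a recommendation:

```go
package main

import (
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	// Explicit configuration.
	rdb := redis.NewClient(&redis.Options{
		Addr:            "localhost:6379",
		Protocol:        3, // RESP3, the default
		DialTimeout:     5 * time.Second,
		ReadTimeout:     3 * time.Second,
		WriteTimeout:    3 * time.Second,
		PoolSize:        20,
		MinIdleConns:    4,
		PoolTimeout:     4 * time.Second,
		ConnMaxIdleTime: 5 * time.Minute,
		ConnMaxLifetime: 30 * time.Minute,
	})
	defer rdb.Close()

	// URL-based configuration is also supported.
	opt, err := redis.ParseURL("redis://user:password@localhost:6379/0")
	if err != nil {
		panic(err)
	}
	_ = redis.NewClient(opt)
}
```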
### 6. Monitoring and Instrumentation
#### OpenTelemetry Integration
The client provides comprehensive monitoring capabilities through OpenTelemetry integration:
```go
// Enable tracing instrumentation
if err := redisotel.InstrumentTracing(rdb); err != nil {
panic(err)
}
// Enable metrics instrumentation
if err := redisotel.InstrumentMetrics(rdb); err != nil {
panic(err)
}
```
Features:
- Distributed tracing
- Performance metrics
- Connection monitoring
- Error tracking
- Command execution timing
- Resource usage monitoring
- Pool statistics
- Health checks
#### Custom Hooks
The client supports custom hooks for monitoring and instrumentation:
```go
type redisHook struct{}
func (redisHook) DialHook(hook redis.DialHook) redis.DialHook {
return func(ctx context.Context, network, addr string) (net.Conn, error) {
// Custom monitoring logic
return hook(ctx, network, addr)
}
}
func (redisHook) ProcessHook(hook redis.ProcessHook) redis.ProcessHook {
return func(ctx context.Context, cmd redis.Cmder) error {
// Custom monitoring logic
return hook(ctx, cmd)
}
}
func (redisHook) ProcessPipelineHook(hook redis.ProcessPipelineHook) redis.ProcessPipelineHook {
return func(ctx context.Context, cmds []redis.Cmder) error {
// Custom monitoring logic
return hook(ctx, cmds)
}
}
```
Usage:
- Performance monitoring
- Debugging
- Custom metrics
- Error tracking
- Resource usage
- Connection health
- Command patterns
## Best Practices
### 1. Connection Management
Detailed guidelines:
- Pool sizing based on workload
- Consider concurrent operations
- Account for peak loads
- Monitor pool statistics
- Adjust based on metrics
- Connection monitoring
- Track connection health
- Monitor pool statistics
- Watch for errors
- Log connection events
- Health checks
- Regular connection validation
- Error detection
- Automatic recovery
- State monitoring
- Resource limits
- Set appropriate pool sizes
- Configure timeouts
- Monitor resource usage
- Implement circuit breakers
- Timeout configuration
- Set appropriate timeouts
- Consider network conditions
- Account for operation types
- Monitor timeout events
### 2. Error Handling
Implementation strategies:
- Error recovery
- Automatic retries
- Backoff strategies
- Error classification
- Recovery procedures
- Retry logic
- Configurable attempts
- Exponential backoff
- Error filtering
- State preservation
- Circuit breakers
- Error threshold monitoring
- State management
- Recovery procedures
- Health checks
- Monitoring
- Error tracking
- Performance metrics
- Resource usage
- Health status
- Logging
- Error details
- Context information
- Stack traces
- Performance data
### 3. Resource Cleanup
Cleanup procedures:
- Connection closure
- Proper cleanup
- Error handling
- State management
- Resource release
- Resource release
- Memory cleanup
- File handle closure
- Network cleanup
- State reset
- State management
- Connection state
- Pool state
- Error state
- Resource state
- Error handling
- Error propagation
- Cleanup on error
- State recovery
- Resource cleanup
- Monitoring
- Resource usage
- Cleanup events
- Error tracking
- Performance impact
### 4. Performance Optimization
Optimization techniques:
- Connection pooling
- Efficient reuse
- Load balancing
- Health monitoring
- Resource management
- Pipeline usage
- Batch operations
- Reduced round trips
- Improved throughput
- Resource efficiency
- Batch operations
- Command grouping
- Reduced overhead
- Improved performance
- Resource efficiency
- Resource management
- Efficient allocation
- Proper cleanup
- Monitoring
- Optimization
- Monitoring
- Performance metrics
- Resource usage
- Bottleneck detection
- Optimization opportunities
## Diagrams
### Connection Pool Architecture
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Client    │────▶│  ConnPool   │────▶│    Conn     │
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │  PoolStats  │
                    └─────────────┘
```
### Error Handling Flow
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Command   │────▶│  Execution  │────▶│ Error Check │
└─────────────┘     └─────────────┘     └─────────────┘
                                          │         │
                                          ▼         ▼
                                   ┌───────────┐ ┌──────────┐
                                   │  Success  │ │ Recovery │
                                   └───────────┘ └──────────┘
```
### Hook System
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Client    │────▶│    Hooks    │────▶│  Execution  │
└─────────────┘     └─────────────┘     └─────────────┘
                           │
                           ▼
                    ┌─────────────┐
                    │   Custom    │
                    │  Behavior   │
                    └─────────────┘
```
### Connection Lifecycle
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Creation   │────▶│   Active    │────▶│   Cleanup   │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    Init     │     │    Usage    │     │   Release   │
└─────────────┘     └─────────────┘     └─────────────┘
```
### Pool Management
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│     Get     │────▶│     Use     │────▶│     Put     │
└─────────────┘     └─────────────┘     └─────────────┘
       │                   │                   │
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Checkout   │     │   Monitor   │     │   Return    │
└─────────────┘     └─────────────┘     └─────────────┘
```
## Known Issues and Areas for Improvement
### 1. Performance Considerations
#### Potential Performance Bottlenecks
1. **Connection Pool Management**
- Lock contention in connection pool operations
- Inefficient connection reuse strategies
- Suboptimal pool sizing algorithms
- High overhead in connection health checks
2. **Memory Management**
- Buffer allocation/deallocation overhead
- Memory fragmentation in long-running applications
- Inefficient buffer reuse strategies
- Potential memory leaks in edge cases
3. **Protocol Handling**
- RESP parsing overhead
- Inefficient command serialization
- Suboptimal batch processing
- Redundant protocol validations
4. **Concurrency Issues**
- Lock contention in shared resources
- Inefficient atomic operations
- Suboptimal goroutine management
- Race conditions in edge cases
### 2. Known Issues
1. **Connection Management**
- Occasional connection leaks under high load
- Suboptimal connection reuse in certain scenarios
- Race conditions in connection pool management
- Inefficient connection cleanup in edge cases
2. **Error Handling**
- Overly aggressive error recovery
- Suboptimal retry strategies
- Incomplete error context propagation
- Inconsistent error handling patterns
3. **Resource Management**
- Memory usage spikes under certain conditions
- Suboptimal buffer management
- Inefficient resource cleanup
- Potential resource leaks in edge cases
4. **Protocol Implementation**
- Inefficient command serialization
- Suboptimal response parsing
- Redundant protocol validations
- Incomplete protocol feature support
### 3. Areas for Improvement
1. **Performance Optimization**
- Implement connection pooling optimizations
- Optimize buffer management
- Improve protocol handling efficiency
- Enhance concurrency patterns
2. **Resource Management**
- Implement more efficient memory management
- Optimize resource cleanup
- Improve connection reuse strategies
- Enhance buffer reuse patterns
3. **Error Handling**
- Implement more sophisticated retry strategies
- Improve error context propagation
- Enhance error recovery mechanisms
- Standardize error handling patterns
4. **Protocol Implementation**
- Optimize command serialization
- Improve response parsing efficiency
- Reduce protocol validation overhead
- Enhance protocol feature support
5. **Monitoring and Diagnostics**
- Implement comprehensive metrics
- Enhance logging capabilities
- Improve debugging support
- Add performance profiling tools
## Conclusion
While the current implementation provides a robust and feature-rich Redis client, there are several areas where performance and reliability can be improved. The focus should be on:
1. Optimizing critical paths
2. Improving resource management
3. Enhancing error handling
4. Implementing better monitoring
5. Reducing overhead in common operations
These improvements will help make the client more competitive with other implementations while maintaining its current strengths in reliability and feature completeness.


@ -0,0 +1,164 @@
# Redis Command Processing
This document describes how commands are processed in the Redis client, including the command pipeline, error handling, and various command execution modes.
## Command Interface
The core of command processing is the `Cmder` interface:
```go
type Cmder interface {
Name() string // Command name (e.g., "set", "get")
FullName() string // Full command name (e.g., "cluster info")
Args() []interface{} // Command arguments
String() string // String representation of command and response
readTimeout() *time.Duration
readReply(rd *proto.Reader) error
SetErr(error)
Err() error
}
```
## Command Processing Pipeline
### 1. Command Creation
- Commands are created using factory functions (e.g., `NewCmd`, `NewStatusCmd`)
- Each command type implements the `Cmder` interface
- Commands can specify read timeouts and key positions
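
The creation-and-execution flow above can be exercised directly with the generic `Cmd` type; a short sketch (command and key are illustrative):

```go
package example

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func rawCommand(ctx context.Context, rdb *redis.Client) error {
	// NewCmd builds a generic command from raw arguments; typed constructors
	// such as NewStatusCmd or NewStringCmd work the same way.
	cmd := redis.NewCmd(ctx, "set", "greeting", "hello")
	if err := rdb.Process(ctx, cmd); err != nil {
		return err
	}
	res, err := cmd.Result() // reply parsed by the command's readReply
	if err != nil {
		return err
	}
	fmt.Println(res) // "OK"
	return nil
}
```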
### 2. Command Execution
The execution flow:
1. Command validation
2. Connection acquisition from pool
3. Command writing to Redis
4. Response reading
5. Error handling and retries
### 3. Error Handling
- Network errors trigger retries based on configuration
- Redis errors are returned directly
- Timeout handling with configurable backoff
## Command Execution Modes
### 1. Single Command
```go
err := client.Process(ctx, cmd)
```
### 2. Pipeline
```go
pipe := client.Pipeline()
pipe.Process(ctx, cmd1)
pipe.Process(ctx, cmd2)
cmds, err := pipe.Exec(ctx)
```
### 3. Transaction Pipeline
```go
pipe := client.TxPipeline()
pipe.Process(ctx, cmd1)
pipe.Process(ctx, cmd2)
cmds, err := pipe.Exec(ctx)
```
## Command Types
### 1. Basic Commands
- String commands (SET, GET)
- Hash commands (HGET, HSET)
- List commands (LPUSH, RPOP)
- Set commands (SADD, SMEMBERS)
- Sorted Set commands (ZADD, ZRANGE)
### 2. Advanced Commands
- Scripting (EVAL, EVALSHA)
- Pub/Sub (SUBSCRIBE, PUBLISH)
- Transactions (MULTI, EXEC)
- Cluster commands (CLUSTER INFO)
### 3. Specialized Commands
- Search commands (FT.SEARCH)
- JSON commands (JSON.SET, JSON.GET)
- Time Series commands (TS.ADD, TS.RANGE)
- Probabilistic data structures (BF.ADD, CF.ADD)
## Command Processing in Different Clients
### 1. Standalone Client
- Direct command execution
- Connection pooling
- Automatic retries
### 2. Cluster Client
- Command routing based on key slots
- MOVED/ASK redirection handling
- Cross-slot command batching
### 3. Ring Client
- Command sharding based on key hashing
- Consistent hashing for node selection
- Parallel command execution
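
A construction sketch for both clients; the node addresses and shard names are illustrative:

```go
package example

import (
	"context"

	"github.com/redis/go-redis/v9"
)

func clusterAndRing(ctx context.Context) {
	// Cluster client: commands are routed by key slot; MOVED/ASK are handled internally.
	cluster := redis.NewClusterClient(&redis.ClusterOptions{
		Addrs: []string{":7000", ":7001", ":7002"}, // illustrative seed nodes
	})
	defer cluster.Close()
	_ = cluster.Set(ctx, "key", "value", 0).Err()

	// Ring client: keys are sharded across independent servers by consistent hashing.
	ring := redis.NewRing(&redis.RingOptions{
		Addrs: map[string]string{ // illustrative shard names and addresses
			"shard1": ":6379",
			"shard2": ":6380",
		},
	})
	defer ring.Close()
	_ = ring.Set(ctx, "key", "value", 0).Err()
}
```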
## Best Practices
1. **Command Batching**
- Use pipelines for multiple commands
- Batch related commands together
- Consider transaction pipelines for atomic operations
2. **Error Handling**
- Check command errors after execution
- Handle network errors appropriately
- Use retries for transient failures
3. **Performance**
- Use appropriate command types
- Leverage pipelining for bulk operations
- Monitor command execution times
4. **Resource Management**
- Close connections properly
- Use context for timeouts
- Monitor connection pool usage
## Common Issues and Solutions
1. **Timeout Handling**
- Configure appropriate timeouts
- Use context for cancellation
- Implement retry strategies
2. **Connection Issues**
- Monitor connection pool health
- Handle connection failures gracefully
- Implement proper cleanup
3. **Command Errors**
- Validate commands before execution
- Handle Redis-specific errors
- Implement proper error recovery
## Monitoring and Debugging
1. **Command Monitoring**
- Use SLOWLOG for performance analysis
- Monitor command execution times
- Track error rates
2. **Client Information**
- Monitor client connections
- Track command usage patterns
- Analyze performance bottlenecks
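
A sketch of pulling the server-side diagnostics mentioned above through the client; output handling is deliberately generic:

```go
package example

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func inspect(ctx context.Context, rdb *redis.Client) error {
	// Recent slow commands as recorded by the server's SLOWLOG.
	entries, err := rdb.SlowLogGet(ctx, 10).Result()
	if err != nil {
		return err
	}
	for _, e := range entries {
		fmt.Printf("%+v\n", e)
	}

	// Raw CLIENT LIST output for connection-level inspection.
	clients, err := rdb.ClientList(ctx).Result()
	if err != nil {
		return err
	}
	fmt.Println(clients)
	return nil
}
```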
## Future Improvements
1. **Command Processing**
- Enhanced error handling
- Improved retry mechanisms
- Better connection management
2. **Performance**
- Optimized command batching
- Enhanced pipelining
- Better resource utilization

docs/redis_pool.md

@ -0,0 +1,271 @@
# Redis Connection Pool Implementation
## Overview
The Redis client implements a sophisticated connection pooling mechanism to efficiently manage Redis connections. This document details the implementation, features, and behavior of the connection pool system.
## Core Components
### 1. Connection Interface (`Pooler`)
```go
type Pooler interface {
NewConn(context.Context) (*Conn, error)
CloseConn(*Conn) error
Get(context.Context) (*Conn, error)
Put(context.Context, *Conn)
Remove(context.Context, *Conn, error)
Len() int
IdleLen() int
Stats() *Stats
Close() error
}
```
The `Pooler` interface defines the contract for all connection pool implementations:
- Connection lifecycle management
- Connection acquisition and release
- Pool statistics and monitoring
- Resource cleanup
### 2. Connection Pool Options
```go
type Options struct {
Dialer func(context.Context) (net.Conn, error)
PoolFIFO bool
PoolSize int
DialTimeout time.Duration
PoolTimeout time.Duration
MinIdleConns int
MaxIdleConns int
MaxActiveConns int
ConnMaxIdleTime time.Duration
ConnMaxLifetime time.Duration
}
```
Key configuration parameters:
- `PoolFIFO`: Use FIFO mode for connection pool GET/PUT (default LIFO)
- `PoolSize`: Base number of connections (default: 10 * runtime.GOMAXPROCS(0))
- `MinIdleConns`: Minimum number of idle connections
- `MaxIdleConns`: Maximum number of idle connections
- `MaxActiveConns`: Maximum number of active connections
- `ConnMaxIdleTime`: Maximum idle time for connections
- `ConnMaxLifetime`: Maximum lifetime for connections
### 3. Connection Pool Statistics
```go
type Stats struct {
Hits uint32 // number of times free connection was found in the pool
Misses uint32 // number of times free connection was NOT found in the pool
Timeouts uint32 // number of times a wait timeout occurred
TotalConns uint32 // number of total connections in the pool
IdleConns uint32 // number of idle connections in the pool
StaleConns uint32 // number of stale connections removed from the pool
}
```
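
The same counters are exposed on the exported client via `(*Client).PoolStats()`; a small sketch (the print format is arbitrary):

```go
package example

import (
	"fmt"

	"github.com/redis/go-redis/v9"
)

func logPoolStats(rdb *redis.Client) {
	// PoolStats mirrors the pool Stats counters above.
	s := rdb.PoolStats()
	fmt.Printf("hits=%d misses=%d timeouts=%d total=%d idle=%d stale=%d\n",
		s.Hits, s.Misses, s.Timeouts, s.TotalConns, s.IdleConns, s.StaleConns)
}
```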
### 4. Main Connection Pool Implementation (`ConnPool`)
#### Structure
```go
type ConnPool struct {
cfg *Options
dialErrorsNum uint32 // atomic
lastDialError atomic.Value
queue chan struct{}
connsMu sync.Mutex
conns []*Conn
idleConns []*Conn
poolSize int
idleConnsLen int
stats Stats
_closed uint32 // atomic
}
```
#### Key Features
1. **Thread Safety**
- Mutex-protected connection lists
- Atomic operations for counters
- Thread-safe connection management
2. **Connection Management**
- Automatic connection creation
- Connection reuse
- Connection cleanup
- Health checks
3. **Resource Control**
- Maximum connection limits
- Idle connection management
- Connection lifetime control
- Timeout handling
4. **Error Handling**
- Connection error tracking
- Automatic error recovery
- Error propagation
- Connection validation
### 5. Single Connection Pool (`SingleConnPool`)
```go
type SingleConnPool struct {
pool Pooler
cn *Conn
stickyErr error
}
```
Use cases:
- Single connection scenarios
- Transaction operations
- Pub/Sub subscriptions
- Pipeline operations
Features:
- Dedicated connection management
- Error state tracking
- Connection reuse
- Resource cleanup
### 6. Sticky Connection Pool (`StickyConnPool`)
```go
type StickyConnPool struct {
pool Pooler
shared int32 // atomic
state uint32 // atomic
ch chan *Conn
_badConnError atomic.Value
}
```
Features:
- Connection stickiness
- State management
- Error handling
- Thread safety
- Connection sharing
### 7. Connection Health Checks
```go
func (p *ConnPool) isHealthyConn(cn *Conn) bool {
now := time.Now()
if p.cfg.ConnMaxLifetime > 0 && now.Sub(cn.createdAt) >= p.cfg.ConnMaxLifetime {
return false
}
if p.cfg.ConnMaxIdleTime > 0 && now.Sub(cn.UsedAt()) >= p.cfg.ConnMaxIdleTime {
return false
}
if connCheck(cn.netConn) != nil {
return false
}
cn.SetUsedAt(now)
return true
}
```
Health check criteria:
- Connection lifetime
- Idle time
- Network connectivity
- Protocol state
### 8. Error Types
```go
var (
ErrClosed = errors.New("redis: client is closed")
ErrPoolExhausted = errors.New("redis: connection pool exhausted")
ErrPoolTimeout = errors.New("redis: connection pool timeout")
)
type BadConnError struct {
wrapped error
}
```
### 9. Best Practices
1. **Pool Configuration**
- Set appropriate pool size based on workload
- Configure timeouts based on network conditions
- Monitor pool statistics
- Adjust idle connection settings
2. **Connection Management**
- Proper connection cleanup
- Error handling
- Resource limits
- Health monitoring
3. **Performance Optimization**
- Connection reuse
- Efficient pooling
- Resource cleanup
- Error recovery
4. **Monitoring**
- Track pool statistics
- Monitor connection health
- Watch for errors
- Resource usage
### 10. Known Issues and Limitations
1. **Performance Considerations**
- Lock contention in high-concurrency scenarios
- Connection creation overhead
- Resource cleanup impact
- Memory usage
2. **Resource Management**
- Connection leaks in edge cases
- Resource cleanup timing
- Memory fragmentation
- Network resource usage
3. **Error Handling**
- Error recovery strategies
- Connection validation
- Error propagation
- State management
### 11. Future Improvements
1. **Performance**
- Optimize lock contention
- Improve connection reuse
- Enhance resource cleanup
- Better memory management
2. **Features**
- Enhanced monitoring
- Better error handling
- Improved resource management
- Advanced connection validation
3. **Reliability**
- Better error recovery
- Enhanced health checks
- Improved state management
- Better resource cleanup

docs/redis_testing.md

@ -0,0 +1,146 @@
# Redis Testing Guide
## Running Tests
### 1. Setup Test Environment
```bash
# Start Docker containers for testing
make docker.start
# Stop Docker containers when done
make docker.stop
```
### 2. Environment Variables
```bash
# Redis version and image configuration
CLIENT_LIBS_TEST_IMAGE=redislabs/client-libs-test:rs-7.4.0-v2 # Default Redis Stack image
REDIS_VERSION=7.2 # Default Redis version
# Cluster configuration
RE_CLUSTER=false # Set to true for RE testing
RCE_DOCKER=false # Set to true for Docker-based Redis CE testing
```
### 3. Running Tests
```bash
# Run tests with race detection, as executed in the CI
make test.ci
```
### 4. Test Coverage
```bash
# Generate coverage report
go test -coverprofile=coverage.out
# View coverage report in browser
go tool cover -html=coverage.out
```
## Writing Tests
### 1. Basic Test Structure
```go
package redis_test
import (
	"context"

	. "github.com/bsm/ginkgo/v2"
	. "github.com/bsm/gomega"

	"github.com/redis/go-redis/v9"
)
var _ = Describe("Redis Client", func() {
var client *redis.Client
var ctx = context.Background()
BeforeEach(func() {
client = redis.NewClient(&redis.Options{
Addr: ":6379",
})
})
AfterEach(func() {
client.Close()
})
It("should handle basic operations", func() {
err := client.Set(ctx, "key", "value", 0).Err()
Expect(err).NotTo(HaveOccurred())
val, err := client.Get(ctx, "key").Result()
Expect(err).NotTo(HaveOccurred())
Expect(val).To(Equal("value"))
})
})
```
### 2. Test Organization
```go
// Use Describe for test groups
Describe("Redis Client", func() {
// Use Context for different scenarios
Context("when connection is established", func() {
// Use It for individual test cases
It("should handle basic operations", func() {
// Test implementation
})
})
})
```
### 3. Common Test Patterns
#### Testing Success Cases
```go
It("should succeed", func() {
err := client.Set(ctx, "key", "value", 0).Err()
Expect(err).NotTo(HaveOccurred())
})
```
#### Testing Error Cases
```go
It("should return error", func() {
_, err := client.Get(ctx, "nonexistent").Result()
Expect(err).To(Equal(redis.Nil))
})
```
#### Testing Timeouts
```go
It("should timeout", func() {
ctx, cancel := context.WithTimeout(ctx, time.Millisecond)
defer cancel()
err := client.Ping(ctx).Err()
Expect(err).To(HaveOccurred())
})
```
### 4. Best Practices
1. **Test Structure**
- Use descriptive test names
- Group related tests together
- Keep tests focused and simple
- Clean up resources in AfterEach
2. **Assertions**
- Use Gomega's Expect syntax
- Be specific in assertions
- Test both success and failure cases
- Include error checking
3. **Resource Management**
- Close connections in AfterEach
- Clean up test data
- Handle timeouts properly
- Manage test isolation