# Clustering

## Basic Example

Connecting to a cluster is a bit different. Create the client by specifying some (or all) of the nodes in your cluster and then use it like a regular client instance:
```javascript
import { createCluster } from 'redis';

const cluster = await createCluster({
  rootNodes: [{
    url: 'redis://10.0.0.1:30001'
  }, {
    url: 'redis://10.0.0.2:30002'
  }]
})
  .on('error', err => console.log('Redis Cluster Error', err))
  .connect();

await cluster.set('key', 'value');
const value = await cluster.get('key');
await cluster.close();
```
## `createCluster` configuration

See the client configuration page for the `rootNodes` and `defaults` configuration schemas.
| Property | Default | Description |
|----------|---------|-------------|
| `rootNodes` | | An array of root nodes that are part of the cluster, which will be used to get the cluster topology. Each element in the array is a client configuration object. There is no need to specify every node in the cluster: 3 should be enough to reliably connect and obtain the cluster configuration from the server |
| `defaults` | | The default configuration values for every client in the cluster. Use this, for example, when specifying an ACL user to connect with |
| `useReplicas` | `false` | When `true`, distribute load by executing readonly commands (such as `GET`, `GEOSEARCH`, etc.) across all cluster nodes. When `false`, only use master nodes |
| `minimizeConnections` | `false` | When `true`, `.connect()` will only discover the cluster topology, without actually connecting to all the nodes. Useful for short-term or Pub/Sub-only connections |
| `maxCommandRedirections` | `16` | The maximum number of times a command will be redirected due to `MOVED` or `ASK` errors |
| `nodeAddressMap` | | Defines the node address mapping |
| `modules` | | Included Redis Modules |
| `scripts` | | Script definitions (see Lua Scripts) |
| `functions` | | Function definitions (see Functions) |
## Auth with password and username

Specifying the password in the URL or in a root node will only affect the connection to that specific node. To set the password for all the connections created from a cluster instance, use the `defaults` option:
```javascript
createCluster({
  rootNodes: [{
    url: 'redis://10.0.0.1:30001'
  }, {
    url: 'redis://10.0.0.2:30002'
  }],
  defaults: {
    username: 'username',
    password: 'password'
  }
});
```
## Node Address Map

A mapping between the addresses in the cluster (see `CLUSTER SHARDS`) and the addresses the client should connect to. Useful when the cluster is running on a different network to the client.
```javascript
const rootNodes = [{
  url: 'external-host-1.io:30001'
}, {
  url: 'external-host-2.io:30002'
}];

// Use either a static mapping:
createCluster({
  rootNodes,
  nodeAddressMap: {
    '10.0.0.1:30001': {
      host: 'external-host-1.io',
      port: 30001
    },
    '10.0.0.2:30002': {
      host: 'external-host-2.io',
      port: 30002
    }
  }
});

// or create the mapping dynamically, as a function:
createCluster({
  rootNodes,
  nodeAddressMap(address) {
    const indexOfDash = address.lastIndexOf('-'),
      indexOfDot = address.indexOf('.', indexOfDash),
      indexOfColons = address.indexOf(':', indexOfDot);

    return {
      host: `external-host-${address.substring(indexOfDash + 1, indexOfDot)}.io`,
      port: Number(address.substring(indexOfColons + 1))
    };
  }
});
```
This is a common problem when using ElastiCache. See Accessing ElastiCache from outside AWS for more information on that.
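To see what the dynamic mapping produces, here is the same parsing logic as a standalone function that can be run without a cluster. The `redis-cluster-0001-1.internal` address and the `external-host-…` names are placeholders for illustration, not real hosts:

```javascript
// Standalone version of the dynamic nodeAddressMap above: it extracts the
// node number between the last '-' and the following '.', and the port
// after the ':', then rebuilds an external address from them.
function mapNodeAddress(address) {
  const indexOfDash = address.lastIndexOf('-'),
    indexOfDot = address.indexOf('.', indexOfDash),
    indexOfColons = address.indexOf(':', indexOfDot);

  return {
    host: `external-host-${address.substring(indexOfDash + 1, indexOfDot)}.io`,
    port: Number(address.substring(indexOfColons + 1))
  };
}

// An internal ElastiCache-style address is mapped to its external counterpart:
console.log(mapNodeAddress('redis-cluster-0001-1.internal:30001'));
// → { host: 'external-host-1.io', port: 30001 }
```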
## Command Routing

### Commands that operate on Redis Keys

Commands such as `GET`, `SET`, etc. are routed by the first key specified. For example, `MGET 1 2 3` will be routed by the key `1`.
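Under the hood, routing by key means hashing the key to one of 16384 slots (CRC16 of the key, modulo 16384, per the Redis Cluster specification) and sending the command to the node that owns that slot. The client does this internally; the following standalone sketch just illustrates the computation:

```javascript
// CRC16-CCITT (XMODEM variant), the checksum Redis Cluster uses for key hashing:
// polynomial 0x1021, initial value 0, no reflection.
function crc16(str) {
  let crc = 0;
  for (const byte of Buffer.from(str)) {
    crc ^= byte << 8;
    for (let i = 0; i < 8; i++) {
      crc = crc & 0x8000 ? ((crc << 1) ^ 0x1021) & 0xFFFF : (crc << 1) & 0xFFFF;
    }
  }
  return crc;
}

// A key's hash slot is its CRC16 modulo the 16384 cluster slots.
const hashSlot = key => crc16(key) % 16384;

// Reference value from the Redis Cluster spec: CRC16('123456789') === 0x31C3.
console.log(hashSlot('123456789')); // → 12739 (0x31C3)
```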
### Server Commands

Admin commands such as `MEMORY STATS`, `FLUSHALL`, etc. are not attached to the cluster, and must be executed on a specific node via `.getSlotMaster()`.
"Forwarded Commands"
Certain commands (e.g. PUBLISH
) are forwarded to other cluster nodes by the Redis server. The client sends these commands to a random node in order to spread the load across the cluster.