Add support for sharded PubSub (#2373)
* refactor pubsub, add support for sharded pub sub
* run tests in redis 7 only, fix PUBSUB SHARDCHANNELS test
* add some comments and fix some bugs
* PubSubType, not PubSubTypes 🤦‍♂️
* remove test.txt
* fix some bugs, add tests
* add some tests
* fix #2345 - allow PING in PubSub mode (remove client side validation)
* remove .only
* revert changes in cluster/index.ts
* fix tests minimum version
* handle server sunsubscribe
* add 'sharded-channel-moved' event to docs, improve the events section in the main README (fix #2302)
* exit "resubscribe" if pubsub not active
* Update commands-queue.ts
* Release client@1.5.0-rc.0
* WIP
* use `node:util` instead of `node:util/types` (to support node 14)
* run PubSub resharding test with Redis 7+
* fix inconsistency in live resharding test
* add some tests
* fix iterateAllNodes when starting from a replica
* fix iterateAllNodes random
* fix slotNodesIterator
* fix slotNodesIterator
* clear pubSubNode when node in use
* wait for all nodes cluster state to be ok before testing
* `cluster.minimizeConnections` tests
* `client.reconnectStrategy = false | 0` tests
* sharded pubsub + cluster 🎉
* add minimum version to sharded pubsub tests
* add cluster sharded pubsub live reshard test, use stable dockers for tests, make sure to close pubsub clients when a node disconnects from the cluster
* fix "ssubscribe & sunsubscribe" test
* lock search docker to 2.4.9
* change numberOfMasters default to 2
* use edge for bloom
* add tests
* add back getMasters and getSlotMaster as deprecated functions
* add some tests
* fix reconnect strategy + docs
* sharded pubsub docs
* Update pub-sub.md
* some jsdoc, docs, cluster topology test
* clean pub-sub docs (Co-authored-by: Simon Prickett <simon@redislabs.com>)
* reconnect strategy docs and bug fix (Co-authored-by: Simon Prickett <simon@redislabs.com>)
* refine jsdoc and some docs (Co-authored-by: Simon Prickett <simon@redislabs.com>)
* I'm stupid
* fix cluster topology test
* fix cluster topology test
* Update README.md
* Update clustering.md
* Update pub-sub.md

Co-authored-by: Simon Prickett <simon@redislabs.com>
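The headline feature of this commit is Sharded Pub/Sub (Redis 7's `SSUBSCRIBE`/`SPUBLISH`). A minimal usage sketch, assuming the `redis` package at the `client@1.5.0` release referenced above (not part of the commit itself):

```typescript
import { createClient } from 'redis';

const publisher = createClient();
publisher.on('error', err => console.error(err));
await publisher.connect();

// Pub/Sub still requires a dedicated connection on the subscriber side.
const subscriber = publisher.duplicate();
subscriber.on('error', err => console.error(err));
await subscriber.connect();

// Sharded Pub/Sub: the channel is routed to a cluster slot, like a key.
await subscriber.sSubscribe('channel', (message, channel) => {
  console.log(message, channel);
});

await publisher.sPublish('channel', 'message');
```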
README.md (61 lines changed)
@@ -166,47 +166,7 @@ To learn more about isolated execution, check out the [guide](./docs/isolated-ex

 ### Pub/Sub

-Subscribing to a channel requires a dedicated stand-alone connection. You can easily get one by `.duplicate()`ing an existing Redis connection.
-
-```typescript
-const subscriber = client.duplicate();
-
-await subscriber.connect();
-```
-
-Once you have one, simply subscribe and unsubscribe as needed:
-
-```typescript
-await subscriber.subscribe('channel', (message) => {
-  console.log(message); // 'message'
-});
-
-await subscriber.pSubscribe('channe*', (message, channel) => {
-  console.log(message, channel); // 'message', 'channel'
-});
-
-await subscriber.unsubscribe('channel');
-
-await subscriber.pUnsubscribe('channe*');
-```
-
-Publish a message on a channel:
-
-```typescript
-await publisher.publish('channel', 'message');
-```
-
-There is support for buffers as well:
-
-```typescript
-await subscriber.subscribe('channel', (message) => {
-  console.log(message); // <Buffer 6d 65 73 73 61 67 65>
-}, true);
-
-await subscriber.pSubscribe('channe*', (message, channel) => {
-  console.log(message, channel); // <Buffer 6d 65 73 73 61 67 65>, <Buffer 63 68 61 6e 6e 65 6c>
-}, true);
-```
-
+See the [Pub/Sub overview](./docs/pub-sub.md).
+
 ### Scan Iterator
@@ -373,15 +333,18 @@ Check out the [Clustering Guide](./docs/clustering.md) when using Node Redis to

 The Node Redis client class is a Node.js EventEmitter and it emits an event each time the network status changes:

-| Event name     | Scenes                                                                                                             | Arguments to be passed to the listener |
-|----------------|--------------------------------------------------------------------------------------------------------------------|----------------------------------------|
-| `connect`      | The client is initiating a connection to the server.                                                               | _No argument_ |
-| `ready`        | The client successfully initiated the connection to the server.                                                    | _No argument_ |
-| `end`          | The client disconnected the connection to the server via `.quit()` or `.disconnect()`.                             | _No argument_ |
-| `error`        | When a network error has occurred, such as unable to connect to the server or the connection closed unexpectedly.  | 1 argument: The error object, such as `SocketClosedUnexpectedlyError: Socket closed unexpectedly` or `Error: connect ECONNREFUSED [IP]:[PORT]` |
-| `reconnecting` | The client is trying to reconnect to the server.                                                                   | _No argument_ |
+| Name                    | When                                                                                | Listener arguments |
+|-------------------------|--------------------------------------------------------------------------------------|--------------------|
+| `connect`               | Initiating a connection to the server                                                | *No arguments* |
+| `ready`                 | Client is ready to use                                                               | *No arguments* |
+| `end`                   | Connection has been closed (via `.quit()` or `.disconnect()`)                        | *No arguments* |
+| `error`                 | An error has occurred—usually a network issue such as "Socket closed unexpectedly"   | `(error: Error)` |
+| `reconnecting`          | Client is trying to reconnect to the server                                          | *No arguments* |
+| `sharded-channel-moved` | See [here](./docs/pub-sub.md#sharded-channel-moved-event)                            | See [here](./docs/pub-sub.md#sharded-channel-moved-event) |

-The client will not emit [any other events](./docs/v3-to-v4.md#all-the-removed-events) beyond those listed above.
+> :warning: You **MUST** listen to `error` events. If a client doesn't have at least one `error` listener registered and an `error` occurs, that error will be thrown and the Node.js process will exit. See the [`EventEmitter` docs](https://nodejs.org/api/events.html#events_error_events) for more details.
+
+> The client will not emit [any other events](./docs/v3-to-v4.md#all-the-removed-events) beyond those listed above.

 ## Supported Redis versions
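Since an unhandled `error` event will crash the process, a typical setup registers listeners up front. The following is my own minimal sketch (not part of the diff), using only the event names from the table above:

```typescript
import { createClient } from 'redis';

const client = createClient();

// Mandatory: without an 'error' listener, a socket error is thrown and the process exits.
client.on('error', (err: Error) => console.error('Redis client error', err));

client.on('reconnecting', () => console.log('reconnecting...'));
client.on('ready', () => console.log('ready'));

// New in this release: fired when a subscribed sharded Pub/Sub channel's slot moves to another shard.
client.on('sharded-channel-moved', (channel, listeners) => {
  console.log(`sharded channel moved: ${channel}`);
});

await client.connect();
```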
@@ -15,7 +15,7 @@

 | socket.reconnectStrategy | `retries => Math.min(retries * 50, 500)` | A function containing the [Reconnect Strategy](#reconnect-strategy) logic |
 | username |  | ACL username ([see ACL guide](https://redis.io/topics/acl)) |
 | password |  | ACL password or the old "--requirepass" password |
-| name     |  | Connection name ([see `CLIENT SETNAME`](https://redis.io/commands/client-setname)) |
+| name     |  | Client name ([see `CLIENT SETNAME`](https://redis.io/commands/client-setname)) |
 | database |  | Redis database number (see [`SELECT`](https://redis.io/commands/select) command) |
 | modules  |  | Included [Redis Modules](../README.md#packages) |
 | scripts  |  | Script definitions (see [Lua Scripts](../README.md#lua-scripts)) |

@@ -25,30 +25,22 @@

 | readonly             | `false` | Connect in [`READONLY`](https://redis.io/commands/readonly) mode |
 | legacyMode           | `false` | Maintain some backwards compatibility (see the [Migration Guide](./v3-to-v4.md)) |
 | isolationPoolOptions |         | See the [Isolated Execution Guide](./isolated-execution.md) |
-| pingInterval         |         | Send `PING` command at interval (in ms). Useful with "[Azure Cache for Redis](https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout)" |
+| pingInterval         |         | Send `PING` command at interval (in ms). Useful with ["Azure Cache for Redis"](https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout) |

 ## Reconnect Strategy

-When a network error occurs the client will automatically try to reconnect, following a default linear strategy (the more attempts, the more waiting before trying to reconnect).
-
-This strategy can be overridden by providing a `socket.reconnectStrategy` option during the client's creation.
-
-The `socket.reconnectStrategy` is a function that:
-
-- Receives the number of retries attempted so far.
-- Returns `number | Error`:
-- `number`: wait time in milliseconds prior to attempting a reconnect.
-- `Error`: closes the client and flushes internal command queues.
-
-The example below shows the default `reconnectStrategy` and how to override it.
-
-```typescript
-import { createClient } from 'redis';
-
-const client = createClient({
-  socket: {
-    reconnectStrategy: (retries) => Math.min(retries * 50, 500)
-  }
-});
-```
+When the socket closes unexpectedly (without calling `.quit()`/`.disconnect()`), the client uses `reconnectStrategy` to decide what to do. The following values are supported:
+1. `false` -> do not reconnect, close the client and flush the command queue.
+2. `number` -> wait for `X` milliseconds before reconnecting.
+3. `(retries: number, cause: Error) => false | number | Error` -> `number` is the same as configuring a `number` directly, `Error` is the same as `false`, but with a custom error.
+
+By default the strategy is `Math.min(retries * 50, 500)`, but it can be overwritten like so:
+
+```javascript
+createClient({
+  socket: {
+    reconnectStrategy: retries => Math.min(retries * 50, 1000)
+  }
+});
+```
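To make the three accepted `reconnectStrategy` forms from the new documentation concrete, here is a hedged sketch of my own (not taken from the diff) showing each one:

```typescript
import { createClient } from 'redis';

// 1. false: never reconnect; close the client and flush the command queue on socket error.
createClient({ socket: { reconnectStrategy: false } });

// 2. number: always wait a fixed 5000 ms before reconnecting.
createClient({ socket: { reconnectStrategy: 5000 } });

// 3. function: back off linearly, and give up with a custom error after 20 retries.
createClient({
  socket: {
    reconnectStrategy: (retries: number, cause: Error) => {
      console.error('reconnecting after', cause);
      if (retries > 20) return new Error('Reconnect retries exhausted');
      return Math.min(retries * 50, 500);
    }
  }
});
```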
@@ -60,7 +52,7 @@ To enable TLS, set `socket.tls` to `true`. Below are some basic examples.

 ### Create a SSL client

-```typescript
+```javascript
 createClient({
   socket: {
     tls: true,

@@ -72,7 +64,7 @@ createClient({

 ### Create a SSL client using a self-signed certificate

-```typescript
+```javascript
 createClient({
   socket: {
     tls: true,
@@ -35,6 +35,7 @@ const value = await cluster.get('key');

 | rootNodes              |         | An array of root nodes that are part of the cluster, which will be used to get the cluster topology. Each element in the array is a client configuration object. There is no need to specify every node in the cluster, 3 should be enough to reliably connect and obtain the cluster configuration from the server |
 | defaults               |         | The default configuration values for every client in the cluster. Use this for example when specifying an ACL user to connect with |
 | useReplicas            | `false` | When `true`, distribute load by executing readonly commands (such as `GET`, `GEOSEARCH`, etc.) across all cluster nodes. When `false`, only use master nodes |
+| minimizeConnections    | `false` | When `true`, `.connect()` will only discover the cluster topology, without actually connecting to all the nodes. Useful for short-term or Pub/Sub-only connections. |
 | maxCommandRedirections | `16`    | The maximum number of times a command will be redirected due to `MOVED` or `ASK` errors |
 | nodeAddressMap         |         | Defines the [node address mapping](#node-address-map) |
 | modules                |         | Included [Redis Modules](../README.md#packages) |

@@ -59,27 +60,45 @@ createCluster({

 ## Node Address Map

-A node address map is required when a Redis cluster is configured with addresses that are inaccessible by the machine running the Redis client.
-This is a mapping of addresses and ports, with the values being the accessible address/port combination. Example:
+A mapping between the addresses in the cluster (see `CLUSTER SHARDS`) and the addresses the client should connect to.
+Useful when the cluster is running on a different network to the client.

 ```javascript
+const rootNodes = [{
+  url: 'external-host-1.io:30001'
+}, {
+  url: 'external-host-2.io:30002'
+}];
+
+// Use either a static mapping:
 createCluster({
-  rootNodes: [{
-    url: 'external-host-1.io:30001'
-  }, {
-    url: 'external-host-2.io:30002'
-  }],
+  rootNodes,
   nodeAddressMap: {
     '10.0.0.1:30001': {
-      host: 'external-host-1.io',
+      host: 'external-host.io',
       port: 30001
     },
     '10.0.0.2:30002': {
-      host: 'external-host-2.io',
+      host: 'external-host.io',
       port: 30002
     }
   }
 });
+
+// or create the mapping dynamically, as a function:
+createCluster({
+  rootNodes,
+  nodeAddressMap(address) {
+    const indexOfDash = address.lastIndexOf('-'),
+      indexOfDot = address.indexOf('.', indexOfDash),
+      indexOfColons = address.indexOf(':', indexOfDot);
+
+    return {
+      host: `external-host-${address.substring(indexOfDash + 1, indexOfDot)}.io`,
+      port: Number(address.substring(indexOfColons + 1))
+    };
+  }
+});
 ```

 > This is a common problem when using ElastiCache. See [Accessing ElastiCache from outside AWS](https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/accessing-elasticache.html) for more information on that.
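The new `minimizeConnections` option documented above pairs naturally with Pub/Sub-only cluster clients. A short sketch of my own (the URL is a placeholder, not from the diff):

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({
  rootNodes: [{ url: 'redis://10.0.0.1:30001' }],
  // Only discover the topology on .connect(); connections to individual nodes
  // are opened as needed, which suits short-lived or Pub/Sub-only usage.
  minimizeConnections: true
});

cluster.on('error', err => console.error(err));
await cluster.connect();
```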
docs/pub-sub.md (new file, 86 lines)
@@ -0,0 +1,86 @@
# Pub/Sub

The Pub/Sub API is implemented by `RedisClient` and `RedisCluster`.

## Pub/Sub with `RedisClient`

Pub/Sub requires a dedicated stand-alone client. You can easily get one by `.duplicate()`ing an existing `RedisClient`:

```typescript
const subscriber = client.duplicate();

subscriber.on('error', err => console.error(err));

await subscriber.connect();
```

When working with a `RedisCluster`, this is handled automatically for you.

### `sharded-channel-moved` event

`RedisClient` emits the `sharded-channel-moved` event when the ["cluster slot"](https://redis.io/docs/reference/cluster-spec/#key-distribution-model) of a subscribed [Sharded Pub/Sub](https://redis.io/docs/manual/pubsub/#sharded-pubsub) channel has been moved to another shard.

The event listener signature is as follows:
```typescript
(
  channel: string,
  listeners: {
    buffers: Set<Listener>;
    strings: Set<Listener>;
  }
)
```

## Subscribing

```javascript
const listener = (message, channel) => console.log(message, channel);
await client.subscribe('channel', listener);
await client.pSubscribe('channe*', listener);
// Use sSubscribe for sharded Pub/Sub:
await client.sSubscribe('channel', listener);
```

## Publishing

```javascript
await client.publish('channel', 'message');
// Use sPublish for sharded Pub/Sub:
await client.sPublish('channel', 'message');
```

## Unsubscribing

The code below unsubscribes all listeners from all channels.

```javascript
await client.unsubscribe();
await client.pUnsubscribe();
// Use sUnsubscribe for sharded Pub/Sub:
await client.sUnsubscribe();
```

To unsubscribe from specific channels:

```javascript
await client.unsubscribe('channel');
await client.unsubscribe(['1', '2']);
```

To unsubscribe a specific listener:

```javascript
await client.unsubscribe('channel', listener);
```

## Buffers

Publishing and subscribing using `Buffer`s is also supported:

```javascript
await subscriber.subscribe('channel', message => {
  console.log(message); // <Buffer 6d 65 73 73 61 67 65>
}, true); // true = subscribe in `Buffer` mode.

await subscriber.publish(Buffer.from('channel'), Buffer.from('message'));
```

> NOTE: Buffers and strings are supported both for the channel name and the message. You can mix and match these as desired.
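One thing the new document implies but does not spell out: on a `RedisCluster`, `sSubscribe` routes the subscription to the node that owns the channel's slot, and the `sharded-channel-moved` machinery re-homes listeners when slots migrate. A hedged sketch of my own (placeholder URL, not part of the committed docs):

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({
  rootNodes: [{ url: 'redis://10.0.0.1:30001' }]
});
await cluster.connect();

// The cluster picks (and manages) the connection to the shard that owns the
// slot of 'channel'; no manual .duplicate() is needed here.
await cluster.sSubscribe('channel', (message, channel) => {
  console.log(message, channel);
});

await cluster.sPublish('channel', 'message');
```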
@@ -1,18 +1,18 @@
 import * as LinkedList from 'yallist';
 import { AbortError, ErrorReply } from '../errors';
-import { RedisCommandArgument, RedisCommandArguments, RedisCommandRawReply } from '../commands';
+import { RedisCommandArguments, RedisCommandRawReply } from '../commands';
 import RESP2Decoder from './RESP2/decoder';
 import encodeCommand from './RESP2/encoder';
+import { ChannelListeners, PubSub, PubSubCommand, PubSubListener, PubSubType, PubSubTypeListeners } from './pub-sub';

 export interface QueueCommandOptions {
     asap?: boolean;
     chainId?: symbol;
     signal?: AbortSignal;
     returnBuffers?: boolean;
-    ignorePubSubMode?: boolean;
 }

-interface CommandWaitingToBeSent extends CommandWaitingForReply {
+export interface CommandWaitingToBeSent extends CommandWaitingForReply {
     args: RedisCommandArguments;
     chainId?: symbol;
     abort?: {

@@ -28,27 +28,9 @@ interface CommandWaitingForReply {
     returnBuffers?: boolean;
 }

-export enum PubSubSubscribeCommands {
-    SUBSCRIBE = 'SUBSCRIBE',
-    PSUBSCRIBE = 'PSUBSCRIBE'
-}
-
-export enum PubSubUnsubscribeCommands {
-    UNSUBSCRIBE = 'UNSUBSCRIBE',
-    PUNSUBSCRIBE = 'PUNSUBSCRIBE'
-}
-
-export type PubSubListener<
-    RETURN_BUFFERS extends boolean = false,
-    T = RETURN_BUFFERS extends true ? Buffer : string
-> = (message: T, channel: T) => unknown;
-
-interface PubSubListeners {
-    buffers: Set<PubSubListener<true>>;
-    strings: Set<PubSubListener<false>>;
-}
-
-type PubSubListenersMap = Map<string, PubSubListeners>;
+const PONG = Buffer.from('pong');
+
+export type OnShardedChannelMoved = (channel: string, listeners: ChannelListeners) => void;

 export default class RedisCommandsQueue {
     static #flushQueue<T extends CommandWaitingForReply>(queue: LinkedList<T>, err: Error): void {

@@ -57,65 +39,52 @@ export default class RedisCommandsQueue {
         }
     }

-    static #emitPubSubMessage(listenersMap: PubSubListenersMap, message: Buffer, channel: Buffer, pattern?: Buffer): void {
-        const keyString = (pattern ?? channel).toString(),
-            listeners = listenersMap.get(keyString);
-
-        if (!listeners) return;
-
-        for (const listener of listeners.buffers) {
-            listener(message, channel);
-        }
-
-        if (!listeners.strings.size) return;
-
-        const channelString = pattern ? channel.toString() : keyString,
-            messageString = channelString === '__redis__:invalidate' ?
-                // https://github.com/redis/redis/pull/7469
-                // https://github.com/redis/redis/issues/7463
-                (message === null ? null : (message as any as Array<Buffer>).map(x => x.toString())) as any :
-                message.toString();
-        for (const listener of listeners.strings) {
-            listener(messageString, channelString);
-        }
-    }
-
     readonly #maxLength: number | null | undefined;
     readonly #waitingToBeSent = new LinkedList<CommandWaitingToBeSent>();
     readonly #waitingForReply = new LinkedList<CommandWaitingForReply>();
+    readonly #onShardedChannelMoved: OnShardedChannelMoved;

-    readonly #pubSubState = {
-        isActive: false,
-        subscribing: 0,
-        subscribed: 0,
-        unsubscribing: 0,
-        listeners: {
-            channels: new Map(),
-            patterns: new Map()
-        }
-    };
+    readonly #pubSub = new PubSub();

-    static readonly #PUB_SUB_MESSAGES = {
-        message: Buffer.from('message'),
-        pMessage: Buffer.from('pmessage'),
-        subscribe: Buffer.from('subscribe'),
-        pSubscribe: Buffer.from('psubscribe'),
-        unsubscribe: Buffer.from('unsubscribe'),
-        pUnsubscribe: Buffer.from('punsubscribe')
-    };
+    get isPubSubActive() {
+        return this.#pubSub.isActive;
+    }

     #chainInExecution: symbol | undefined;

     #decoder = new RESP2Decoder({
         returnStringsAsBuffers: () => {
             return !!this.#waitingForReply.head?.value.returnBuffers ||
-                this.#pubSubState.isActive;
+                this.#pubSub.isActive;
         },
         onReply: reply => {
-            if (this.#handlePubSubReply(reply)) {
-                return;
-            } else if (!this.#waitingForReply.length) {
-                throw new Error('Got an unexpected reply from Redis');
+            if (this.#pubSub.isActive && Array.isArray(reply)) {
+                if (this.#pubSub.handleMessageReply(reply as Array<Buffer>)) return;
+
+                const isShardedUnsubscribe = PubSub.isShardedUnsubscribe(reply as Array<Buffer>);
+                if (isShardedUnsubscribe && !this.#waitingForReply.length) {
+                    const channel = (reply[1] as Buffer).toString();
+                    this.#onShardedChannelMoved(
+                        channel,
+                        this.#pubSub.removeShardedListeners(channel)
+                    );
+                    return;
+                } else if (isShardedUnsubscribe || PubSub.isStatusReply(reply as Array<Buffer>)) {
+                    const head = this.#waitingForReply.head!.value;
+                    if (
+                        (Number.isNaN(head.channelsCounter!) && reply[2] === 0) ||
+                        --head.channelsCounter! === 0
+                    ) {
+                        this.#waitingForReply.shift()!.resolve();
+                    }
+                    return;
+                }
+
+                if (PONG.equals(reply[0] as Buffer)) {
+                    const { resolve, returnBuffers } = this.#waitingForReply.shift()!,
+                        buffer = ((reply[1] as Buffer).length === 0 ? reply[0] : reply[1]) as Buffer;
+                    resolve(returnBuffers ? buffer : buffer.toString());
+                    return;
+                }
             }

             const { resolve, reject } = this.#waitingForReply.shift()!;

@@ -127,14 +96,16 @@ export default class RedisCommandsQueue {
         }
     });

-    constructor(maxLength: number | null | undefined) {
+    constructor(
+        maxLength: number | null | undefined,
+        onShardedChannelMoved: OnShardedChannelMoved
+    ) {
         this.#maxLength = maxLength;
+        this.#onShardedChannelMoved = onShardedChannelMoved;
     }

     addCommand<T = RedisCommandRawReply>(args: RedisCommandArguments, options?: QueueCommandOptions): Promise<T> {
-        if (this.#pubSubState.isActive && !options?.ignorePubSubMode) {
-            return Promise.reject(new Error('Cannot send commands in PubSub mode'));
-        } else if (this.#maxLength && this.#waitingToBeSent.length + this.#waitingForReply.length >= this.#maxLength) {
+        if (this.#maxLength && this.#waitingToBeSent.length + this.#waitingForReply.length >= this.#maxLength) {
             return Promise.reject(new Error('The queue is full'));
         } else if (options?.signal?.aborted) {
             return Promise.reject(new AbortError());

@@ -173,158 +144,76 @@ export default class RedisCommandsQueue {
     }

     subscribe<T extends boolean>(
-        command: PubSubSubscribeCommands,
-        channels: RedisCommandArgument | Array<RedisCommandArgument>,
+        type: PubSubType,
+        channels: string | Array<string>,
         listener: PubSubListener<T>,
         returnBuffers?: T
-    ): Promise<void> {
-        const channelsToSubscribe: Array<RedisCommandArgument> = [],
-            listenersMap = command === PubSubSubscribeCommands.SUBSCRIBE ?
-                this.#pubSubState.listeners.channels :
-                this.#pubSubState.listeners.patterns;
-        for (const channel of (Array.isArray(channels) ? channels : [channels])) {
-            const channelString = typeof channel === 'string' ? channel : channel.toString();
-            let listeners = listenersMap.get(channelString);
-            if (!listeners) {
-                listeners = {
-                    buffers: new Set(),
-                    strings: new Set()
-                };
-                listenersMap.set(channelString, listeners);
-                channelsToSubscribe.push(channel);
-            }
-
-            // https://github.com/microsoft/TypeScript/issues/23132
-            (returnBuffers ? listeners.buffers : listeners.strings).add(listener as any);
-        }
-
-        if (!channelsToSubscribe.length) {
-            return Promise.resolve();
-        }
-
-        return this.#pushPubSubCommand(command, channelsToSubscribe);
+    ) {
+        return this.#pushPubSubCommand(
+            this.#pubSub.subscribe(type, channels, listener, returnBuffers)
+        );
     }

     unsubscribe<T extends boolean>(
-        command: PubSubUnsubscribeCommands,
+        type: PubSubType,
         channels?: string | Array<string>,
         listener?: PubSubListener<T>,
         returnBuffers?: T
-    ): Promise<void> {
-        const listeners = command === PubSubUnsubscribeCommands.UNSUBSCRIBE ?
-            this.#pubSubState.listeners.channels :
-            this.#pubSubState.listeners.patterns;
-
-        if (!channels) {
-            const size = listeners.size;
-            listeners.clear();
-            return this.#pushPubSubCommand(command, size);
-        }
-
-        const channelsToUnsubscribe = [];
-        for (const channel of (Array.isArray(channels) ? channels : [channels])) {
-            const sets = listeners.get(channel);
-            if (!sets) continue;
-
-            let shouldUnsubscribe;
-            if (listener) {
-                // https://github.com/microsoft/TypeScript/issues/23132
-                (returnBuffers ? sets.buffers : sets.strings).delete(listener as any);
-                shouldUnsubscribe = !sets.buffers.size && !sets.strings.size;
-            } else {
-                shouldUnsubscribe = true;
-            }
-
-            if (shouldUnsubscribe) {
-                channelsToUnsubscribe.push(channel);
-                listeners.delete(channel);
-            }
-        }
-
-        if (!channelsToUnsubscribe.length) {
-            return Promise.resolve();
-        }
-
-        return this.#pushPubSubCommand(command, channelsToUnsubscribe);
+    ) {
+        return this.#pushPubSubCommand(
+            this.#pubSub.unsubscribe(type, channels, listener, returnBuffers)
+        );
     }

-    #pushPubSubCommand(command: PubSubSubscribeCommands | PubSubUnsubscribeCommands, channels: number | Array<RedisCommandArgument>): Promise<void> {
-        return new Promise((resolve, reject) => {
-            const isSubscribe = command === PubSubSubscribeCommands.SUBSCRIBE || command === PubSubSubscribeCommands.PSUBSCRIBE,
-                inProgressKey = isSubscribe ? 'subscribing' : 'unsubscribing',
-                commandArgs: Array<RedisCommandArgument> = [command];
-
-            let channelsCounter: number;
-            if (typeof channels === 'number') { // unsubscribe only
-                channelsCounter = channels;
-            } else {
-                commandArgs.push(...channels);
-                channelsCounter = channels.length;
-            }
-
-            this.#pubSubState.isActive = true;
-            this.#pubSubState[inProgressKey] += channelsCounter;
-
+    resubscribe(): Promise<any> | undefined {
+        const commands = this.#pubSub.resubscribe();
+        if (!commands.length) return;
+
+        return Promise.all(
+            commands.map(command => this.#pushPubSubCommand(command))
+        );
+    }
+
+    extendPubSubChannelListeners(
+        type: PubSubType,
+        channel: string,
+        listeners: ChannelListeners
+    ) {
+        return this.#pushPubSubCommand(
+            this.#pubSub.extendChannelListeners(type, channel, listeners)
+        );
+    }
+
+    extendPubSubListeners(type: PubSubType, listeners: PubSubTypeListeners) {
+        return this.#pushPubSubCommand(
+            this.#pubSub.extendTypeListeners(type, listeners)
+        );
+    }
+
+    getPubSubListeners(type: PubSubType) {
+        return this.#pubSub.getTypeListeners(type);
+    }
+
+    #pushPubSubCommand(command: PubSubCommand) {
+        if (command === undefined) return;
+
+        return new Promise<void>((resolve, reject) => {
             this.#waitingToBeSent.push({
-                args: commandArgs,
-                channelsCounter,
+                args: command.args,
+                channelsCounter: command.channelsCounter,
                 returnBuffers: true,
                 resolve: () => {
-                    this.#pubSubState[inProgressKey] -= channelsCounter;
-                    this.#pubSubState.subscribed += channelsCounter * (isSubscribe ? 1 : -1);
-                    this.#updatePubSubActiveState();
+                    command.resolve();
                     resolve();
                 },
                 reject: err => {
-                    this.#pubSubState[inProgressKey] -= channelsCounter * (isSubscribe ? 1 : -1);
-                    this.#updatePubSubActiveState();
+                    command.reject?.();
                     reject(err);
                 }
             });
         });
     }

-    #updatePubSubActiveState(): void {
-        if (
-            !this.#pubSubState.subscribed &&
-            !this.#pubSubState.subscribing &&
-            !this.#pubSubState.subscribed
-        ) {
-            this.#pubSubState.isActive = false;
-        }
-    }
-
-    resubscribe(): Promise<any> | undefined {
-        this.#pubSubState.subscribed = 0;
-        this.#pubSubState.subscribing = 0;
-        this.#pubSubState.unsubscribing = 0;
-
-        const promises = [],
-            { channels, patterns } = this.#pubSubState.listeners;
-
-        if (channels.size) {
-            promises.push(
-                this.#pushPubSubCommand(
-                    PubSubSubscribeCommands.SUBSCRIBE,
-                    [...channels.keys()]
-                )
-            );
-        }
-
-        if (patterns.size) {
-            promises.push(
-                this.#pushPubSubCommand(
-                    PubSubSubscribeCommands.PSUBSCRIBE,
-                    [...patterns.keys()]
-                )
-            );
-        }
-
-        if (promises.length) {
-            return Promise.all(promises);
-        }
-    }
-
     getCommandToSend(): RedisCommandArguments | undefined {
         const toSend = this.#waitingToBeSent.shift();
         if (!toSend) return;

@@ -351,39 +240,9 @@ export default class RedisCommandsQueue {
         this.#decoder.write(chunk);
     }

-    #handlePubSubReply(reply: any): boolean {
-        if (!this.#pubSubState.isActive || !Array.isArray(reply)) return false;
-
-        if (RedisCommandsQueue.#PUB_SUB_MESSAGES.message.equals(reply[0])) {
-            RedisCommandsQueue.#emitPubSubMessage(
-                this.#pubSubState.listeners.channels,
-                reply[2],
-                reply[1]
-            );
-        } else if (RedisCommandsQueue.#PUB_SUB_MESSAGES.pMessage.equals(reply[0])) {
-            RedisCommandsQueue.#emitPubSubMessage(
-                this.#pubSubState.listeners.patterns,
-                reply[3],
-                reply[2],
-                reply[1]
-            );
-        } else if (
-            RedisCommandsQueue.#PUB_SUB_MESSAGES.subscribe.equals(reply[0]) ||
-            RedisCommandsQueue.#PUB_SUB_MESSAGES.pSubscribe.equals(reply[0]) ||
-            RedisCommandsQueue.#PUB_SUB_MESSAGES.unsubscribe.equals(reply[0]) ||
-            RedisCommandsQueue.#PUB_SUB_MESSAGES.pUnsubscribe.equals(reply[0])
-        ) {
-            if (--this.#waitingForReply.head!.value.channelsCounter! === 0) {
-                this.#waitingForReply.shift()!.resolve();
-            }
-        }
-
-        return true;
-    }
-
     flushWaitingForReply(err: Error): void {
         this.#decoder.reset();
-        this.#pubSubState.isActive = false;
+        this.#pubSub.reset();
         RedisCommandsQueue.#flushQueue(this.#waitingForReply, err);

         if (!this.#chainInExecution) return;

@@ -396,6 +255,8 @@ export default class RedisCommandsQueue {
     }

     flushAll(err: Error): void {
+        this.#decoder.reset();
+        this.#pubSub.reset();
         RedisCommandsQueue.#flushQueue(this.#waitingForReply, err);
         RedisCommandsQueue.#flushQueue(this.#waitingToBeSent, err);
     }
@@ -98,6 +98,7 @@ import * as PING from '../commands/PING';
 import * as PUBSUB_CHANNELS from '../commands/PUBSUB_CHANNELS';
 import * as PUBSUB_NUMPAT from '../commands/PUBSUB_NUMPAT';
 import * as PUBSUB_NUMSUB from '../commands/PUBSUB_NUMSUB';
+import * as PUBSUB_SHARDCHANNELS from '../commands/PUBSUB_SHARDCHANNELS';
 import * as RANDOMKEY from '../commands/RANDOMKEY';
 import * as READONLY from '../commands/READONLY';
 import * as READWRITE from '../commands/READWRITE';

@@ -317,6 +318,8 @@ export default {
     pubSubNumPat: PUBSUB_NUMPAT,
     PUBSUB_NUMSUB,
     pubSubNumSub: PUBSUB_NUMSUB,
+    PUBSUB_SHARDCHANNELS,
+    pubSubShardChannels: PUBSUB_SHARDCHANNELS,
     RANDOMKEY,
     randomKey: RANDOMKEY,
     READONLY,
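The command registration above exposes a camel-cased `pubSubShardChannels` method on the client. A hedged sketch of how it could be called; the optional pattern argument is an assumption based on the underlying `PUBSUB SHARDCHANNELS [pattern]` command, not something shown in this diff:

```typescript
import { createClient } from 'redis';

const client = createClient();
client.on('error', err => console.error(err));
await client.connect();

// Lists the sharded channels that currently have at least one subscriber.
const channels = await client.pubSubShardChannels();
console.log(channels);
```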
@@ -2,14 +2,20 @@ import { strict as assert } from 'assert';
 import testUtils, { GLOBAL, waitTillBeenCalled } from '../test-utils';
 import RedisClient, { RedisClientType } from '.';
 import { RedisClientMultiCommandType } from './multi-command';
-import { RedisCommandArguments, RedisCommandRawReply, RedisModules, RedisFunctions, RedisScripts } from '../commands';
+import { RedisCommandRawReply, RedisModules, RedisFunctions, RedisScripts } from '../commands';
-import { AbortError, ClientClosedError, ClientOfflineError, ConnectionTimeoutError, DisconnectsClientError, SocketClosedUnexpectedlyError, WatchError } from '../errors';
+import { AbortError, ClientClosedError, ClientOfflineError, ConnectionTimeoutError, DisconnectsClientError, ErrorReply, SocketClosedUnexpectedlyError, WatchError } from '../errors';
 import { defineScript } from '../lua-script';
 import { spy } from 'sinon';
 import { once } from 'events';
 import { ClientKillFilters } from '../commands/CLIENT_KILL';
+import { ClusterSlotStates } from '../commands/CLUSTER_SETSLOT';
 import { promisify } from 'util';

+// We need to use 'require', because it's not possible with Typescript to import
+// functions that are exported as `module.exports = function`, without esModuleInterop
+// set to true.
+const calculateSlot = require('cluster-key-slot');
+
 export const SQUARE_SCRIPT = defineScript({
     SCRIPT: 'return ARGV[1] * ARGV[1];',
     NUMBER_OF_KEYS: 0,

@@ -817,7 +823,34 @@ describe('Client', () => {
         }
     }, GLOBAL.SERVERS.OPEN);

-    testUtils.testWithClient('should be able to quit in PubSub mode', async client => {
+    testUtils.testWithClient('should be able to PING in PubSub mode', async client => {
+        await client.connect();
+
+        try {
+            await client.subscribe('channel', () => {
+                // noop
+            });
+
+            const [string, buffer, customString, customBuffer] = await Promise.all([
+                client.ping(),
+                client.ping(client.commandOptions({ returnBuffers: true })),
+                client.ping('custom'),
+                client.ping(client.commandOptions({ returnBuffers: true }), 'custom')
+            ]);
+
+            assert.equal(string, 'pong');
+            assert.deepEqual(buffer, Buffer.from('pong'));
+            assert.equal(customString, 'custom');
+            assert.deepEqual(customBuffer, Buffer.from('custom'));
+        } finally {
+            await client.disconnect();
+        }
+    }, {
+        ...GLOBAL.SERVERS.OPEN,
+        disableClientSetup: true
+    });
+
+    testUtils.testWithClient('should be able to QUIT in PubSub mode', async client => {
         await client.subscribe('channel', () => {
             // noop
         });

@@ -826,6 +859,122 @@ describe('Client', () => {

         assert.equal(client.isOpen, false);
     }, GLOBAL.SERVERS.OPEN);

+    testUtils.testWithClient('should reject GET in PubSub mode', async client => {
+        await client.connect();
+
+        try {
+            await client.subscribe('channel', () => {
+                // noop
+            });
+
+            await assert.rejects(client.get('key'), ErrorReply);
+        } finally {
+            await client.disconnect();
+        }
+    }, {
+        ...GLOBAL.SERVERS.OPEN,
+        disableClientSetup: true
+    });
+
+    describe('sharded PubSub', () => {
+        testUtils.isVersionGreaterThanHook([7]);
+
+        testUtils.testWithClient('should be able to receive messages', async publisher => {
+            const subscriber = publisher.duplicate();
+
+            await subscriber.connect();
+
+            try {
+                const listener = spy();
+                await subscriber.sSubscribe('channel', listener);
+
+                await Promise.all([
+                    waitTillBeenCalled(listener),
+                    publisher.sPublish('channel', 'message')
+                ]);
+
+                assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+
+                await subscriber.sUnsubscribe();
+
+                // should be able to send commands
+                await assert.doesNotReject(subscriber.ping());
+            } finally {
+                await subscriber.disconnect();
+            }
+        }, {
+            ...GLOBAL.SERVERS.OPEN
+        });
+
+        testUtils.testWithClient('should emit sharded-channel-moved event', async publisher => {
+            await publisher.clusterAddSlotsRange({ start: 0, end: 16383 });
+
+            const subscriber = publisher.duplicate();
+
+            await subscriber.connect();
+
+            try {
+                await subscriber.sSubscribe('channel', () => {});
+
+                await Promise.all([
+                    publisher.clusterSetSlot(
+                        calculateSlot('channel'),
+                        ClusterSlotStates.NODE,
+                        await publisher.clusterMyId()
+                    ),
+                    once(subscriber, 'sharded-channel-moved')
+                ]);
+
+                assert.equal(
+                    await subscriber.ping(),
+                    'PONG'
+                );
+            } finally {
+                await subscriber.disconnect();
+            }
+        }, {
+            serverArguments: ['--cluster-enabled', 'yes']
+        });
+    });
+
+    testUtils.testWithClient('should handle errors in SUBSCRIBE', async publisher => {
+        const subscriber = publisher.duplicate();
+
+        await subscriber.connect();
+
+        try {
+            const listener1 = spy();
+            await subscriber.subscribe('1', listener1);
+
+            await publisher.aclSetUser('default', 'resetchannels');
+
+            const listener2 = spy();
+            await assert.rejects(subscriber.subscribe('2', listener2));
+
+            await Promise.all([
+                waitTillBeenCalled(listener1),
+                publisher.aclSetUser('default', 'allchannels'),
+                publisher.publish('1', 'message'),
+            ]);
+            assert.ok(listener1.calledOnceWithExactly('message', '1'));
+
+            await subscriber.subscribe('2', listener2);
+
+            await Promise.all([
+                waitTillBeenCalled(listener2),
+                publisher.publish('2', 'message'),
+            ]);
+            assert.ok(listener2.calledOnceWithExactly('message', '2'));
+        } finally {
+            await subscriber.disconnect();
+        }
+    }, {
+        // this test changes ACL rules, running in isolated server
+        serverArguments: [],
+        minimumDockerVersion: [6, 2] // ACL PubSub rules were added in Redis 6.2
+    });
 });

 testUtils.testWithClient('ConnectionTimeoutError', async client => {
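The tests above pin down the user-visible behavior change from #2345: `PING` (and `QUIT`) are now allowed while a connection is in Pub/Sub mode, while other commands such as `GET` are rejected by the server. A sketch of what that looks like from calling code (my example, not taken from the test file):

```typescript
import { createClient } from 'redis';

const subscriber = createClient();
subscriber.on('error', err => console.error(err));
await subscriber.connect();

await subscriber.subscribe('channel', message => console.log(message));

// Allowed while subscribed since this release; handy as a keep-alive.
console.log(await subscriber.ping()); // 'pong' (per the test above)

// Still rejected: regular commands need a non-subscribed connection.
try {
  await subscriber.get('key');
} catch (err) {
  console.error('GET rejected while subscribed:', err);
}
```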
@@ -1,7 +1,7 @@
|
|||||||
import COMMANDS from './commands';
|
import COMMANDS from './commands';
|
||||||
import { RedisCommand, RedisCommandArguments, RedisCommandRawReply, RedisCommandReply, RedisFunctions, RedisModules, RedisExtensions, RedisScript, RedisScripts, RedisCommandSignature, ConvertArgumentType, RedisFunction, ExcludeMappedString, RedisCommands } from '../commands';
|
import { RedisCommand, RedisCommandArguments, RedisCommandRawReply, RedisCommandReply, RedisFunctions, RedisModules, RedisExtensions, RedisScript, RedisScripts, RedisCommandSignature, ConvertArgumentType, RedisFunction, ExcludeMappedString, RedisCommands } from '../commands';
|
||||||
import RedisSocket, { RedisSocketOptions, RedisTlsSocketOptions } from './socket';
|
import RedisSocket, { RedisSocketOptions, RedisTlsSocketOptions } from './socket';
|
||||||
import RedisCommandsQueue, { PubSubListener, PubSubSubscribeCommands, PubSubUnsubscribeCommands, QueueCommandOptions } from './commands-queue';
|
import RedisCommandsQueue, { QueueCommandOptions } from './commands-queue';
|
||||||
import RedisClientMultiCommand, { RedisClientMultiCommandType } from './multi-command';
|
import RedisClientMultiCommand, { RedisClientMultiCommandType } from './multi-command';
|
||||||
import { RedisMultiQueuedCommand } from '../multi-command';
|
import { RedisMultiQueuedCommand } from '../multi-command';
|
||||||
import { EventEmitter } from 'events';
|
import { EventEmitter } from 'events';
|
||||||
@@ -14,23 +14,57 @@ import { Pool, Options as PoolOptions, createPool } from 'generic-pool';
|
|||||||
import { ClientClosedError, ClientOfflineError, DisconnectsClientError } from '../errors';
|
import { ClientClosedError, ClientOfflineError, DisconnectsClientError } from '../errors';
|
||||||
import { URL } from 'url';
|
import { URL } from 'url';
|
||||||
import { TcpSocketConnectOpts } from 'net';
|
import { TcpSocketConnectOpts } from 'net';
|
||||||
|
import { PubSubType, PubSubListener, PubSubTypeListeners, ChannelListeners } from './pub-sub';
|
||||||
|
|
||||||
export interface RedisClientOptions<
|
export interface RedisClientOptions<
|
||||||
M extends RedisModules = RedisModules,
|
M extends RedisModules = RedisModules,
|
||||||
F extends RedisFunctions = RedisFunctions,
|
F extends RedisFunctions = RedisFunctions,
|
||||||
S extends RedisScripts = RedisScripts
|
S extends RedisScripts = RedisScripts
|
||||||
> extends RedisExtensions<M, F, S> {
|
> extends RedisExtensions<M, F, S> {
|
||||||
|
/**
|
||||||
|
* `redis[s]://[[username][:password]@][host][:port][/db-number]`
|
||||||
|
* See [`redis`](https://www.iana.org/assignments/uri-schemes/prov/redis) and [`rediss`](https://www.iana.org/assignments/uri-schemes/prov/rediss) IANA registration for more details
|
||||||
|
*/
|
||||||
url?: string;
|
url?: string;
|
||||||
|
/**
|
||||||
|
* Socket connection properties
|
||||||
|
*/
|
||||||
socket?: RedisSocketOptions;
|
socket?: RedisSocketOptions;
|
||||||
|
/**
|
||||||
|
* ACL username ([see ACL guide](https://redis.io/topics/acl))
|
||||||
|
*/
|
||||||
username?: string;
|
username?: string;
|
||||||
|
/**
|
||||||
|
* ACL password or the old "--requirepass" password
|
||||||
|
*/
|
||||||
password?: string;
|
password?: string;
|
||||||
|
/**
|
||||||
|
* Client name ([see `CLIENT SETNAME`](https://redis.io/commands/client-setname))
|
||||||
|
*/
|
||||||
name?: string;
|
name?: string;
|
||||||
|
/**
|
||||||
|
* Redis database number (see [`SELECT`](https://redis.io/commands/select) command)
|
||||||
|
*/
|
||||||
database?: number;
|
database?: number;
|
||||||
|
/**
|
||||||
|
* Maximum length of the client's internal command queue
|
||||||
|
*/
|
||||||
commandsQueueMaxLength?: number;
|
commandsQueueMaxLength?: number;
|
||||||
|
/**
|
||||||
|
* When `true`, commands are rejected when the client is reconnecting.
|
||||||
|
* When `false`, commands are queued for execution after reconnection.
|
||||||
|
*/
|
||||||
disableOfflineQueue?: boolean;
|
disableOfflineQueue?: boolean;
|
||||||
|
/**
|
||||||
|
* Connect in [`READONLY`](https://redis.io/commands/readonly) mode
|
||||||
|
*/
|
||||||
readonly?: boolean;
|
readonly?: boolean;
|
||||||
legacyMode?: boolean;
|
legacyMode?: boolean;
|
||||||
isolationPoolOptions?: PoolOptions;
|
isolationPoolOptions?: PoolOptions;
|
||||||
|
/**
|
||||||
|
* Send `PING` command at interval (in ms).
|
||||||
|
* Useful with Redis deployments that do not use TCP Keep-Alive.
|
||||||
|
*/
|
||||||
pingInterval?: number;
|
pingInterval?: number;
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -171,6 +205,10 @@ export default class RedisClient<
|
|||||||
return this.#socket.isReady;
|
return this.#socket.isReady;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
get isPubSubActive() {
|
||||||
|
return this.#queue.isPubSubActive;
|
||||||
|
}
|
||||||
|
|
||||||
get v4(): Record<string, any> {
|
get v4(): Record<string, any> {
|
||||||
if (!this.#options?.legacyMode) {
|
if (!this.#options?.legacyMode) {
|
||||||
throw new Error('the client is not in "legacy mode"');
|
throw new Error('the client is not in "legacy mode"');
|
||||||
@@ -215,7 +253,10 @@ export default class RedisClient<
|
|||||||
}
|
}
|
||||||
|
|
||||||
#initiateQueue(): RedisCommandsQueue {
|
#initiateQueue(): RedisCommandsQueue {
|
||||||
return new RedisCommandsQueue(this.#options?.commandsQueueMaxLength);
|
return new RedisCommandsQueue(
|
||||||
|
this.#options?.commandsQueueMaxLength,
|
||||||
|
(channel, listeners) => this.emit('sharded-channel-moved', channel, listeners)
|
||||||
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
#initiateSocket(): RedisSocket {
|
#initiateSocket(): RedisSocket {
|
||||||
@@ -377,8 +418,8 @@ export default class RedisClient<
|
|||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
|
||||||
async connect(): Promise<void> {
|
connect(): Promise<void> {
|
||||||
await this.#socket.connect();
|
return this.#socket.connect();
|
||||||
}
|
}
|
||||||
|
|
||||||
async commandsExecutor<C extends RedisCommand>(
|
async commandsExecutor<C extends RedisCommand>(
|
||||||
@@ -500,18 +541,9 @@ export default class RedisClient<
|
|||||||
|
|
||||||
select = this.SELECT;
|
select = this.SELECT;
|
||||||
|
|
||||||
#subscribe<T extends boolean>(
|
#pubSubCommand(promise: Promise<void> | undefined) {
|
||||||
command: PubSubSubscribeCommands,
|
if (promise === undefined) return Promise.resolve();
|
||||||
channels: string | Array<string>,
|
|
||||||
listener: PubSubListener<T>,
|
|
||||||
bufferMode?: T
|
|
||||||
): Promise<void> {
|
|
||||||
const promise = this.#queue.subscribe(
|
|
||||||
command,
|
|
||||||
channels,
|
|
||||||
listener,
|
|
||||||
bufferMode
|
|
||||||
);
|
|
||||||
this.#tick();
|
this.#tick();
|
||||||
return promise;
|
return promise;
|
||||||
}
|
}
|
||||||
@@ -521,77 +553,127 @@ export default class RedisClient<
|
|||||||
listener: PubSubListener<T>,
|
listener: PubSubListener<T>,
|
||||||
bufferMode?: T
|
bufferMode?: T
|
||||||
): Promise<void> {
|
): Promise<void> {
|
||||||
return this.#subscribe(
|
return this.#pubSubCommand(
|
||||||
PubSubSubscribeCommands.SUBSCRIBE,
|
this.#queue.subscribe(
|
||||||
channels,
|
PubSubType.CHANNELS,
|
||||||
listener,
|
channels,
|
||||||
bufferMode
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
subscribe = this.SUBSCRIBE;
|
subscribe = this.SUBSCRIBE;
|
||||||
|
|
||||||
PSUBSCRIBE<T extends boolean = false>(
|
|
||||||
patterns: string | Array<string>,
|
|
||||||
listener: PubSubListener<T>,
|
|
||||||
bufferMode?: T
|
|
||||||
): Promise<void> {
|
|
||||||
return this.#subscribe(
|
|
||||||
PubSubSubscribeCommands.PSUBSCRIBE,
|
|
||||||
patterns,
|
|
||||||
listener,
|
|
||||||
bufferMode
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
pSubscribe = this.PSUBSCRIBE;
|
|
||||||
|
|
||||||
#unsubscribe<T extends boolean>(
|
|
||||||
command: PubSubUnsubscribeCommands,
|
|
||||||
channels?: string | Array<string>,
|
|
||||||
listener?: PubSubListener<T>,
|
|
||||||
bufferMode?: T
|
|
||||||
): Promise<void> {
|
|
||||||
const promise = this.#queue.unsubscribe(command, channels, listener, bufferMode);
|
|
||||||
this.#tick();
|
|
||||||
return promise;
|
|
||||||
}
|
|
||||||
|
|
||||||
UNSUBSCRIBE<T extends boolean = false>(
|
UNSUBSCRIBE<T extends boolean = false>(
|
||||||
channels?: string | Array<string>,
|
channels?: string | Array<string>,
|
||||||
listener?: PubSubListener<T>,
|
listener?: PubSubListener<T>,
|
||||||
bufferMode?: T
|
bufferMode?: T
|
||||||
): Promise<void> {
|
): Promise<void> {
|
||||||
return this.#unsubscribe(
|
return this.#pubSubCommand(
|
||||||
PubSubUnsubscribeCommands.UNSUBSCRIBE,
|
this.#queue.unsubscribe(
|
||||||
channels,
|
PubSubType.CHANNELS,
|
||||||
listener,
|
channels,
|
||||||
bufferMode
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
unsubscribe = this.UNSUBSCRIBE;
|
unsubscribe = this.UNSUBSCRIBE;
|
||||||
|
|
||||||
|
PSUBSCRIBE<T extends boolean = false>(
|
||||||
|
patterns: string | Array<string>,
|
||||||
|
listener: PubSubListener<T>,
|
||||||
|
bufferMode?: T
|
||||||
|
): Promise<void> {
|
||||||
|
return this.#pubSubCommand(
|
||||||
|
this.#queue.subscribe(
|
||||||
|
PubSubType.PATTERNS,
|
||||||
|
patterns,
|
||||||
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
pSubscribe = this.PSUBSCRIBE;
|
||||||
|
|
||||||
PUNSUBSCRIBE<T extends boolean = false>(
|
PUNSUBSCRIBE<T extends boolean = false>(
|
||||||
patterns?: string | Array<string>,
|
patterns?: string | Array<string>,
|
||||||
listener?: PubSubListener<T>,
|
listener?: PubSubListener<T>,
|
||||||
bufferMode?: T
|
bufferMode?: T
|
||||||
): Promise<void> {
|
): Promise<void> {
|
||||||
return this.#unsubscribe(
|
return this.#pubSubCommand(
|
||||||
PubSubUnsubscribeCommands.PUNSUBSCRIBE,
|
this.#queue.unsubscribe(
|
||||||
patterns,
|
PubSubType.PATTERNS,
|
||||||
listener,
|
patterns,
|
||||||
bufferMode
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
);
|
);
|
||||||
}
|
}
|
||||||
|
|
||||||
pUnsubscribe = this.PUNSUBSCRIBE;
|
pUnsubscribe = this.PUNSUBSCRIBE;
|
||||||
|
|
||||||
|
SSUBSCRIBE<T extends boolean = false>(
|
||||||
|
channels: string | Array<string>,
|
||||||
|
listener: PubSubListener<T>,
|
||||||
|
bufferMode?: T
|
||||||
|
): Promise<void> {
|
||||||
|
return this.#pubSubCommand(
|
||||||
|
this.#queue.subscribe(
|
||||||
|
PubSubType.SHARDED,
|
||||||
|
channels,
|
||||||
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
sSubscribe = this.SSUBSCRIBE;
|
||||||
|
|
||||||
|
SUNSUBSCRIBE<T extends boolean = false>(
|
||||||
|
channels?: string | Array<string>,
|
||||||
|
listener?: PubSubListener<T>,
|
||||||
|
bufferMode?: T
|
||||||
|
): Promise<void> {
|
||||||
|
return this.#pubSubCommand(
|
||||||
|
this.#queue.unsubscribe(
|
||||||
|
PubSubType.SHARDED,
|
||||||
|
channels,
|
||||||
|
listener,
|
||||||
|
bufferMode
|
||||||
|
)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
sUnsubscribe = this.SUNSUBSCRIBE;
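The new `SSUBSCRIBE`/`SUNSUBSCRIBE` methods mirror the channel and pattern variants above. A usage sketch, assuming an already-connected client built from this branch; the channel name is made up:

```typescript
await client.sSubscribe('shard-channel', (message, channel) => {
  // message and channel arrive as strings, exactly like the non-sharded listeners
  console.log(`"${message}" on sharded channel "${channel}"`);
});

// ...and later, stop listening:
await client.sUnsubscribe('shard-channel');
```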
|
||||||
|
|
||||||
|
getPubSubListeners(type: PubSubType) {
|
||||||
|
return this.#queue.getPubSubListeners(type);
|
||||||
|
}
|
||||||
|
|
||||||
|
extendPubSubChannelListeners(
|
||||||
|
type: PubSubType,
|
||||||
|
channel: string,
|
||||||
|
listeners: ChannelListeners
|
||||||
|
) {
|
||||||
|
return this.#pubSubCommand(
|
||||||
|
this.#queue.extendPubSubChannelListeners(type, channel, listeners)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
extendPubSubListeners(type: PubSubType, listeners: PubSubTypeListeners) {
|
||||||
|
return this.#pubSubCommand(
|
||||||
|
this.#queue.extendPubSubListeners(type, listeners)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
QUIT(): Promise<string> {
|
QUIT(): Promise<string> {
|
||||||
return this.#socket.quit(async () => {
|
return this.#socket.quit(async () => {
|
||||||
const quitPromise = this.#queue.addCommand<string>(['QUIT'], {
|
const quitPromise = this.#queue.addCommand<string>(['QUIT']);
|
||||||
ignorePubSubMode: true
|
|
||||||
});
|
|
||||||
this.#tick();
|
this.#tick();
|
||||||
const [reply] = await Promise.all([
|
const [reply] = await Promise.all([
|
||||||
quitPromise,
|
quitPromise,
|
||||||
|
packages/client/lib/client/pub-sub.spec.ts (new file, 151 lines)
@@ -0,0 +1,151 @@
|
|||||||
|
import { strict as assert } from 'assert';
|
||||||
|
import { PubSub, PubSubType } from './pub-sub';
|
||||||
|
|
||||||
|
describe('PubSub', () => {
|
||||||
|
const TYPE = PubSubType.CHANNELS,
|
||||||
|
CHANNEL = 'channel',
|
||||||
|
LISTENER = () => {};
|
||||||
|
|
||||||
|
describe('subscribe to new channel', () => {
|
||||||
|
function createAndSubscribe() {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
command = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
assert.ok(command);
|
||||||
|
assert.equal(command.channelsCounter, 1);
|
||||||
|
|
||||||
|
return {
|
||||||
|
pubSub,
|
||||||
|
command
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
it('resolve', () => {
|
||||||
|
const { pubSub, command } = createAndSubscribe();
|
||||||
|
|
||||||
|
command.resolve();
|
||||||
|
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('reject', () => {
|
||||||
|
const { pubSub, command } = createAndSubscribe();
|
||||||
|
|
||||||
|
assert.ok(command.reject);
|
||||||
|
command.reject();
|
||||||
|
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
it('subscribe to already subscribed channel', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
firstSubscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(firstSubscribe);
|
||||||
|
|
||||||
|
const secondSubscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(secondSubscribe);
|
||||||
|
|
||||||
|
firstSubscribe.resolve();
|
||||||
|
|
||||||
|
assert.equal(
|
||||||
|
pubSub.subscribe(TYPE, CHANNEL, LISTENER),
|
||||||
|
undefined
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('unsubscribe all', () => {
|
||||||
|
const pubSub = new PubSub();
|
||||||
|
|
||||||
|
const subscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(subscribe);
|
||||||
|
subscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
|
||||||
|
const unsubscribe = pubSub.unsubscribe(TYPE);
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
assert.ok(unsubscribe);
|
||||||
|
unsubscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('unsubscribe from channel', () => {
|
||||||
|
it('when not subscribed', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
unsubscribe = pubSub.unsubscribe(TYPE, CHANNEL);
|
||||||
|
assert.ok(unsubscribe);
|
||||||
|
unsubscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('when already subscribed', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
subscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(subscribe);
|
||||||
|
subscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
|
||||||
|
const unsubscribe = pubSub.unsubscribe(TYPE, CHANNEL);
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
assert.ok(unsubscribe);
|
||||||
|
unsubscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('unsubscribe from listener', () => {
|
||||||
|
it('when it\'s the only listener', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
subscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(subscribe);
|
||||||
|
subscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
|
||||||
|
const unsubscribe = pubSub.unsubscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(unsubscribe);
|
||||||
|
unsubscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('when there are more listeners', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
subscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(subscribe);
|
||||||
|
subscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
|
||||||
|
assert.equal(
|
||||||
|
pubSub.subscribe(TYPE, CHANNEL, () => {}),
|
||||||
|
undefined
|
||||||
|
);
|
||||||
|
|
||||||
|
assert.equal(
|
||||||
|
pubSub.unsubscribe(TYPE, CHANNEL, LISTENER),
|
||||||
|
undefined
|
||||||
|
);
|
||||||
|
});
|
||||||
|
|
||||||
|
describe('non-existing listener', () => {
|
||||||
|
it('on subscribed channel', () => {
|
||||||
|
const pubSub = new PubSub(),
|
||||||
|
subscribe = pubSub.subscribe(TYPE, CHANNEL, LISTENER);
|
||||||
|
assert.ok(subscribe);
|
||||||
|
subscribe.resolve();
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
|
||||||
|
assert.equal(
|
||||||
|
pubSub.unsubscribe(TYPE, CHANNEL, () => {}),
|
||||||
|
undefined
|
||||||
|
);
|
||||||
|
assert.equal(pubSub.isActive, true);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('on unsubscribed channel', () => {
|
||||||
|
const pubSub = new PubSub();
|
||||||
|
assert.ok(pubSub.unsubscribe(TYPE, CHANNEL, () => {}));
|
||||||
|
assert.equal(pubSub.isActive, false);
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
||||||
|
});
|
packages/client/lib/client/pub-sub.ts (new file, 408 lines)
@@ -0,0 +1,408 @@
|
|||||||
|
import { RedisCommandArgument } from "../commands";
|
||||||
|
|
||||||
|
export enum PubSubType {
|
||||||
|
CHANNELS = 'CHANNELS',
|
||||||
|
PATTERNS = 'PATTERNS',
|
||||||
|
SHARDED = 'SHARDED'
|
||||||
|
}
|
||||||
|
|
||||||
|
const COMMANDS = {
|
||||||
|
[PubSubType.CHANNELS]: {
|
||||||
|
subscribe: Buffer.from('subscribe'),
|
||||||
|
unsubscribe: Buffer.from('unsubscribe'),
|
||||||
|
message: Buffer.from('message')
|
||||||
|
},
|
||||||
|
[PubSubType.PATTERNS]: {
|
||||||
|
subscribe: Buffer.from('psubscribe'),
|
||||||
|
unsubscribe: Buffer.from('punsubscribe'),
|
||||||
|
message: Buffer.from('pmessage')
|
||||||
|
},
|
||||||
|
[PubSubType.SHARDED]: {
|
||||||
|
subscribe: Buffer.from('ssubscribe'),
|
||||||
|
unsubscribe: Buffer.from('sunsubscribe'),
|
||||||
|
message: Buffer.from('smessage')
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
export type PubSubListener<
|
||||||
|
RETURN_BUFFERS extends boolean = false
|
||||||
|
> = <T extends RETURN_BUFFERS extends true ? Buffer : string>(message: T, channel: T) => unknown;
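The `RETURN_BUFFERS` flag decides whether a listener receives `Buffer`s or strings. A small sketch of the two listener shapes this type implies (the bodies are illustrative only):

```typescript
import { PubSubListener } from './pub-sub';

// String mode (the default): message and channel are decoded for you.
const onString: PubSubListener<false> = (message: string, channel: string) => {
  console.log(`"${message}" on "${channel}"`);
};

// Buffer mode: both arguments stay as raw Buffers.
const onBuffer: PubSubListener<true> = (message: Buffer, channel: Buffer) => {
  console.log(message.byteLength, channel.toString());
};
```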
|
||||||
|
|
||||||
|
export interface ChannelListeners {
|
||||||
|
unsubscribing: boolean;
|
||||||
|
buffers: Set<PubSubListener<true>>;
|
||||||
|
strings: Set<PubSubListener<false>>;
|
||||||
|
}
|
||||||
|
|
||||||
|
export type PubSubTypeListeners = Map<string, ChannelListeners>;
|
||||||
|
|
||||||
|
type Listeners = Record<PubSubType, PubSubTypeListeners>;
|
||||||
|
|
||||||
|
export type PubSubCommand = ReturnType<
|
||||||
|
typeof PubSub.prototype.subscribe |
|
||||||
|
typeof PubSub.prototype.unsubscribe |
|
||||||
|
typeof PubSub.prototype.extendTypeListeners
|
||||||
|
>;
|
||||||
|
|
||||||
|
export class PubSub {
|
||||||
|
static isStatusReply(reply: Array<Buffer>): boolean {
|
||||||
|
return (
|
||||||
|
COMMANDS[PubSubType.CHANNELS].subscribe.equals(reply[0]) ||
|
||||||
|
COMMANDS[PubSubType.CHANNELS].unsubscribe.equals(reply[0]) ||
|
||||||
|
COMMANDS[PubSubType.PATTERNS].subscribe.equals(reply[0]) ||
|
||||||
|
COMMANDS[PubSubType.PATTERNS].unsubscribe.equals(reply[0]) ||
|
||||||
|
COMMANDS[PubSubType.SHARDED].subscribe.equals(reply[0])
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
static isShardedUnsubscribe(reply: Array<Buffer>): boolean {
|
||||||
|
return COMMANDS[PubSubType.SHARDED].unsubscribe.equals(reply[0]);
|
||||||
|
}
|
||||||
|
|
||||||
|
static #channelsArray(channels: string | Array<string>) {
|
||||||
|
return (Array.isArray(channels) ? channels : [channels]);
|
||||||
|
}
|
||||||
|
|
||||||
|
static #listenersSet<T extends boolean>(
|
||||||
|
listeners: ChannelListeners,
|
||||||
|
returnBuffers?: T
|
||||||
|
) {
|
||||||
|
return (returnBuffers ? listeners.buffers : listeners.strings);
|
||||||
|
}
|
||||||
|
|
||||||
|
#subscribing = 0;
|
||||||
|
|
||||||
|
#isActive = false;
|
||||||
|
|
||||||
|
get isActive() {
|
||||||
|
return this.#isActive;
|
||||||
|
}
|
||||||
|
|
||||||
|
#listeners: Listeners = {
|
||||||
|
[PubSubType.CHANNELS]: new Map(),
|
||||||
|
[PubSubType.PATTERNS]: new Map(),
|
||||||
|
[PubSubType.SHARDED]: new Map()
|
||||||
|
};
|
||||||
|
|
||||||
|
subscribe<T extends boolean>(
|
||||||
|
type: PubSubType,
|
||||||
|
channels: string | Array<string>,
|
||||||
|
listener: PubSubListener<T>,
|
||||||
|
returnBuffers?: T
|
||||||
|
) {
|
||||||
|
const args: Array<RedisCommandArgument> = [COMMANDS[type].subscribe],
|
||||||
|
channelsArray = PubSub.#channelsArray(channels);
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
let channelListeners = this.#listeners[type].get(channel);
|
||||||
|
if (!channelListeners || channelListeners.unsubscribing) {
|
||||||
|
args.push(channel);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (args.length === 1) {
|
||||||
|
// all channels are already subscribed, add listeners without issuing a command
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
PubSub.#listenersSet(
|
||||||
|
this.#listeners[type].get(channel)!,
|
||||||
|
returnBuffers
|
||||||
|
).add(listener);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
this.#isActive = true;
|
||||||
|
this.#subscribing++;
|
||||||
|
return {
|
||||||
|
args,
|
||||||
|
channelsCounter: args.length - 1,
|
||||||
|
resolve: () => {
|
||||||
|
this.#subscribing--;
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
let listeners = this.#listeners[type].get(channel);
|
||||||
|
if (!listeners) {
|
||||||
|
listeners = {
|
||||||
|
unsubscribing: false,
|
||||||
|
buffers: new Set(),
|
||||||
|
strings: new Set()
|
||||||
|
};
|
||||||
|
this.#listeners[type].set(channel, listeners);
|
||||||
|
}
|
||||||
|
|
||||||
|
PubSub.#listenersSet(listeners, returnBuffers).add(listener);
|
||||||
|
}
|
||||||
|
},
|
||||||
|
reject: () => {
|
||||||
|
this.#subscribing--;
|
||||||
|
this.#updateIsActive();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
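A rough sketch of how a caller is expected to drive the command object `subscribe` returns; the real integration with the command queue lives elsewhere in this commit, and the names below follow the spec file earlier in this diff:

```typescript
import { PubSub, PubSubType } from './pub-sub';

const pubSub = new PubSub();
const listener = (message: string, channel: string) => console.log(channel, message);
const command = pubSub.subscribe(PubSubType.CHANNELS, 'news', listener);

if (command === undefined) {
  // Every channel was already subscribed: the listener was added locally,
  // nothing has to be written to the socket.
} else {
  // `command.args` is what gets sent (roughly ['subscribe', 'news'],
  // with the command name as a Buffer). On a successful reply:
  command.resolve();
  // ...and on an error reply, command.reject() rolls the state back instead.
}
```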
|
||||||
|
|
||||||
|
extendChannelListeners(
|
||||||
|
type: PubSubType,
|
||||||
|
channel: string,
|
||||||
|
listeners: ChannelListeners
|
||||||
|
) {
|
||||||
|
if (!this.#extendChannelListeners(type, channel, listeners)) return;
|
||||||
|
|
||||||
|
this.#isActive = true;
|
||||||
|
this.#subscribing++;
|
||||||
|
return {
|
||||||
|
args: [
|
||||||
|
COMMANDS[type].subscribe,
|
||||||
|
channel
|
||||||
|
],
|
||||||
|
channelsCounter: 1,
|
||||||
|
resolve: () => this.#subscribing--,
|
||||||
|
reject: () => {
|
||||||
|
this.#subscribing--;
|
||||||
|
this.#updateIsActive();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
#extendChannelListeners(
|
||||||
|
type: PubSubType,
|
||||||
|
channel: string,
|
||||||
|
listeners: ChannelListeners
|
||||||
|
) {
|
||||||
|
const existingListeners = this.#listeners[type].get(channel);
|
||||||
|
if (!existingListeners) {
|
||||||
|
this.#listeners[type].set(channel, listeners);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const listener of listeners.buffers) {
|
||||||
|
existingListeners.buffers.add(listener);
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const listener of listeners.strings) {
|
||||||
|
existingListeners.strings.add(listener);
|
||||||
|
}
|
||||||
|
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
extendTypeListeners(type: PubSubType, listeners: PubSubTypeListeners) {
|
||||||
|
const args: Array<RedisCommandArgument> = [COMMANDS[type].subscribe];
|
||||||
|
for (const [channel, channelListeners] of listeners) {
|
||||||
|
if (this.#extendChannelListeners(type, channel, channelListeners)) {
|
||||||
|
args.push(channel);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (args.length === 1) return;
|
||||||
|
|
||||||
|
this.#isActive = true;
|
||||||
|
this.#subscribing++;
|
||||||
|
return {
|
||||||
|
args,
|
||||||
|
channelsCounter: args.length - 1,
|
||||||
|
resolve: () => this.#subscribing--,
|
||||||
|
reject: () => {
|
||||||
|
this.#subscribing--;
|
||||||
|
this.#updateIsActive();
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
unsubscribe<T extends boolean>(
|
||||||
|
type: PubSubType,
|
||||||
|
channels?: string | Array<string>,
|
||||||
|
listener?: PubSubListener<T>,
|
||||||
|
returnBuffers?: T
|
||||||
|
) {
|
||||||
|
const listeners = this.#listeners[type];
|
||||||
|
if (!channels) {
|
||||||
|
return this.#unsubscribeCommand(
|
||||||
|
[COMMANDS[type].unsubscribe],
|
||||||
|
// cannot use `this.#subscribed` because there might be some `SUBSCRIBE` commands in the queue
|
||||||
|
// cannot use `this.#subscribed + this.#subscribing` because some `SUBSCRIBE` commands might fail
|
||||||
|
NaN,
|
||||||
|
() => listeners.clear()
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
const channelsArray = PubSub.#channelsArray(channels);
|
||||||
|
if (!listener) {
|
||||||
|
return this.#unsubscribeCommand(
|
||||||
|
[COMMANDS[type].unsubscribe, ...channelsArray],
|
||||||
|
channelsArray.length,
|
||||||
|
() => {
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
listeners.delete(channel);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
const args: Array<RedisCommandArgument> = [COMMANDS[type].unsubscribe];
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
const sets = listeners.get(channel);
|
||||||
|
if (sets) {
|
||||||
|
let current,
|
||||||
|
other;
|
||||||
|
if (returnBuffers) {
|
||||||
|
current = sets.buffers;
|
||||||
|
other = sets.strings;
|
||||||
|
} else {
|
||||||
|
current = sets.strings;
|
||||||
|
other = sets.buffers;
|
||||||
|
}
|
||||||
|
|
||||||
|
const currentSize = current.has(listener) ? current.size - 1 : current.size;
|
||||||
|
if (currentSize !== 0 || other.size !== 0) continue;
|
||||||
|
sets.unsubscribing = true;
|
||||||
|
}
|
||||||
|
|
||||||
|
args.push(channel);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (args.length === 1) {
|
||||||
|
// all channels still have other listeners,
|
||||||
|
// delete the listeners without issuing a command
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
PubSub.#listenersSet(
|
||||||
|
listeners.get(channel)!,
|
||||||
|
returnBuffers
|
||||||
|
).delete(listener);
|
||||||
|
}
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
return this.#unsubscribeCommand(
|
||||||
|
args,
|
||||||
|
args.length - 1,
|
||||||
|
() => {
|
||||||
|
for (const channel of channelsArray) {
|
||||||
|
const sets = listeners.get(channel);
|
||||||
|
if (!sets) continue;
|
||||||
|
|
||||||
|
(returnBuffers ? sets.buffers : sets.strings).delete(listener);
|
||||||
|
if (sets.buffers.size === 0 && sets.strings.size === 0) {
|
||||||
|
listeners.delete(channel);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
#unsubscribeCommand(
|
||||||
|
args: Array<RedisCommandArgument>,
|
||||||
|
channelsCounter: number,
|
||||||
|
removeListeners: () => void
|
||||||
|
) {
|
||||||
|
return {
|
||||||
|
args,
|
||||||
|
channelsCounter,
|
||||||
|
resolve: () => {
|
||||||
|
removeListeners();
|
||||||
|
this.#updateIsActive();
|
||||||
|
},
|
||||||
|
reject: undefined // use the same structure as `subscribe`
|
||||||
|
};
|
||||||
|
}
|
||||||
|
|
||||||
|
#updateIsActive() {
|
||||||
|
this.#isActive = (
|
||||||
|
this.#listeners[PubSubType.CHANNELS].size !== 0 ||
|
||||||
|
this.#listeners[PubSubType.PATTERNS].size !== 0 ||
|
||||||
|
this.#listeners[PubSubType.SHARDED].size !== 0 ||
|
||||||
|
this.#subscribing !== 0
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
reset() {
|
||||||
|
this.#isActive = false;
|
||||||
|
this.#subscribing = 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
resubscribe(): Array<PubSubCommand> {
|
||||||
|
const commands = [];
|
||||||
|
for (const [type, listeners] of Object.entries(this.#listeners)) {
|
||||||
|
if (!listeners.size) continue;
|
||||||
|
|
||||||
|
this.#isActive = true;
|
||||||
|
this.#subscribing++;
|
||||||
|
const callback = () => this.#subscribing--;
|
||||||
|
commands.push({
|
||||||
|
args: [
|
||||||
|
COMMANDS[type as PubSubType].subscribe,
|
||||||
|
...listeners.keys()
|
||||||
|
],
|
||||||
|
channelsCounter: listeners.size,
|
||||||
|
resolve: callback,
|
||||||
|
reject: callback
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
return commands;
|
||||||
|
}
|
||||||
|
|
||||||
|
handleMessageReply(reply: Array<Buffer>): boolean {
|
||||||
|
if (COMMANDS[PubSubType.CHANNELS].message.equals(reply[0])) {
|
||||||
|
this.#emitPubSubMessage(
|
||||||
|
PubSubType.CHANNELS,
|
||||||
|
reply[2],
|
||||||
|
reply[1]
|
||||||
|
);
|
||||||
|
return true;
|
||||||
|
} else if (COMMANDS[PubSubType.PATTERNS].message.equals(reply[0])) {
|
||||||
|
this.#emitPubSubMessage(
|
||||||
|
PubSubType.PATTERNS,
|
||||||
|
reply[3],
|
||||||
|
reply[2],
|
||||||
|
reply[1]
|
||||||
|
);
|
||||||
|
return true;
|
||||||
|
} else if (COMMANDS[PubSubType.SHARDED].message.equals(reply[0])) {
|
||||||
|
this.#emitPubSubMessage(
|
||||||
|
PubSubType.SHARDED,
|
||||||
|
reply[2],
|
||||||
|
reply[1]
|
||||||
|
);
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
removeShardedListeners(channel: string): ChannelListeners {
|
||||||
|
const listeners = this.#listeners[PubSubType.SHARDED].get(channel)!;
|
||||||
|
this.#listeners[PubSubType.SHARDED].delete(channel);
|
||||||
|
this.#updateIsActive();
|
||||||
|
return listeners;
|
||||||
|
}
|
||||||
|
|
||||||
|
#emitPubSubMessage(
|
||||||
|
type: PubSubType,
|
||||||
|
message: Buffer,
|
||||||
|
channel: Buffer,
|
||||||
|
pattern?: Buffer
|
||||||
|
): void {
|
||||||
|
const keyString = (pattern ?? channel).toString(),
|
||||||
|
listeners = this.#listeners[type].get(keyString);
|
||||||
|
|
||||||
|
if (!listeners) return;
|
||||||
|
|
||||||
|
for (const listener of listeners.buffers) {
|
||||||
|
listener(message, channel);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (!listeners.strings.size) return;
|
||||||
|
|
||||||
|
const channelString = pattern ? channel.toString() : keyString,
|
||||||
|
messageString = channelString === '__redis__:invalidate' ?
|
||||||
|
// https://github.com/redis/redis/pull/7469
|
||||||
|
// https://github.com/redis/redis/issues/7463
|
||||||
|
(message === null ? null : (message as any as Array<Buffer>).map(x => x.toString())) as any :
|
||||||
|
message.toString();
|
||||||
|
for (const listener of listeners.strings) {
|
||||||
|
listener(messageString, channelString);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
getTypeListeners(type: PubSubType): PubSubTypeListeners {
|
||||||
|
return this.#listeners[type];
|
||||||
|
}
|
||||||
|
}
|
@@ -1,5 +1,6 @@
|
|||||||
import { strict as assert } from 'assert';
|
import { strict as assert } from 'node:assert';
|
||||||
import { spy } from 'sinon';
|
import { spy } from 'sinon';
|
||||||
|
import { once } from 'node:events';
|
||||||
import RedisSocket, { RedisSocketOptions } from './socket';
|
import RedisSocket, { RedisSocketOptions } from './socket';
|
||||||
|
|
||||||
describe('Socket', () => {
|
describe('Socket', () => {
|
||||||
@@ -17,16 +18,42 @@ describe('Socket', () => {
|
|||||||
}
|
}
|
||||||
|
|
||||||
describe('reconnectStrategy', () => {
|
describe('reconnectStrategy', () => {
|
||||||
|
it('false', async () => {
|
||||||
|
const socket = createSocket({
|
||||||
|
host: 'error',
|
||||||
|
connectTimeout: 1,
|
||||||
|
reconnectStrategy: false
|
||||||
|
});
|
||||||
|
|
||||||
|
await assert.rejects(socket.connect());
|
||||||
|
|
||||||
|
assert.equal(socket.isOpen, false);
|
||||||
|
});
|
||||||
|
|
||||||
|
it('0', async () => {
|
||||||
|
const socket = createSocket({
|
||||||
|
host: 'error',
|
||||||
|
connectTimeout: 1,
|
||||||
|
reconnectStrategy: 0
|
||||||
|
});
|
||||||
|
|
||||||
|
socket.connect();
|
||||||
|
await once(socket, 'error');
|
||||||
|
assert.equal(socket.isOpen, true);
|
||||||
|
assert.equal(socket.isReady, false);
|
||||||
|
socket.disconnect();
|
||||||
|
assert.equal(socket.isOpen, false);
|
||||||
|
});
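The two tests above pin down the difference between the new shorthand values: `false` aborts and closes the client on the first failure, while `0` keeps the client open and retries with no delay. In option form (the host below is just an unreachable placeholder, as in the tests):

```typescript
import { createClient } from 'redis';

// `false`: the first connection failure closes the client for good.
createClient({ socket: { host: 'example.invalid', reconnectStrategy: false } });

// `0`: keep the client open and retry immediately, with no delay between attempts.
createClient({ socket: { host: 'example.invalid', reconnectStrategy: 0 } });
```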
|
||||||
|
|
||||||
it('custom strategy', async () => {
|
it('custom strategy', async () => {
|
||||||
const numberOfRetries = 10;
|
const numberOfRetries = 3;
|
||||||
|
|
||||||
const reconnectStrategy = spy((retries: number) => {
|
const reconnectStrategy = spy((retries: number) => {
|
||||||
assert.equal(retries + 1, reconnectStrategy.callCount);
|
assert.equal(retries + 1, reconnectStrategy.callCount);
|
||||||
|
|
||||||
if (retries === numberOfRetries) return new Error(`${numberOfRetries}`);
|
if (retries === numberOfRetries) return new Error(`${numberOfRetries}`);
|
||||||
|
|
||||||
const time = retries * 2;
|
return 0;
|
||||||
return time;
|
|
||||||
});
|
});
|
||||||
|
|
||||||
const socket = createSocket({
|
const socket = createSocket({
|
||||||
|
@@ -6,10 +6,26 @@ import { ConnectionTimeoutError, ClientClosedError, SocketClosedUnexpectedlyErro
|
|||||||
import { promiseTimeout } from '../utils';
|
import { promiseTimeout } from '../utils';
|
||||||
|
|
||||||
export interface RedisSocketCommonOptions {
|
export interface RedisSocketCommonOptions {
|
||||||
|
/**
|
||||||
|
* Connection Timeout (in milliseconds)
|
||||||
|
*/
|
||||||
connectTimeout?: number;
|
connectTimeout?: number;
|
||||||
|
/**
|
||||||
|
* Toggle [`Nagle's algorithm`](https://nodejs.org/api/net.html#net_socket_setnodelay_nodelay)
|
||||||
|
*/
|
||||||
noDelay?: boolean;
|
noDelay?: boolean;
|
||||||
|
/**
|
||||||
|
* Toggle [`keep-alive`](https://nodejs.org/api/net.html#net_socket_setkeepalive_enable_initialdelay)
|
||||||
|
*/
|
||||||
keepAlive?: number | false;
|
keepAlive?: number | false;
|
||||||
reconnectStrategy?(retries: number): number | Error;
|
/**
|
||||||
|
* When the socket closes unexpectedly (without calling `.quit()`/`.disconnect()`), the client uses `reconnectStrategy` to decide what to do. The following values are supported:
|
||||||
|
* 1. `false` -> do not reconnect, close the client and flush the command queue.
|
||||||
|
* 2. `number` -> wait for `X` milliseconds before reconnecting.
|
||||||
|
* 3. `(retries: number, cause: Error) => false | number | Error` -> `number` is the same as configuring a `number` directly, `Error` is the same as `false`, but with a custom error.
|
||||||
|
* Defaults to `retries => Math.min(retries * 50, 500)`
|
||||||
|
*/
|
||||||
|
reconnectStrategy?: false | number | ((retries: number, cause: Error) => false | Error | number);
|
||||||
}
|
}
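The reworked `reconnectStrategy` accepts `false`, a fixed delay in milliseconds, or a function that now also receives the error that caused the disconnect. A configuration sketch using the usual `createClient` entry point; the backoff numbers are arbitrary:

```typescript
import { createClient } from 'redis';

const client = createClient({
  socket: {
    reconnectStrategy: (retries, cause) => {
      // Give up after 10 attempts, surfacing the original error.
      if (retries > 10) {
        return new Error(`Reconnect failed after ${retries} tries: ${cause.message}`);
      }
      // Otherwise back off linearly (milliseconds before the next attempt).
      return retries * 100;
    }
  }
});
```

When no strategy is configured, the socket falls back to the `Math.min(retries * 50, 500)` default shown further down in this hunk.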
|
||||||
|
|
||||||
type RedisNetSocketOptions = Partial<net.SocketConnectOpts> & {
|
type RedisNetSocketOptions = Partial<net.SocketConnectOpts> & {
|
||||||
@@ -83,12 +99,16 @@ export default class RedisSocket extends EventEmitter {
|
|||||||
this.#options = RedisSocket.#initiateOptions(options);
|
this.#options = RedisSocket.#initiateOptions(options);
|
||||||
}
|
}
|
||||||
|
|
||||||
reconnectStrategy(retries: number): number | Error {
|
#reconnectStrategy(retries: number, cause: Error) {
|
||||||
if (this.#options.reconnectStrategy) {
|
if (this.#options.reconnectStrategy === false) {
|
||||||
|
return false;
|
||||||
|
} else if (typeof this.#options.reconnectStrategy === 'number') {
|
||||||
|
return this.#options.reconnectStrategy;
|
||||||
|
} else if (this.#options.reconnectStrategy) {
|
||||||
try {
|
try {
|
||||||
const retryIn = this.#options.reconnectStrategy(retries);
|
const retryIn = this.#options.reconnectStrategy(retries, cause);
|
||||||
if (typeof retryIn !== 'number' && !(retryIn instanceof Error)) {
|
if (retryIn !== false && !(retryIn instanceof Error) && typeof retryIn !== 'number') {
|
||||||
throw new TypeError('Reconnect strategy should return `number | Error`');
|
throw new TypeError(`Reconnect strategy should return \`false | Error | number\`, got ${retryIn} instead`);
|
||||||
}
|
}
|
||||||
|
|
||||||
return retryIn;
|
return retryIn;
|
||||||
@@ -100,6 +120,21 @@ export default class RedisSocket extends EventEmitter {
|
|||||||
return Math.min(retries * 50, 500);
|
return Math.min(retries * 50, 500);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
#shouldReconnect(retries: number, cause: Error) {
|
||||||
|
const retryIn = this.#reconnectStrategy(retries, cause);
|
||||||
|
if (retryIn === false) {
|
||||||
|
this.#isOpen = false;
|
||||||
|
this.emit('error', cause);
|
||||||
|
return cause;
|
||||||
|
} else if (retryIn instanceof Error) {
|
||||||
|
this.#isOpen = false;
|
||||||
|
this.emit('error', cause);
|
||||||
|
return new ReconnectStrategyError(retryIn, cause);
|
||||||
|
}
|
||||||
|
|
||||||
|
return retryIn;
|
||||||
|
}
|
||||||
|
|
||||||
async connect(): Promise<void> {
|
async connect(): Promise<void> {
|
||||||
if (this.#isOpen) {
|
if (this.#isOpen) {
|
||||||
throw new Error('Socket already opened');
|
throw new Error('Socket already opened');
|
||||||
@@ -109,13 +144,9 @@ export default class RedisSocket extends EventEmitter {
|
|||||||
return this.#connect();
|
return this.#connect();
|
||||||
}
|
}
|
||||||
|
|
||||||
async #connect(hadError?: boolean): Promise<void> {
|
async #connect(): Promise<void> {
|
||||||
let retries = 0;
|
let retries = 0;
|
||||||
do {
|
do {
|
||||||
if (retries > 0 || hadError) {
|
|
||||||
this.emit('reconnecting');
|
|
||||||
}
|
|
||||||
|
|
||||||
try {
|
try {
|
||||||
this.#socket = await this.#createSocket();
|
this.#socket = await this.#createSocket();
|
||||||
this.#writableNeedDrain = false;
|
this.#writableNeedDrain = false;
|
||||||
@@ -131,17 +162,17 @@ export default class RedisSocket extends EventEmitter {
|
|||||||
this.#isReady = true;
|
this.#isReady = true;
|
||||||
this.emit('ready');
|
this.emit('ready');
|
||||||
} catch (err) {
|
} catch (err) {
|
||||||
const retryIn = this.reconnectStrategy(retries);
|
const retryIn = this.#shouldReconnect(retries, err as Error);
|
||||||
if (retryIn instanceof Error) {
|
if (typeof retryIn !== 'number') {
|
||||||
this.#isOpen = false;
|
throw retryIn;
|
||||||
this.emit('error', err);
|
|
||||||
throw new ReconnectStrategyError(retryIn, err);
|
|
||||||
}
|
}
|
||||||
|
|
||||||
this.emit('error', err);
|
this.emit('error', err);
|
||||||
await promiseTimeout(retryIn);
|
await promiseTimeout(retryIn);
|
||||||
}
|
}
|
||||||
|
|
||||||
retries++;
|
retries++;
|
||||||
|
this.emit('reconnecting');
|
||||||
} while (this.#isOpen && !this.#isReady);
|
} while (this.#isOpen && !this.#isReady);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -203,9 +234,10 @@ export default class RedisSocket extends EventEmitter {
|
|||||||
this.#isReady = false;
|
this.#isReady = false;
|
||||||
this.emit('error', err);
|
this.emit('error', err);
|
||||||
|
|
||||||
if (!this.#isOpen) return;
|
if (!this.#isOpen || typeof this.#shouldReconnect(0, err) !== 'number') return;
|
||||||
|
|
||||||
this.#connect(true).catch(() => {
|
this.emit('reconnecting');
|
||||||
|
this.#connect().catch(() => {
|
||||||
// the error was already emitted, silently ignore it
|
// the error was already emitted, silently ignore it
|
||||||
});
|
});
|
||||||
}
|
}
|
||||||
|
@@ -1,23 +1,17 @@
|
|||||||
import RedisClient, { InstantiableRedisClient, RedisClientType } from '../client';
|
import RedisClient, { InstantiableRedisClient, RedisClientType } from '../client';
|
||||||
import { RedisClusterMasterNode, RedisClusterReplicaNode } from '../commands/CLUSTER_NODES';
|
|
||||||
import { RedisClusterClientOptions, RedisClusterOptions } from '.';
|
import { RedisClusterClientOptions, RedisClusterOptions } from '.';
|
||||||
import { RedisCommandArgument, RedisFunctions, RedisModules, RedisScripts } from '../commands';
|
import { RedisCommandArgument, RedisFunctions, RedisModules, RedisScripts } from '../commands';
|
||||||
import { RootNodesUnavailableError } from '../errors';
|
import { RootNodesUnavailableError } from '../errors';
|
||||||
|
import { ClusterSlotsNode } from '../commands/CLUSTER_SLOTS';
|
||||||
|
import { types } from 'node:util';
|
||||||
|
import { ChannelListeners, PubSubType, PubSubTypeListeners } from '../client/pub-sub';
|
||||||
|
import { EventEmitter } from 'node:stream';
|
||||||
|
|
||||||
// We need to use 'require', because it's not possible with Typescript to import
|
// We need to use 'require', because it's not possible with Typescript to import
|
||||||
// function that are exported as 'module.exports = function`, without esModuleInterop
|
// function that are exported as 'module.exports = function`, without esModuleInterop
|
||||||
// set to true.
|
// set to true.
|
||||||
const calculateSlot = require('cluster-key-slot');
|
const calculateSlot = require('cluster-key-slot');
|
||||||
|
|
||||||
export interface ClusterNode<
|
|
||||||
M extends RedisModules,
|
|
||||||
F extends RedisFunctions,
|
|
||||||
S extends RedisScripts
|
|
||||||
> {
|
|
||||||
id: string;
|
|
||||||
client: RedisClientType<M, F, S>;
|
|
||||||
}
|
|
||||||
|
|
||||||
interface NodeAddress {
|
interface NodeAddress {
|
||||||
host: string;
|
host: string;
|
||||||
port: number;
|
port: number;
|
||||||
@@ -27,133 +21,236 @@ export type NodeAddressMap = {
|
|||||||
[address: string]: NodeAddress;
|
[address: string]: NodeAddress;
|
||||||
} | ((address: string) => NodeAddress | undefined);
|
} | ((address: string) => NodeAddress | undefined);
|
||||||
|
|
||||||
interface SlotNodes<
|
type ValueOrPromise<T> = T | Promise<T>;
|
||||||
|
|
||||||
|
type ClientOrPromise<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> = ValueOrPromise<RedisClientType<M, F, S>>;
|
||||||
|
|
||||||
|
export interface Node<
|
||||||
M extends RedisModules,
|
M extends RedisModules,
|
||||||
F extends RedisFunctions,
|
F extends RedisFunctions,
|
||||||
S extends RedisScripts
|
S extends RedisScripts
|
||||||
> {
|
> {
|
||||||
master: ClusterNode<M, F, S>;
|
address: string;
|
||||||
replicas: Array<ClusterNode<M, F, S>>;
|
client?: ClientOrPromise<M, F, S>;
|
||||||
clientIterator: IterableIterator<RedisClientType<M, F, S>> | undefined;
|
|
||||||
}
|
}
|
||||||
|
|
||||||
type OnError = (err: unknown) => void;
|
export interface ShardNode<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> extends Node<M, F, S> {
|
||||||
|
id: string;
|
||||||
|
host: string;
|
||||||
|
port: number;
|
||||||
|
readonly: boolean;
|
||||||
|
}
|
||||||
|
|
||||||
|
export interface MasterNode<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> extends ShardNode<M, F, S> {
|
||||||
|
pubSubClient?: ClientOrPromise<M, F, S>;
|
||||||
|
}
|
||||||
|
|
||||||
|
export interface Shard<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> {
|
||||||
|
master: MasterNode<M, F, S>;
|
||||||
|
replicas?: Array<ShardNode<M, F, S>>;
|
||||||
|
nodesIterator?: IterableIterator<ShardNode<M, F, S>>;
|
||||||
|
}
|
||||||
|
|
||||||
|
type ShardWithReplicas<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> = Shard<M, F, S> & Required<Pick<Shard<M, F, S>, 'replicas'>>;
|
||||||
|
|
||||||
|
export type PubSubNode<
|
||||||
|
M extends RedisModules,
|
||||||
|
F extends RedisFunctions,
|
||||||
|
S extends RedisScripts
|
||||||
|
> = Required<Node<M, F, S>>;
|
||||||
|
|
||||||
|
type PubSubToResubscribe = Record<
|
||||||
|
PubSubType.CHANNELS | PubSubType.PATTERNS,
|
||||||
|
PubSubTypeListeners
|
||||||
|
>;
|
||||||
|
|
||||||
|
export type OnShardedChannelMovedError = (
|
||||||
|
err: unknown,
|
||||||
|
channel: string,
|
||||||
|
listeners?: ChannelListeners
|
||||||
|
) => void;
|
||||||
|
|
||||||
export default class RedisClusterSlots<
|
export default class RedisClusterSlots<
|
||||||
M extends RedisModules,
|
M extends RedisModules,
|
||||||
F extends RedisFunctions,
|
F extends RedisFunctions,
|
||||||
S extends RedisScripts
|
S extends RedisScripts
|
||||||
> {
|
> {
|
||||||
|
static #SLOTS = 16384;
|
||||||
|
|
||||||
readonly #options: RedisClusterOptions<M, F, S>;
|
readonly #options: RedisClusterOptions<M, F, S>;
|
||||||
readonly #Client: InstantiableRedisClient<M, F, S>;
|
readonly #Client: InstantiableRedisClient<M, F, S>;
|
||||||
readonly #onError: OnError;
|
readonly #emit: EventEmitter['emit'];
|
||||||
readonly #nodeByAddress = new Map<string, ClusterNode<M, F, S>>();
|
slots = new Array<Shard<M, F, S>>(RedisClusterSlots.#SLOTS);
|
||||||
readonly #slots: Array<SlotNodes<M, F, S>> = [];
|
shards = new Array<Shard<M, F, S>>();
|
||||||
|
masters = new Array<ShardNode<M, F, S>>();
|
||||||
|
replicas = new Array<ShardNode<M, F, S>>();
|
||||||
|
readonly nodeByAddress = new Map<string, MasterNode<M, F, S> | ShardNode<M, F, S>>();
|
||||||
|
pubSubNode?: PubSubNode<M, F, S>;
|
||||||
|
|
||||||
constructor(options: RedisClusterOptions<M, F, S>, onError: OnError) {
|
#isOpen = false;
|
||||||
this.#options = options;
|
|
||||||
this.#Client = RedisClient.extend(options);
|
get isOpen() {
|
||||||
this.#onError = onError;
|
return this.#isOpen;
|
||||||
}
|
}
|
||||||
|
|
||||||
async connect(): Promise<void> {
|
constructor(
|
||||||
for (const rootNode of this.#options.rootNodes) {
|
options: RedisClusterOptions<M, F, S>,
|
||||||
if (await this.#discoverNodes(rootNode)) return;
|
emit: EventEmitter['emit']
|
||||||
|
) {
|
||||||
|
this.#options = options;
|
||||||
|
this.#Client = RedisClient.extend(options);
|
||||||
|
this.#emit = emit;
|
||||||
|
}
|
||||||
|
|
||||||
|
async connect() {
|
||||||
|
if (this.#isOpen) {
|
||||||
|
throw new Error('Cluster already open');
|
||||||
|
}
|
||||||
|
|
||||||
|
this.#isOpen = true;
|
||||||
|
try {
|
||||||
|
await this.#discoverWithRootNodes();
|
||||||
|
} catch (err) {
|
||||||
|
this.#isOpen = false;
|
||||||
|
throw err;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async #discoverWithRootNodes() {
|
||||||
|
let start = Math.floor(Math.random() * this.#options.rootNodes.length);
|
||||||
|
for (let i = start; i < this.#options.rootNodes.length; i++) {
|
||||||
|
if (await this.#discover(this.#options.rootNodes[i])) return;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (let i = 0; i < start; i++) {
|
||||||
|
if (await this.#discover(this.#options.rootNodes[i])) return;
|
||||||
}
|
}
|
||||||
|
|
||||||
throw new RootNodesUnavailableError();
|
throw new RootNodesUnavailableError();
|
||||||
}
|
}
|
||||||
|
|
||||||
async #discoverNodes(clientOptions?: RedisClusterClientOptions): Promise<boolean> {
|
#resetSlots() {
|
||||||
const client = this.#initiateClient(clientOptions);
|
this.slots = new Array(RedisClusterSlots.#SLOTS);
|
||||||
|
this.shards = [];
|
||||||
|
this.masters = [];
|
||||||
|
this.replicas = [];
|
||||||
|
this.#randomNodeIterator = undefined;
|
||||||
|
}
|
||||||
|
|
||||||
|
async #discover(rootNode?: RedisClusterClientOptions) {
|
||||||
|
this.#resetSlots();
|
||||||
|
const addressesInUse = new Set<string>();
|
||||||
|
|
||||||
|
try {
|
||||||
|
const shards = await this.#getShards(rootNode),
|
||||||
|
promises: Array<Promise<unknown>> = [],
|
||||||
|
eagerConnect = this.#options.minimizeConnections !== true;
|
||||||
|
for (const { from, to, master, replicas } of shards) {
|
||||||
|
const shard: Shard<M, F, S> = {
|
||||||
|
master: this.#initiateSlotNode(master, false, eagerConnect, addressesInUse, promises)
|
||||||
|
};
|
||||||
|
|
||||||
|
if (this.#options.useReplicas) {
|
||||||
|
shard.replicas = replicas.map(replica =>
|
||||||
|
this.#initiateSlotNode(replica, true, eagerConnect, addressesInUse, promises)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
this.shards.push(shard);
|
||||||
|
|
||||||
|
for (let i = from; i <= to; i++) {
|
||||||
|
this.slots[i] = shard;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
if (this.pubSubNode && !addressesInUse.has(this.pubSubNode.address)) {
|
||||||
|
if (types.isPromise(this.pubSubNode.client)) {
|
||||||
|
promises.push(
|
||||||
|
this.pubSubNode.client.then(client => client.disconnect())
|
||||||
|
);
|
||||||
|
this.pubSubNode = undefined;
|
||||||
|
} else {
|
||||||
|
promises.push(this.pubSubNode.client.disconnect());
|
||||||
|
|
||||||
|
const channelsListeners = this.pubSubNode.client.getPubSubListeners(PubSubType.CHANNELS),
|
||||||
|
patternsListeners = this.pubSubNode.client.getPubSubListeners(PubSubType.PATTERNS);
|
||||||
|
|
||||||
|
if (channelsListeners.size || patternsListeners.size) {
|
||||||
|
promises.push(
|
||||||
|
this.#initiatePubSubClient({
|
||||||
|
[PubSubType.CHANNELS]: channelsListeners,
|
||||||
|
[PubSubType.PATTERNS]: patternsListeners
|
||||||
|
})
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const [address, node] of this.nodeByAddress.entries()) {
|
||||||
|
if (addressesInUse.has(address)) continue;
|
||||||
|
|
||||||
|
if (node.client) {
|
||||||
|
promises.push(
|
||||||
|
this.#execOnNodeClient(node.client, client => client.disconnect())
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
const { pubSubClient } = node as MasterNode<M, F, S>;
|
||||||
|
if (pubSubClient) {
|
||||||
|
promises.push(
|
||||||
|
this.#execOnNodeClient(pubSubClient, client => client.disconnect())
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
this.nodeByAddress.delete(address);
|
||||||
|
}
|
||||||
|
|
||||||
|
await Promise.all(promises);
|
||||||
|
|
||||||
|
return true;
|
||||||
|
} catch (err) {
|
||||||
|
this.#emit('error', err);
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
async #getShards(rootNode?: RedisClusterClientOptions) {
|
||||||
|
const client = new this.#Client(
|
||||||
|
this.#clientOptionsDefaults(rootNode, true)
|
||||||
|
);
|
||||||
|
|
||||||
|
client.on('error', err => this.#emit('error', err));
|
||||||
|
|
||||||
await client.connect();
|
await client.connect();
|
||||||
|
|
||||||
try {
|
try {
|
||||||
await this.#reset(await client.clusterNodes());
|
// using `CLUSTER SLOTS` and not `CLUSTER SHARDS` to support older versions
|
||||||
return true;
|
return await client.clusterSlots();
|
||||||
} catch (err) {
|
|
||||||
this.#onError(err);
|
|
||||||
return false;
|
|
||||||
} finally {
|
} finally {
|
||||||
if (client.isOpen) {
|
await client.disconnect();
|
||||||
await client.disconnect();
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#runningRediscoverPromise?: Promise<void>;
|
|
||||||
|
|
||||||
async rediscover(startWith: RedisClientType<M, F, S>): Promise<void> {
|
|
||||||
if (!this.#runningRediscoverPromise) {
|
|
||||||
this.#runningRediscoverPromise = this.#rediscover(startWith)
|
|
||||||
.finally(() => this.#runningRediscoverPromise = undefined);
|
|
||||||
}
|
|
||||||
|
|
||||||
return this.#runningRediscoverPromise;
|
|
||||||
}
|
|
||||||
|
|
||||||
async #rediscover(startWith: RedisClientType<M, F, S>): Promise<void> {
|
|
||||||
if (await this.#discoverNodes(startWith.options)) return;
|
|
||||||
|
|
||||||
for (const { client } of this.#nodeByAddress.values()) {
|
|
||||||
if (client === startWith) continue;
|
|
||||||
|
|
||||||
if (await this.#discoverNodes(client.options)) return;
|
|
||||||
}
|
|
||||||
|
|
||||||
throw new Error('None of the cluster nodes is available');
|
|
||||||
}
|
|
||||||
|
|
||||||
async #reset(masters: Array<RedisClusterMasterNode>): Promise<void> {
|
|
||||||
// Override this.#slots and add not existing clients to this.#nodeByAddress
|
|
||||||
const promises: Array<Promise<void>> = [],
|
|
||||||
clientsInUse = new Set<string>();
|
|
||||||
for (const master of masters) {
|
|
||||||
const slot = {
|
|
||||||
master: this.#initiateClientForNode(master, false, clientsInUse, promises),
|
|
||||||
replicas: this.#options.useReplicas ?
|
|
||||||
master.replicas.map(replica => this.#initiateClientForNode(replica, true, clientsInUse, promises)) :
|
|
||||||
[],
|
|
||||||
clientIterator: undefined // will be initiated in use
|
|
||||||
};
|
|
||||||
|
|
||||||
for (const { from, to } of master.slots) {
|
|
||||||
for (let i = from; i <= to; i++) {
|
|
||||||
this.#slots[i] = slot;
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Remove unused clients from this.#nodeByAddress using clientsInUse
|
|
||||||
for (const [address, { client }] of this.#nodeByAddress.entries()) {
|
|
||||||
if (clientsInUse.has(address)) continue;
|
|
||||||
|
|
||||||
promises.push(client.disconnect());
|
|
||||||
this.#nodeByAddress.delete(address);
|
|
||||||
}
|
|
||||||
|
|
||||||
await Promise.all(promises);
|
|
||||||
}
|
|
||||||
|
|
||||||
#clientOptionsDefaults(options?: RedisClusterClientOptions): RedisClusterClientOptions | undefined {
|
|
||||||
if (!this.#options.defaults) return options;
|
|
||||||
|
|
||||||
return {
|
|
||||||
...this.#options.defaults,
|
|
||||||
...options,
|
|
||||||
socket: this.#options.defaults.socket && options?.socket ? {
|
|
||||||
...this.#options.defaults.socket,
|
|
||||||
...options.socket
|
|
||||||
} : this.#options.defaults.socket ?? options?.socket
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
||||||
#initiateClient(options?: RedisClusterClientOptions): RedisClientType<M, F, S> {
|
|
||||||
return new this.#Client(this.#clientOptionsDefaults(options))
|
|
||||||
.on('error', this.#onError);
|
|
||||||
}
|
|
||||||
|
|
||||||
#getNodeAddress(address: string): NodeAddress | undefined {
|
#getNodeAddress(address: string): NodeAddress | undefined {
|
||||||
switch (typeof this.#options.nodeAddressMap) {
|
switch (typeof this.#options.nodeAddressMap) {
|
||||||
case 'object':
|
case 'object':
|
||||||
@@ -164,111 +261,123 @@ export default class RedisClusterSlots<
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
#initiateClientForNode(
|
#clientOptionsDefaults(
|
||||||
nodeData: RedisClusterMasterNode | RedisClusterReplicaNode,
|
options?: RedisClusterClientOptions,
|
||||||
readonly: boolean,
|
disableReconnect?: boolean
|
||||||
clientsInUse: Set<string>,
|
): RedisClusterClientOptions | undefined {
|
||||||
promises: Array<Promise<void>>
|
let result: RedisClusterClientOptions | undefined;
|
||||||
): ClusterNode<M, F, S> {
|
if (this.#options.defaults) {
|
||||||
const address = `${nodeData.host}:${nodeData.port}`;
|
let socket;
|
||||||
clientsInUse.add(address);
|
if (this.#options.defaults.socket) {
|
||||||
|
socket = options?.socket ? {
|
||||||
|
...this.#options.defaults.socket,
|
||||||
|
...options.socket
|
||||||
|
} : this.#options.defaults.socket;
|
||||||
|
} else {
|
||||||
|
socket = options?.socket;
|
||||||
|
}
|
||||||
|
|
||||||
let node = this.#nodeByAddress.get(address);
|
result = {
|
||||||
|
...this.#options.defaults,
|
||||||
|
...options,
|
||||||
|
socket
|
||||||
|
};
|
||||||
|
} else {
|
||||||
|
result = options;
|
||||||
|
}
|
||||||
|
|
||||||
|
if (disableReconnect) {
|
||||||
|
result ??= {};
|
||||||
|
result.socket ??= {};
|
||||||
|
result.socket.reconnectStrategy = false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return result;
|
||||||
|
}
|
||||||
|
|
||||||
|
#initiateSlotNode(
|
||||||
|
{ id, ip, port }: ClusterSlotsNode,
|
||||||
|
readonly: boolean,
|
||||||
|
eagerConnect: boolean,
|
||||||
|
addressesInUse: Set<string>,
|
||||||
|
promises: Array<Promise<unknown>>
|
||||||
|
) {
|
||||||
|
const address = `${ip}:${port}`;
|
||||||
|
addressesInUse.add(address);
|
||||||
|
|
||||||
|
let node = this.nodeByAddress.get(address);
|
||||||
if (!node) {
|
if (!node) {
|
||||||
node = {
|
node = {
|
||||||
id: nodeData.id,
|
id,
|
||||||
client: this.#initiateClient({
|
host: ip,
|
||||||
socket: this.#getNodeAddress(address) ?? {
|
port,
|
||||||
host: nodeData.host,
|
address,
|
||||||
port: nodeData.port
|
readonly,
|
||||||
},
|
client: undefined
|
||||||
readonly
|
|
||||||
})
|
|
||||||
};
|
};
|
||||||
promises.push(node.client.connect());
|
|
||||||
this.#nodeByAddress.set(address, node);
|
if (eagerConnect) {
|
||||||
|
promises.push(this.#createNodeClient(node));
|
||||||
|
}
|
||||||
|
|
||||||
|
this.nodeByAddress.set(address, node);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
(readonly ? this.replicas : this.masters).push(node);
|
||||||
|
|
||||||
return node;
|
return node;
|
||||||
}
|
}
|
||||||
|
|
||||||
getSlotMaster(slot: number): ClusterNode<M, F, S> {
|
async #createClient(
|
||||||
return this.#slots[slot].master;
|
node: ShardNode<M, F, S>,
|
||||||
}
|
readonly = node.readonly
|
||||||
|
) {
|
||||||
*#slotClientIterator(slotNumber: number): IterableIterator<RedisClientType<M, F, S>> {
|
const client = new this.#Client(
|
||||||
const slot = this.#slots[slotNumber];
|
this.#clientOptionsDefaults({
|
||||||
yield slot.master.client;
|
socket: this.#getNodeAddress(node.address) ?? {
|
||||||
|
host: node.host,
|
||||||
for (const replica of slot.replicas) {
|
port: node.port
|
||||||
yield replica.client;
|
},
|
||||||
}
|
readonly
|
||||||
}
|
})
|
||||||
|
|
||||||
#getSlotClient(slotNumber: number): RedisClientType<M, F, S> {
|
|
||||||
const slot = this.#slots[slotNumber];
|
|
||||||
if (!slot.clientIterator) {
|
|
||||||
slot.clientIterator = this.#slotClientIterator(slotNumber);
|
|
||||||
}
|
|
||||||
|
|
||||||
const {done, value} = slot.clientIterator.next();
|
|
||||||
if (done) {
|
|
||||||
slot.clientIterator = undefined;
|
|
||||||
return this.#getSlotClient(slotNumber);
|
|
||||||
}
|
|
||||||
|
|
||||||
return value;
|
|
||||||
}
|
|
||||||
|
|
||||||
#randomClientIterator?: IterableIterator<ClusterNode<M, F, S>>;
|
|
||||||
|
|
||||||
#getRandomClient(): RedisClientType<M, F, S> {
|
|
||||||
if (!this.#nodeByAddress.size) {
|
|
||||||
throw new Error('Cluster is not connected');
|
|
||||||
}
|
|
||||||
|
|
||||||
if (!this.#randomClientIterator) {
|
|
||||||
this.#randomClientIterator = this.#nodeByAddress.values();
|
|
||||||
}
|
|
||||||
|
|
||||||
const {done, value} = this.#randomClientIterator.next();
|
|
||||||
if (done) {
|
|
||||||
this.#randomClientIterator = undefined;
|
|
||||||
return this.#getRandomClient();
|
|
||||||
}
|
|
||||||
|
|
||||||
return value.client;
|
|
||||||
}
|
|
||||||
|
|
||||||
getClient(firstKey?: RedisCommandArgument, isReadonly?: boolean): RedisClientType<M, F, S> {
|
|
||||||
if (!firstKey) {
|
|
||||||
return this.#getRandomClient();
|
|
||||||
}
|
|
||||||
|
|
||||||
const slot = calculateSlot(firstKey);
|
|
||||||
if (!isReadonly || !this.#options.useReplicas) {
|
|
||||||
return this.getSlotMaster(slot).client;
|
|
||||||
}
|
|
||||||
|
|
||||||
return this.#getSlotClient(slot);
|
|
||||||
}
|
|
||||||
|
|
||||||
getMasters(): Array<ClusterNode<M, F, S>> {
|
|
||||||
const masters = [];
|
|
||||||
for (const node of this.#nodeByAddress.values()) {
|
|
||||||
if (node.client.options?.readonly) continue;
|
|
||||||
|
|
||||||
masters.push(node);
|
|
||||||
}
|
|
||||||
|
|
||||||
return masters;
|
|
||||||
}
|
|
||||||
|
|
||||||
getNodeByAddress(address: string): ClusterNode<M, F, S> | undefined {
|
|
||||||
const mappedAddress = this.#getNodeAddress(address);
|
|
||||||
return this.#nodeByAddress.get(
|
|
||||||
mappedAddress ? `${mappedAddress.host}:${mappedAddress.port}` : address
|
|
||||||
);
|
);
|
||||||
|
client.on('error', err => this.#emit('error', err));
|
||||||
|
|
||||||
|
await client.connect();
|
||||||
|
|
||||||
|
return client;
|
||||||
|
}
|
||||||
|
|
||||||
|
#createNodeClient(node: ShardNode<M, F, S>) {
|
||||||
|
const promise = this.#createClient(node)
|
||||||
|
.then(client => {
|
||||||
|
node.client = client;
|
||||||
|
return client;
|
||||||
|
})
|
||||||
|
.catch(err => {
|
||||||
|
node.client = undefined;
|
||||||
|
throw err;
|
||||||
|
});
|
||||||
|
node.client = promise;
|
||||||
|
return promise;
|
||||||
|
}
|
||||||
|
|
||||||
|
nodeClient(node: ShardNode<M, F, S>) {
|
||||||
|
return node.client ?? this.#createNodeClient(node);
|
||||||
|
}
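`nodeClient` is what makes lazy connections possible: a node's client is created only on first use and cached on the node. A sketch of opting into that behaviour through the `minimizeConnections` option referenced earlier in this diff (the root node address is made up):

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({
  rootNodes: [{ socket: { host: '127.0.0.1', port: 30001 } }],
  // Only open a connection to a node once a command is actually routed to it.
  minimizeConnections: true
});

await cluster.connect();
```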
|
||||||
|
|
||||||
|
#runningRediscoverPromise?: Promise<void>;
|
||||||
|
|
||||||
|
async rediscover(startWith: RedisClientType<M, F, S>): Promise<void> {
|
||||||
|
this.#runningRediscoverPromise ??= this.#rediscover(startWith)
|
||||||
|
.finally(() => this.#runningRediscoverPromise = undefined);
|
||||||
|
return this.#runningRediscoverPromise;
|
||||||
|
}
|
||||||
|
|
||||||
|
async #rediscover(startWith: RedisClientType<M, F, S>): Promise<void> {
|
||||||
|
if (await this.#discover(startWith.options)) return;
|
||||||
|
|
||||||
|
return this.#discoverWithRootNodes();
|
||||||
}
|
}
|
||||||
|
|
||||||
quit(): Promise<void> {
|
quit(): Promise<void> {
|
||||||
@@ -280,14 +389,233 @@ export default class RedisClusterSlots<
|
|||||||
}
|
}
|
||||||
|
|
||||||
async #destroy(fn: (client: RedisClientType<M, F, S>) => Promise<unknown>): Promise<void> {
|
async #destroy(fn: (client: RedisClientType<M, F, S>) => Promise<unknown>): Promise<void> {
|
||||||
|
this.#isOpen = false;
|
||||||
|
|
||||||
const promises = [];
|
const promises = [];
|
||||||
for (const { client } of this.#nodeByAddress.values()) {
|
for (const { master, replicas } of this.shards) {
|
||||||
promises.push(fn(client));
|
if (master.client) {
|
||||||
|
promises.push(
|
||||||
|
this.#execOnNodeClient(master.client, fn)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (master.pubSubClient) {
|
||||||
|
promises.push(
|
||||||
|
this.#execOnNodeClient(master.pubSubClient, fn)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
|
||||||
|
if (replicas) {
|
||||||
|
for (const { client } of replicas) {
|
||||||
|
if (client) {
|
||||||
|
promises.push(
|
||||||
|
this.#execOnNodeClient(client, fn)
|
||||||
|
);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
await Promise.all(promises);
|
if (this.pubSubNode) {
|
||||||
|
promises.push(this.#execOnNodeClient(this.pubSubNode.client, fn));
|
||||||
|
this.pubSubNode = undefined;
|
||||||
|
}
|
||||||
|
|
||||||
this.#nodeByAddress.clear();
|
this.#resetSlots();
|
||||||
this.#slots.splice(0);
|
this.nodeByAddress.clear();
|
||||||
|
|
||||||
|
await Promise.allSettled(promises);
|
||||||
|
}
|
||||||
|
|
||||||
|
#execOnNodeClient(
|
||||||
|
client: ClientOrPromise<M, F, S>,
|
||||||
|
fn: (client: RedisClientType<M, F, S>) => Promise<unknown>
|
||||||
|
) {
|
||||||
|
return types.isPromise(client) ?
|
||||||
|
client.then(fn) :
|
||||||
|
fn(client);
|
||||||
|
}
|
||||||
|
|
||||||
|
getClient(
|
||||||
|
firstKey: RedisCommandArgument | undefined,
|
||||||
|
isReadonly: boolean | undefined
|
||||||
|
): ClientOrPromise<M, F, S> {
|
||||||
|
if (!firstKey) {
|
||||||
|
return this.nodeClient(this.getRandomNode());
|
||||||
|
}
|
||||||
|
|
||||||
|
const slotNumber = calculateSlot(firstKey);
|
||||||
|
if (!isReadonly) {
|
||||||
|
return this.nodeClient(this.slots[slotNumber].master);
|
||||||
|
}
|
||||||
|
|
||||||
|
return this.nodeClient(this.getSlotRandomNode(slotNumber));
|
||||||
|
}
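`getClient` routes by the first key of a command: the slot is computed with the same `cluster-key-slot` package required at the top of this file, writes always go to the slot's master, and read-only commands may be spread over the shard via `getSlotRandomNode`. A tiny sketch of the slot calculation (the key is arbitrary):

```typescript
// The same helper this file requires at the top.
const calculateSlot = require('cluster-key-slot');

const slot: number = calculateSlot('user:42'); // always in the range 0..16383
// slots[slot].master owns the key; read-only commands can instead be routed
// to a random node of that shard with getSlotRandomNode(slot).
```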
|
||||||
|
|
||||||
|
*#iterateAllNodes() {
|
||||||
|
let i = Math.floor(Math.random() * (this.masters.length + this.replicas.length));
|
||||||
|
if (i < this.masters.length) {
|
||||||
|
do {
|
||||||
|
yield this.masters[i];
|
||||||
|
} while (++i < this.masters.length);
|
||||||
|
|
||||||
|
for (const replica of this.replicas) {
|
||||||
|
yield replica;
|
||||||
|
}
|
||||||
|
} else {
|
||||||
|
i -= this.masters.length;
|
||||||
|
do {
|
||||||
|
yield this.replicas[i];
|
||||||
|
} while (++i < this.replicas.length);
|
||||||
|
}
|
||||||
|
|
||||||
|
while (true) {
|
||||||
|
for (const master of this.masters) {
|
||||||
|
yield master;
|
||||||
|
}
|
||||||
|
|
||||||
|
for (const replica of this.replicas) {
|
||||||
|
yield replica;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
#randomNodeIterator?: IterableIterator<ShardNode<M, F, S>>;
|
||||||
|
|
||||||
|
getRandomNode() {
|
||||||
|
this.#randomNodeIterator ??= this.#iterateAllNodes();
|
||||||
|
return this.#randomNodeIterator.next().value as ShardNode<M, F, S>;
|
||||||
|
}
|
||||||
|
|
||||||
|
*#slotNodesIterator(slot: ShardWithReplicas<M, F, S>) {
|
||||||
|
let i = Math.floor(Math.random() * (1 + slot.replicas.length));
|
||||||
|
if (i < slot.replicas.length) {
|
||||||
|
do {
|
||||||
|
yield slot.replicas[i];
|
||||||
|
} while (++i < slot.replicas.length);
|
||||||
|
}
|
||||||
|
|
||||||
|
while (true) {
|
||||||
|
yield slot.master;
|
||||||
|
|
||||||
|
for (const replica of slot.replicas) {
|
||||||
|
yield replica;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
getSlotRandomNode(slotNumber: number) {
|
||||||
|
const slot = this.slots[slotNumber];
|
||||||
|
if (!slot.replicas?.length) {
|
||||||
|
return slot.master;
|
||||||
|
}
|
||||||
|
|
||||||
|
slot.nodesIterator ??= this.#slotNodesIterator(slot as ShardWithReplicas<M, F, S>);
|
||||||
|
return slot.nodesIterator.next().value as ShardNode<M, F, S>;
|
||||||
|
}
|
||||||
|
|
||||||
|
getMasterByAddress(address: string) {
|
||||||
|
const master = this.nodeByAddress.get(address);
|
||||||
|
if (!master) return;
|
||||||
|
|
||||||
|
return this.nodeClient(master);
|
||||||
|
}
|
||||||
|
|
||||||
|
getPubSubClient() {
|
||||||
|
return this.pubSubNode ?
|
||||||
|
this.pubSubNode.client :
|
||||||
|
this.#initiatePubSubClient();
|
||||||
|
}
|
||||||
|
|
||||||
|
async #initiatePubSubClient(toResubscribe?: PubSubToResubscribe) {
|
||||||
|
const index = Math.floor(Math.random() * (this.masters.length + this.replicas.length)),
|
||||||
|
node = index < this.masters.length ?
|
||||||
|
this.masters[index] :
|
||||||
|
this.replicas[index - this.masters.length];
|
||||||
|
|
||||||
|
this.pubSubNode = {
|
||||||
|
address: node.address,
|
||||||
|
client: this.#createClient(node, true)
|
||||||
|
.then(async client => {
|
||||||
|
if (toResubscribe) {
|
||||||
|
await Promise.all([
|
||||||
|
client.extendPubSubListeners(PubSubType.CHANNELS, toResubscribe[PubSubType.CHANNELS]),
|
||||||
|
client.extendPubSubListeners(PubSubType.PATTERNS, toResubscribe[PubSubType.PATTERNS])
|
||||||
|
]);
|
||||||
|
}
|
||||||
|
|
||||||
|
this.pubSubNode!.client = client;
|
||||||
|
return client;
|
||||||
|
})
|
||||||
|
.catch(err => {
|
||||||
|
this.pubSubNode = undefined;
|
||||||
|
throw err;
|
||||||
|
})
|
||||||
|
};
|
||||||
|
|
||||||
|
return this.pubSubNode.client as Promise<RedisClientType<M, F, S>>;
|
||||||
|
}
|
||||||
|
|
||||||
|
async executeUnsubscribeCommand(
|
||||||
|
unsubscribe: (client: RedisClientType<M, F, S>) => Promise<void>
|
||||||
|
): Promise<void> {
|
||||||
|
const client = await this.getPubSubClient();
|
||||||
|
await unsubscribe(client);
|
||||||
|
|
||||||
|
if (!client.isPubSubActive) {
|
||||||
|
await client.disconnect();
|
||||||
|
this.pubSubNode = undefined;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
getShardedPubSubClient(channel: string) {
|
||||||
|
const { master } = this.slots[calculateSlot(channel)];
|
||||||
|
return master.pubSubClient ?? this.#initiateShardedPubSubClient(master);
|
||||||
|
}
|
||||||
|
|
||||||
|
#initiateShardedPubSubClient(master: MasterNode<M, F, S>) {
|
||||||
|
const promise = this.#createClient(master, true)
|
||||||
|
.then(client => {
|
||||||
|
client.on('server-sunsubscribe', async (channel, listeners) => {
|
||||||
|
try {
|
||||||
|
await this.rediscover(client);
|
||||||
|
const redirectTo = await this.getShardedPubSubClient(channel);
|
||||||
|
redirectTo.extendPubSubChannelListeners(
|
||||||
|
PubSubType.SHARDED,
|
||||||
|
channel,
|
||||||
|
listeners
|
||||||
|
);
|
||||||
|
} catch (err) {
|
||||||
|
this.#emit('sharded-shannel-moved-error', err, channel, listeners);
|
||||||
|
}
|
||||||
|
});
|
||||||
|
|
||||||
|
master.pubSubClient = client;
|
||||||
|
return client;
|
||||||
|
})
|
||||||
|
.catch(err => {
|
||||||
|
master.pubSubClient = undefined;
|
||||||
|
throw err;
|
||||||
|
});
|
||||||
|
|
||||||
|
master.pubSubClient = promise;
|
||||||
|
|
||||||
|
return promise;
|
||||||
|
}
|
||||||
|
|
||||||
|
async executeShardedUnsubscribeCommand(
|
||||||
|
channel: string,
|
||||||
|
unsubscribe: (client: RedisClientType<M, F, S>) => Promise<void>
|
||||||
|
): Promise<void> {
|
||||||
|
const { master } = this.slots[calculateSlot(channel)];
|
||||||
|
if (!master.pubSubClient) return Promise.resolve();
|
||||||
|
|
||||||
|
const client = await master.pubSubClient;
|
||||||
|
await unsubscribe(client);
|
||||||
|
|
||||||
|
if (!client.isPubSubActive) {
|
||||||
|
await client.disconnect();
|
||||||
|
master.pubSubClient = undefined;
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
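To ground the `getShardedPubSubClient`/`executeShardedUnsubscribeCommand` plumbing above, here is a minimal usage sketch from the cluster API side. It assumes the usual `createCluster` entry point, a cluster reachable at `127.0.0.1:30001`, an async context for the `await`s, and an arbitrary channel name; it is an illustration, not code from this commit.

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({
    rootNodes: [{ url: 'redis://127.0.0.1:30001' }]
});
await cluster.connect();

// SSUBSCRIBE is issued on a dedicated PubSub client owned by the master of the channel's slot
await cluster.sSubscribe('channel', (message, channel) => {
    console.log(message, channel); // 'message', 'channel'
});

// SPUBLISH is routed by slot like any keyed command
await cluster.sPublish('channel', 'message');

// once the last sharded listener on that node is removed, its PubSub client is closed
await cluster.sUnsubscribe('channel');
await cluster.disconnect();
```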
@@ -135,6 +135,7 @@ import * as SORT_RO from '../commands/SORT_RO';
 import * as SORT_STORE from '../commands/SORT_STORE';
 import * as SORT from '../commands/SORT';
 import * as SPOP from '../commands/SPOP';
+import * as SPUBLISH from '../commands/SPUBLISH';
 import * as SRANDMEMBER_COUNT from '../commands/SRANDMEMBER_COUNT';
 import * as SRANDMEMBER from '../commands/SRANDMEMBER';
 import * as SREM from '../commands/SREM';
@@ -483,6 +484,8 @@ export default {
     sort: SORT,
     SPOP,
     sPop: SPOP,
+    SPUBLISH,
+    sPublish: SPUBLISH,
     SRANDMEMBER_COUNT,
     sRandMemberCount: SRANDMEMBER_COUNT,
     SRANDMEMBER,
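With `SPUBLISH`/`sPublish` registered on the standalone client, sharded Pub/Sub can also be used without a cluster. A hedged sketch follows; the server address, channel and payload are placeholders, and the `sSubscribe` listener API (added elsewhere in this commit) mirrors `subscribe` in needing its own connection.

```typescript
import { createClient } from 'redis';

const publisher = createClient();
const subscriber = publisher.duplicate();
await Promise.all([publisher.connect(), subscriber.connect()]);

await subscriber.sSubscribe('channel', (message, channel) => {
    console.log(message, channel); // 'message', 'channel'
});

console.log(await publisher.pubSubShardChannels());          // ['channel']
console.log(await publisher.sPublish('channel', 'message')); // 1 - clients that received the message

await subscriber.sUnsubscribe('channel');
await Promise.all([publisher.disconnect(), subscriber.disconnect()]);
```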
@@ -1,25 +1,29 @@
 import { strict as assert } from 'assert';
-import testUtils, { GLOBAL } from '../test-utils';
+import testUtils, { GLOBAL, waitTillBeenCalled } from '../test-utils';
 import RedisCluster from '.';
 import { ClusterSlotStates } from '../commands/CLUSTER_SETSLOT';
 import { SQUARE_SCRIPT } from '../client/index.spec';
 import { RootNodesUnavailableError } from '../errors';
+import { spy } from 'sinon';
-// We need to use 'require', because it's not possible with Typescript to import
-// function that are exported as 'module.exports = function`, without esModuleInterop
-// set to true.
-const calculateSlot = require('cluster-key-slot');
+import { promiseTimeout } from '../utils';
+import RedisClient from '../client';

 describe('Cluster', () => {
     testUtils.testWithCluster('sendCommand', async cluster => {
-        await cluster.publish('channel', 'message');
-        await cluster.set('a', 'b');
-        await cluster.set('a{a}', 'bb');
-        await cluster.set('aa', 'bb');
-        await cluster.get('aa');
-        await cluster.get('aa');
-        await cluster.get('aa');
-        await cluster.get('aa');
+        assert.equal(
+            await cluster.sendCommand(undefined, true, ['PING']),
+            'PONG'
+        );
+    }, GLOBAL.CLUSTERS.OPEN);
+
+    testUtils.testWithCluster('isOpen', async cluster => {
+        assert.equal(cluster.isOpen, true);
+        await cluster.disconnect();
+        assert.equal(cluster.isOpen, false);
+    }, GLOBAL.CLUSTERS.OPEN);
+
+    testUtils.testWithCluster('connect should throw if already connected', async cluster => {
+        await assert.rejects(cluster.connect());
     }, GLOBAL.CLUSTERS.OPEN);

     testUtils.testWithCluster('multi', async cluster => {
@@ -64,54 +68,279 @@ describe('Cluster', () => {
     });

     testUtils.testWithCluster('should handle live resharding', async cluster => {
-        const key = 'key',
+        const slot = 12539,
+            key = 'key',
             value = 'value';
         await cluster.set(key, value);

-        const slot = calculateSlot(key),
-            source = cluster.getSlotMaster(slot),
-            destination = cluster.getMasters().find(node => node.id !== source.id)!;
+        const importing = cluster.slots[0].master,
+            migrating = cluster.slots[slot].master,
+            [ importingClient, migratingClient ] = await Promise.all([
+                cluster.nodeClient(importing),
+                cluster.nodeClient(migrating)
+            ]);

         await Promise.all([
-            source.client.clusterSetSlot(slot, ClusterSlotStates.MIGRATING, destination.id),
-            destination.client.clusterSetSlot(slot, ClusterSlotStates.IMPORTING, destination.id)
+            importingClient.clusterSetSlot(slot, ClusterSlotStates.IMPORTING, migrating.id),
+            migratingClient.clusterSetSlot(slot, ClusterSlotStates.MIGRATING, importing.id)
         ]);

-        // should be able to get the key from the source node using "ASKING"
+        // should be able to get the key from the migrating node
+        assert.equal(
+            await cluster.get(key),
+            value
+        );
+
+        await migratingClient.migrate(
+            importing.host,
+            importing.port,
+            key,
+            0,
+            10
+        );
+
+        // should be able to get the key from the importing node using `ASKING`
         assert.equal(
             await cluster.get(key),
             value
         );

         await Promise.all([
-            source.client.migrate(
-                '127.0.0.1',
-                (<any>destination.client.options).socket.port,
-                key,
-                0,
-                10
-            )
+            importingClient.clusterSetSlot(slot, ClusterSlotStates.NODE, importing.id),
+            migratingClient.clusterSetSlot(slot, ClusterSlotStates.NODE, importing.id),
         ]);

-        // should be able to get the key from the destination node using the "ASKING" command
-        assert.equal(
-            await cluster.get(key),
-            value
-        );
-
-        await Promise.all(
-            cluster.getMasters().map(({ client }) => {
-                return client.clusterSetSlot(slot, ClusterSlotStates.NODE, destination.id);
-            })
-        );
-
-        // should handle "MOVED" errors
+        // should handle `MOVED` errors
         assert.equal(
             await cluster.get(key),
             value
         );
     }, {
         serverArguments: [],
-        numberOfNodes: 2
+        numberOfMasters: 2
+    });
+
+    testUtils.testWithCluster('getRandomNode should spread the load evenly', async cluster => {
+        const totalNodes = cluster.masters.length + cluster.replicas.length,
+            ids = new Set<string>();
+        for (let i = 0; i < totalNodes; i++) {
+            ids.add(cluster.getRandomNode().id);
+        }
+
+        assert.equal(ids.size, totalNodes);
+    }, GLOBAL.CLUSTERS.WITH_REPLICAS);
+
+    testUtils.testWithCluster('getSlotRandomNode should spread the load evenly', async cluster => {
+        const totalNodes = 1 + cluster.slots[0].replicas!.length,
+            ids = new Set<string>();
+        for (let i = 0; i < totalNodes; i++) {
+            ids.add(cluster.getSlotRandomNode(0).id);
+        }
+
+        assert.equal(ids.size, totalNodes);
+    }, GLOBAL.CLUSTERS.WITH_REPLICAS);
+
+    testUtils.testWithCluster('cluster topology', async cluster => {
+        assert.equal(cluster.slots.length, 16384);
+        const { numberOfMasters, numberOfReplicas } = GLOBAL.CLUSTERS.WITH_REPLICAS;
+        assert.equal(cluster.shards.length, numberOfMasters);
+        assert.equal(cluster.masters.length, numberOfMasters);
+        assert.equal(cluster.replicas.length, numberOfReplicas * numberOfMasters);
+        assert.equal(cluster.nodeByAddress.size, numberOfMasters + numberOfMasters * numberOfReplicas);
+    }, GLOBAL.CLUSTERS.WITH_REPLICAS);
+
+    testUtils.testWithCluster('getMasters should be backwards compatible (without `minimizeConnections`)', async cluster => {
+        const masters = cluster.getMasters();
+        assert.ok(Array.isArray(masters));
+        for (const master of masters) {
+            assert.equal(typeof master.id, 'string');
+            assert.ok(master.client instanceof RedisClient);
+        }
+    }, {
+        ...GLOBAL.CLUSTERS.OPEN,
+        clusterConfiguration: {
+            minimizeConnections: undefined // reset to default
+        }
+    });
+
+    testUtils.testWithCluster('getSlotMaster should be backwards compatible (without `minimizeConnections`)', async cluster => {
+        const master = cluster.getSlotMaster(0);
+        assert.equal(typeof master.id, 'string');
+        assert.ok(master.client instanceof RedisClient);
+    }, {
+        ...GLOBAL.CLUSTERS.OPEN,
+        clusterConfiguration: {
+            minimizeConnections: undefined // reset to default
+        }
+    });
+
+    testUtils.testWithCluster('should throw CROSSSLOT error', async cluster => {
+        await assert.rejects(cluster.mGet(['a', 'b']));
+    }, GLOBAL.CLUSTERS.OPEN);
+
+    describe('minimizeConnections', () => {
+        testUtils.testWithCluster('false', async cluster => {
+            for (const master of cluster.masters) {
+                assert.ok(master.client instanceof RedisClient);
+            }
+        }, {
+            ...GLOBAL.CLUSTERS.OPEN,
+            clusterConfiguration: {
+                minimizeConnections: false
+            }
+        });
+
+        testUtils.testWithCluster('true', async cluster => {
+            for (const master of cluster.masters) {
+                assert.equal(master.client, undefined);
+            }
+        }, {
+            ...GLOBAL.CLUSTERS.OPEN,
+            clusterConfiguration: {
+                minimizeConnections: true
+            }
+        });
+    });
+
+    describe('PubSub', () => {
+        testUtils.testWithCluster('subscribe & unsubscribe', async cluster => {
+            const listener = spy();
+
+            await cluster.subscribe('channel', listener);
+
+            await Promise.all([
+                waitTillBeenCalled(listener),
+                cluster.publish('channel', 'message')
+            ]);
+
+            assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+
+            await cluster.unsubscribe('channel', listener);
+
+            assert.equal(cluster.pubSubNode, undefined);
+        }, GLOBAL.CLUSTERS.OPEN);
+
+        testUtils.testWithCluster('psubscribe & punsubscribe', async cluster => {
+            const listener = spy();
+
+            await cluster.pSubscribe('channe*', listener);
+
+            await Promise.all([
+                waitTillBeenCalled(listener),
+                cluster.publish('channel', 'message')
+            ]);
+
+            assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+
+            await cluster.pUnsubscribe('channe*', listener);
+
+            assert.equal(cluster.pubSubNode, undefined);
+        }, GLOBAL.CLUSTERS.OPEN);
+
+        testUtils.testWithCluster('should move listeners when PubSub node disconnects from the cluster', async cluster => {
+            const listener = spy();
+            await cluster.subscribe('channel', listener);
+
+            assert.ok(cluster.pubSubNode);
+            const [ migrating, importing ] = cluster.masters[0].address === cluster.pubSubNode.address ?
+                    cluster.masters :
+                    [cluster.masters[1], cluster.masters[0]],
+                [ migratingClient, importingClient ] = await Promise.all([
+                    cluster.nodeClient(migrating),
+                    cluster.nodeClient(importing)
+                ]);
+
+            const range = cluster.slots[0].master === migrating ? {
+                key: 'bar', // 5061
+                start: 0,
+                end: 8191
+            } : {
+                key: 'foo', // 12182
+                start: 8192,
+                end: 16383
+            };
+
+            await Promise.all([
+                migratingClient.clusterDelSlotsRange(range),
+                importingClient.clusterDelSlotsRange(range),
+                importingClient.clusterAddSlotsRange(range)
+            ]);
+
+            // wait for migrating node to be notified about the new topology
+            while ((await migratingClient.clusterInfo()).state !== 'ok') {
+                await promiseTimeout(50);
+            }
+
+            // make sure to cause `MOVED` error
+            await cluster.get(range.key);
+
+            await Promise.all([
+                cluster.publish('channel', 'message'),
+                waitTillBeenCalled(listener)
+            ]);
+
+            assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+        }, {
+            serverArguments: [],
+            numberOfMasters: 2,
+            minimumDockerVersion: [7]
+        });
+
+        testUtils.testWithCluster('ssubscribe & sunsubscribe', async cluster => {
+            const listener = spy();
+
+            await cluster.sSubscribe('channel', listener);
+
+            await Promise.all([
+                waitTillBeenCalled(listener),
+                cluster.sPublish('channel', 'message')
+            ]);
+
+            assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+
+            await cluster.sUnsubscribe('channel', listener);
+
+            // 10328 is the slot of `channel`
+            assert.equal(cluster.slots[10328].master.pubSubClient, undefined);
+        }, {
+            ...GLOBAL.CLUSTERS.OPEN,
+            minimumDockerVersion: [7]
+        });
+
+        testUtils.testWithCluster('should handle sharded-channel-moved events', async cluster => {
+            const SLOT = 10328,
+                migrating = cluster.slots[SLOT].master,
+                importing = cluster.masters.find(master => master !== migrating)!,
+                [ migratingClient, importingClient ] = await Promise.all([
+                    cluster.nodeClient(migrating),
+                    cluster.nodeClient(importing)
+                ]);
+
+            await Promise.all([
+                migratingClient.clusterDelSlots(SLOT),
+                importingClient.clusterDelSlots(SLOT),
+                importingClient.clusterAddSlots(SLOT)
+            ]);
+
+            // wait for migrating node to be notified about the new topology
+            while ((await migratingClient.clusterInfo()).state !== 'ok') {
+                await promiseTimeout(50);
+            }
+
+            const listener = spy();
+
+            // will trigger `MOVED` error
+            await cluster.sSubscribe('channel', listener);
+
+            await Promise.all([
+                waitTillBeenCalled(listener),
+                cluster.sPublish('channel', 'message')
+            ]);
+
+            assert.ok(listener.calledOnceWithExactly('message', 'channel'));
+        }, {
+            serverArguments: [],
+            minimumDockerVersion: [7]
+        });
+    });
 });
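Since the backwards-compatibility tests above exercise the now-deprecated accessors, here is a short migration sketch. It assumes a connected cluster at an arbitrary address, uses slot 0 only as an example, and relies on `nodeClient` working whether or not `minimizeConnections` left the node without an open connection.

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({ rootNodes: [{ url: 'redis://127.0.0.1:30001' }] });
await cluster.connect();

// before: every master carried an always-connected `client`
// const id = await cluster.getSlotMaster(0).client.clusterMyId();

// after: nodes are plain descriptors; ask the cluster for a client explicitly
const master = cluster.slots[0].master;          // or cluster.masters[0]
const client = await cluster.nodeClient(master);
console.log(await client.clusterMyId());         // the node's id, as asserted in the spec above
```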
@@ -1,11 +1,13 @@
 import COMMANDS from './commands';
 import { RedisCommand, RedisCommandArgument, RedisCommandArguments, RedisCommandRawReply, RedisCommandReply, RedisFunctions, RedisModules, RedisExtensions, RedisScript, RedisScripts, RedisCommandSignature, RedisFunction } from '../commands';
 import { ClientCommandOptions, RedisClientOptions, RedisClientType, WithFunctions, WithModules, WithScripts } from '../client';
-import RedisClusterSlots, { ClusterNode, NodeAddressMap } from './cluster-slots';
+import RedisClusterSlots, { NodeAddressMap, ShardNode } from './cluster-slots';
 import { attachExtensions, transformCommandReply, attachCommands, transformCommandArguments } from '../commander';
 import { EventEmitter } from 'events';
 import RedisClusterMultiCommand, { InstantiableRedisClusterMultiCommandType, RedisClusterMultiCommandType } from './multi-command';
 import { RedisMultiQueuedCommand } from '../multi-command';
+import { PubSubListener } from '../client/pub-sub';
+import { ErrorReply } from '../errors';

 export type RedisClusterClientOptions = Omit<
     RedisClientOptions,
@@ -17,10 +19,34 @@ export interface RedisClusterOptions<
     F extends RedisFunctions = Record<string, never>,
     S extends RedisScripts = Record<string, never>
 > extends RedisExtensions<M, F, S> {
+    /**
+     * Should contain details for some of the cluster nodes that the client will use to discover
+     * the "cluster topology". We recommend including details for at least 3 nodes here.
+     */
     rootNodes: Array<RedisClusterClientOptions>;
+    /**
+     * Default values used for every client in the cluster. Use this to specify global values,
+     * for example: ACL credentials, timeouts, TLS configuration etc.
+     */
     defaults?: Partial<RedisClusterClientOptions>;
+    /**
+     * When `true`, `.connect()` will only discover the cluster topology, without actually connecting to all the nodes.
+     * Useful for short-term or PubSub-only connections.
+     */
+    minimizeConnections?: boolean;
+    /**
+     * When `true`, distribute load by executing readonly commands (such as `GET`, `GEOSEARCH`, etc.) across all cluster nodes. When `false`, only use master nodes.
+     */
    useReplicas?: boolean;
+    /**
+     * The maximum number of times a command will be redirected due to `MOVED` or `ASK` errors.
+     */
    maxCommandRedirections?: number;
+    /**
+     * Mapping between the addresses in the cluster (see `CLUSTER SHARDS`) and the addresses the client should connect to.
+     * Useful when the cluster is running on another network.
+     */
    nodeAddressMap?: NodeAddressMap;
 }
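To make the documented options concrete, a hedged configuration sketch follows. The hosts, ports, password and the single address-map entry are placeholders chosen for illustration, not values from the commit; the shape of each piece follows the option descriptions above.

```typescript
import { createCluster } from 'redis';

const cluster = createCluster({
    rootNodes: [
        { url: 'redis://10.0.0.1:30001' },
        { url: 'redis://10.0.0.2:30002' },
        { url: 'redis://10.0.0.3:30003' }
    ],
    defaults: {
        password: 'secret'          // applied to every node client
    },
    minimizeConnections: true,      // .connect() only discovers the topology
    useReplicas: true,              // readonly commands may run on replicas
    maxCommandRedirections: 16,     // MOVED/ASK retry budget (the default)
    nodeAddressMap: {
        // address reported by the cluster -> address the client should actually dial
        '172.18.0.2:6379': { host: '10.0.0.1', port: 30001 }
    }
});

await cluster.connect();
```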
@@ -70,14 +96,44 @@ export default class RedisCluster<
     }

     readonly #options: RedisClusterOptions<M, F, S>;

     readonly #slots: RedisClusterSlots<M, F, S>;

+    get slots() {
+        return this.#slots.slots;
+    }
+
+    get shards() {
+        return this.#slots.shards;
+    }
+
+    get masters() {
+        return this.#slots.masters;
+    }
+
+    get replicas() {
+        return this.#slots.replicas;
+    }
+
+    get nodeByAddress() {
+        return this.#slots.nodeByAddress;
+    }
+
+    get pubSubNode() {
+        return this.#slots.pubSubNode;
+    }
+
     readonly #Multi: InstantiableRedisClusterMultiCommandType<M, F, S>;

+    get isOpen() {
+        return this.#slots.isOpen;
+    }
+
     constructor(options: RedisClusterOptions<M, F, S>) {
         super();

         this.#options = options;
-        this.#slots = new RedisClusterSlots(options, err => this.emit('error', err));
+        this.#slots = new RedisClusterSlots(options, this.emit.bind(this));
         this.#Multi = RedisClusterMultiCommand.extend(options);
     }

@@ -88,7 +144,7 @@ export default class RedisCluster<
         });
     }

-    async connect(): Promise<void> {
+    connect() {
         return this.#slots.connect();
     }

@@ -188,34 +244,33 @@ export default class RedisCluster<
         executor: (client: RedisClientType<M, F, S>) => Promise<Reply>
     ): Promise<Reply> {
         const maxCommandRedirections = this.#options.maxCommandRedirections ?? 16;
-        let client = this.#slots.getClient(firstKey, isReadonly);
+        let client = await this.#slots.getClient(firstKey, isReadonly);
         for (let i = 0;; i++) {
             try {
                 return await executor(client);
             } catch (err) {
-                if (++i > maxCommandRedirections || !(err instanceof Error)) {
+                if (++i > maxCommandRedirections || !(err instanceof ErrorReply)) {
                     throw err;
                 }

                 if (err.message.startsWith('ASK')) {
                     const address = err.message.substring(err.message.lastIndexOf(' ') + 1);
-                    if (this.#slots.getNodeByAddress(address)?.client === client) {
-                        await client.asking();
-                        continue;
+                    let redirectTo = await this.#slots.getMasterByAddress(address);
+                    if (!redirectTo) {
+                        await this.#slots.rediscover(client);
+                        redirectTo = await this.#slots.getMasterByAddress(address);
                     }

-                    await this.#slots.rediscover(client);
-                    const redirectTo = this.#slots.getNodeByAddress(address);
                     if (!redirectTo) {
                         throw new Error(`Cannot find node ${address}`);
                     }

-                    await redirectTo.client.asking();
-                    client = redirectTo.client;
+                    await redirectTo.asking();
+                    client = redirectTo;
                     continue;
                 } else if (err.message.startsWith('MOVED')) {
                     await this.#slots.rediscover(client);
-                    client = this.#slots.getClient(firstKey, isReadonly);
+                    client = await this.#slots.getClient(firstKey, isReadonly);
                     continue;
                 }
@@ -239,14 +294,94 @@ export default class RedisCluster<

     multi = this.MULTI;

-    getMasters(): Array<ClusterNode<M, F, S>> {
-        return this.#slots.getMasters();
+    async SUBSCRIBE<T extends boolean = false>(
+        channels: string | Array<string>,
+        listener: PubSubListener<T>,
+        bufferMode?: T
+    ) {
+        return (await this.#slots.getPubSubClient())
+            .SUBSCRIBE(channels, listener, bufferMode);
     }

-    getSlotMaster(slot: number): ClusterNode<M, F, S> {
-        return this.#slots.getSlotMaster(slot);
+    subscribe = this.SUBSCRIBE;
+
+    async UNSUBSCRIBE<T extends boolean = false>(
+        channels?: string | Array<string>,
+        listener?: PubSubListener<boolean>,
+        bufferMode?: T
+    ) {
+        return this.#slots.executeUnsubscribeCommand(client =>
+            client.UNSUBSCRIBE(channels, listener, bufferMode)
+        );
     }

+    unsubscribe = this.UNSUBSCRIBE;
+
+    async PSUBSCRIBE<T extends boolean = false>(
+        patterns: string | Array<string>,
+        listener: PubSubListener<T>,
+        bufferMode?: T
+    ) {
+        return (await this.#slots.getPubSubClient())
+            .PSUBSCRIBE(patterns, listener, bufferMode);
+    }
+
+    pSubscribe = this.PSUBSCRIBE;
+
+    async PUNSUBSCRIBE<T extends boolean = false>(
+        patterns?: string | Array<string>,
+        listener?: PubSubListener<T>,
+        bufferMode?: T
+    ) {
+        return this.#slots.executeUnsubscribeCommand(client =>
+            client.PUNSUBSCRIBE(patterns, listener, bufferMode)
+        );
+    }
+
+    pUnsubscribe = this.PUNSUBSCRIBE;
+
+    async SSUBSCRIBE<T extends boolean = false>(
+        channels: string | Array<string>,
+        listener: PubSubListener<T>,
+        bufferMode?: T
+    ) {
+        const maxCommandRedirections = this.#options.maxCommandRedirections ?? 16,
+            firstChannel = Array.isArray(channels) ? channels[0] : channels;
+        let client = await this.#slots.getShardedPubSubClient(firstChannel);
+        for (let i = 0;; i++) {
+            try {
+                return await client.SSUBSCRIBE(channels, listener, bufferMode);
+            } catch (err) {
+                if (++i > maxCommandRedirections || !(err instanceof ErrorReply)) {
+                    throw err;
+                }
+
+                if (err.message.startsWith('MOVED')) {
+                    await this.#slots.rediscover(client);
+                    client = await this.#slots.getShardedPubSubClient(firstChannel);
+                    continue;
+                }
+
+                throw err;
+            }
+        }
+    }
+
+    sSubscribe = this.SSUBSCRIBE;
+
+    SUNSUBSCRIBE<T extends boolean = false>(
+        channels: string | Array<string>,
+        listener: PubSubListener<T>,
+        bufferMode?: T
+    ) {
+        return this.#slots.executeShardedUnsubscribeCommand(
+            Array.isArray(channels) ? channels[0] : channels,
+            client => client.SUNSUBSCRIBE(channels, listener, bufferMode)
+        );
+    }
+
+    sUnsubscribe = this.SUNSUBSCRIBE;
+
     quit(): Promise<void> {
         return this.#slots.quit();
     }
@@ -254,6 +389,32 @@ export default class RedisCluster<
     disconnect(): Promise<void> {
         return this.#slots.disconnect();
     }

+    nodeClient(node: ShardNode<M, F, S>) {
+        return this.#slots.nodeClient(node);
+    }
+
+    getRandomNode() {
+        return this.#slots.getRandomNode();
+    }
+
+    getSlotRandomNode(slot: number) {
+        return this.#slots.getSlotRandomNode(slot);
+    }
+
+    /**
+     * @deprecated use `.masters` instead
+     */
+    getMasters() {
+        return this.masters;
+    }
+
+    /**
+     * @deprecated use `.slots[<SLOT>]` instead
+     */
+    getSlotMaster(slot: number) {
+        return this.slots[slot].master;
+    }
 }

 attachCommands({
@@ -11,8 +11,9 @@ describe('CLUSTER BUMPEPOCH', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterBumpEpoch', async cluster => {
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.equal(
-            typeof await cluster.getSlotMaster(0).client.clusterBumpEpoch(),
+            typeof await client.clusterBumpEpoch(),
             'string'
         );
     }, GLOBAL.SERVERS.OPEN);
@@ -11,7 +11,7 @@ describe('CLUSTER COUNT-FAILURE-REPORTS', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterCountFailureReports', async cluster => {
-        const { client } = cluster.getSlotMaster(0);
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.equal(
             typeof await client.clusterCountFailureReports(
                 await client.clusterMyId()
@@ -11,8 +11,9 @@ describe('CLUSTER COUNTKEYSINSLOT', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterCountKeysInSlot', async cluster => {
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.equal(
-            typeof await cluster.getSlotMaster(0).client.clusterCountKeysInSlot(0),
+            typeof await client.clusterCountKeysInSlot(0),
             'number'
         );
     }, GLOBAL.CLUSTERS.OPEN);
@@ -11,7 +11,8 @@ describe('CLUSTER GETKEYSINSLOT', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterGetKeysInSlot', async cluster => {
-        const reply = await cluster.getSlotMaster(0).client.clusterGetKeysInSlot(0, 1);
+        const client = await cluster.nodeClient(cluster.masters[0]),
+            reply = await client.clusterGetKeysInSlot(0, 1);
         assert.ok(Array.isArray(reply));
         for (const item of reply) {
             assert.equal(typeof item, 'string');
@@ -46,8 +46,9 @@ describe('CLUSTER INFO', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterInfo', async cluster => {
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.notEqual(
-            await cluster.getSlotMaster(0).client.clusterInfo(),
+            await client.clusterInfo(),
             null
         );
     }, GLOBAL.CLUSTERS.OPEN);
@@ -11,8 +11,9 @@ describe('CLUSTER KEYSLOT', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterKeySlot', async cluster => {
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.equal(
-            typeof await cluster.getSlotMaster(0).client.clusterKeySlot('key'),
+            typeof await client.clusterKeySlot('key'),
             'number'
         );
     }, GLOBAL.CLUSTERS.OPEN);
@@ -13,7 +13,8 @@ describe('CLUSTER LINKS', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterLinks', async cluster => {
-        const links = await cluster.getSlotMaster(0).client.clusterLinks();
+        const client = await cluster.nodeClient(cluster.masters[0]),
+            links = await client.clusterLinks();
         assert.ok(Array.isArray(links));
         for (const link of links) {
             assert.equal(typeof link.direction, 'string');
@@ -11,9 +11,11 @@ describe('CLUSTER MYID', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterMyId', async cluster => {
+        const [master] = cluster.masters,
+            client = await cluster.nodeClient(master);
         assert.equal(
-            typeof await cluster.getSlotMaster(0).client.clusterMyId(),
-            'string'
+            await client.clusterMyId(),
+            master.id
         );
     }, GLOBAL.CLUSTERS.OPEN);
 });
@@ -11,8 +11,9 @@ describe('CLUSTER SAVECONFIG', () => {
     });

     testUtils.testWithCluster('clusterNode.clusterSaveConfig', async cluster => {
+        const client = await cluster.nodeClient(cluster.masters[0]);
         assert.equal(
-            await cluster.getSlotMaster(0).client.clusterSaveConfig(),
+            await client.clusterSaveConfig(),
             'OK'
         );
     }, GLOBAL.CLUSTERS.OPEN);
@@ -13,7 +13,7 @@ type ClusterSlotsRawReply = Array<[
     ...replicas: Array<ClusterSlotsRawNode>
 ]>;

-type ClusterSlotsNode = {
+export interface ClusterSlotsNode {
     ip: string;
     port: number;
     id: string;
@@ -1,8 +1,24 @@
 import { strict as assert } from 'assert';
 import testUtils, { GLOBAL } from '../test-utils';
-import RedisClient from '../client';
+import { transformArguments } from './PING';

 describe('PING', () => {
+    describe('transformArguments', () => {
+        it('default', () => {
+            assert.deepEqual(
+                transformArguments(),
+                ['PING']
+            );
+        });
+
+        it('with message', () => {
+            assert.deepEqual(
+                transformArguments('message'),
+                ['PING', 'message']
+            );
+        });
+    });
+
     describe('client.ping', () => {
         testUtils.testWithClient('string', async client => {
             assert.equal(
@@ -13,7 +29,7 @@ describe('PING', () => {

         testUtils.testWithClient('buffer', async client => {
             assert.deepEqual(
-                await client.ping(RedisClient.commandOptions({ returnBuffers: true })),
+                await client.ping(client.commandOptions({ returnBuffers: true })),
                 Buffer.from('PONG')
             );
         }, GLOBAL.SERVERS.OPEN);
@@ -1,7 +1,12 @@
-import { RedisCommandArgument } from '.';
+import { RedisCommandArgument, RedisCommandArguments } from '.';

-export function transformArguments(): Array<string> {
-    return ['PING'];
+export function transformArguments(message?: RedisCommandArgument): RedisCommandArguments {
+    const args: RedisCommandArguments = ['PING'];
+    if (message) {
+        args.push(message);
+    }
+
+    return args;
 }

 export declare function transformReply(): RedisCommandArgument;
@@ -1,5 +1,7 @@
 import { RedisCommandArgument, RedisCommandArguments } from '.';

+export const IS_READ_ONLY = true;
+
 export function transformArguments(
     channel: RedisCommandArgument,
     message: RedisCommandArgument

packages/client/lib/commands/PUBSUB_SHARDCHANNELS.spec.ts (new file, 30 lines)
@@ -0,0 +1,30 @@
+import { strict as assert } from 'assert';
+import testUtils, { GLOBAL } from '../test-utils';
+import { transformArguments } from './PUBSUB_SHARDCHANNELS';
+
+describe('PUBSUB SHARDCHANNELS', () => {
+    testUtils.isVersionGreaterThanHook([7]);
+
+    describe('transformArguments', () => {
+        it('without pattern', () => {
+            assert.deepEqual(
+                transformArguments(),
+                ['PUBSUB', 'SHARDCHANNELS']
+            );
+        });
+
+        it('with pattern', () => {
+            assert.deepEqual(
+                transformArguments('patter*'),
+                ['PUBSUB', 'SHARDCHANNELS', 'patter*']
+            );
+        });
+    });
+
+    testUtils.testWithClient('client.pubSubShardChannels', async client => {
+        assert.deepEqual(
+            await client.pubSubShardChannels(),
+            []
+        );
+    }, GLOBAL.SERVERS.OPEN);
+});

packages/client/lib/commands/PUBSUB_SHARDCHANNELS.ts (new file, 13 lines)
@@ -0,0 +1,13 @@
+import { RedisCommandArgument, RedisCommandArguments } from '.';
+
+export const IS_READ_ONLY = true;
+
+export function transformArguments(
+    pattern?: RedisCommandArgument
+): RedisCommandArguments {
+    const args: RedisCommandArguments = ['PUBSUB', 'SHARDCHANNELS'];
+    if (pattern) args.push(pattern);
+    return args;
+}
+
+export declare function transformReply(): Array<RedisCommandArgument>;

packages/client/lib/commands/SPUBLISH.spec.ts (new file, 21 lines)
@@ -0,0 +1,21 @@
+import { strict as assert } from 'assert';
+import testUtils, { GLOBAL } from '../test-utils';
+import { transformArguments } from './SPUBLISH';
+
+describe('SPUBLISH', () => {
+    testUtils.isVersionGreaterThanHook([7]);
+
+    it('transformArguments', () => {
+        assert.deepEqual(
+            transformArguments('channel', 'message'),
+            ['SPUBLISH', 'channel', 'message']
+        );
+    });
+
+    testUtils.testWithClient('client.sPublish', async client => {
+        assert.equal(
+            await client.sPublish('channel', 'message'),
+            0
+        );
+    }, GLOBAL.SERVERS.OPEN);
+});

packages/client/lib/commands/SPUBLISH.ts (new file, 14 lines)
@@ -0,0 +1,14 @@
+import { RedisCommandArgument, RedisCommandArguments } from '.';
+
+export const IS_READ_ONLY = true;
+
+export const FIRST_KEY_INDEX = 1;
+
+export function transformArguments(
+    channel: RedisCommandArgument,
+    message: RedisCommandArgument
+): RedisCommandArguments {
+    return ['SPUBLISH', channel, message];
+}
+
+export declare function transformReply(): number;

@@ -137,7 +137,6 @@ export function transformSortedSetMemberNullReply(
 export function transformSortedSetMemberReply(
     reply: [RedisCommandArgument, RedisCommandArgument]
 ): ZMember {
-
     return {
         value: reply[0],
         score: transformNumberInfinityReply(reply[1])
@@ -3,7 +3,6 @@ import { SinonSpy } from 'sinon';
 import { promiseTimeout } from './utils';

 export default new TestUtils({
-    defaultDockerVersion: '7.0.2',
     dockerImageName: 'redis',
     dockerImageVersionArgument: 'redis-version'
 });
@@ -31,6 +30,14 @@ export const GLOBAL = {
             password: 'password'
         }
     }
+    },
+    WITH_REPLICAS: {
+        serverArguments: [],
+        numberOfMasters: 2,
+        numberOfReplicas: 1,
+        clusterConfiguration: {
+            useReplicas: true
+        }
     }
 }
 };
@@ -1,6 +1,6 @@
 {
     "name": "@redis/client",
-    "version": "1.4.2",
+    "version": "1.5.0",
     "license": "MIT",
     "main": "./dist/index.js",
     "types": "./dist/index.d.ts",
@@ -3,8 +3,7 @@ import RedisGraph from '.';

 export default new TestUtils({
     dockerImageName: 'redislabs/redisgraph',
-    dockerImageVersionArgument: 'redisgraph-version',
-    defaultDockerVersion: '2.8.15'
+    dockerImageVersionArgument: 'redisgraph-version'
 });

 export const GLOBAL = {
@@ -3,8 +3,7 @@ import RedisJSON from '.';

 export default new TestUtils({
     dockerImageName: 'redislabs/rejson',
-    dockerImageVersionArgument: 'rejson-version',
-    defaultDockerVersion: '2.0.9'
+    dockerImageVersionArgument: 'rejson-version'
 });

 export const GLOBAL = {
@@ -1,8 +1,8 @@
 import { createConnection } from 'net';
 import { once } from 'events';
-import { RedisModules, RedisFunctions, RedisScripts } from '@redis/client/dist/lib/commands';
-import RedisClient, { RedisClientType } from '@redis/client/dist/lib/client';
+import RedisClient from '@redis/client/dist/lib/client';
 import { promiseTimeout } from '@redis/client/dist/lib/utils';
+import { ClusterSlotsReply } from '@redis/client/dist/lib/commands/CLUSTER_SLOTS';
 import * as path from 'path';
 import { promisify } from 'util';
 import { exec } from 'child_process';
@@ -64,7 +64,7 @@ async function spawnRedisServerDocker({ image, version }: RedisServerDockerConfig
 }

     while (await isPortAvailable(port)) {
-        await promiseTimeout(500);
+        await promiseTimeout(50);
     }

     return {
@@ -102,17 +102,65 @@ after(() => {
 });

 export interface RedisClusterDockersConfig extends RedisServerDockerConfig {
-    numberOfNodes?: number;
+    numberOfMasters?: number;
+    numberOfReplicas?: number;
+}
+
+async function spawnRedisClusterNodeDockers(
+    dockersConfig: RedisClusterDockersConfig,
+    serverArguments: Array<string>,
+    fromSlot: number,
+    toSlot: number
+) {
+    const range: Array<number> = [];
+    for (let i = fromSlot; i < toSlot; i++) {
+        range.push(i);
+    }
+
+    const master = await spawnRedisClusterNodeDocker(
+        dockersConfig,
+        serverArguments
+    );
+
+    await master.client.clusterAddSlots(range);
+
+    if (!dockersConfig.numberOfReplicas) return [master];
+
+    const replicasPromises: Array<ReturnType<typeof spawnRedisClusterNodeDocker>> = [];
+    for (let i = 0; i < (dockersConfig.numberOfReplicas ?? 0); i++) {
+        replicasPromises.push(
+            spawnRedisClusterNodeDocker(dockersConfig, [
+                ...serverArguments,
+                '--cluster-enabled',
+                'yes',
+                '--cluster-node-timeout',
+                '5000'
+            ]).then(async replica => {
+                await replica.client.clusterMeet('127.0.0.1', master.docker.port);
+
+                while ((await replica.client.clusterSlots()).length === 0) {
+                    await promiseTimeout(50);
+                }
+
+                await replica.client.clusterReplicate(
+                    await master.client.clusterMyId()
+                );
+
+                return replica;
+            })
+        );
+    }
+
+    return [
+        master,
+        ...await Promise.all(replicasPromises)
+    ];
 }

 async function spawnRedisClusterNodeDocker(
     dockersConfig: RedisClusterDockersConfig,
-    serverArguments: Array<string>,
-    fromSlot: number,
-    toSlot: number,
-    waitForState: boolean,
-    meetPort?: number
-): Promise<RedisServerDocker> {
+    serverArguments: Array<string>
+) {
     const docker = await spawnRedisServerDocker(dockersConfig, [
         ...serverArguments,
         '--cluster-enabled',
@@ -128,78 +176,64 @@ async function spawnRedisClusterNodeDocker(

     await client.connect();

-    try {
-        const range = [];
-        for (let i = fromSlot; i < toSlot; i++) {
-            range.push(i);
-        }
-
-        const promises: Array<Promise<unknown>> = [client.clusterAddSlots(range)];
-
-        if (meetPort) {
-            promises.push(client.clusterMeet('127.0.0.1', meetPort));
-        }
-
-        if (waitForState) {
-            promises.push(waitForClusterState(client));
-        }
-
-        await Promise.all(promises);
-
-        return docker;
-    } finally {
-        await client.disconnect();
-    }
-}
-
-async function waitForClusterState<
-    M extends RedisModules,
-    F extends RedisFunctions,
-    S extends RedisScripts
->(client: RedisClientType<M, F, S>): Promise<void> {
-    while ((await client.clusterInfo()).state !== 'ok') {
-        await promiseTimeout(500);
-    }
+    return {
+        docker,
+        client
+    };
 }

 const SLOTS = 16384;

-async function spawnRedisClusterDockers(dockersConfig: RedisClusterDockersConfig, serverArguments: Array<string>): Promise<Array<RedisServerDocker>> {
-    const numberOfNodes = dockersConfig.numberOfNodes ?? 3,
-        slotsPerNode = Math.floor(SLOTS / numberOfNodes),
-        dockers: Array<RedisServerDocker> = [];
-    for (let i = 0; i < numberOfNodes; i++) {
+async function spawnRedisClusterDockers(
+    dockersConfig: RedisClusterDockersConfig,
+    serverArguments: Array<string>
+): Promise<Array<RedisServerDocker>> {
+    const numberOfMasters = dockersConfig.numberOfMasters ?? 2,
+        slotsPerNode = Math.floor(SLOTS / numberOfMasters),
+        spawnPromises: Array<ReturnType<typeof spawnRedisClusterNodeDockers>> = [];
+    for (let i = 0; i < numberOfMasters; i++) {
         const fromSlot = i * slotsPerNode,
-            [ toSlot, waitForState ] = i === numberOfNodes - 1 ? [SLOTS, true] : [fromSlot + slotsPerNode, false];
-        dockers.push(
-            await spawnRedisClusterNodeDocker(
+            toSlot = i === numberOfMasters - 1 ? SLOTS : fromSlot + slotsPerNode;
+        spawnPromises.push(
+            spawnRedisClusterNodeDockers(
                 dockersConfig,
                 serverArguments,
                 fromSlot,
-                toSlot,
-                waitForState,
-                i === 0 ? undefined : dockers[i - 1].port
+                toSlot
             )
        );
     }

-    const client = RedisClient.create({
-        socket: {
-            port: dockers[0].port
-        }
-    });
-
-    await client.connect();
-
-    try {
-        while ((await client.clusterInfo()).state !== 'ok') {
-            await promiseTimeout(500);
-        }
-    } finally {
-        await client.disconnect();
+    const nodes = (await Promise.all(spawnPromises)).flat(),
+        meetPromises: Array<Promise<unknown>> = [];
+    for (let i = 1; i < nodes.length; i++) {
+        meetPromises.push(
+            nodes[i].client.clusterMeet('127.0.0.1', nodes[0].docker.port)
+        );
     }

-    return dockers;
+    await Promise.all(meetPromises);
+
+    await Promise.all(
+        nodes.map(async ({ client }) => {
+            while (totalNodes(await client.clusterSlots()) !== nodes.length) {
+                await promiseTimeout(50);
+            }
+
+            return client.disconnect();
+        })
+    );
+
+    return nodes.map(({ docker }) => docker);
+}
+
+function totalNodes(slots: ClusterSlotsReply) {
+    let total = slots.length;
+    for (const slot of slots) {
+        total += slot.replicas.length;
+    }
+
+    return total;
 }

 const RUNNING_CLUSTERS = new Map<Array<string>, ReturnType<typeof spawnRedisClusterDockers>>();
@@ -9,7 +9,7 @@ import { hideBin } from 'yargs/helpers';
 interface TestUtilsConfig {
     dockerImageName: string;
     dockerImageVersionArgument: string;
-    defaultDockerVersion: string;
+    defaultDockerVersion?: string;
 }

 interface CommonTestOptions {
@@ -33,7 +33,8 @@ interface ClusterTestOptions<
 > extends CommonTestOptions {
     serverArguments: Array<string>;
     clusterConfiguration?: Partial<RedisClusterOptions<M, F, S>>;
-    numberOfNodes?: number;
+    numberOfMasters?: number;
+    numberOfReplicas?: number;
 }

 interface Version {
@@ -43,7 +44,7 @@ interface Version {

 export default class TestUtils {
     static #parseVersionNumber(version: string): Array<number> {
-        if (version === 'edge') return [Infinity];
+        if (version === 'latest' || version === 'edge') return [Infinity];

         const dashIndex = version.indexOf('-');
         return (dashIndex === -1 ? version : version.substring(0, dashIndex))
@@ -58,7 +59,7 @@ export default class TestUtils {
         });
     }

-    static #getVersion(argumentName: string, defaultVersion: string): Version {
+    static #getVersion(argumentName: string, defaultVersion = 'latest'): Version {
         return yargs(hideBin(process.argv))
             .option(argumentName, {
                 type: 'string',
@@ -163,9 +164,13 @@ export default class TestUtils {
         M extends RedisModules,
         F extends RedisFunctions,
         S extends RedisScripts
-    >(cluster: RedisClusterType<M, F, S>): Promise<void> {
-        await Promise.all(
-            cluster.getMasters().map(({ client }) => client.flushAll())
+    >(cluster: RedisClusterType<M, F, S>): Promise<unknown> {
+        return Promise.all(
+            cluster.masters.map(async ({ client }) => {
+                if (client) {
+                    await (await client).flushAll();
+                }
+            })
         );
     }
@@ -186,7 +191,8 @@ export default class TestUtils {

         dockersPromise = spawnRedisCluster({
             ...dockerImage,
-            numberOfNodes: options?.numberOfNodes
+            numberOfMasters: options?.numberOfMasters,
+            numberOfReplicas: options?.numberOfReplicas
         }, options.serverArguments);
         return dockersPromise;
     });
@@ -197,15 +203,15 @@ export default class TestUtils {

         const dockers = await dockersPromise,
             cluster = RedisCluster.create({
-                ...options.clusterConfiguration,
                 rootNodes: dockers.map(({ port }) => ({
                     socket: {
                         port
                     }
-                }))
+                })),
+                minimizeConnections: true,
+                ...options.clusterConfiguration
             });

         await cluster.connect();

         try {
@@ -3,8 +3,7 @@ import TimeSeries from '.';

 export default new TestUtils({
     dockerImageName: 'redislabs/redistimeseries',
-    dockerImageVersionArgument: 'timeseries-version',
-    defaultDockerVersion: '1.8.0'
+    dockerImageVersionArgument: 'timeseries-version'
 });

 export const GLOBAL = {
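For completeness, a sketch of how a spec could opt into the new replica-aware cluster fixture, mirroring the `WITH_REPLICAS` preset and the `numberOfMasters`/`numberOfReplicas` options introduced above. The test name and assertions are illustrative only.

```typescript
import { strict as assert } from 'assert';
import testUtils from '../test-utils';

testUtils.testWithCluster('spins up masters and replicas', async cluster => {
    assert.equal(cluster.masters.length, 2);
    assert.equal(cluster.replicas.length, 2); // one replica per master
}, {
    serverArguments: [],
    numberOfMasters: 2,
    numberOfReplicas: 1,
    clusterConfiguration: {
        useReplicas: true
    }
});
```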