mirror of https://github.com/redis/node-redis.git synced 2025-08-07 13:22:56 +03:00
* init v4

* add .gitignore to benchmark

* spawn redis-servers for tests,
add some tests,
fix client auth on connect

* add tests coverage report

* add tests workflow, replace nyc text reporter with text-summary

* run tests with node 16.x & redis 6.x only (for now)

* add socket events on client,
stop reconnecting when manually calling disconnect,
remove abort signal listener when a command is written on the socket

* add isOpen boolean getter on client, add maxLength option to command queue, add test for client.multi

* move to use CommonJS

* add MULTI and EXEC commands when executing a multi command, make client.multi return type inherit the module commands, clean some tests, exclude spec files from coverage report

* missing file from commit 61edd4f1b5

* exclude spec files from coverage report

* add support for options in a command function (.get, .set, ...), add support for the SELECT command, implement a couple of commands, fix client socket reconnection strategy, add support for using replicas (RO) in cluster, and more..
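
A hedged sketch of the call-site shape: in the released v4 this surfaces as commandOptions() plus per-command option objects, while at this commit the mechanism still uses a WeakSet (later a Symbol), so exact names may differ.

const { createClient, commandOptions } = require('redis');

async function main() {
  const client = createClient();
  await client.connect();

  // per-command options ride along as the first argument
  const buffer = await client.get(
    commandOptions({ returnBuffers: true }),
    'key'
  );
  console.log(buffer);

  // SELECT switches this connection to another logical database
  await client.select(1);
}

main();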

* fix client.blPop test

* use which to find redis-server path

* change command options to work with Symbol rather than WeakSet

* implement more commands

* Add support for lua scripts in client & multi, fix client socket initiator, implement simple cluster nodes discovery strategy
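
The script support eventually lands as defineScript; a sketch assuming the released v4 shape (names like NUMBER_OF_KEYS and transformArguments may differ at this commit).

const { createClient, defineScript } = require('redis');

const client = createClient({
  scripts: {
    add: defineScript({
      NUMBER_OF_KEYS: 1,
      SCRIPT: 'return redis.call("GET", KEYS[1]) + ARGV[1];',
      transformArguments(key, toAdd) {
        return [key, toAdd.toString()];
      },
      transformReply(reply) {
        return reply;
      }
    })
  }
});

async function main() {
  await client.connect();
  await client.set('key', '1');
  console.log(await client.add('key', 2)); // 3
}

main();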

* replace `callbackify` with `legacyMode`
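
legacyMode keeps the v3 callback API on the same client while the promise API stays reachable; roughly (the client.v4 escape hatch is the released v4 name and is an assumption for this commit):

const { createClient } = require('redis');

const client = createClient({ legacyMode: true });

async function main() {
  await client.connect();

  // v3-style callback API still works in legacy mode
  client.set('key', 'value', (err, reply) => {
    if (err) throw err;
    console.log(reply); // 'OK'
  });

  // the new promise-based API remains available under client.v4
  console.log(await client.v4.get('key'));
}

main();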

* add the SCAN command and client.scanIterator
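
scanIterator wraps the SCAN cursor in an async iterator; for example (MATCH/COUNT option names as in the released v4):

const { createClient } = require('redis');

async function main() {
  const client = createClient();
  await client.connect();

  // the iterator issues SCAN behind the scenes and keeps track of the cursor
  for await (const key of client.scanIterator({ MATCH: 'user:*', COUNT: 100 })) {
    console.log(key);
  }

  await client.disconnect();
}

main();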

* rename scanIterator

* init benchmark workflow

* fix benchmark workflow

* fix benchmark workflow

* fix benchmark workflow

* push coverage report to Coveralls

* fix Coveralls

* generate lcov (for Coveralls)

* fix .nycrc.json

* PubSub
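
In v4 pub/sub moves to a dedicated subscriber connection with a listener per subscribe call; a sketch under that assumption:

const { createClient } = require('redis');

async function main() {
  const publisher = createClient();
  const subscriber = publisher.duplicate();
  await Promise.all([publisher.connect(), subscriber.connect()]);

  // the listener is attached per channel instead of a global 'message' event
  await subscriber.subscribe('news', (message, channel) => {
    console.log(`${channel}: ${message}`);
  });

  await publisher.publish('news', 'hello');
}

main();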

* add support for all set commands (including sScanIterator)

* support pipeline

* fix KEEPTTL in SET

* remove console.log

* add HyperLogLog commands

* update README.md (thanks to @guyroyse)

* add support for most of the "keys commands"

* fix EXPIREAT.spec.ts

* add support for date in both EXPIREAT & EXPIRE

* add tests

* better cluster nodes discovery strategy after MOVED error, add PubSub test

* fix PubSub UNSUBSCRIBE/PUNSUBSCRIBE without channel and/or listener

* fix PubSub

* add release-it to dev dependencies

* Release 4.0.0-next.0

* fix .npmignore

* Release 4.0.0-next.1

* fix links in README.md

* fix .npmignore

* Release 4.0.0-next.2

* add support for all sorted set commands

* add support for most stream commands

* add missing file from commit 53de279afe

* lots of TODO comments

* make PubSub test more stable

* clean ZPOPMAX

* add support for lua scripts and modules in cluster, spawn cluster for tests, add some cluster tests, fix pubsub listener arguments

* GET.spec.ts

* add support for List commands, fix some Sorted Set commands, add some cluster commands, spawn cluster for testing, add support for command options in cluster, and more

* add missing file from commit faab94fab2

* clean ZRANK and ZREVRANK

* add XREAD and XREADGROUP commands

* remove unused files

* implement a couple more commands, make the cluster random iterator per node (instead of per slot)

* Release 4.0.0-next.3

* add spec files to npmignore

* fix some code analyzers (LGTM, deepsource, codeclimate) issues

* fix CLUSTER_NODES, add some tests

* add HSCAN, clean some commands, add tests for generic transformers

* add missing files from 0feb35a1fb

* update README.md (thanks to @guyroyse)

* handle ASK errors, add some commands and tests

* Release 4.0.0-next.4

* replace "modern" with "v4"

* remove unused imports

* add all ACL subcommands, all MODULE subcommands, and some other commands

* remove 2 unused imports

* fix BITFIELD command

* fix XTRIM spec file

* clean code

* fix package.json types field

* better modules support, fix some bugs in legacy mode, add some tests

* remove unused function

* add test for hScanIterator

* change node minimum version to 12 (latest LTS)

* update tsconfig.json to support node 12, run tests on Redis 5 & 6 and on all node live versions

* remove future node releases :P

* remove "lib" from ts compiler options

* Update tsconfig.json

* fix build

* run some tests only on supported redis versions, use coveralls parallel mode

* fix tests

* Do not use "timers/promises", fix "isRedisVersionGreaterThan"

* skip AbortController tests when not available

* use 'fs'.promises instead of 'fs/promises'

* add some missing commands

* run GETDEL tests only if the redis version is greater than 6.2

* implement some GEO commands, improve scan generic transformer, expose RPUSHX

* fix GEOSEARCH & GEOSEARCHSTORE

* use socket.setNoDelay and queueMicrotask to improve latency
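
Not the actual implementation, just the idea: turn off Nagle's algorithm on the socket and use queueMicrotask to coalesce everything written in the same tick into one socket write.

const net = require('net');

const socket = net.connect(6379, '127.0.0.1'); // assumes a local redis-server
socket.setNoDelay(true); // send small packets immediately instead of letting the kernel batch them

const pending = [];
function write(buffer) {
  // the first write in a tick schedules the flush; later writes just queue up
  if (pending.push(buffer) === 1) {
    queueMicrotask(() => socket.write(Buffer.concat(pending.splice(0, pending.length))));
  }
}

write(Buffer.from('*1\r\n$4\r\nPING\r\n'));
write(Buffer.from('*1\r\n$4\r\nPING\r\n')); // both PINGs leave in a single write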

* commands-queue.ts: String length / byte length counting issue (#1630)

* Update commands-queue.ts

Hopefully fixing #1628

* Reverted 2fa5ea6, and implemented test for byte length check

* Changed back to Buffer.byteLength, based on input from the issue author. Updated the test to look for 4 bytes.

* Fixed. There were two places where the length was calculated.

* Removed redundant string assignment

* add 2 bytes test as well

Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
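
The underlying issue: the queue tracked string length (UTF-16 code units) where it needed the encoded byte length, and the two diverge for non-ASCII input. For example:

const twoBytes = 'é';   // 1 code unit, 2 bytes as UTF-8
const fourBytes = '𐍈';  // 2 code units (a surrogate pair), 4 bytes as UTF-8

console.log(twoBytes.length, Buffer.byteLength(twoBytes));   // 1 2
console.log(fourBytes.length, Buffer.byteLength(fourBytes)); // 2 4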

* fix scripts in multi

* do not hide bugs in redis

* fix for e7bf09644b

* remove unused import

* implement WATCH command, fix ZRANGESTORE & GEOSEARCHSTORE tests

* update README.md

Co-authored-by: @GuyRoyse

* use typedoc to auto generate documentation

* run "npm install" before "npm run documentation"

* clean documentation workflow

* fix WATCH spec file

* increase "CLUSTER_NODE_TIMEOUT" to 5000ms to avoid "CLUSTERDOWN" errors in tests

* pull cluster state every 100 ms

* await meetPromises before pulling the cluster state

* enhance the way commanders (client/multi/cluster) get extended with modules and scripts

* add test for socket retry strategy

* implement more commands

* set GETEX minimum version to 6.2

* remove unused imports

* add support for multi in cluster
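
A hedged sketch of a MULTI through the cluster client (createCluster/rootNodes are the released v4 names):

const { createCluster } = require('redis');

async function main() {
  const cluster = createCluster({
    rootNodes: [{ url: 'redis://127.0.0.1:30001' }] // hypothetical local cluster node
  });
  await cluster.connect();

  // keys in one MULTI must live in the same slot, hence the {user:1} hash tag
  const replies = await cluster.multi()
    .set('{user:1}:name', 'Alice')
    .get('{user:1}:name')
    .exec();

  console.log(replies); // [ 'OK', 'Alice' ]
}

main();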

* upgrade dependencies

* Release 4.0.0-next.5

* remove unused imports

* improve benchmarking

* use the same Multi with duplicated clients

* exclude some files from the documentation, add some exports, clean code

* fix #1636 - handle null in multi.exec

* remove unused import

* add support for tuples in HSET
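
With tuple support, field/value pairs can be handed to hSet in a few equivalent shapes (the exact accepted shapes may differ at this commit):

const { createClient } = require('redis');

async function main() {
  const client = createClient();
  await client.connect();

  // all three calls set the same fields on the same hash
  await client.hSet('user:1', 'name', 'Alice');
  await client.hSet('user:1', [['name', 'Alice'], ['age', '29']]);
  await client.hSet('user:1', { name: 'Alice', age: '29' });
}

main();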

* add FIRST_KEY_INDEX to HSET

* add a bunch of missing commands, fix MSET and HELLO, add some tests

* add FIRST_KEY_INDEX to MSET and MSETNX

* upgrade actions

* fix coverallsapp/github-action version

* Update documentation.yml

* Update documentation.yml

* clean code

* remove unused imports

* use "npm ci" instead of "npm install"

* fix `self` binding on client modules, use connection pool for `duplicateConnection`

* add client.executeIsolated, rename "duplicateConnection" to "isolated", update README.md (thanks to @GuyRoyse and @SimonPrickett)
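
executeIsolated borrows a dedicated connection from the pool, which is what makes per-connection state such as WATCH usable; a sketch assuming the released v4 signature:

const { createClient } = require('redis');

async function main() {
  const client = createClient();
  await client.connect();

  await client.executeIsolated(async isolatedClient => {
    // WATCH is per-connection state, so it has to live on the isolated connection
    await isolatedClient.watch('balance');
    await isolatedClient.multi()
      .decrBy('balance', 10)
      .exec(); // throws WatchError if 'balance' changed in the meantime
  });
}

main();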

* update README (thanks to @GuyRoyse), add some tests

* try to fix "cluster is down" errors in tests

* try to fix "cluster is down" errors in tests

* upgrade dependencies

* update package-lock

* Release 4.0.0-next.6

* fix #1636 - fix WatchError

* fix for f1bf0beebf - remove .only from multi tests

* Release 4.0.0-next.7

* update README and other markdown files

Co-authored-by: @GuyRoyse & @SimonPrickett

* Doc updates. (#1640)

* update docs, upgrade dependencies

* fix README

* Release 4.0.0-rc.0

* Update README.md

* update docs, add `connectTimeout` options, fix tls
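
connectTimeout and TLS are both socket-level options; roughly (option names as in the released v4 docs, host is a placeholder):

const { createClient } = require('redis');

const client = createClient({
  socket: {
    host: 'redis.example.com',
    port: 6379,
    connectTimeout: 5000, // ms to wait for the TCP/TLS connection before failing
    tls: true
  }
});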

Co-authored-by: Guy Royse <guy@guyroyse.com>

* npm update, "fix" some tests, clean code

* fix AssertionError import

* fix #1642 - fix XREAD, XREADGROUP and XTRIM

* fix #1644 - add the QUIT command

* add socket.noDelay and socket.keepAlive configurations
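
The new socket.noDelay and socket.keepAlive settings map onto the underlying net.Socket options; for example:

const { createClient } = require('redis');

const client = createClient({
  socket: {
    noDelay: true,    // default: disable Nagle's algorithm for lower latency
    keepAlive: 5000   // TCP keep-alive with a 5 second initial delay (false disables it)
  }
});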

* Update README.md (#1645)

* Update README.md

Fixed an issue with how the connection string was specified.
Now you can have user@host without having to specify a password, which just makes more sense.

* Update client-configuration.md as well

Co-authored-by: Leibale Eidelman <leibale1998@gmail.com>
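
The URL shapes this change is about, as the README fix describes them (host names here are placeholders):

const { createClient } = require('redis');

// user and password
createClient({ url: 'redis://alice:secret@my.redis.host:6380' });

// user only, no password (the case this fix enables)
createClient({ url: 'redis://alice@my.redis.host:6380' });

// no credentials at all
createClient({ url: 'redis://my.redis.host:6380' });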

* update socket.reconnectStrategy description
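
reconnectStrategy receives the number of retries so far and returns how many milliseconds to wait before the next attempt (or an Error to stop reconnecting); a common sketch:

const { createClient } = require('redis');

const client = createClient({
  socket: {
    // back off 50ms per attempt, capped at 500ms
    reconnectStrategy: retries => Math.min(retries * 50, 500)
  }
});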

* fix broken link in v3-to-v4.md

* increase test coverage, fix bug in cluster redirection strategy, implement CLIENT_ID, remove unused EXEC command

Co-authored-by: Nova <novaw@warrenservices.co.uk>
Co-authored-by: Simon Prickett <simon@crudworks.org>
Co-authored-by: Guy Royse <guy@guyroyse.com>
Leibale Eidelman
2021-09-02 10:04:48 -04:00
committed by GitHub
parent 4f85030e42
commit 4e6d018d77
661 changed files with 28847 additions and 14559 deletions


@@ -1,7 +0,0 @@
'use strict';
var redis = require('redis');
// The client stashes the password and will reauthenticate on every connect.
redis.createClient({
password: 'somepass'
});


@@ -1,34 +0,0 @@
'use strict';
var redis = require('../index');
var client = redis.createClient();
var remaining_ops = 100000;
var paused = false;
function op () {
if (remaining_ops <= 0) {
console.error('Finished.');
process.exit(0);
}
remaining_ops--;
client.hset('test hash', 'val ' + remaining_ops, remaining_ops);
if (client.should_buffer === true) {
console.log('Pausing at ' + remaining_ops);
paused = true;
} else {
setTimeout(op, 1);
}
}
client.on('drain', function () {
if (paused) {
console.log('Resuming at ' + remaining_ops);
paused = false;
process.nextTick(op);
} else {
console.log('Got drain while not paused at ' + remaining_ops);
}
});
op();


@@ -1,14 +0,0 @@
'use strict';
var redis = require('../index');
var client = redis.createClient();
client.eval('return 100.5', 0, function (err, res) {
console.dir(err);
console.dir(res);
});
client.eval([ 'return 100.5', 0 ], function (err, res) {
console.dir(err);
console.dir(res);
});


@@ -1,26 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
// Extend the RedisClient prototype to add a custom method
// This one converts the results from 'INFO' into a JavaScript Object
redis.RedisClient.prototype.parse_info = function (callback) {
this.info(function (err, res) {
var lines = res.toString().split('\r\n').sort();
var obj = {};
lines.forEach(function (line) {
var parts = line.split(':');
if (parts[1]) {
obj[parts[0]] = parts[1];
}
});
callback(obj);
});
};
client.parse_info(function (info) {
console.dir(info);
client.quit();
});


@@ -1,38 +0,0 @@
'use strict';
// Read a file from disk, store it in Redis, then read it back from Redis.
var redis = require('redis');
var client = redis.createClient({
return_buffers: true
});
var fs = require('fs');
var assert = require('assert');
var filename = 'grumpyCat.jpg';
// Get the file I use for testing like this:
// curl http://media4.popsugar-assets.com/files/2014/08/08/878/n/1922507/caef16ec354ca23b_thumb_temp_cover_file32304521407524949.xxxlarge/i/Funny-Cat-GIFs.jpg -o grumpyCat.jpg
// or just use your own file.
// Read a file from fs, store it in Redis, get it back from Redis, write it back to fs.
fs.readFile(filename, function (err, data) {
if (err) throw err;
console.log('Read ' + data.length + ' bytes from filesystem.');
client.set(filename, data, redis.print); // set entire file
client.get(filename, function (err, reply) { // get entire file
if (err) {
console.log('Get error: ' + err);
} else {
assert.strictEqual(data.inspect(), reply.inspect());
fs.writeFile('duplicate_' + filename, reply, function (err) {
if (err) {
console.log('Error on write: ' + err);
} else {
console.log('File written.');
}
client.end();
});
}
});
});


@@ -1,7 +0,0 @@
'use strict';
var client = require('redis').createClient();
client.mget(['sessions started', 'sessions started', 'foo'], function (err, res) {
console.dir(res);
});


@@ -1,12 +0,0 @@
'use strict';
var client = require('../index').createClient();
var util = require('util');
client.monitor(function (err, res) {
console.log('Entering monitoring mode.');
});
client.on('monitor', function (time, args) {
console.log(time + ': ' + util.inspect(args));
});


@@ -1,49 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
var set_size = 20;
client.sadd('bigset', 'a member');
client.sadd('bigset', 'another member');
while (set_size > 0) {
client.sadd('bigset', 'member ' + set_size);
set_size -= 1;
}
// multi chain with an individual callback
client.multi()
.scard('bigset')
.smembers('bigset')
.keys('*', function (err, replies) {
client.mget(replies, redis.print);
})
.dbsize()
.exec(function (err, replies) {
console.log('MULTI got ' + replies.length + ' replies');
replies.forEach(function (reply, index) {
console.log('Reply ' + index + ': ' + reply.toString());
});
});
client.mset('incr thing', 100, 'incr other thing', 1, redis.print);
// start a separate multi command queue
var multi = client.multi();
multi.incr('incr thing', redis.print);
multi.incr('incr other thing', redis.print);
// runs immediately
client.get('incr thing', redis.print); // 100
// drains multi queue and runs atomically
multi.exec(function (err, replies) {
console.log(replies); // 101, 2
});
// you can re-run the same transaction if you like
multi.exec(function (err, replies) {
console.log(replies); // 102, 3
client.quit();
});


@@ -1,31 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
// start a separate command queue for multi
var multi = client.multi();
multi.incr('incr thing', redis.print);
multi.incr('incr other thing', redis.print);
// runs immediately
client.mset('incr thing', 100, 'incr other thing', 1, redis.print);
// drains multi queue and runs atomically
multi.exec(function (err, replies) {
console.log(replies); // 101, 2
});
// you can re-run the same transaction if you like
multi.exec(function (err, replies) {
console.log(replies); // 102, 3
client.quit();
});
client.multi([
['mget', 'multifoo', 'multibar', redis.print],
['incr', 'multifoo'],
['incr', 'multibar']
]).exec(function (err, replies) {
console.log(replies.toString());
});


@@ -1,33 +0,0 @@
'use strict';
var redis = require('redis');
var client1 = redis.createClient();
var client2 = redis.createClient();
var client3 = redis.createClient();
var client4 = redis.createClient();
var msg_count = 0;
client1.on('psubscribe', function (pattern, count) {
console.log('client1 psubscribed to ' + pattern + ', ' + count + ' total subscriptions');
client2.publish('channeltwo', 'Me!');
client3.publish('channelthree', 'Me too!');
client4.publish('channelfour', 'And me too!');
});
client1.on('punsubscribe', function (pattern, count) {
console.log('client1 punsubscribed from ' + pattern + ', ' + count + ' total subscriptions');
client4.end();
client3.end();
client2.end();
client1.end();
});
client1.on('pmessage', function (pattern, channel, message) {
console.log('(' + pattern + ') client1 received message on ' + channel + ': ' + message);
msg_count += 1;
if (msg_count === 3) {
client1.punsubscribe();
}
});
client1.psubscribe('channel*');


@@ -1,42 +0,0 @@
'use strict';
var redis = require('redis');
var client1 = redis.createClient();
var msg_count = 0;
var client2 = redis.createClient();
// Most clients probably don't do much on 'subscribe'. This example uses it to coordinate things within one program.
client1.on('subscribe', function (channel, count) {
console.log('client1 subscribed to ' + channel + ', ' + count + ' total subscriptions');
if (count === 2) {
client2.publish('a nice channel', 'I am sending a message.');
client2.publish('another one', 'I am sending a second message.');
client2.publish('a nice channel', 'I am sending my last message.');
}
});
client1.on('unsubscribe', function (channel, count) {
console.log('client1 unsubscribed from ' + channel + ', ' + count + ' total subscriptions');
if (count === 0) {
client2.end();
client1.end();
}
});
client1.on('message', function (channel, message) {
console.log('client1 channel ' + channel + ': ' + message);
msg_count += 1;
if (msg_count === 3) {
client1.unsubscribe();
}
});
client1.on('ready', function () {
// if you need auth, do it here
client1.incr('did a thing');
client1.subscribe('a nice channel', 'another one');
});
client2.on('ready', function () {
// if you need auth, do it here
});


@@ -1,51 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
var cursor = '0';
function scan () {
client.scan(
cursor,
'MATCH', 'q:job:*',
'COUNT', '10',
function (err, res) {
if (err) throw err;
// Update the cursor position for the next scan
cursor = res[0];
// get the SCAN result for this iteration
var keys = res[1];
// Remember: more or less than COUNT or no keys may be returned
// See http://redis.io/commands/scan#the-count-option
// Also, SCAN may return the same key multiple times
// See http://redis.io/commands/scan#scan-guarantees
// Additionally, you should always have the code that uses the keys
// before the code checking the cursor.
if (keys.length > 0) {
console.log('Array of matching keys', keys);
}
// It's important to note that the cursor and returned keys
// vary independently. The scan is not complete until redis
// returns a zero cursor. However, with MATCH and large
// collections, most iterations will return an empty keys array.
// Still, a cursor of zero DOES NOT mean that there are no keys.
// A zero cursor just means that the SCAN is complete, but there
// might be one last batch of results to process.
// From <http://redis.io/commands/scan>:
// 'An iteration starts when the cursor is set to 0,
// and terminates when the cursor returned by the server is 0.'
if (cursor === '0') {
return console.log('Iteration complete');
}
return scan();
}
);
}
scan();


@@ -1,26 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
client.on('error', function (err) {
console.log('error event - ' + client.host + ':' + client.port + ' - ' + err);
});
client.set('string key', 'string val', redis.print);
client.hset('hash key', 'hashtest 1', 'some value', redis.print);
client.hset(['hash key', 'hashtest 2', 'some other value'], redis.print);
client.hkeys('hash key', function (err, replies) {
if (err) {
return console.error('error response - ' + err);
}
console.log(replies.length + ' replies:');
replies.forEach(function (reply, i) {
console.log(' ' + i + ': ' + reply);
});
});
client.quit(function (err, res) {
console.log('Exiting from quit command.');
});


@@ -1,19 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient();
client.sadd('mylist', 1);
client.sadd('mylist', 2);
client.sadd('mylist', 3);
client.set('weight_1', 5);
client.set('weight_2', 500);
client.set('weight_3', 1);
client.set('object_1', 'foo');
client.set('object_2', 'bar');
client.set('object_3', 'qux');
client.sort('mylist', 'by', 'weight_*', 'get', 'object_*', redis.print);
// Prints Reply: qux,foo,bar


@@ -1,47 +0,0 @@
'use strict';
var redis = require('redis');
var client1 = redis.createClient();
var client2 = redis.createClient();
var client3 = redis.createClient();
client1.xadd('mystream', '*', 'field1', 'm1', function (err) {
if (err) {
return console.error(err);
}
client1.xgroup('CREATE', 'mystream', 'mygroup', '$', function (err) {
if (err) {
return console.error(err);
}
});
client2.xreadgroup('GROUP', 'mygroup', 'consumer', 'Block', 1000, 'NOACK',
'STREAMS', 'mystream', '>', function (err, stream) {
if (err) {
return console.error(err);
}
console.log('client2 ' + stream);
});
client3.xreadgroup('GROUP', 'mygroup', 'consumer', 'Block', 1000, 'NOACK',
'STREAMS', 'mystream', '>', function (err, stream) {
if (err) {
return console.error(err);
}
console.log('client3 ' + stream);
});
client1.xadd('mystream', '*', 'field1', 'm2', function (err) {
if (err) {
return console.error(err);
}
});
client1.xadd('mystream', '*', 'field1', 'm3', function (err) {
if (err) {
return console.error(err);
}
});
});

View File

@@ -1,17 +0,0 @@
'use strict';
// Sending commands in response to other commands.
// This example runs 'type' against every key in the database
//
var client = require('redis').createClient();
client.keys('*', function (err, keys) {
keys.forEach(function (key, pos) {
client.type(key, function (err, keytype) {
console.log(key + ' is ' + keytype);
if (pos === (keys.length - 1)) {
client.quit();
}
});
});
});


@@ -1,17 +0,0 @@
'use strict';
var client = require('redis').createClient();
// build a map of all keys and their types
client.keys('*', function (err, all_keys) {
var key_types = {};
all_keys.forEach(function (key, pos) { // use second arg of forEach to get pos
client.type(key, function (err, type) {
key_types[key] = type;
if (pos === all_keys.length - 1) { // callbacks all run in order
console.dir(key_types);
}
});
});
});


@@ -1,32 +0,0 @@
'use strict';
var redis = require('redis');
var client = redis.createClient('/tmp/redis.sock');
var profiler = require('v8-profiler');
client.on('connect', function () {
console.log('Got Unix socket connection.');
});
client.on('error', function (err) {
console.log(err.message);
});
client.set('space chars', 'space value');
setInterval(function () {
client.get('space chars');
}, 100);
function done () {
client.info(function (err, reply) {
console.log(reply.toString());
client.quit();
});
}
setTimeout(function () {
console.log('Taking snapshot.');
profiler.takeSnapshot();
done();
}, 5000);


@@ -1,33 +0,0 @@
'use strict';
// A simple web server that generates dynamic content based on responses from Redis
var http = require('http');
var redis_client = require('redis').createClient();
http.createServer(function (request, response) { // The server
response.writeHead(200, {
'Content-Type': 'text/plain'
});
var redis_info, total_requests;
redis_client.info(function (err, reply) {
redis_info = reply; // stash response in outer scope
});
redis_client.incr('requests', function (err, reply) {
total_requests = reply; // stash response in outer scope
});
redis_client.hincrby('ip', request.connection.remoteAddress, 1);
redis_client.hgetall('ip', function (err, reply) {
// This is the last reply, so all of the previous replies must have completed already
response.write('This page was generated after talking to redis.\n\n' +
'Redis info:\n' + redis_info + '\n' +
'Total requests: ' + total_requests + '\n\n' +
'IP count: \n');
Object.keys(reply).forEach(function (ip) {
response.write(' ' + ip + ': ' + reply[ip] + '\n');
});
response.end();
});
}).listen(80);