to reduce confusion with the term "block".
`-B#` remains supported for existing scripts,
but it is no longer documented, making it effectively a hidden shortcut.
to reduce confusion with the concept of "blocks" inside a Zstandard frame.
We now talk about "independent chunks" being produced by a `split` operation.
Documentation updated accordingly.
Note: the old options `-B#` and `--blocksize=#` remain supported,
to maintain compatibility with existing scripts.
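A hedged sketch of the still-supported legacy spelling (the file name is hypothetical, and the exact size-suffix syntax is only illustrative):

```sh
# Legacy short option, still accepted for compatibility:
# cut the input into independent 16 MB chunks while compressing.
zstd -B16M large_input.bin
```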
The purpose of `--ultra` is to make the user explicitly opt in
to very large window sizes (> 8 MB).
This agreement is already implicit
when selecting `--long` or `--patch-from`.
Consequently, `--ultra` is now automatically enabled when `--long` or `--patch-from` is set.
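As an illustration (a hedged sketch; file names are hypothetical, and the exact window-log value that previously required `--ultra` may differ):

```sh
# Previously, a very large window requested via --long could require
# adding --ultra explicitly:
#   zstd --ultra --long=31 big_input.bin
# Now the opt-in is implicit, so this is enough:
zstd --long=31 big_input.bin

# Same for --patch-from, whose window is sized from the reference file:
zstd --patch-from=old_version.bin -o delta.zst new_version.bin
```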
* Change the CLI to employ multithreading by default (see the sketch after this list)
* Document the benchmarking changes, print the number of threads at display level >= 4, and enforce a lower bound of 1 for the default number of threads
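A hedged usage sketch of the new default (the file name is hypothetical; the actual default thread count depends on the machine, and the thread-count display at level >= 4 is as described above):

```sh
# Multithreading is now on by default; no -T# flag is needed.
zstd big_input.bin

# Single-threaded operation can still be requested explicitly:
zstd -T1 big_input.bin

# At display level >= 4 (e.g. -vv), the number of threads in use is printed.
zstd -vv big_input.bin
```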
Update the man page in troff format,
and the README with the latest `--help` content and complementary details about benchmark mode.
Also: display level 0 when doing a decompression benchmark.
After a regrettable update,
the benchmark module ended up reloading sources for every compression level.
While the delay itself is likely tolerable,
the main issue is that `--quiet` mode now also displays a loading summary between each compression line.
This wasn't the original intention, which is to produce a compact view of all compressions.
This is fixed in this version:
sources are loaded only once, for all compression levels,
and the loading summary is only displayed once.
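For reference, a hedged example of the compact view this restores (the file name is hypothetical):

```sh
# Benchmark levels 1 through 19 in quiet mode:
# sources are loaded once, then one result line is printed per level,
# with no loading summary between lines.
zstd -b1 -e19 -q big_input.bin
```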
Only disable `--rm` at the end of command line parsing,
so that `-c` only disables `--rm` if it is effectively selected,
and not if it is overridden by a later `-o FILE` command.
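A hedged illustration of the intended precedence (file names are hypothetical):

```sh
# Here -c is overridden by the later -o, so output does not go to stdout;
# --rm therefore remains in effect and the source file is removed.
zstd --rm -c -o output.zst input.bin
```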
This generator replaces the statistical generator
for the general case, when no statistic is requested.
Generated data features a compression level speed / ratio curve
which is more in line with expectations.
Such a scenario can happen, for example,
when trying a decompression-only benchmark on invalid data.
Other possibilities include an allocation error in an intermediate step.
So far, the benchmark would stop immediately, yet still return 0.
On the command line, this is confusing, as the program appears successful (though it does not display any success message).
It now returns a non-zero code, which the command line can interpret as an error.
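A hedged sketch of how this surfaces in a shell (assuming the decompression-only benchmark is selected by combining `-d` with `-b`, and that `corrupted.zst` is an invalid input):

```sh
# The benchmark aborts on invalid data and now exits with a non-zero status,
# so shell-level error handling behaves as expected.
zstd -b -d corrupted.zst || echo "benchmark failed"
```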
[Bugfix] CLI row hash flags set the wrong values
`--[no-]row-match-finder` did the opposite of what they are supposed to do:
the `no` variant would activate the row hash, while the other option would disable it.
This commit fixes the issue and changes the code to use the more readable enum values.
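For clarity, a hedged reminder of the intended CLI behavior after the fix (the file name is hypothetical):

```sh
# Explicitly enable the row-based match finder:
zstd --row-match-finder input.bin

# Explicitly disable it:
zstd --no-row-match-finder input.bin
```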
`zstd` CLI has progressively moved to the policy of
ignoring the `--rm` command when the output is `stdout`.
The primary driver is to offer a behavior more consistent with `gzip`,
where `--rm` is the default, but is also ignored when output is `stdout`.
Other policies are certainly possible, but would break from this `gzip` convention.
The new policy was inconsistently enforced, depending on the exact list of commands.
For example, it was possible to circumvent it by using `-c --rm`, in this order,
which would re-establish source removal (see the example after the list below).
- Updated the CLI so that it necessarily catches these situations, ensuring that `--rm` is always disabled when output is `stdout`.
- Added a warning message in this case (for verbosity 3 `-v`).
- Added an `assert()`, which verifies that `--rm` is no longer active with `stdout`.
- Added tests, which verify the behavior, even when `--rm` is added after `-c`.
- Removed some legacy code which was trying to apply a specific policy for the `stdout` + `--rm` case, which is no longer possible.
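A hedged illustration of the enforced policy (the file name is hypothetical):

```sh
# --rm is ignored because output goes to stdout, regardless of option order;
# at verbosity >= 3 (-v), a warning about the ignored --rm is printed.
zstd -v -c --rm input.bin > input.bin.zst
ls input.bin   # the source file is still present
```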