The previous code had an issue where, when `bitsConsumed == 32`, it would read 0
bits for the `ofBits` read, which violates the precondition of
`BIT_readBitsFast()`. This can happen when the stream is corrupted.
Fix this issue by always reading the maximum possible number of extra
bits. I've measured neutral decoding performance, likely because this
branch is rarely taken, but this should be faster anyway. And if not, it
only affects 32-bit decoding, where performance isn't as critical.
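For context, here is a minimal sketch of the precondition involved. It assumes zstd's internal `bitstream.h` is on the include path, and `readOffsetBits` is a hypothetical helper, not zstd code; it shows a simple guard, whereas the actual fix avoids the branch entirely by always reading a fixed maximum number of extra bits so the count can never be 0.
```c
#include <stddef.h>
#include "bitstream.h"   /* zstd's lib/common/bitstream.h, assumed on the include path */

/* Hypothetical helper, for illustration only: BIT_readBitsFast() is documented
 * as valid only for nbBits >= 1, so a data-dependent bit count that can collapse
 * to 0 on corrupted input must never reach it. */
static size_t readOffsetBits(BIT_DStream_t* bitD, unsigned nbBits)
{
    return (nbBits >= 1) ? BIT_readBitsFast(bitD, nbBits)  /* fast path requires nbBits >= 1 */
                         : 0;                               /* a 0-bit read contributes nothing */
}
```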
Credit to OSS-Fuzz
Delete all unused FSE functions, now that we are no longer syncing
to/from upstream.
This avoids confusion about Zstd's stack usage like in Issue #3453.
It also removes dead code, which is always a plus.
The input bounds checks were buggy because they were only breaking from
the inner loop, not the outer loop. The fuzzers found this immediately.
The fix is to use `goto _out` instead of `break`.
This condition can happen on corrupted inputs.
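For illustration, a minimal sketch of the pattern (hypothetical names and loop structure, not the actual zstd decoder):
```c
#include <stddef.h>

/* Sketch: `break` inside the inner loop would only exit that loop, letting the
 * outer loop keep consuming input past the bounds; `goto _out` exits both loops. */
static size_t decodeBlocks(const unsigned char* ip, const unsigned char* iend,
                           size_t nbBlocks, size_t blockSize)
{
    size_t decoded = 0;
    for (size_t block = 0; block < nbBlocks; ++block) {
        for (size_t i = 0; i < blockSize; ++i) {
            if (ip >= iend)   /* input bounds check: can trigger on corrupted data */
                goto _out;    /* a plain `break` would only leave the inner loop */
            ip++;             /* consume and decode one element (elided) */
            decoded++;
        }
    }
_out:
    return decoded;
}
```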
I've benchmarked before and after on x86-64, and there were small changes
in performance, some positive and some negative, which end up roughly
balancing out.
Credit to OSS-Fuzz
Previously, cli-test would, by default, check that the stderr output is strictly identical to a saved outcome.
When there were no instructions on how to interpret stderr, it would default to requiring it to be empty.
There are many test cases, though, where stderr content doesn't matter, and we are mainly interested in the return code of the CLI.
For these cases, it was possible to set a .ignore document, which would instruct the harness to ignore stderr content.
This PR updates the logic to make .ignore the default.
To check that stderr content is empty, one must now add an empty .strict file.
This will allow status messages to evolve without triggering many cli-tests errors.
This is especially important when some of these status messages include compression results, which may change as a result of compression optimizations.
It also makes it easier to add new tests which only care about the CLI's return code.
[Bugfix] CLI row hash flags set the wrong values
The `--[no-]row-match-finder` flags did the opposite of what they are supposed to do.
In effect, the no option would activate row hash, while the other option would disable it.
This commit fixes the issue and changes the code to use the more readable enum values.
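As a hedged sketch of the corrected mapping (the real parsing lives in programs/zstdcli.c and is structured differently; `parseRowMatchFinderFlag` is a hypothetical helper):
```c
#define ZSTD_STATIC_LINKING_ONLY   /* ZSTD_paramSwitch_e is part of the experimental API */
#include <zstd.h>
#include <string.h>

/* Hypothetical helper: map each flag onto the readable enum values, so the
 * "no-" variant disables row hash and the plain variant enables it. */
static ZSTD_paramSwitch_e parseRowMatchFinderFlag(const char* arg,
                                                  ZSTD_paramSwitch_e current)
{
    if (!strcmp(arg, "--row-match-finder"))    return ZSTD_ps_enable;   /* force row hash on */
    if (!strcmp(arg, "--no-row-match-finder")) return ZSTD_ps_disable;  /* force row hash off */
    return current;   /* unrelated argument: keep the current setting (ZSTD_ps_auto by default) */
}
```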
Before declaring a dictionary good, make sure that it can compress an
input. If v0.7.3 rejects v0.7.3's dictionary, fall back to the v1.0
dictionary. It is not the version test's job to test this, because we
cannot fix this code.
Add generic C versions of the fast decoding loops to serve architectures
that don't have an assembly implementation. Also allow selecting the C
decoding loop over the assembly decoding loop through a zstd
decompression parameter `ZSTD_d_disableHuffmanAssembly`.
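For example, a minimal sketch of selecting the C loops from application code (assuming a zstd build that includes this experimental parameter; error handling elided):
```c
#define ZSTD_STATIC_LINKING_ONLY   /* the parameter is part of the experimental API */
#include <zstd.h>

/* Sketch: force the generic C Huffman decoding loops even on x86-64,
 * where the assembly loop would otherwise be selected. */
static size_t decompressWithoutHuffmanAsm(void* dst, size_t dstCapacity,
                                          const void* src, size_t srcSize)
{
    ZSTD_DCtx* const dctx = ZSTD_createDCtx();
    size_t result;
    ZSTD_DCtx_setParameter(dctx, ZSTD_d_disableHuffmanAssembly, 1);  /* nonzero = disable assembly */
    result = ZSTD_decompressDCtx(dctx, dst, dstCapacity, src, srcSize);
    ZSTD_freeDCtx(dctx);
    return result;
}
```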
I benchmarked on my Intel i9-9900K and my Macbook Air with an M1 processor.
The benchmark command forces zstd to compress without any matches, using
only literals compression, and measures only Huffman decompression speed:
```
zstd -b1e1 --compress-literals --zstd=tlen=131072 silesia.tar
```
The new fast decoding loops outperform the previous implementation uniformly,
but don't beat the x86-64 assembly. Additionally, the fast C decoding loops suffer
from the same performance stability problems that we've seen in the past, which the
assembly version doesn't. So even though clang gets close to the assembly on x86-64,
it still has stability issues.
| Arch | Function | Compiler | Default (MB/s) | Assembly (MB/s) | Fast (MB/s) |
|---------|----------------|--------------|----------------|-----------------|-------------|
| x86-64 | decompress 4X1 | gcc-12.2.0 | 1029.6 | 1308.1 | 1208.1 |
| x86-64 | decompress 4X1 | clang-14.0.6 | 1019.3 | 1305.6 | 1276.3 |
| x86-64 | decompress 4X2 | gcc-12.2.0 | 1348.5 | 1657.0 | 1374.1 |
| x86-64 | decompress 4X2 | clang-14.0.6 | 1027.6 | 1659.9 | 1468.1 |
| aarch64 | decompress 4X1 | clang-12.0.5 | 1081.0 | N/A | 1234.9 |
| aarch64 | decompress 4X2 | clang-12.0.5 | 1270.0 | N/A | 1516.6 |
Fix the following documentation bugs:
* Note that `ZSTD_estimate*` functions are not compatible with the external matchfinder API
* Note that `ZSTD_estimateCStreamSize_usingCCtxParams()` is not compatible with `nbWorkers >= 1`
* Remove incorrect warning that the legacy streaming API is incompatible with advanced parameters and/or dictionary compression
* Note that `ZSTD_initCStream()` is incompatible with dictionary compression
* Warn that