Mirror of https://sourceware.org/git/glibc.git
Commit Graph

7 Commits

587426d499 benchtests: keep comparing even if function timings do not match
This allows other functions to be processed, making the script a bit more
fault tolerant and thus more useful.
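
As a rough sketch of the pattern (not the script's actual code, and assuming a
simplified result layout with a per-function 'mean' timing), the change
replaces an early return with a continue so the remaining functions are still
compared:

  def compare_runs(bench1, bench2, threshold=10.0):
      """Report functions whose mean timings differ by more than threshold %."""
      for func, attrs in bench1['functions'].items():
          if func not in bench2['functions']:
              # A `return` here would abort the whole comparison; `continue`
              # only skips the function that cannot be matched.
              print('Skipping %s: missing from the second run' % func)
              continue
          t1 = attrs['mean']
          t2 = bench2['functions'][func]['mean']
          diff = abs(t1 - t2) * 100.0 / t1
          if diff > threshold:
              print('%s: %.2f%% difference in mean timing' % (func, diff))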

	* benchtests/scripts/compare_bench.py (compare_runs): Continue instead of return.
2018-12-12 11:05:22 -06:00
c892ae04f4 benchtests: Set float type on --threshold argument
Otherwise, we see the following runtime error when using the parameter:

  File "./glibc/benchtests/scripts/compare_bench.py", line 46, in do_compare
    if d > threshold:
TypeError: '>' not supported between instances of 'float' and 'str'
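
The fix amounts to letting argparse convert the option value to a number.  A
small self-contained sketch of the idea (default value and help text are
illustrative, not necessarily the script's exact wording):

  import argparse

  parser = argparse.ArgumentParser(description='Compare two benchmark runs.')
  # Without type=float, argparse stores the value as the string '25' and the
  # later `d > threshold` comparison raises the TypeError shown above.
  parser.add_argument('--threshold', default=10.0, type=float,
                      help='Percentage difference considered significant')
  args = parser.parse_args(['--threshold', '25'])
  assert isinstance(args.threshold, float)  # 25.0, not '25'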

	* benchtests/scripts/compare_bench.py (main): Set float type on
	the threshold argument.
2018-10-08 09:11:30 -05:00
1cf4ae7fe6 benchtests: improve argument parsing through argparse library
The argparse library is used in the compare_bench script to improve command
line argument parsing.  The schema validation file is now optional, reducing
the number of required parameters by one.
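
A sketch of the kind of __main__ parsing this describes, assuming the default
schema sits next to the script; positional names and help strings are
illustrative rather than the script's exact wording:

  import argparse
  import os

  def main(bench1, bench2, schema):
      # Placeholder for the actual comparison logic.
      pass

  if __name__ == '__main__':
      # Assumption: the schema ships alongside the script, matching the
      # benchtests/scripts/benchout.schema.json default mentioned above.
      default_schema = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                                    'benchout.schema.json')
      parser = argparse.ArgumentParser(description='Compare two benchmark runs.')
      parser.add_argument('bench1', help='First benchmark output file')
      parser.add_argument('bench2', help='Second benchmark output file')
      parser.add_argument('--schema', default=default_schema,
                          help='JSON schema used to validate the output files')
      args = parser.parse_args()
      main(args.bench1, args.bench2, args.schema)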

	* benchtests/scripts/compare_bench.py (__main__): Use the argparse
	library to improve command line parsing.
	(__main__): Make the schema file an optional parameter (--schema),
	defaulting to benchtests/scripts/benchout.schema.json.
	(main): Move the argument parsing out to __main__ and leave main
	only as the caller of the main comparison functions.
2018-07-19 14:53:37 -05:00
688903eb3e Update copyright dates with scripts/update-copyrights.
	* All files with FSF copyright notices: Update copyright dates
	using scripts/update-copyrights.
	* locale/programs/charmap-kw.h: Regenerated.
	* locale/programs/locfile-kw.h: Likewise.
2018-01-01 00:32:25 +00:00
bfff8b1bec Update copyright dates with scripts/update-copyrights. 2017-01-01 00:14:16 +00:00
f7a9f785e5 Update copyright dates with scripts/update-copyrights. 2016-01-04 16:05:18 +00:00
0cd2828695 benchtest: script to compare two benchmarks
This script is a sample implementation that uses import_bench to
construct two benchmark objects and compare them.  If detailed timing
information is available (when one does `make DETAILED=1 bench`), it
writes out graphs for all functions it benchmarks and prints
significant differences in timings of the two benchmark runs.  If
detailed timing information is not available, it points out
significant differences in aggregate times.

Call this script as follows:

  compare_bench.py schema_file.json bench1.out bench2.out

Alternatively, if one wants to set a different threshold for warnings
(default is a 10% difference):

  compare_bench.py schema_file.json bench1.out bench2.out 25

The threshold in the example above is 25%.  schema_file.json is the
JSON schema (which is $srcdir/benchtests/scripts/benchout.schema.json
for the benchmark output file) and bench1.out and bench2.out are the
two benchmark output files to compare.

The key functionality here is the compress_timings function which
groups together points that are close together into a single point
that is the mean of all its representative points.  Any point in such
a group is at most 1.5x the smallest point in that group.  The
detailed derivation is a comment in the function.
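
A rough sketch of that grouping rule (not the actual compress_timings code):
sort the timings, grow a group while every member stays within 1.5x of the
group's smallest point, and replace each group with its mean:

  def compress_timings_sketch(points):
      """Collapse clusters of nearby timings into their means."""
      if not points:
          return []
      points = sorted(points)
      compressed = []
      group = [points[0]]
      for p in points[1:]:
          if p <= 1.5 * group[0]:   # group[0] is the group's smallest, since sorted
              group.append(p)
          else:
              compressed.append(sum(group) / len(group))
              group = [p]
      compressed.append(sum(group) / len(group))
      return compressed

For instance, under this rule the points [10, 11, 14, 16, 40] collapse to
roughly [11.67, 16.0, 40.0], because 16 already exceeds 1.5x the smallest
member (10) of the first group.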

	* benchtests/scripts/compare_bench.py: New file.
	* benchtests/scripts/import_bench.py (mean): New function.
	(split_list): Likewise.
	(do_for_all_timings): Likewise.
	(compress_timings): Likewise.
2015-06-01 23:14:11 +05:30