
= EzBench =

This repo contains a collection of tools to benchmark graphics-related
patch series.

== Installation ==

WIP

== Core.sh ==

WARNING: This tool can be used directly from the CLI, but it is recommended
that you use ezbench in conjunction with ezbenchd, which together support
testing kernels and can recover from errors.

This tool is responsible for collecting the data and generating logs that will
be used by another tool to generate a visual report.

To operate, this script requires a git repository, the ability to compile and
deploy a commit, and benchmarks to run. To simplify usage, a profile should be
written for each repo you want to test; it allows ezbench to check which
version is currently deployed, to compile and install a new version, and to
set some default parameters so you do not have to type very long command lines
every time a set of benchmarks needs to be run.

By default, the logs are output in logs/<date of the run>/ and are stored
mostly as csv files. The main report is found under the name results and needs
to be read with "less -r" to get the colours out! The list of commits tested
is found under the name commit_list. Comprehensive documentation of the file
structure will be written soon.
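
For example, after a run has finished, the report and the commit list can be
inspected like this (the log folder name depends on the date of the run):

    less -r logs/<date of the run>/results
    cat logs/<date of the run>/commit_list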

You may specify whatever name you want by adding -N <name> to the command
line. This is very useful when testing kernel-related changes, as a reboot
into the new kernel is needed to test each new commit.

=== Dependencies ===

The real core of ezbench requires:
 - A recent-enough version of bash
 - awk
 - all the other typical binutils binaries

If you plan on recompiling drivers (such as mesa or the kernel), you will need:
 - autotools
 - gcc/clang
 - git
 - make

To run the xserver, you will need the following applications:
 - chvt
 - sudo (with the test user set up not to ask for passwords; see the sketch
   after this list)
 - Xorg
 - xrandr
 - xset
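
A minimal sudoers sketch for the password-less sudo setup, assuming a
hypothetical user named 'testuser' that runs the benchmarks (adjust to your
own setup):

    # /etc/sudoers.d/ezbench -- hypothetical file name and user
    testuser ALL=(ALL) NOPASSWD: ALL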

If you want to specify which compositor should be run, you will need to
install the following applications:
 - unbuffer
 - wmctrl

=== Configuration ===

The test configuration file is named user_parameters.sh. A sample file called
user_parameters.sh.sample comes with the repo and is a good basis for your
first configuration file.

You will need to adjust this file to give the location of the base directory of
all the benchmark folders and repositories for the provided profiles.
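
As a sketch of what such an adjustment may look like, copy
user_parameters.sh.sample and edit it rather than starting from scratch.
REPO_MESA is shown here because it is referenced later in this README; the
other entries of the sample file are not reproduced:

    # user_parameters.sh (sketch, assuming a mesa checkout under ~/repos)
    REPO_MESA="$HOME/repos/mesa"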

Another important note about core.sh is that it is highly modular and
hook-based. Have a look at profiles.d/$profile/conf.d/README for the
documentation about the different hooks.

It is possible to adjust the execution environment of the benchmarks by asking
core.sh to execute one or more scripts before running anything else. This, for
example, allows users to specify which compositor should be used.

If core.sh is currently running and you want to stop its execution after the next
test, you can create a file named 'requestExit' in the log folder.
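
For example:

    touch logs/<date of the run>/requestExit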

=== Examples ===

==== Testing every patchset of a series ====

The following command will test all the GLB27:Egypt cases except the ones
containing the word cpu. It will run all the benchmarks 5 times on the 10
commits before HEAD~~.

    ./core.sh -p ~/repos/mesa -B cpu -b GLB27:Egypt -r 5 -n 10 -H HEAD~~

The following command runs the synmark:Gl21Batch2 benchmark (note the $ at the
end, which indicates that we do not want the :cpu variant). It will run the
benchmark 3 times on 3 commits, in this order: HEAD~5 HEAD~2 HEAD~10.

    ./core.sh -p ~/repos/mesa -b synmark:Gl21Batch2$ -r 3 HEAD~5 HEAD~2 HEAD~10

To use the mesa profile, which has the advantage of checking that the deployment
was successful, you may achieve the same result by running:

    ./core.sh -P mesa -b synmark:Gl21Batch2$ -r 3 HEAD~5 HEAD~2 HEAD~10

You may also tell core.sh that you want to use a certain compositor, set
environment variables for the run, or set any other option exposed by the
profile you are using. To do so, use -c as in the following example:

    ./core.sh -P mesa -b synmark:Gl21Batch2$ -r 3 -c confs.d/x11_comp/kwin HEAD

==== Retrospectives ====

Here is an example of how to generate a retrospective. The interesting part is
the call to utils/get_commit_list.py, which generates a list of commits:

    ./core.sh -p ~/repos/mesa -B cpu -b GLB27:Egypt:offscreen \
                 -b GLB27:Trex:offscreen -b GLB30:Manhattan:offscreen \
                 -b GLB30:Trex:offscreen -b unigine:heaven:1080p:fullscreen \
                 -b unigine:valley:1080p:fullscreen -r 3 \
                 -m "./recompile-release.sh" \
                 `utils/get_commit_list.py -p ~/repos/mesa -s 2014-12-01 -i "1 week"`

==== Giving a large list of tests to core.sh ====

The maximum length of the command line is limited. If you ever need to pass a
long list of tests in a reliable way, you may use the character '-' as a test
name. This makes core.sh read the list of tests from stdin, until EOF.
Here is an example:

    ./core.sh -P mesa -b - -r 1 HEAD
    test1
    ...
    test5000
    EOF (Ctrl + D)
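
The same list can be fed non-interactively by redirecting a file with one test
name per line, for example (assuming a hypothetical file named test_list.txt):

    ./core.sh -P mesa -b - -r 1 HEAD < test_list.txt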

== ezbench ==

This tool is meant to make the usage of core.sh easy and to support testing
performance across reboots.

It allows creating a new performance report, scheduling benchmark runs,
changing the number of execution rounds on the fly, and then starting, pausing
or halting the execution of this report.

This tool uses core.sh as a backend for checking that the commit SHA1s and
tests exist, so you can be sure that the work can be executed when the time
comes.

=== Dependencies ===

 - python3
 - numpy
 - scipy

=== Examples ===

==== Creating a report ====

The ezbench command allows you to create a new performance report. To create
a performance report named 'mesa-tracking-pub-benchmarks', using the core.sh
profile 'mesa', you need to run the following command:

    ./ezbench -p mesa mesa-tracking-pub-benchmarks

==== Adding benchmark runs ====

Adding 2 rounds of the benchmark GLB27:Egypt:offscreen to the report
mesa-tracking-pub-benchmarks for the commit HEAD can be done using the
following command:

    ./ezbench -r 2 -b GLB27:Egypt:offscreen -c HEAD mesa-tracking-pub-benchmarks

A retrospective can be made in the same fashion as with core.sh, with the
exception that it also works across reboots, which is useful when testing
kernels:

    ./ezbench -r 3 -b GLB27:Egypt:offscreen -b GLB27:Trex:offscreen \
              -b GLB30:Manhattan:offscreen -b GLB30:Trex:offscreen \
              -b unigine:heaven:1080p:fullscreen -b unigine:valley:1080p:fullscreen \
              -c "`utils/get_commit_list.py -p ~/repos/mesa -s 2014-12-01 -i "1 week"`" \
              mesa-tracking-pub-benchmarks

==== Checking the status of a report ====

You can check the status of the 'mesa-tracking-pub-benchmarks' report by calling
the following command:

    ./ezbench mesa-tracking-pub-benchmarks status

==== Changing the execution status of the report ====

When creating a report, the default state of the report is "initial" which means
that nothing will happen until the state is changed. To change the state, you
need to run the following command:

    ./ezbench mesa-tracking-pub-benchmarks (run|pause|abort)

 - The "run" state says that the report is ready to be run by ezbenchd.py.

 - The "pause" and "abort" states indicate that ezbenchd.py should not be
 executing any benchmarks from this report. The difference between the "pause"
 and "abort" states is mostly for humans, to convey the actual intent.

==== Starting data collection without ezbenchd.py ====

If you are not using ezbenchd.py, you may simply run the following command to
start collecting data:

    ./ezbench mesa-tracking-pub-benchmarks start

This command will automatically change the state of the report to "run".

== utils/ezbenchd.py ==

This tool monitors the ezbench reports created using the "ezbench" command and
runs them if their state is right (state == RUN). It also schedules
improvements when a run is over before moving on to the next report. It thus
allows for collaborative sharing of the machine by different reports, as long
as they make sure not to change the global state.

=== Dependencies ===

 - ezbench.py

=== Examples ===

Simply running utils/ezbenchd.py is enough to run all reports which are
currently in the state "RUN".

    ./ezbench -p mesa mesa-tracking-pub-benchmarks
    ./ezbench -r 2 -b GLB27:Egypt:offscreen -c HEAD mesa-tracking-pub-benchmarks
    ./ezbench mesa-tracking-pub-benchmarks run
    utils/ezbenchd.py

=== Systemd integration ===

Here is an example of a systemd unit you can use to run ezbenchd as a daemon.
This is needed if you want to test the kernel, as you need to reboot.

/etc/systemd/system/ezbenchd.service:
	[Unit]
	Description=Ezbench's daemon that executes the runs
	Conflicts=getty@tty5.service
	After=systemd-user-sessions.service getty@tty1.service plymouth-quit.service

	[Service]
	ExecStart=/.../ezbench/utils/ezbenchd.py
	User=$USER
	Group=$USER

	[Install]
	WantedBy=multi-user.target
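
Once the unit file is in place, the usual systemctl commands apply:

    sudo systemctl daemon-reload
    sudo systemctl enable --now ezbenchd.service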

=== Nightly tests ===

There is currently no real support for nightly tests on ezbench. However, it is
quite trivial to use systemd to schedule updates to the repos and schedule runs
using ezbench's CLI interface.

/etc/systemd/system/ezbenchd_repo_update.timer:
	[Unit]
	Description=Run this timer every day to update repos and schedule runs

	[Timer]
	OnCalendar=*-*-* 20:30:00
	Persistent=true

	[Install]
	WantedBy=timers.target

/etc/systemd/system/ezbenchd_repo_update.service:
	[Unit]
	Description=Task to update the source repos and schedule more runs

	[Service]
	Type=oneshot
	WorkingDirectory=/.../ezbench
	ExecStart=/bin/bash repo_update.sh
	User=$USER
	Group=$USER

/.../ezbench/repo_update.sh:
	#!/bin/bash

	ezbench_dir=$(pwd)
	source user_parameters.sh

	set -x

	# Update the mesa repository and schedule a nightly run if REPO_MESA is set
	if [ -n "$REPO_MESA" ]
	then
		cd "$REPO_MESA"
		git fetch
		git reset --hard origin/master

		cd "$ezbench_dir"
		./ezbench -t gl_render -c HEAD -p mesa mesa_nightly run
	fi
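
As with the daemon unit, the timer then needs to be enabled:

    sudo systemctl daemon-reload
    sudo systemctl enable --now ezbenchd_repo_update.timer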

== stats/gen_report.py ==

WARNING: This tool is deprecated; compare_reports.py is the preferred way now,
even if its single-report mode is not as advanced as gen_report.py's.

The goal of this tool is to read the reports from ezbench and make them
presentable to engineers and managers.

Commits can be renamed by having a file named 'commit_labels' in the logs
folder. The format is one label per line: the short SHA1 first, a space, and
then the label. Here is an example:
    bb19f2c 2014-12-01

If you want to generate date labels for commits, you can use the tool
utils/gen_date_labels.py to generate the 'commit_labels' file. Example:
    utils/gen_date_labels.py -p ~/repos/mesa logs/seekreet_stuff/

It is also possible to add notes to the HTML report by adding a file called
'notes' in the report folder. Every line of the 'notes' file will be added to
an unordered list. It is possible to use HTML inside the file.
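
A hypothetical 'notes' file could look like this (plain text and HTML both
work, one list item per line):

    Runs were done on the same machine, rebooted between kernels.
    <b>Do not compare</b> these numbers across machines.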

=== Dependencies ===

 - python3
 - matplotlib
 - scipy
 - mako
 - an internet connection to read the report

=== Example ===

This command will create an HTML report named
logs/public_benchmarks_trend_broadwell/index.html. Nothing more, nothing less.

    ./stats/gen_report.py logs/public_benchmarks_trend_broadwell/


== utils/perf_bisect.py ==

WARNING: The introduction of smart ezbench made this tool absolutely useless.

The perf_bisect.py tool allows bisecting performance issues. It is quite trivial
to use, so just check out the example.

=== Examples ===

The following command will bisect a performance difference between commits
HEAD~100 and HEAD. The -p, -b, -r and -m arguments are the same as for core.sh.

    utils/perf_bisect.py -p ~/repos/mesa -b glxgears:window -r 1 HEAD~100 HEAD

== About ==

Graphics stack benchmarking utility (master is at https://cgit.freedesktop.org/ezbench)
