Benchmarking Guidelines
When to run benchmarking
PRs should include results and logs from a new benchmarking run whenever:
- A PR includes a new `Explainer` class. In this case, a new benchmark `Challenge` class must also be added (see below).
- A PR includes significant changes to an existing `Explainer` class (when in doubt, ask reviewers).
- A PR includes significant changes to the general fit-produce explanation workflow.
It's a good idea to run the benchmarking procedure for all PRs, as it can catch subtle bugs that other tests may miss (if this happens, it should be reported in a GitHub issue so more tests can be added). However, unless a PR falls under one of the categories listed above, results and logs should not be pushed to the repo.
How to run benchmarking
The benchmarking process can be run using:
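For example (a sketch: the entry point `pyreal/benchmark/main.py` is an assumption inferred from the `main.get_challenges()` reference below, so check that file for the actual interface):

```bash
# Assumed entry point; run from the repository root.
python pyreal/benchmark/main.py
```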
This will run the process and save the results to `pyreal/benchmark/results`.
To run the benchmarking process without leaving a results directory (i.e., for testing):
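A sketch under the same assumption; the flag name below is hypothetical, so verify it against the entry point's actual options:

```bash
# Hypothetical flag for discarding results; check main.py for the real option.
python pyreal/benchmark/main.py --no-results
```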
This will run the process and delete the results directory at the end.
To run the benchmarking process while downloading the benchmark datasets locally (this will speed up future runs):
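Again a sketch; the download flag is hypothetical:

```bash
# Hypothetical flag for caching the benchmark datasets locally;
# check main.py for the real option.
python pyreal/benchmark/main.py --download
```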
Adding challenges
If your PR adds a new `Explainer` class, you must add a corresponding `Challenge` class in the same PR. To do so, follow these steps:
1. Add a file called `[$explainer_name]_challenge.py` to the corresponding place in `pyreal/benchmark/challenges`.
2. Fill this file out following the example of the other challenge files (a sketch of the expected shape appears after this list).
3. Add the new challenge to `pyreal/benchmark/main.get_challenges()`.
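As a rough illustration, here is a minimal sketch of what such a file might contain. The base-class import path, attribute names, and method name are all assumptions, not Pyreal's confirmed API; mirror an existing file in `pyreal/benchmark/challenges` for the real interface:

```python
# Hypothetical sketch of [$explainer_name]_challenge.py. All names below
# (the Challenge base class, its attributes, and the Explainer under test)
# are assumptions; copy the structure of an existing challenge file instead.
from pyreal.benchmark.challenge import Challenge  # assumed import path

from pyreal.explainers import MyNewExplainer  # hypothetical new Explainer


class MyNewExplainerChallenge(Challenge):
    """Benchmark challenge exercising the fit-produce workflow of MyNewExplainer."""

    def create_explainer(self):
        # Return the Explainer instance this challenge benchmarks; the model
        # and training-data attributes are assumed to come from the base class.
        return MyNewExplainer(model=self.model, x_train_orig=self.x_train_orig)
```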