A Test Block is a modular unit that defines a Workflow, the Datasets to execute it on, and a set of assertions to verify the results against.
Overall Configuration
Choose one or more workflow variants and dataset groups to execute this test on. For example, a Smoke test block may execute on one very small dataset to quickly ensure basic execution is successful, whereas a Regression test block may test on a larger set of full-size samples with detailed checks on the results and analytical performance.
Assertions
Assertions determine whether a test passes or fails based on specific criteria or thresholds. Assertions can be checked against the most recent version (e.g. for longitudinal consistency), against a baseline version (e.g. a released version), or against a gold standard (e.g. checking for accuracy or known properties of the samples).
Assertions can be configured to Fail or Warn if their thresholds are not met. For failure-type assertions, the test block will fail overall and any downstream test blocks in the parent test chain will not be executed. If the parent test chain is linked to CI, a failure signal will be sent to the triggering pull request. Warning-type assertions allow the tests to continue and to pass even if certain thresholds are not met, but notify the user of the deviations.
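As an illustrative sketch only, a Fail-type and a Warn-type assertion might be expressed in JSON along these lines (the field names "type", "threshold_minutes", "min_concordance", and "on_threshold_not_met" are hypothetical, not necessarily the product's actual schema):

```json
{
  "assertions": [
    {
      "type": "timing",
      "threshold_minutes": 30,
      "on_threshold_not_met": "fail"
    },
    {
      "type": "concordance",
      "min_concordance": 0.99,
      "on_threshold_not_met": "warn"
    }
  ]
}
```

Here a timing overrun would fail the test block and halt downstream blocks, while a concordance shortfall would only raise a warning.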
Assertion Types
The following test assertion types are available by default:
For all pipeline types:
Timing
File Checks
Output Meta-results
For result-generating pipeline types:
The above assertion types plus...
Concordance
Resultset-level
Field-level
Results
For truth-containing pipeline types:
The above assertion types plus...
Accuracy
Comparison Types
For comparison-based assertions, the comparison can be made either against a particular version's results or against the latest results (e.g. for ongoing concordance or timing tracking).
These can be configured in the Test Block configuration UI, or in the JSON editor by specifying either the version number or "-1" (for latest) in the versions field (array-type). You may specify multiple versions or a combination of specific versions and latest, e.g. [-1], [14,15], or [14,15,-1].
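For example, an assertion comparing against both version 14 and the latest results could set the versions field as follows in the JSON editor (the surrounding structure of the assertion object is omitted here; only the versions array is taken from the behavior described above):

```json
{
  "versions": [14, -1]
}
```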
Comparing against "latest"
By default, when comparing against latest versions the software only compares against the latest matching Test Block Run from the previous run for that test chain. If the matching test block run from the latest chain has a "fail" status, you can optionally choose to instead compare against the latest matching Test Block Run with a passing status (i.e. from an earlier test chain run). This can be configured at the test block level, by changing the "on_latest_failed" setting in the Advanced configuration section of the Test Block page, as described below.
Please note that test block runs with failed executions will always be excluded from comparison.
The results used as the baseline for latest-comparisons are shown in the test block run description.
Modifying "on_latest_failed" setting
From the Test Block page, scroll to and expand the Advanced section at the bottom. Click Edit to add or edit the on_latest_failed advanced setting. The setting takes the form of a JSON key-value pair, e.g.
{"on_latest_failed":"compare"}
The possible values for this setting are listed below.
| Setting Value | Description |
| --- | --- |
| compare | Use the latest test block run for comparison, regardless of its pass/fail status. This is the default behavior for test blocks created before v1.2.4 (March 2023). |
| get_latest_success | Compare against the latest matching Test Block Run with a passing status (i.e. from an earlier test chain run) if the default baseline run has a "fail" status. This is the default behavior. |
| fail | If the default baseline run has a "fail" status, do not perform the comparison, and fail any latest-comparison-based checks. |
More Information
For more information on how to use test blocks in automated testing, check out Test Automation - Basics.
For more information on assertion types, check Test Block Assertion Types.