A Test Block is a modular unit that defines a Workflow, the Datasets to execute it on, and a set of assertions to verify the results against.
Overall Configuration
Choose one or more workflow variants and dataset groups to execute this test on. For example, a Smoke test block may execute on one very small dataset to quickly ensure basic execution is successful, whereas a Regression test block may test on a larger set of full-size samples with detailed checks on the results and analytical performance.
Updating Configuration
Changing the Test Block Name or Description
From the Test Block page, click the name or description fields in the blue box on the left. You can edit these fields inline. When you move away from the field (i.e. click away, or hit tab or enter on your keyboard), a popup will ask if you would like to update the name/description. Click Confirm to finalize your update.
Changing the Datasets or Workflow Variations
Click on the Edit button in the Basic Configuration section to edit the datasets, pipeline, or workflow variants for a Test Block.
Adding to a Test Chain
For a new Test Block that hasn't yet been used in a Test Chain, you can quickly create a one-step chain by clicking on Quick-Create Chain under the Linked Chains section of the blue box on the left.
Or, if this block is already part of one or more test chains, you can still quick-create a chain by going to the Quick Actions menu and clicking Quick-Create Chain.
You can also add the block to an existing chain from the Test Chain view, by clicking the Edit button under the Test Blocks section and then clicking Add.
Duplicating a Test Block
To copy the settings and assertions for an existing Test Block into a new Test Block, go to the Quick Actions menu and click Duplicate.
Assertions
Assertions determine whether a test passed or failed, based on specific criteria or thresholds. Assertions can be checked against the most recent version (e.g. for longitudinal consistency), against a baseline version (e.g. a released version), or against a gold standard (e.g. checking for accuracy or known properties of the samples).
Assertions can be configured to Fail or Warn if their thresholds are not met. If a failure-type assertion fails, the test block fails overall and any downstream test blocks in the parent test chain will not be executed; if the parent test chain is linked to CI, a failure signal is sent to the triggering pull request. Warning-type assertions allow the test to continue and to pass even if certain thresholds are not met, but notify the user of the deviations.
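As a purely illustrative sketch (the keys below are hypothetical, not taken from the product; in practice this behavior is configured in the assertion editor), a warning-type Timing assertion might look like:
{"_comment":"hypothetical example","assertion":"Timing","on_threshold_unmet":"warn"}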
Assertion Types
The following test assertion types are available by default:
For all pipeline types:
- Timing
- Basic File Checks
  - File Counts (comparative, threshold-based)
  - Exact Match
  - File Comparison
  - File Search
  - Manual Comparison
- Output Postprocessor-based Tests
  - Metadata Comparison
  - Parsed
    - File Comparison
    - Manual Comparison

For result-generating pipeline types:
The above assertion types, plus:
- Concordance
  - Resultset-level
  - Field-level
- Overlapping Results
- Results Meeting Rules

For truth-containing pipeline types:
The above assertion types, plus:
- Accuracy
- Delta-Accuracy
Comparison Types
For comparison-based assertions, the comparison can be made either against a particular version's results or against the latest results (e.g. for ongoing concordance or timing tracking).
These can be configured in the Test Block configuration UI, or in the JSON editor by specifying a version number or -1 (for latest) in the versions field (an array). You may specify multiple versions, or a combination of specific versions and latest, e.g. [-1], [14,15], or [14,15,-1].
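For example, to compare against versions 14 and 15 as well as the latest results, the relevant key-value pair in the JSON editor would be (any surrounding assertion fields are omitted here):
{"versions":[14,15,-1]}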
Comparing against "latest"
By default, when comparing against latest versions, the software compares only against the latest matching Test Block Run from the previous run of that test chain. If that run has a "fail" status, you can optionally compare instead against the latest matching Test Block Run with a passing status (i.e. from an earlier test chain run). This is configured per test block via the "on_latest_failed" setting in the Advanced configuration section of the Test Block page, as described below.
Please note that test block runs with failed executions will always be excluded from comparison.
The results used as the baseline for latest-comparisons are shown in the test block run description.
Modifying "on_latest_failed" setting
From the Test Block page, scroll to and expand the Advanced section at the bottom. Click Edit to add or edit the on_latest_failed advanced setting. The setting takes the form of a JSON key-value pair, e.g.
{"on_latest_failed":"compare"}
The possible values for this setting are listed below.
| Setting Value | Description |
| --- | --- |
| compare | Use the latest test block run for comparison, regardless of its pass/fail status. This is the default behavior for test blocks created before v1.2.4 (March 2023). |
| get_latest_success | If the default baseline run has a "fail" status, compare against the latest matching Test Block Run with a passing status (i.e. from an earlier test chain run). This is the default behavior. |
| fail | If the default baseline run has a "fail" status, do not perform the comparison, and fail any latest-comparison-based checks. |
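For example, to skip the comparison and fail any latest-comparison-based checks whenever the default baseline run has failed, set:
{"on_latest_failed":"fail"}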
More Information
For more information on how to use test blocks in automated testing, check out Test Automation - Basics.
For more information on assertion types, check Test Block Assertion Types.