Executing a Pipeline

How do I execute my pipeline on one or many datasources?

Written by Gwenn Berry
Updated over a year ago

There are a few ways you can execute your pipeline (specifically, an Executor of your pipeline) on your datasources of choice:

Automatic Execution

Using dataset groups and workflow triggers, you can kick off automatic executions of a workflow on a selected group of datasets each time a triggering event occurs. For example, you may choose to kick off a reproducibility and benchmarking analysis on 3 small datasets with each update to a component within your workflow. See Workflow Automations for more.
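Conceptually, the trigger model above pairs an event with a dataset group and fires one execution per dataset. The sketch below is purely illustrative: the product's API is not public, so `WorkflowTrigger`, the `"component_updated"` event name, and the `execute` callback are all hypothetical names standing in for the behavior described.

```python
# Hypothetical sketch of a workflow trigger: when the watched event occurs,
# kick off one execution per dataset in the chosen group. All names here
# are illustrative, not a real product API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class WorkflowTrigger:
    event: str                      # e.g. "component_updated"
    dataset_group: List[str]        # dataset IDs in the selected group
    execute: Callable[[str], None]  # starts one execution for a dataset
    log: List[str] = field(default_factory=list)

    def on_event(self, event: str) -> None:
        # Ignore events this trigger is not subscribed to.
        if event != self.event:
            return
        # Fire an execution for every dataset in the group.
        for dataset_id in self.dataset_group:
            self.execute(dataset_id)
            self.log.append(dataset_id)

# Example: re-run a benchmarking workflow on 3 small datasets per update.
started: List[str] = []
trigger = WorkflowTrigger(
    event="component_updated",
    dataset_group=["ds-small-1", "ds-small-2", "ds-small-3"],
    execute=started.append,
)
trigger.on_event("component_updated")   # starts all three executions
trigger.on_event("unrelated_event")     # ignored, no new executions
```

Keeping the event name, the dataset group, and the execution callback as separate pieces mirrors how the UI lets you pick each one independently when configuring an automation.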

Ad Hoc Execution

Execute on a single datasource

From the individual dataset view, click the Execute button (top right) to run the latest workflow version, or choose a specific version from the Versions dropdown. To see this in a Tour, type Execute into the Help bot.

Batch Execute on multiple datasets

From the Dataset list view, select all datasets you want to execute on. The table's search and filtering tools and the "select all" checkbox are useful when working with many datasets.

Click the Perform Action on Selected button to kick off an execution. In the modal, click the Execute tab to choose a specific workflow version to run. To see this in a Tour, type Batch Execute into the Help bot, or check out the video below.
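The batch flow above amounts to: filter the dataset list, select the matches, and run one workflow version on each. A minimal sketch of that logic, assuming hypothetical names throughout (`batch_execute`, `execute_version`, the tag field), since the real product does this through the UI modal rather than code:

```python
# Hypothetical sketch of batch execution: filter the dataset list, then
# kick off the chosen workflow version on every selected dataset.
# All names are illustrative, not a real product API.
from typing import Callable, Dict, List

def batch_execute(
    datasets: List[Dict[str, str]],
    selector: Callable[[Dict[str, str]], bool],
    workflow_version: str,
    execute_version: Callable[[str, str], None],
) -> int:
    """Run `workflow_version` on each dataset matching `selector`; return the count."""
    selected = [d for d in datasets if selector(d)]
    for d in selected:
        execute_version(d["id"], workflow_version)
    return len(selected)

# Example: execute version v2.1 on every dataset tagged "benchmark".
runs: List[tuple] = []
n = batch_execute(
    datasets=[
        {"id": "ds-1", "tag": "benchmark"},
        {"id": "ds-2", "tag": "qc"},
        {"id": "ds-3", "tag": "benchmark"},
    ],
    selector=lambda d: d["tag"] == "benchmark",
    workflow_version="v2.1",
    execute_version=lambda ds, ver: runs.append((ds, ver)),
)
```

The `selector` plays the role of the table's search/filter tools, and passing an explicit `workflow_version` mirrors choosing a version in the Execute tab of the modal.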

Planned/Batch Execution

If you plan to run executions on many datasets across a variety of pipelines and compare their results or performance, we recommend using Batch Analyses so that all of your analyses are easily tracked. See Batch Analyses for more information.
