Control Flow Validation: Using the Validate Data Block with Halted Outputs

Modified on Mon, 20 Oct at 2:49 PM

Workflows in Omniscope often rely on data that may or may not be complete, valid, or even present when the workflow runs. The Validate Data block gives you the tools to ensure that only correct datasets are processed, but when combined with Control Flow, it can also decide dynamically whether or not a workflow branch should run at all.


This article explores how the Validate Data block behaves when configured with Halt failure actions, and how it can be used to create adaptive workflows that gracefully handle missing or invalid inputs. Two example workflows demonstrate these concepts in practice; you can download them as .ioz files attached to this article.


The Validate Data Block as a Control Flow Element

The Validate Data block is normally used to check schema, field values, and record counts before data proceeds downstream. When configured with a Halt action, it becomes a Control Flow component: if validation fails, execution of that branch is stopped — no downstream blocks execute, and no further processing occurs on that path.

Unlike an Error action, which fails the entire workflow, Halt lets the rest of the workflow continue normally. This makes it ideal for scenarios where multiple data sources or iterations exist, and you want to skip invalid inputs rather than terminate the whole process.
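Conceptually, Halt behaves like an early return for a single branch, while Error behaves like an unhandled exception for the whole workflow. The Python sketch below is purely illustrative (Omniscope configures failure actions visually, not in code); the field names, sample datasets and validate() helper are hypothetical.

```python
# Illustrative only: Omniscope configures failure actions visually, not in code.
# Field names and sample datasets below are hypothetical.

EXPECTED_FIELDS = {"Region", "Product", "Revenue"}

def validate(records):
    """Return a list of problems; an empty list means the data is valid."""
    problems = []
    if not records:
        problems.append("0 records")
    elif set(records[0]) != EXPECTED_FIELDS:
        problems.append(f"unexpected fields: {sorted(records[0])}")
    return problems

def run_branch(records, failure_action="halt"):
    problems = validate(records)
    if problems:
        if failure_action == "error":
            raise RuntimeError(problems)   # Error: the entire workflow fails
        return None                        # Halt: only this branch stops
    return len(records)                    # stand-in for downstream processing

europe = [{"Region": "Europe", "Product": "A", "Revenue": 100}]
apac = []                                  # empty dataset: this branch halts
print([run_branch(d) for d in (europe, apac)])   # -> [1, None]
```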


For additional information on Control Flow, please take a look at this article.



1. Control Flow Validation


In this demo we process three regional sales datasets (Europe, USA, APAC) when the data is present and valid.
If a dataset is missing or fails validation, it is replaced with an empty schema so that the report still runs.


Workflow Overview




Each region’s dataset passes through its own Validate Data block.

Validation checks are configured for:

  • Expected fields (schema)

  • Minimum record count (>0)


Each Validate Data block’s failure action is set to Halt. When validation fails:

  • The dataset does not flow further downstream.

  • Downstream blocks for that branch are not executed.

  • The Problems output captures validation issues.

  • The Input Router automatically switches to a schema-only placeholder dataset.

This ensures that the report always renders, even if some datasets are unavailable.
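The sketch below illustrates this validate-then-fallback logic for one branch. It is not Omniscope's API: it assumes a pandas DataFrame per region and hypothetical column names, with the schema-only placeholder standing in for the Input Router's alternate path.

```python
# Minimal sketch of the validate-then-fallback branch; not Omniscope's API.
# Column names and sample data are hypothetical.
import pandas as pd

EXPECTED_COLUMNS = ["Region", "Product", "Revenue"]

def validate(df):
    problems = []
    if list(df.columns) != EXPECTED_COLUMNS:
        problems.append(f"schema mismatch: {list(df.columns)}")
    if len(df) == 0:
        problems.append("record count is 0")
    return problems

def branch(df, problems_log):
    """Validate one region; on failure, substitute a schema-only placeholder."""
    problems = validate(df)
    if problems:
        problems_log.extend(problems)                    # the Problems output
        return pd.DataFrame(columns=EXPECTED_COLUMNS)    # schema-only placeholder
    return df                                            # valid data flows on

problems_log = []
europe = pd.DataFrame([{"Region": "Europe", "Product": "A", "Revenue": 100}])
apac = pd.DataFrame(columns=EXPECTED_COLUMNS)            # empty: fails record count
report_input = pd.concat([branch(europe, problems_log), branch(apac, problems_log)])
print(report_input)     # the report still renders; APAC contributes only its schema
print(problems_log)     # ['record count is 0']
```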



Behaviour

  • If all datasets pass validation, the report shows all regions.

  • If one or more fail (for example, have 0 records), those datasets are replaced by their schema placeholders.

  • Validation problems are written to a separate file for later inspection (see the sketch after this list).

  • The workflow continues and successfully generates the report for valid datasets.
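As a hypothetical continuation of the earlier sketch, the collected problems could be persisted along these lines; the file name and the example problem entry are illustrative, not part of the attached workflow.

```python
# Hypothetical continuation of the sketch above: persist the collected
# validation problems so they can be inspected after the workflow has run.
import csv

problems_log = ["record count is 0"]    # as collected by branch() in the sketch above

with open("validation_problems.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["problem"])
    writer.writerows([p] for p in problems_log)
```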



2. Control Flow Validation with For-Each


The previous workflow works well, but it involves repeating the same pattern — one Validate Data block per dataset, plus routers and filters for each region.


The same logic can be implemented more efficiently using a For Each block.
Here, every dataset is passed through the same validation logic dynamically.


The goal is to automatically validate multiple datasets (Europe, USA, APAC), process only those that pass, and log any validation failures, all within a single workflow branch.


Workflow Overview



The workflow uses two For Each blocks:

  • One For Each loop processes valid datasets.

  • The other collects validation problems.


Each iteration selects one regional dataset using an Input Router, validates it, and either:

  • Passes it forward (if valid), or

  • Halts (if invalid).

Because the Validate Data block is configured to Halt, failed iterations are skipped automatically by the For Each block.
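A rough Python analogue of this For Each pattern is shown below. Again, this is not Omniscope's API: the region data and the validation rule (expected field set, non-empty) are hypothetical and match the earlier sketches.

```python
# Rough analogue of the For Each pattern; not Omniscope's API.
# Region data and the validation rule are hypothetical.

EXPECTED_FIELDS = {"Region", "Product", "Revenue"}

def validate(records):
    problems = []
    if not records:
        problems.append("0 records")
    elif set(records[0]) != EXPECTED_FIELDS:
        problems.append("schema mismatch")
    return problems

regions = {
    "Europe": [{"Region": "Europe", "Product": "A", "Revenue": 100}],
    "USA":    [{"Region": "USA", "Product": "B", "Revenue": 250}],
    "APAC":   [],                                # fails validation below
}

appended, all_problems = [], []
for name, records in regions.items():           # one iteration per region
    problems = validate(records)
    if problems:                                 # Halt: skip this iteration only
        all_problems += [f"{name}: {p}" for p in problems]
        continue
    appended += records                          # successful iterations appended

print(appended)        # Europe and USA rows combined into a single output
print(all_problems)    # ['APAC: 0 records']
```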


Behaviour

  • Each region is processed in turn.

  • If a dataset fails validation, that iteration halts — no further blocks execute for that dataset.

  • The For Each block appends results from all successful iterations into one output.

  • A separate Problems For Each loop collects all validation issues into a single file.


Benefits

  • Compact — a single validation path instead of three separate branches.

  • Automatic skipping — invalid datasets are silently ignored, and no errors stop the workflow.

  • Reusable — easily extended to more datasets without adding new blocks.

  • Consistent logging — all validation issues consolidated in one output.

