AI Completion Block: Four Key Use Cases

Overview

The AI Completion block in Omniscope uses large language models (LLMs) to generate, transform, or analyse text. You can guide the model with a System Prompt to define its role and tone, a User Prompt to provide task-specific instructions, and an optional Context Input to inject external datasets for reference or analysis. This flexibility supports a wide range of applications, from producing natural-language descriptions to carrying out dataset-level investigations.
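To make these three inputs concrete, the sketch below shows how they typically combine into a single chat-style request. This is plain illustrative Python, not Omniscope's internal implementation; the build_messages helper and the example values are hypothetical.

```python
# Illustrative only: not Omniscope's internal implementation.
# Shows how the block's three inputs combine into one chat-style request.

def build_messages(system_prompt: str, user_prompt: str, context: str | None = None) -> list[dict]:
    """Combine System Prompt, User Prompt, and optional Context Input into chat messages."""
    messages = [{"role": "system", "content": system_prompt}]  # role and tone
    if context:
        # Context Input: external data the model should reference.
        messages.append({"role": "user", "content": "Reference data:\n" + context})
    # User Prompt: the task-specific instruction, often built per record.
    messages.append({"role": "user", "content": user_prompt})
    return messages

print(build_messages(
    system_prompt="You are a professional UK-based property copywriter.",
    user_prompt="Write a short description for a three-bedroom home in Bristol.",
))
```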


The AI Completion block can:

  • Generate or transform text fields for each record in a dataset

  • Leverage external datasets as context to improve answers

  • Create synthetic datasets from scratch

  • Analyse entire datasets as a whole


Below are four practical use cases that illustrate the different ways you can apply the AI Completion block.


Prerequisites

This block requires AI features to be enabled. Please consult this knowledge-base article on how to enable them: https://help.visokio.com/a/solutions/articles/42000111598


Once AI features are enabled and an AI provider has been configured, make sure to select a default model under the AI integration's "Workflow executions" setting.


Demo files

The IOZ demo files attached to this article can be downloaded and imported into Omniscope.




1. Processing Individual Records Without Context


Scenario:
A UK estate agency needs polished property descriptions for its listings. Each record has structured fields (bedrooms, location, price, features), but no descriptive copy.

Setup:

  • System Prompt:

    “You are a professional UK-based property copywriter. Write concise and engaging property descriptions.”
  • User Prompt fields: Bedrooms, Bathrooms, Size, Location, Price, Features


Sample input data:

| Property ID | Bedrooms | Bathrooms | Size (sq ft) | Location | Price (GBP) | Features |
| 101 | 3 | 2 | 1450 | Bristol | £475,000 | Garden, garage, newly fitted kitchen |
| 102 | 2 | 1 | 900 | Brighton | £320,000 | Balcony, sea view |
| 103 | 4 | 3 | 1750 | Oxford | £650,000 | Conservatory, driveway, study |


Workflow:


Example AI Output:

“Located in vibrant Bristol, this spacious three-bedroom, two-bathroom home offers a newly fitted kitchen, private garden, and secure garage—perfect for modern family living.”
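To show the mechanics behind this use case, here is a minimal sketch of the per-record pattern in plain Python. It is illustrative only: the AI Completion block runs this loop for you, and call_llm is a hypothetical stand-in for the configured AI provider.

```python
# Hedged sketch of use case 1: one prompt per record, no context dataset.

SYSTEM_PROMPT = ("You are a professional UK-based property copywriter. "
                 "Write concise and engaging property descriptions.")

records = [
    {"Property ID": 101, "Bedrooms": 3, "Bathrooms": 2, "Size (sq ft)": 1450,
     "Location": "Bristol", "Price (GBP)": "£475,000",
     "Features": "Garden, garage, newly fitted kitchen"},
]

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for the configured AI provider; returns a canned reply here."""
    return "(model-generated property description)"

for record in records:
    # The User Prompt is simply the selected fields, rendered per record.
    user_prompt = "\n".join(f"{field}: {value}" for field, value in record.items())
    record["Description"] = call_llm(SYSTEM_PROMPT, user_prompt)
    print(record["Description"])
```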




2. Processing Individual Records With Context


Scenario:
An IT support team wants to automatically draft customer responses to incoming tickets. Each response should draw on a knowledge base of troubleshooting articles.


Setup:

  • System Prompt:

    “You are an IT support agent. Write clear, precise, and solution-focused responses.”
  • User Prompt fields: Issue Category, Issue Description

  • Context dataset: Knowledge base articles


Sample input tickets:

| Ticket ID | Customer Name | Issue Category | Issue Description | Priority |
| 101 | Alice Johnson | Billing | Charged twice for last month’s bill | High |
| 102 | Bob Smith | Technical | Unable to connect to the server | High |
| 103 | Charlie Brown | Account | Forgot my account password | Medium |


Sample knowledge base:

| Article ID | Title | Content | Category |
| 201 | Resolving Duplicate Billing | Contact support with your invoice number… | Billing |
| 202 | Fixing Server Connection Issues | Check internet, restart router, clear cache… | Technical |
| 203 | Resetting Account Password | Use “Forgot Password” on login screen… | Account |



Workflow:


Example AI Output (Ticket 101):

“Hi Alice, we’ve identified a duplicate charge and issued a refund to your original payment method. It will appear within 3–5 business days.”
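As a rough sketch of what happens under the hood, the plain-Python example below injects the knowledge base as context alongside each ticket. Again, call_llm is a hypothetical placeholder for the configured provider; the block assembles and sends these prompts for you.

```python
# Hedged sketch of use case 2: per-record prompts plus a context dataset.

knowledge_base = [
    {"Title": "Resolving Duplicate Billing", "Category": "Billing",
     "Content": "Contact support with your invoice number..."},
    {"Title": "Resetting Account Password", "Category": "Account",
     "Content": "Use 'Forgot Password' on the login screen..."},
]

tickets = [
    {"Ticket ID": 101, "Customer Name": "Alice Johnson",
     "Issue Category": "Billing",
     "Issue Description": "Charged twice for last month's bill"},
]

SYSTEM_PROMPT = ("You are an IT support agent. "
                 "Write clear, precise, and solution-focused responses.")

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for the configured AI provider."""
    return "(model-drafted customer response)"

for ticket in tickets:
    # Serialize the context dataset so the model can reference it.
    context = "\n\n".join(
        f"{a['Title']} ({a['Category']}): {a['Content']}" for a in knowledge_base
    )
    user_prompt = (
        f"Knowledge base:\n{context}\n\n"
        f"Ticket category: {ticket['Issue Category']}\n"
        f"Ticket description: {ticket['Issue Description']}\n"
        f"Draft a reply to {ticket['Customer Name']}."
    )
    print(call_llm(SYSTEM_PROMPT, user_prompt))
```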




3. Generating New Data Without Inputs


Scenario:
You need a synthetic dataset for testing travel booking scenarios. No input dataset is provided—the AI generates fresh records based solely on prompts.


Setup:

  • System Prompt:

    “You are a world-class synthetic data generator. Always create realistic, quirky, and internally consistent datasets in JSON format.”
  • User Prompt:

    “Generate 20 synthetic Cold War–era tourism bookings to spy-thriller destinations.”


Workflow:




Sample AI Output (3 rows extracted):

| Booking ID | Origin Year | Destination Year | Historical Event | Risk Rating | Ticket Price | Traveler Name | Traveler Feedback |
| BKG-0001 | 1960 | 1961 | Berlin Wall tour at Checkpoint Charlie | 6 | 4,200 | Alexei Petrov | Border guards surprisingly polite |
| BKG-0002 | 1959 | 1962 | Cuban Missile Crisis vantage trip | 8 | 9,800 | Maria Lopez | Havana buzzing with tension |
| BKG-0003 | 1961 | 1961 | East Berlin photo tour | 5 | 3,200 | Hans Müller | Souvenir stamps oddly complex |
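A minimal sketch of this prompt-only pattern, assuming the model is asked to return JSON that is then parsed into rows. call_llm is a hypothetical placeholder for the configured provider; here it returns a single canned record so the example runs on its own.

```python
# Hedged sketch of use case 3: no input dataset, just prompts and a JSON response.
import json

SYSTEM_PROMPT = ("You are a world-class synthetic data generator. Always create "
                 "realistic, quirky, and internally consistent datasets in JSON format.")
USER_PROMPT = "Generate 20 synthetic Cold War-era tourism bookings to spy-thriller destinations."

def call_llm(system_prompt: str, user_prompt: str) -> str:
    # A real provider would return 20 records; one canned record is enough here.
    return json.dumps([{
        "Booking ID": "BKG-0001", "Origin Year": 1960, "Destination Year": 1961,
        "Historical Event": "Berlin Wall tour at Checkpoint Charlie",
        "Risk Rating": 6, "Ticket Price": 4200,
        "Traveler Name": "Alexei Petrov",
        "Traveler Feedback": "Border guards surprisingly polite",
    }])

# Parse the JSON response into rows, one dictionary per generated record.
rows = json.loads(call_llm(SYSTEM_PROMPT, USER_PROMPT))
for row in rows:
    print(row["Booking ID"], row["Historical Event"])
```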




4. Processing the Whole Dataset Using Only Context


Scenario:
A compliance team wants to detect possible financial fraud in an entire dataset of transactions. Instead of row-by-row processing, the whole dataset is injected into the AI as context.


Setup:

  • System Prompt:

    “You are an expert fraud analyst. Review the dataset as a whole and classify it as Normal, Suspicious, or High Risk.”
  • Context dataset fields: Amount, Date, Merchant, Description

  • No main input dataset


Sample transactions:

| Account ID | Amount | Date | Description | Merchant | Transaction ID |
| A1001 | 12.75 | 2025-07-01 | Latte and pastry | CafeBrew | T1 |
| A1002 | 8.50 | 2025-07-01 | Short taxi ride | CityTaxi | T2 |
| A1001 | 45.20 | 2025-07-02 | Weekly groceries | GreenGrocers | T3 |



Workflow:


Example AI Output:

“High Risk”
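For illustration, the sketch below serializes the whole transactions dataset into one prompt and requests a single dataset-level verdict. As before, call_llm is a hypothetical placeholder for the configured provider, and the canned return value is for demonstration only.

```python
# Hedged sketch of use case 4: the entire dataset is injected as context
# and the model is asked once for one classification.
import csv
import io

transactions = [
    {"Account ID": "A1001", "Amount": "12.75", "Date": "2025-07-01",
     "Description": "Latte and pastry", "Merchant": "CafeBrew", "Transaction ID": "T1"},
    {"Account ID": "A1002", "Amount": "8.50", "Date": "2025-07-01",
     "Description": "Short taxi ride", "Merchant": "CityTaxi", "Transaction ID": "T2"},
]

SYSTEM_PROMPT = ("You are an expert fraud analyst. Review the dataset as a whole "
                 "and classify it as Normal, Suspicious, or High Risk.")

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for the configured AI provider; returns a canned verdict."""
    return "Normal"

# Serialize the whole context dataset (here as CSV) into a single prompt.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=transactions[0].keys())
writer.writeheader()
writer.writerows(transactions)

verdict = call_llm(SYSTEM_PROMPT, "Transactions:\n" + buffer.getvalue())
print(verdict)  # one classification for the entire dataset
```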




Additional Tips

  • Custom tone: Adjust the System Prompt to change tone—e.g., more formal for luxury homes or more casual for student lettings.

  • Multilingual listings: Instruct the model to return the output in other languages (e.g., Welsh, French) using the system prompt.

  • Record-level prompts: Use different system or user prompts in each record to vary the description style based on Property Type or Target Audience (see the sketch below).
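A minimal sketch of the record-level prompts tip, assuming a Property Type field drives the tone. The field names and tone mapping are illustrative, not part of Omniscope.

```python
# Hedged sketch: derive the System Prompt from a field in each record
# so the writing style varies with Property Type.

TONE_BY_TYPE = {
    "Luxury home": "Write in a formal, aspirational tone.",
    "Student let": "Write in a casual, friendly tone.",
}

records = [
    {"Property Type": "Luxury home", "Location": "Oxford"},
    {"Property Type": "Student let", "Location": "Bristol"},
]

for record in records:
    # Fall back to a neutral tone when the type is not mapped.
    system_prompt = (
        "You are a professional UK-based property copywriter. "
        + TONE_BY_TYPE.get(record["Property Type"], "Write in a neutral tone.")
    )
    print(system_prompt)
```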

