Auto-refreshing dashboard, accumulating data from an online feed

Posted almost 6 years ago by Antonio Poggi


This post describes a solution to a common use case in Omniscope: building a workflow that collects data from an online feed and visualises the accumulated historical data through an auto-refreshing dashboard.


It may sound complex, but once you are familiar with the main concepts you will master it quickly.


Getting the data to create historical snapshots

In my case the data is provided by an online service via an HTTP request that returns a response in CSV format.


Solution A

I can use the File block to point to the online service (a URL string) and download the data.

Then I use the Scheduler to add a task that periodically modifies the File block URL (changing the query string to use different dates) and then re-executes the block to fetch fresh data.


Solution B

Since I am a developer who loves coding and scripting in Python, and I know I can augment any workflow in Omniscope with Python scripts, I opted for this solution and wrote a simple Python script that will:

  1. evaluate the current date and the appropriate date interval,
  2. submit the request to the online service,
  3. download the required data and output it to the downstream workflow.


Here is the script:


import datetime

import pandas as pd

# Work out the date range: today, minus the desired interval (e.g. 7 days)
today = datetime.datetime.today()
start = today - datetime.timedelta(days=7)

# Format the dates as the service expects them
startDate = start.strftime("%Y%m%d")  # e.g. '20181226'
endDate = today.strftime("%Y%m%d")    # e.g. '20190102'

# Build the request URL (note the '?' introducing the query string)
link = 'http://www.server.com/somequeryendpoint?startdate=' + startDate + '&enddate=' + endDate

# Download the CSV response and pass it to the downstream workflow
output_data = pd.read_csv(link)


N.B. the part of the script where I specify the 7-day delta could actually take any block data as a parameter (e.g. a Text Input block, a DB block, etc.), since the Python block allows input data to flow in.
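
For example, here is a minimal sketch of that idea, assuming the Python block exposes its input as a pandas DataFrame called input_data with a numeric 'days' column (both names are hypothetical; check what your Omniscope version actually provides):

import datetime

# 'input_data' and its 'days' column are assumed names; adjust them to
# whatever the Python block in your Omniscope version exposes.
try:
    days_back = int(input_data['days'].iloc[0])
except (NameError, KeyError):
    days_back = 7  # no input connected: fall back to a weekly interval

start = datetime.datetime.today() - datetime.timedelta(days=days_back)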


To create a local historical copy of the weekly data, I connect my Python block to a small workflow that cleans up the data (through a Field Organiser and other data preparation blocks), outputting the dataset as a CSV file through a File output block.

The File output block is configured to publish the data as CSV, adding a timestamp to the filename to make sure data is never overwritten, so every time the workflow is executed new data is downloaded and stored locally in a folder containing the CSV files.
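
If you preferred to produce the timestamped snapshot from the Python block itself rather than through the File output block's settings, a sketch could look like this (the folder path and filename prefix are purely illustrative):

import datetime
import os

snapshot_dir = 'C:/data/snapshots'  # illustrative folder for the snapshots
os.makedirs(snapshot_dir, exist_ok=True)

# e.g. 'feed_20190102_153045.csv': unique per execution, so nothing is overwritten
stamp = datetime.datetime.now().strftime('%Y%m%d_%H%M%S')
output_data.to_csv(os.path.join(snapshot_dir, 'feed_' + stamp + '.csv'), index=False)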


Cleaning up duplicated records

Since the data provided by the online service may contain duplicate records, I need to de-duplicate it.

The second part of the workflow consists of a Batch append folder input block, pointing to the folder containing the produced CSV files, connected to a De-duplicate block that filters out duplicated records. This appends all records from the various snapshots into one dataset that I can connect to a Report block for visualisation.
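
Conceptually, the two blocks behave like this small pandas sketch (the folder path is the illustrative one from above, and here duplicates are detected across all columns; in the workflow the blocks do this for you):

import glob
import os

import pandas as pd

snapshot_dir = 'C:/data/snapshots'  # same illustrative folder as above

# Batch append: read every snapshot CSV and stack the rows into one table
frames = [pd.read_csv(f) for f in glob.glob(os.path.join(snapshot_dir, '*.csv'))]
combined = pd.concat(frames, ignore_index=True)

# De-duplicate: keep a single copy of each identical record
output_data = combined.drop_duplicates()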



Auto-refreshing dashboard

At this point we automate the workflow with a Scheduler task made of two actions:

  1. the first grabs data from the online feed (say, weekly), executing the File output block and its sources.
  2. the second runs immediately after the first and appends all snapshots into the final dataset, implicitly refreshing any connected report with new data.




Here is a sample project hosted on our server that you can download to get started: https://omniscope.me/Forums/refreshing+accumulating+dashboard.iox/


From the 2022.1 Rock build you can also set the Report to auto-refresh on open: see Report Settings > Global tab > Auto-refresh section.


