Open data sources are everywhere and, in many cases, they come in a format that is not accessible to all potential users (XML or JSON rather than Excel), which means that only patient, data-literate users can interrogate the information and extract meaningful insights.
Let’s take one such example – the EU Open Data initiative has led many governments to publish troves of data on their portals. Here is one of them:
This dataset contains the latest pollutant measurements in Slovenia at one-hour intervals. The website is updated regularly and new records replace the old ones.
This presents the first challenge – there is no way to compare current values with previous levels.
The second challenge is adding context – is the measurement low or high, compared to some criterion? Without this evaluation, ordinary citizens cannot be sure whether the air in their town is reasonably clean, or whether they should keep the children indoors and close the windows before rush hour.
1. Creating a data harvesting workflow automation
We will use a File block (the first yellow block – its configuration is shown on the right) to collect data directly from the URL location, then rename and format dates and other fields (in the Field Organiser block), before storing the records in an IOD output file in append mode (green block).
We will also use the Scheduler application in the Admin section to automate data collection and storage every 3 hours.
The same archive data file can be visualised in the same project (the yellow block in the green area), following some routine de-duplication. The scheduled data refresh will also update the blue Report block, so that new data is immediately reflected in our interactive charts.
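Outside Omniscope, the same harvest–append–de-duplicate loop can be sketched in Python. Everything here is illustrative: the feed URL is a placeholder and the field names are assumptions, not the real Slovenian schema.

```python
import csv
import json
import os
from urllib.request import urlopen

# Placeholder URL – not the real Slovenian air-quality feed.
FEED_URL = "https://example.org/air-quality/latest.json"

def fetch_latest(url=FEED_URL):
    """Download the latest hourly snapshot as a list of dicts."""
    with urlopen(url) as resp:
        return json.load(resp)

def append_records(records, archive="archive.csv"):
    """Append new rows in 'append mode', skipping (station, time)
    pairs already archived – the routine de-duplication step."""
    seen = set()
    if os.path.exists(archive):
        with open(archive, newline="") as f:
            seen = {(r["station"], r["time"]) for r in csv.DictReader(f)}
    new = [r for r in records if (r["station"], r["time"]) not in seen]
    write_header = not os.path.exists(archive)
    with open(archive, "a", newline="") as f:
        w = csv.DictWriter(
            f,
            fieldnames=["station", "time", "PM10", "PM2.5", "SO2", "NO2", "CO", "O3"],
        )
        if write_header:
            w.writeheader()
        w.writerows(new)
    return len(new)
```

Running this every 3 hours (as the Scheduler does for the workflow) would grow the archive without duplicating the hours that overlap between snapshots.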
2. Building a dynamic dashboard with filters and KPIs views
We've come to the fun bit: the data is collected and cleaned – let's visualise it!
The choice of charts is always a challenge, but to minimise the iterations we will add a variable, "Measure choice", so that the report viewer can cycle through the KPIs and compare average levels between locations, as well as observe the hourly fluctuations that might be obscured by the mean calculation.
All that on a single page!
Clicking on one or more bars will create a ‘brushing’ effect, where the unselected values fade but remain visible for benchmarking purposes.
The "Move" and "Keep" buttons then allow the user to retain only the data points of interest, in addition to the filtering functionality.
The text-choice variable seen in the middle (“Measure choice”) is created in the ‘Report data sources’ area, while an IF formula orchestrates the switching between the 5 measurements.
If the viewer picks “CO” as their comparison criterion, all the charts on the page will have the ‘Measure’ field pointed to that input, so that lines, bars and markers compare the locations according to their CO levels. The Content view will update the narrative to extract the maximum and identify the location where that measurement was recorded.
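In pseudocode, that variable switch is just a mapping from the chosen value to the corresponding measurement column. A minimal Python sketch, with hypothetical field names:

```python
def measure_value(row, choice):
    """Return the measurement selected by the 'Measure choice' variable.
    The five pollutant field names are assumptions, not the real schema."""
    switch = {
        "PM10": row["PM10"],
        "PM2.5": row["PM2.5"],
        "SO2": row["SO2"],
        "NO2": row["NO2"],
        "CO": row["CO"],
    }
    return switch[choice]
```

In Omniscope the same branching is expressed as a nested IF formula over the variable, rather than a lookup table.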
Here is the Formula block with a 'variable switch formula':
How to build an interactive KPI view
The Content view is a valuable report tool because it can combine dynamic text with responsive formulas. The resulting text will change with every data update and with every viewer interaction (as long as the "Query" section is pointed to the "filtered data").
Depending on the viewer’s variable choice, the widget will display the guidance – the daily limit according to the WHO – and provide a URL link to the relevant web page.
Here is the formula behind the number 92 – the evaluation of the maximum measured KPI across the stations for the period:
And here is the formula behind the name of the location with the highest measurement (catering for the possibility that multiple locations record the same result):
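The two calculations can be sketched together in Python – find the maximum across stations, then list every location that recorded it, ties included. Field names are illustrative, not the real schema:

```python
def max_and_locations(rows, field):
    """Maximum value of `field` across the stations, plus every
    location that recorded it (ties are joined into one string)."""
    top = max(r[field] for r in rows)
    where = sorted({r["location"] for r in rows if r[field] == top})
    return top, ", ".join(where)
```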
We can also connect the Max value text colour to a function, so it dynamically responds to the evaluation result, indicating whether the measured level is higher than allowed.
Similarly we could create a 'dynamic reference line' in the Line view, and link it to the variable choice.
This whole visualisation method works well if we wish to compare locations across different periods (a filter device with 24h, 1 week and 1 month time frames) and evaluate whether their levels are above or below the prescribed limit.
The viewer might want to know which measuring locations suffer from one or more pollutants. This question, where we compare the locations against all 5 KPIs simultaneously, is a job for a Radar chart in paning mode, so we can observe all categories in their own window.
In this mode Radar is not used for precision, but as a profiling instrument to flag which location has a problem with one or more pollutants.
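The profiling question the Radar answers visually reduces to a per-location check: which pollutants exceed their limit? A Python sketch, with the limit table again a placeholder to be verified against official guidance:

```python
# Placeholder daily limits per pollutant – verify against official guidance.
LIMITS = {"PM2.5": 15, "PM10": 45, "NO2": 25, "SO2": 40, "CO": 4}

def problem_pollutants(row):
    """List the pollutants a location exceeds – the 'spikes'
    that the Radar profile flags for that location's pane."""
    return [k for k, limit in LIMITS.items() if row.get(k, 0) > limit]
```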
3. Styling and polishing
The final step is to look at the styling section in the Report Settings (the 3-dot menu in each report). It is nice to pick a dashboard background and colours that work well together. In this case the background is light blue, playing on the theme of clean air. "Create style" is a shortcut for creating a two-colour style: one background, one foreground.
A font can be picked by typing the Google font name or copying it from the link – in the example below we've chosen “Questrial”.
On the same page you can pick the default data colours – alternatively, you can randomly allocate colours to the values in each field (click "Distribute colours"), or hand-pick colours by clicking the colour square next to the data value in the Data menu, on the left sidebar.
You can bookmark a report block that contains your favourite font, background and data category/range colours, then add it to different projects from the Add block > Bookmark menu.
Finally - here is a link to the project on our Omniscope public server: https://public.omniscope.me/Public/AirQuality/Slovenia.iox/
You can download the whole project by clicking the 3-dot menu in the workflow and downloading the IOZ file (you can import it as a project into your own Omniscope installation).