This article provides an overview of how to use Omniscope Evo to serve reports in a highly available, scalable environment: in particular we will focus on AWS services such as AWS Fargate, Amazon Redshift and Elastic Load Balancing.
Although we will describe an environment built on top of Amazon technologies, a similar setup can be achieved with open technologies such as Kubernetes, Nginx and PostgreSQL.
In this configuration not all Omniscope functionality is supported: the goal is to use multiple instances of Omniscope (whose number can be fixed, or can change with the load) to serve reports. If a particular Omniscope instance crashes or becomes unresponsive, the system will shut it down and migrate the users' sessions to another instance of the application.
A brief overview
The system we are going to build is illustrated in the following diagram:
- The first step is to create a Docker image of an Omniscope app. The containerised application needs to be activated and pre-configured to run the external web server. Although with some limitations, Omniscope can be configured to run in a Docker container. A skeleton of the Dockerfile and some of the files required to create the image can be found attached to this article. Once the image has been created, it needs to be pushed to a registry visible to the Fargate cluster (for example an Amazon Elastic Container Registry instance).
- Once the image has been built, we can use AWS Fargate to orchestrate the lifecycle and monitoring of the containers running Omniscope. AWS Fargate can be configured to run a fixed number of containers, or to autoscale. You can configure this step using the AWS web console or the command-line tools. More information about AWS Fargate is available here: https://aws.amazon.com/fargate/
- When configuring the Fargate service, a health check can be used to verify that the Omniscope instances are up and running correctly, and to terminate a container if something goes wrong.
- Note that it is Fargate that manages the lifecycle of the containers running Omniscope: for this reason it is advisable to mount the Omniscope sharing folder as a volume (for example using Amazon Elastic File System), so that files can be updated without the need to create and deploy new images.
- Finally, the AWS Fargate cluster can be associated with an AWS Elastic Load Balancer so that requests are evenly distributed across the cluster.
- A DBMS or data warehouse can be used to power Omniscope reports via a database block in live query mode: in this case the system can take advantage of Amazon Redshift, or another DBMS (for example one managed via Amazon RDS). This approach is recommended when reports are powered by large amounts of data.
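To make the first step more concrete, a Dockerfile for the containerised Omniscope app could be sketched along these lines. This is only an illustrative outline: the base image, installer script name, paths and port are assumptions, and the actual skeleton attached to this article should be used as the starting point.

```dockerfile
# Sketch only: base image, installer name, paths and port are
# placeholders -- refer to the skeleton attached to this article.
FROM ubuntu:22.04

# Install the Omniscope server (hypothetical installer script)
COPY omniscope-install.sh /tmp/
RUN /tmp/omniscope-install.sh

# Pre-baked configuration: activation details and the settings
# enabling the external web server
COPY config/ /opt/omniscope/config/

# Port the external web server listens on, exposed to the load balancer
EXPOSE 8080

CMD ["/opt/omniscope/bin/omniscope-server"]
```

The image built from a file like this would then be pushed to a registry, such as Amazon ECR, that the Fargate cluster can pull from.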
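The health check and the shared-folder volume described in the steps above are both declared in the Fargate task definition. A sketch of the relevant fragment might look as follows; the container name, image URI, port, probe URL, mount path and EFS file-system ID are all placeholders, and the actual endpoint to probe on an Omniscope instance may differ.

```json
{
  "containerDefinitions": [
    {
      "name": "omniscope",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/omniscope:latest",
      "portMappings": [{ "containerPort": 8080, "protocol": "tcp" }],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost:8080/ || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      },
      "mountPoints": [
        { "sourceVolume": "sharing", "containerPath": "/omniscope-server/files" }
      ]
    }
  ],
  "volumes": [
    {
      "name": "sharing",
      "efsVolumeConfiguration": { "fileSystemId": "fs-0123456789abcdef0" }
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "1024",
  "memory": "4096"
}
```

With a configuration like this, Fargate restarts any container whose health check fails, while the report files live on EFS and survive container replacement.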
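Finally, the association between the Fargate service and the load balancer can be expressed in the service definition, for example as JSON input to `aws ecs create-service --cli-input-json`. Again this is a hedged sketch: the cluster and service names, target group ARN, subnets and security group are placeholders to be replaced with your own values.

```json
{
  "cluster": "omniscope-cluster",
  "serviceName": "omniscope-reports",
  "taskDefinition": "omniscope:1",
  "desiredCount": 3,
  "launchType": "FARGATE",
  "loadBalancers": [
    {
      "targetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/omniscope/0123456789abcdef",
      "containerName": "omniscope",
      "containerPort": 8080
    }
  ],
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-0123456789abcdef0"],
      "securityGroups": ["sg-0123456789abcdef0"],
      "assignPublicIp": "DISABLED"
    }
  }
}
```

Here the load balancer's target group routes traffic to the healthy Omniscope containers, and `desiredCount` can be replaced by an autoscaling policy if the number of instances should follow the load.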
In this setup, workflow execution and concurrent editing of the same file are not currently supported. Omniscope will only be able to serve reports from files consisting of either a data table block (using the Omniscope in-memory data engine) or a database block configured in live query mode (for example against Amazon Redshift), directly connected to a report block.
In projects like the ones just described, Omniscope effectively becomes a stateless application: the data powering the reports is either immutable and contained in the IOX files themselves (the data table case), or handled by a DBMS.
In this article we gave a high-level overview of an environment where Omniscope serves reports in a scalable, highly available system leveraging AWS technologies.
Similar architectures can be deployed using alternatives to the AWS services, and the design can be tweaked to accommodate different requirements.
For more information about the architecture described here, changes in its design, alternative use cases and/or bespoke solutions please contact us at firstname.lastname@example.org