OS

Omniscope can be installed and run on 64-bit Windows, Linux and macOS operating systems.

Windows

We support Omniscope on all Windows operating systems that are not past their Microsoft end-of-life, i.e. from Windows 10 and Windows Server 2008 upwards. 64-bit versions are recommended.

macOS

macOS 10.12 Sierra or later is required for Omniscope to run smoothly, but it can run on OS X 10.7+ with additional setup.

Linux 

We officially support Ubuntu Desktop & Server 12.04+, CentOS 5/6/7+, and the Amazon EC2 AMI.


Browser

Latest stable version of Chrome on all platforms (Windows / Mac / Linux / Android), and Safari on iPad.

In future, the latest stable versions of Microsoft Edge and Firefox will also be supported.



Hardware 

Executing ETL workflows

Omniscope is designed to work with 8 GB of RAM or more. It will use available memory and spill to temporary disk space where needed. Fast SSDs are recommended.


For example, executing a typical workflow (record filter, field transformations) on 1 billion records with 20 fields requires at least 200 GB of free disk space, depending on the number and type of blocks in the workflow.
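
As a rule of thumb, the 200 GB figure works out to roughly 10 bytes of intermediate storage per field per record. The Python sketch below captures that back-of-envelope estimate; the per-field byte figure is an assumption derived from the example above, not a published constant, and real usage varies with data types and the blocks in the workflow.

# Rough free-disk estimate for executing a typical ETL workflow, derived from
# the example above: 1 billion records x 20 fields ~= 200 GB of spill space.
BYTES_PER_FIELD = 10  # assumption inferred from the example; varies by data type

def estimated_spill_gb(records: int, fields: int) -> float:
    """Back-of-envelope free-disk estimate for a typical workflow."""
    return records * fields * BYTES_PER_FIELD / 1e9

print(estimated_spill_gb(1_000_000_000, 20))  # ~200.0 GB, matching the example
print(estimated_spill_gb(100_000_000, 20))    # ~20.0 GB for a smaller dataset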


Hosting reports

Using the bundled data engine

50 million records with 20 fields requires at least 8 GB RAM and 40 GB disk.

100 million records with 20 fields requires at least 16 GB RAM and 40 GB disk.

300 million records with 20 fields requires at least 36 GB RAM and 100 GB disk.

1 billion records with 20 fields requires at least 100 GB RAM and 300 GB disk.


N.B. Figures are evaluated using sample datasets which contain a variety of data types and use cases.
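
For capacity planning at sizes between these data points, the guideline figures can be interpolated. The Python sketch below does a simple linear interpolation between the published figures (20 fields assumed throughout); interpolated and extrapolated values are our rough estimate under that assumption, not tested figures.

# RAM/disk guideline figures for hosting reports with the bundled data engine
# (20 fields assumed throughout). Interpolated values are rough estimates only.
GUIDELINES = [  # (records, min RAM in GB, min disk in GB), from the figures above
    (50_000_000, 8, 40),
    (100_000_000, 16, 40),
    (300_000_000, 36, 100),
    (1_000_000_000, 100, 300),
]

def ballpark_requirements(records: int) -> tuple[float, float]:
    """Linearly interpolate RAM/disk between the published data points."""
    if records <= GUIDELINES[0][0]:
        return GUIDELINES[0][1], GUIDELINES[0][2]
    for (r0, ram0, disk0), (r1, ram1, disk1) in zip(GUIDELINES, GUIDELINES[1:]):
        if records <= r1:
            t = (records - r0) / (r1 - r0)
            return ram0 + t * (ram1 - ram0), disk0 + t * (disk1 - disk0)
    # Beyond 1 billion records, scale the largest published figures proportionally.
    r_max, ram_max, disk_max = GUIDELINES[-1]
    scale = records / r_max
    return ram_max * scale, disk_max * scale

print(ballpark_requirements(200_000_000))  # roughly (26.0, 70.0)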

Hosting reports using live/direct query against an external database

This is dependent on the external database, so the figures below are only a rough guide. To support live/direct query (where visualisations and interactions are translated into SQL-like queries against a third-party database), a response time of under 10 seconds is typically desirable.


Example using Amazon Redshift cloud-based database:

1 billion records and 20 fields requires a 16-node cluster of dc1.large instances.


Example using Impala on Hadoop (with Parquet file format):

1 billion records and 20 fields requires a 12-node cluster of intermediate commodity hardware.
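

One practical way to check whether a candidate back-end meets the 10-second target is to time a representative aggregation query against it directly. The Python sketch below does this using the psycopg2 driver (Amazon Redshift speaks the PostgreSQL wire protocol); the connection details, table and column names are placeholders, and the query is only an illustration of the kind of aggregation a visualisation might issue, not SQL generated by Omniscope itself.

import time
import psycopg2  # PostgreSQL/Redshift driver; substitute the driver for your database

# Placeholder connection details and schema; substitute your own.
conn = psycopg2.connect(host="my-cluster.example.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="reporter", password="...")

# A representative aggregation of the kind a bar-chart visualisation might issue.
QUERY = """
    SELECT region, COUNT(*) AS records, SUM(sales) AS total_sales
    FROM fact_sales
    GROUP BY region
"""

start = time.perf_counter()
with conn.cursor() as cur:
    cur.execute(QUERY)
    rows = cur.fetchall()
elapsed = time.perf_counter() - start

# Aim for well under the ~10 second interactive threshold mentioned above.
print(f"{len(rows)} groups returned in {elapsed:.1f}s")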