Omniscope system requirements

Modified on Wed, 6 Dec, 2023 at 12:20 PM


Omniscope can be installed and run on Windows, Linux and macOS 64-bit operating systems. 32-bit systems are no longer supported.


Windows

We support Omniscope on all Windows operating systems that are not past their Microsoft end-of-life, i.e. from Windows 10 and Windows Server 2008 upwards (including Windows Server 2022). 64-bit versions are recommended, and are required since Omniscope 2023.2.


macOS

macOS 10.12 Sierra and upwards. Latest versions recommended.


Linux

We officially support Ubuntu Desktop & Server 12.04+, and the Amazon EC2 AMI. LTS versions recommended.


Browsers

Latest stable version of Chrome on all platforms (Windows / Mac / Linux / Android), and Safari on iPad.

In future, the latest stable versions of Microsoft Edge and Firefox will also be supported.


Typical recommended machine / VM spec:

CPU : 2 cores (or 4 virtual cores)

RAM: 12 GB

Disk: 250GB SSD

For example, on AWS we recommend m6i.xlarge; on Google Cloud, n1-highmem-2.
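As an illustration of the recommended figures above, the sketch below (hypothetical, not part of Omniscope) compares a machine's resources against the typical recommended minimums. The function takes explicit values so the check itself is platform-neutral; the helper calls shown after it gather real values on the current machine.

```python
import os
import shutil

def meets_recommended_spec(cores: int, ram_gb: float, disk_gb: float) -> bool:
    """Return True if the machine meets the typical recommended spec:
    2 cores (or 4 virtual cores), 12 GB RAM, 250 GB disk."""
    return cores >= 2 and ram_gb >= 12 and disk_gb >= 250

# Gathering actual values on the current machine:
cores = os.cpu_count() or 1
disk_gb = shutil.disk_usage("/").total / 1e9
# RAM detection is OS-specific; on Linux/macOS, sysconf works:
# ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
```

For example, a 4-core VM with 16 GB RAM and a 500 GB disk passes, while a 1-core VM with 8 GB RAM does not.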


Omniscope runs on 64-bit Intel architecture, preferably on systems with at least 2 physical cores.

On Apple Silicon, Omniscope is supported via Rosetta 2.


Executing ETL workflows

Omniscope has been designed to work with 8 GB of RAM or more; it will use available memory and spill to temporary disk space where needed. Fast SSDs are recommended. NVMe technology offers the best performance (N.B. an NVMe disk can typically read at more than 5,000 MB/s, roughly 10x faster than a SATA SSD and approaching the throughput of RAM).

An example: executing a typical workflow (record filter, field transformations) over 1 billion records and 20 fields requires at least 200 GB of free disk space, depending on the number and type of blocks in the workflow.
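That example works out to roughly 10 bytes of temporary disk per field value (200 GB ÷ (1 billion records × 20 fields)). A rough, hypothetical estimator along those lines — the 10-byte figure is inferred from the example and will vary with data types and workflow blocks:

```python
def estimate_workflow_disk_gb(records: int, fields: int,
                              bytes_per_value: float = 10.0) -> float:
    """Rough free-disk estimate (in decimal GB) for executing an ETL workflow.

    bytes_per_value = 10 is inferred from the example above
    (200 GB for 1 billion records x 20 fields); the real figure
    depends on the data types and the blocks in the workflow.
    """
    return records * fields * bytes_per_value / 1e9
```

`estimate_workflow_disk_gb(1_000_000_000, 20)` reproduces the 200 GB figure above.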

In terms of CPU, to achieve parallel workflow execution an Omniscope instance will use 1 CPU core per job, capping parallelism at the number of CPU cores.
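In other words, with J pending jobs on C cores the effective parallelism is min(J, C). A one-line sketch (the function name is illustrative, not an Omniscope API):

```python
import os
from typing import Optional

def effective_parallelism(pending_jobs: int,
                          cpu_cores: Optional[int] = None) -> int:
    """Parallel workflow executions: 1 core per job, capped at core count.
    If cpu_cores is None, detect the current machine's core count."""
    cores = cpu_cores if cpu_cores is not None else (os.cpu_count() or 1)
    return min(pending_jobs, cores)
```

So 10 queued jobs on a 4-core machine execute 4 at a time, while 2 jobs on an 8-core machine leave the remaining cores idle.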

Hosting reports

Using the bundled data engine:

50 million records with 20 fields requires at least 8 GB RAM and 40 GB disk.

100 million records with 20 fields requires at least 16 GB RAM and 40 GB disk.

300 million records with 20 fields requires at least 36 GB RAM and 100 GB disk.

1 billion records with 20 fields requires at least 100 GB RAM and 300 GB disk.
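The tiers above can be expressed as a simple lookup. The function below is an illustrative sketch (not an official sizing tool) that returns the smallest listed tier covering a given record count, assuming datasets of about 20 fields:

```python
# (max_records, min_ram_gb, min_disk_gb) tiers, taken from the figures above.
SIZING_TIERS = [
    (50_000_000, 8, 40),
    (100_000_000, 16, 40),
    (300_000_000, 36, 100),
    (1_000_000_000, 100, 300),
]

def recommended_resources(records: int) -> tuple:
    """Return (ram_gb, disk_gb) for the smallest published tier
    covering `records`, assuming roughly 20 fields."""
    for max_records, ram_gb, disk_gb in SIZING_TIERS:
        if records <= max_records:
            return ram_gb, disk_gb
    raise ValueError("Beyond the published tiers; contact us for sizing.")
```

For example, 80 million records falls into the 100-million tier, so this returns (16, 40).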

For hosting reports we recommend placing Omniscope (and its Data Engine disk allocation) on an SSD / NVMe disk, to avoid slow "memory to disk" I/O disrupting the overall UX.

N.B. Figures are evaluated using sample datasets, which contain a variety of data types and use cases.

Hosting reports using live/direct query against an external database

This is dependent on the external database and these figures are only a rough guide. To support live/direct query (where the visualisations and interactions are translated into SQL-like queries against a 3rd party database), typically a response time of under 10 seconds is desirable.

Example using Amazon Redshift cloud-based database:

1 billion records and 20 fields requires a 16-node cluster of dc1.large instances.

Example using Impala on Hadoop (with Parquet file format):

1 billion records and 20 fields requires a 12-node cluster of intermediate commodity hardware.

Still unsure? Get in touch

By all means, before setting up your server, give us a shout, providing some variables such as dataset volume sizes, expected concurrent users and your use cases (e.g. whether the instance will be used for hosting reports for visual data exploration and/or for ETL data processing / analytics). We'll help you evaluate the best specs for your machine or VM.
