Omniscope can be installed and run on 64-bit Windows, Linux and macOS operating systems.
We support Omniscope on all Windows versions that have not passed their Microsoft end-of-life, i.e. Windows 10 and Windows Server 2008 upwards. 64-bit versions are recommended.
macOS 10.12 (Sierra) upwards is recommended for Omniscope to run smoothly, but it can run on OS X 10.7+ with additional setup.
We officially support Ubuntu Desktop & Server 12.04+, CentOS 5/6/7+, and the Amazon EC2 AMI.
We support the latest stable version of Chrome on all platforms (Windows / Mac / Linux / Android), and Safari on iPad.
In future, the latest stable versions of Microsoft Edge and Firefox will also be supported.
Omniscope runs on 64-bit Intel architecture, preferably on systems with at least 2 cores.
Executing ETL workflows
Omniscope has been designed to work with 8 GB of RAM or more; it will use available memory and spill to temporary disk space where needed. Fast SSDs are recommended, and NVMe technology offers the best performance (N.B. an NVMe disk can typically read at more than 5,000 MB/s, roughly 10x faster than a SATA SSD).
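Since spill-to-disk performance depends heavily on the drive backing the temp directory, it can be worth measuring sequential write throughput before committing to a machine. The sketch below is a rough benchmark, not an official tool; the directory it tests defaults to the system temp path and should be pointed at wherever Omniscope's temporary disk space will live.

```python
import os
import tempfile
import time

def write_throughput_mb_s(directory: str, size_mb: int = 64) -> float:
    """Write size_mb of zeros with an fsync and return MB/s (rough sequential figure)."""
    chunk = b"\0" * (1024 * 1024)  # 1 MB buffer
    path = os.path.join(directory, "omniscope_io_test.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk so the timing is honest
    elapsed = time.perf_counter() - start
    os.remove(path)  # clean up the test file
    return size_mb / elapsed

print(f"~{write_throughput_mb_s(tempfile.gettempdir()):.0f} MB/s sequential write")
```

As a rule of thumb from the figures above, results in the thousands of MB/s suggest NVMe-class storage; a few hundred MB/s suggests a SATA SSD.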
For example, executing a typical workflow (record filter, field transformations) over 1 billion records with 20 fields requires at least 200 GB of free disk space, depending on the number and type of blocks in the workflow.
In terms of CPUs, to achieve parallel workflow execution an Omniscope instance will use one CPU core per job, capping parallelism at the number of CPU cores.
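Given the one-job-per-core cap described above, the visible core count of a machine is the upper bound on concurrent workflow jobs. A quick way to check it:

```python
import os

# One workflow job per CPU core, so the core count bounds parallelism.
cores = os.cpu_count() or 1  # cpu_count() can return None on some platforms
print(f"This machine can run up to {cores} workflow jobs in parallel")
```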
Using the bundled data engine:
50 million records with 20 fields requires at least 8 GB RAM and 40 GB disk.
100 million records with 20 fields requires at least 16 GB RAM and 40 GB disk.
300 million records with 20 fields requires at least 36 GB RAM and 100 GB disk.
1 billion records with 20 fields requires at least 100 GB RAM and 300 GB disk.
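For record counts between the published data points, a linear interpolation of the figures above gives a rough starting estimate. The sketch below encodes exactly those vendor figures (20-field datasets); treat the output as a first approximation, not a guarantee.

```python
# (records, minimum RAM in GB, minimum disk in GB) -- the published figures
SIZING = [
    (50_000_000, 8, 40),
    (100_000_000, 16, 40),
    (300_000_000, 36, 100),
    (1_000_000_000, 100, 300),
]

def estimate(records: int) -> tuple[float, float]:
    """Interpolate minimum (RAM GB, disk GB) for a record count with ~20 fields."""
    if records <= SIZING[0][0]:
        return SIZING[0][1], SIZING[0][2]
    for (r0, m0, d0), (r1, m1, d1) in zip(SIZING, SIZING[1:]):
        if records <= r1:
            t = (records - r0) / (r1 - r0)
            return m0 + t * (m1 - m0), d0 + t * (d1 - d0)
    # Beyond 1 billion records, scale the largest data point linearly.
    scale = records / SIZING[-1][0]
    return SIZING[-1][1] * scale, SIZING[-1][2] * scale

ram, disk = estimate(500_000_000)
print(f"~{ram:.0f} GB RAM, ~{disk:.0f} GB disk")
```

At 500 million records this interpolates to roughly 54 GB RAM and 157 GB disk, sitting between the 300-million and 1-billion figures as expected.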
For hosting reports we recommend installing Omniscope (and its Data Engine disk allocation) on an SSD / NVMe disk to avoid slow "memory to disk" I/O disrupting the overall UX.
N.B. These figures were evaluated using sample datasets containing a variety of data types and use cases.
Hosting reports using live/direct query against an external database
This depends on the external database, and these figures are only a rough guide. To support live/direct query (where visualisations and interactions are translated into SQL-like queries against a third-party database), a response time of under 10 seconds is typically desirable.
Example using the Amazon Redshift cloud-based database:
1 billion records and 20 fields requires a 16-node cluster of dc1.large instances.
Example using Impala on Hadoop (with the Parquet file format):
1 billion records and 20 fields requires a 12-node cluster of intermediate commodity hardware.
Still unsure? Get in touch
By all means, before setting up your server, give us a shout with some variables such as dataset volume sizes, expected concurrent users and your use cases (e.g. whether the instance will be used for hosting and/or data processing: ETL / analytics). We'll help you evaluate the best specs for your machine or VM.