Let's say you have an intensive workload that needs a ton of CPU horsepower and/or memory, runs every night at 1am, and takes 2 hours.
You want to use a cloud compute provider such as Amazon EC2, Google Compute Engine, or MS Azure, but don't want to pay for this horsepower for the remaining 22 hours in the day.
Here's how to optimise your Omniscope Classic and Evo deployments in terms of cloud costs.
First, provision a cloud machine with the CPU and memory requirements needed for the workflow, and sufficient disk to store the OS and the long-term workload configuration. If you need a high volume of temporary file storage, use a separate disk for that; you'll want to decommission that second disk each day.
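As a concrete sketch, provisioning such a machine on Google Compute Engine might look like the following. The instance name, zone and sizes are placeholders; adjust them to your workload.

```shell
# Create a high-spec instance with a persistent SSD boot disk (retained
# across shutdowns) and a local NVMe SSD for temporary files (discarded
# when the instance stops). Repeat --local-ssd for additional temp disks.
gcloud compute instances create omniscope-worker \
  --zone=europe-west2-a \
  --machine-type=n2-standard-64 \
  --boot-disk-size=128GB \
  --boot-disk-type=pd-ssd \
  --local-ssd=interface=nvme
```

While setting up, you can substitute a small machine type here and resize later, as noted below.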
(In fact, while setting up, you can scale down the CPU to (say) 2 low-spec CPUs, but should set the memory correctly, since Omniscope's configuration will need to state the available memory.)
We recommend running Omniscope Evo on Linux in a cloud environment, since Linux instances are cheaper; the choice of OS doesn't otherwise affect the process. For Omniscope Classic, you'll want a Windows cloud environment; Linux with a desktop environment is not fully supported.
Next, install, configure and activate Omniscope as normal. Remember that a cloud VM is no different to your laptop or a local server; it's just another computer somewhere (albeit provided virtually). On a Linux environment, don't forget to configure the _launch.sh script as documented, specifying the memory for your intensive workflow: this should be a large fraction of the cloud instance's allocated memory, leaving some headroom for the OS.
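As a rough sketch, you can derive a sensible memory figure from the instance's physical RAM; the 80% fraction is an assumption, and the exact setting name to put it in is documented in _launch.sh itself:

```shell
# Derive ~80% of physical RAM (in GB) as the memory allocation to
# configure in Omniscope's _launch.sh; the remaining ~20% is left for
# the OS and file cache. The 80% figure is an illustrative assumption.
total_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
omniscope_gb=$(( total_kb * 80 / 100 / 1024 / 1024 ))
echo "Allocate ${omniscope_gb} GB to Omniscope in _launch.sh"
```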
Now deploy and test your workload configuration: Omniscope project files, Scheduler tasks, and any necessary sysadmin aspects such as mounted network drives or VPN configuration. If using a separate temp disk, configure Omniscope Evo to use it by relocating the sharing files and temp folders; in Classic, be sure to configure your workflow to use it for any interim storage. I'll assume you've also configured a firewall or VPN in your cloud to securely allow access to the Omniscope Evo web-based UI, or a remote desktop session to use Omniscope Classic.
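One way to restrict access on GCE is a firewall rule limited to a trusted source range. The rule name, port and IP range below are placeholders; the port Evo's web UI listens on depends on your configuration:

```shell
# Allow the Omniscope Evo web UI only from a trusted office IP range.
# tcp:443 and 203.0.113.0/24 are assumptions; substitute your own port
# and address range. The rule applies to instances tagged "omniscope".
gcloud compute firewall-rules create allow-omniscope-ui \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443 \
  --source-ranges=203.0.113.0/24 \
  --target-tags=omniscope
```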
You'll also want to configure Omniscope Scheduler, or boot-time scripts, to execute the workload, unless you're controlling it manually.
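If you go the boot-time-script route rather than Omniscope Scheduler, a cron @reboot entry is one simple option. The wrapper script path here is hypothetical; it stands in for however you start Omniscope and trigger the workflow:

```shell
# Run the workload automatically whenever the instance boots.
# /opt/omniscope/run-workload.sh is a hypothetical wrapper that starts
# Omniscope, triggers the workflow, and shuts everything down after.
(crontab -l 2>/dev/null; echo "@reboot /opt/omniscope/run-workload.sh") | crontab -
```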
Finally, gracefully shut down the Omniscope Evo process (or close the Omniscope Classic desktop app), then shut down the cloud instance, being sure to retain the primary OS disk on which Omniscope is installed and activated, and which contains your workload configuration. Don't delete this disk: unless you properly deactivate the license first, you would lose your Omniscope activation, and we typically allow only 3 deactivate/activate license transfer cycles.
The instance will still exist in your cloud console, but won't cost you anything for CPU and memory: when a normal cloud instance is shut down, the provider releases its CPU and memory back into the pool for others to use. You'll still pay for the primary disk, but that's a tiny fraction of the cost of your high-CPU/memory instance.
When you need to run the workload, e.g. at 1am every night, start the instance. Your cloud provider will allocate CPU and memory, and start billing you. You can automatically start cloud instances using their APIs and/or console SDKs / shell scripting, or manually using their web console.
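For the nightly 1am start, a cron entry on any always-on machine with gcloud credentials can do it; this sketch assumes the instance name and zone used when provisioning:

```shell
# crontab entry: start the instance at 01:00 every night.
# "omniscope-worker" and the zone are placeholders from the earlier
# provisioning step; billing for CPU/memory resumes at this point.
0 1 * * * gcloud compute instances start omniscope-worker --zone=europe-west2-a
```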
If you haven't automated starting Omniscope and running the workflow, log into the machine and kick it off.
When it completes, gracefully shutdown Omniscope then the instance. Again, both can be automated using shell scripting.
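The teardown can be scripted inside the VM itself; on GCE, powering off from within the guest leaves the instance in the stopped (non-billing) state. The Omniscope service name below is a placeholder for however your deployment stops Evo gracefully:

```shell
# End-of-workload teardown, run inside the instance.
systemctl stop omniscope   # placeholder service name: stop Evo gracefully
sync                       # flush pending writes to the persistent disk
sudo poweroff              # stops the instance; CPU/memory billing ends
```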
At the time of writing, here is an example cost breakdown for Google Compute Engine, which we primarily use at Visokio. Other cloud providers have similar typical costs, but can't be compared directly because their pricing structures differ.
Example cloud VM configuration needed and cost, given 2 hour per day workflow execution time, running 5 days per week:
- Persistent disk (for OS and long-term workload data/configuration): 128 GB pd-SSD. $22/month, always active in terms of billing.
- Machine type: n2-standard-64 (64 CPUs, 256 GB memory), including local temp disk (not retained when shut down): 3x375 GB NVMe SSD. $140/month for 43.5 monthly active hours. (If you were to provision this machine full-time, it'd cost you nearly $2k/month.)
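The active-hours figure follows from simple arithmetic, reproduced here as a sketch; the hourly rate is implied from the figures above, not a quoted price:

```shell
# 2 h/day on weekdays: average days per month (365/12) * weekday
# fraction (5/7) * 2 hours, then the implied hourly machine rate.
awk 'BEGIN {
  hours = 365/12 * 5/7 * 2;     # monthly active hours, about 43.5
  rate  = 140 / hours;          # implied $/hour for the machine type
  printf "%.1f hours, $%.2f/hour\n", hours, rate
}'
```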
To cut costs further, explore preemptible/Spot instances (as GCE calls them) or other cloud providers' equivalents. You have to be OK with the risk of your workload being terminated unexpectedly, however.
Drop us a line at firstname.lastname@example.org.