Let's say you have an intensive workload that needs a ton of CPU horsepower and/or memory, runs every night at 1am, and takes 2 hours.
You want to use a cloud compute provider such as Amazon EC2, Google Compute Engine, or MS Azure, but don't want to pay for this horsepower for the remaining 22 hours in the day.
Here's how to optimise your Omniscope Classic and Evo deployments in terms of cloud costs.
First, provision a cloud machine with the CPU and memory requirements needed for the workflow, and sufficient disk to store the OS and the long-term workload configuration. If you need a high volume of temporary file storage, use a separate disk for that; you'll want to decommission that 2nd disk each day. Consider local storage ("local ssd" in GCE), but be aware of restrictions when stopping vs suspending instances.
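As a sketch in GCE terms (the instance name, zone and image below are illustrative placeholders, not prescriptions; size the machine to your workload), provisioning might look like:

```shell
# Create a high-CPU/high-memory VM with an SSD boot disk plus a local
# SSD for temporary data (local SSD contents are lost when stopped).
# Name, zone and image are placeholder assumptions.
gcloud compute instances create omniscope-worker \
    --zone=europe-west2-a \
    --machine-type=n2-standard-64 \
    --boot-disk-size=128GB \
    --boot-disk-type=pd-ssd \
    --local-ssd=interface=nvme \
    --image-family=debian-12 \
    --image-project=debian-cloud
```

Repeat the `--local-ssd` flag once per 375 GB local SSD you need.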
(In fact, while setting up, you can scale the CPU and memory down, then scale back up for production.)
We recommend using Omniscope Evo on Linux in a cloud environment, since Linux instances are cheaper; otherwise the process is the same. For Omniscope Classic, you'll need a Windows cloud environment; Linux with a desktop environment is not supported.
Next, install, configure and activate Omniscope as normal. Remember that a cloud VM is no different from your laptop or a local server; it's just another computer somewhere (albeit provided virtually). On a Linux environment, consider customising the _launch.sh script if you wish to change the default maximum memory allocation of 50% of VM memory.
Now deploy and test your workload configuration - Omniscope project files, scheduler tasks, and necessary sysadmin aspects such as mounted network drives or VPN configurations. If using a separate temp disk, configure Omniscope Evo to use it by relocating sharing files and temp folders; in Classic, be sure to configure your workflow to use it for any interim storage. We'll assume you've also configured a firewall or VPN in your cloud environment to securely allow access to the Omniscope Evo web-based UI, or a remote desktop session for Omniscope Classic.
You'll also want to configure Omniscope Scheduler, or boot-time scripts, to execute the workload, unless you're controlling it manually.
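If you go the boot-time-script route, a crontab sketch might look like this (the install path and log location are assumptions; adjust them to wherever Omniscope lives on your VM):

```shell
# Run 'crontab -e' on the VM and add a line like this to start
# Omniscope Evo whenever the instance boots.
# /opt/omniscope-evo is an assumed install path.
@reboot /opt/omniscope-evo/_launch.sh >> /var/log/omniscope-boot.log 2>&1
```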
Finally, gracefully shut down the Omniscope Evo process or close the Omniscope Classic desktop app (unless you're suspending the instance), then shut down (or suspend) the cloud instance, being sure not to delete the VM. Don't delete the VM and its disk without first deactivating the license, or you'll lose your Omniscope activation; we typically allow only 3 deactivate/activate license transfer cycles.
The instance will still exist in your cloud console, but won't cost you anything for CPU and memory: when a normal cloud instance is shut down, the provider releases its CPU and memory back into the pool for others to use. You'll still pay for storage (the disk, plus the saved memory state if suspended), but that's a tiny fraction of the cost of a high-CPU/high-memory instance.
When you need to run the workload, e.g. at 1am every night, start or resume the instance. Your cloud provider will allocate CPU and memory, and start billing you. You can automatically start cloud instances using their APIs and/or console SDKs / shell scripting, or manually using their web console.
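For example, with GCE you could drive the 1am start from any small always-on machine with the gcloud CLI installed and authenticated (instance name and zone are placeholders):

```shell
# crontab entry on a controller machine: start the stopped VM
# at 01:00 every night (cron fields: min hour day month weekday).
0 1 * * * gcloud compute instances start omniscope-worker --zone=europe-west2-a
```

GCE also offers native instance schedules, which avoid keeping a controller machine around; check the current GCE documentation for details.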
If you haven't automated starting Omniscope and running the workflow, log into the machine and kick it off.
When it completes, if shutting down the instance, first gracefully shut down Omniscope, then shut down or suspend the instance. Again, both steps can be automated using shell scripting.
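On the VM itself, the wind-down could be sketched as follows (the service name is an assumption; adapt it to however you run Omniscope):

```shell
#!/bin/sh
# Gracefully stop Omniscope first, then power off the instance.
# In GCE, an in-guest shutdown leaves the VM stopped, so CPU and
# memory billing ends.
sudo systemctl stop omniscope   # assumed service name for Omniscope Evo
sudo shutdown -h now            # stops the instance from within
```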
Suspend vs shutdown?
Cloud providers typically let you choose whether to suspend or shutdown an instance.
Suspending means pausing the machine in its running state so it can be restored later, much like hibernation on a desktop PC. The memory state is saved to storage (incurring small storage costs) and restored on resume, so you won't need to shut down Omniscope, or wait for Omniscope and any other services you're running to start up again.
Stopping means gracefully shutting down the instance, identical in concept to shutting down your PC. It'll take longer to start up again, since the OS will need to boot and Omniscope will need to be set up to start automatically. The cloud provider preserves the VM's metadata, and your license remains valid across a shutdown and restart cycle.
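In GCE terms, the two options look like this (instance name and zone are placeholders):

```shell
# Stop/start: full shutdown and reboot cycle
gcloud compute instances stop omniscope-worker --zone=europe-west2-a
gcloud compute instances start omniscope-worker --zone=europe-west2-a

# Suspend/resume: preserves memory state, comes back faster
gcloud compute instances suspend omniscope-worker --zone=europe-west2-a
gcloud compute instances resume omniscope-worker --zone=europe-west2-a
```

As noted above, suspend has restrictions when local SSDs are attached; check the current GCE documentation before relying on it.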
At the time of writing, here is an example cost breakdown for Google Compute Engine, which we primarily use at Visokio. Other cloud providers have similar typical costs, but can't be compared directly because their pricing structures differ.
Example cloud VM configuration needed and cost, given 2 hour per day workflow execution time, running 5 days per week:
- Persistent disk (for OS and workload long-term data/configuration): 128 GB pd-ssd. $22/month, billed continuously regardless of instance state.
- Machine type: n2-standard-64 (64 vCPUs, 256 GB memory), including local temp disk (not retained when shut down): 3 x 375 GB NVMe SSD. $140/month, for 43.5 active hours per month.
- (If you were to provision this machine full-time, it would cost you nearly $2,000/month.)
To cut costs further, explore Spot VMs (GCE's ephemeral offering, formerly "preemptible" instances) or other cloud providers' equivalents. You have to accept the risk of your workload being terminated unexpectedly, however. Also consider adjusting the disk type (slower disks are much cheaper) and using disks (local or otherwise) only while running, for temporary data.
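For example, GCE's ephemeral offering is requested at instance creation time (name and zone are placeholders):

```shell
# Create the worker as a Spot VM at a steep discount; it may be
# preempted (terminated) by the provider at any time.
gcloud compute instances create omniscope-worker \
    --zone=europe-west2-a \
    --machine-type=n2-standard-64 \
    --provisioning-model=SPOT
```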
Drop us a line at firstname.lastname@example.org.