Setting up an Azure OpenAI provider


A short guide to configuring Omniscope's AI Settings with an Azure OpenAI provider. (Microsoft Azure OpenAI is a service providing the same models as OpenAI, but hosted within the Azure portal and with a more enterprise slant.)


Instructions



Azure model deployment in AI Foundry


Go to the Azure AI Foundry portal (https://ai.azure.com).


Now "Create a deployment". 

In this case I've chosen "gpt-4.1", the flagship non-reasoning model at the time of writing.


Click "Confirm", and you'll see...


Review the deployment name and make a note of it, then click "Create resource and deploy". This may take a minute.

Once the resource is created, you'll see the required properties for Omniscope:


You'll need the Endpoint "Target URI" and "Key", and the Deployment info "Name".
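
Before entering these values in Omniscope, you can optionally sanity-check them outside the product. Here is a minimal sketch using the official openai Python package (all values below are placeholders for the Endpoint, Key and Deployment Name you just noted; the api_version is an assumption and may need adjusting for your resource):

    # Minimal sanity check of an Azure OpenAI deployment.
    # All values are placeholders -- substitute the Endpoint, Key and
    # Deployment Name you noted from the Azure AI Foundry deployment page.
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # base of the Target URI
        api_key="YOUR-KEY",
        api_version="2024-06-01",  # assumption: use a version your resource supports
    )

    response = client.chat.completions.create(
        model="gpt-4.1",  # your Deployment Name
        messages=[{"role": "user", "content": "Say hello"}],
    )
    print(response.choices[0].message.content)

If this prints a reply, the endpoint, key and deployment name are valid and ready to paste into Omniscope.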



Omniscope setup


Now open Omniscope. 


Open Admin > AI Settings:


You'll see an empty AI Providers section initially.


Click "Add provider" and select "Azure OpenAI". 

From the Azure AI Foundry deployment page shown in the previous section, copy the Endpoint "Target URI" and "Key" into Omniscope's "Azure endpoint" and "API key" settings, respectively.

Also paste the Deployment info "Name" into the "List/restrict models" setting and press Enter.
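
If you'd like to double-check the three values exactly as pasted, here is a similar sketch using Python's requests library. It assumes the Target URI copied from Foundry is a full chat-completions URL including the api-version query parameter, which is the typical format; the values are again placeholders:

    # Raw REST check using the Target URI exactly as copied from Azure AI Foundry.
    # Assumes the Target URI is a full chat-completions URL including api-version.
    import requests

    target_uri = "https://YOUR-RESOURCE.openai.azure.com/openai/deployments/YOUR-DEPLOYMENT/chat/completions?api-version=2024-06-01"  # placeholder
    api_key = "YOUR-KEY"  # placeholder

    resp = requests.post(
        target_uri,
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json={"messages": [{"role": "user", "content": "Say hello"}]},
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])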


Now continue with the guide "How to enable AI in Omniscope", section "Integrations: Report Ninja".


In particular, select the Azure OpenAI model you configured above in the integration's "Default model" menu before saving the settings and trying, for example, the Instant Dashboard.
