AI Block Setup

Python Environment


The AI Block is a special Python block which lets a user create custom Python code by specifying the functionality of the block in natural language. This means that for the block to work, your Python environment needs to be set up correctly. For guidance on how to set up the Python environment, please check this article: Setting up the Custom Block.



OpenAI


Disclaimer

In its current form, the AI Block utilises ChatGPT, an OpenAI product. ChatGPT is a service hosted on OpenAI servers, which means that certain data, such as the data schema, field names, and the results of certain analyses, are transmitted to OpenAI. If you work with sensitive data, please make sure this poses no problems. In the future, we will support AI services hosted directly on your machines so that you can also work safely with sensitive data.


API key

In order to utilise OpenAI's ChatGPT functionality within Omniscope, you need to generate an OpenAI API key. This requires a paid OpenAI subscription. The usage costs are calculated on a per-token (roughly a word or a syllable) basis and depend on the exact model used within the block. Currently ChatGPT 4 and ChatGPT 3.5 are supported, and you can choose which to use before, and even during, a chat. ChatGPT 4, being the qualitatively superior model, incurs higher costs.



To obtain an OpenAI API key, please follow these steps:


  1. Open an account on the OpenAI website.
  2. Set up a payment method on this page.
  3. Generate an API key here.
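
Before entering the key into Omniscope, you may want to confirm that it is active. The following is a minimal sketch (not an Omniscope feature) using the official openai Python package; the placeholder key and the model name are assumptions to be replaced with your own values.

```python
# Minimal sketch to verify a freshly generated OpenAI API key.
# Assumes the official "openai" Python package (v1+) is installed:
#   pip install openai
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder: paste your own key

# A one-line chat completion; any successful response confirms the
# key is active and the account has billing set up.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```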




Omniscope OpenAI API key configuration


Once an API key is obtained, you can use it within Omniscope in three ways:



a) Global server configuration


In the AI-integration section of the admin menu you can specify an OpenAI API key, a model, and some advanced settings specifying how many retries Omniscope should perform in case recoverable problems occur during a chat with ChatGPT.





Please note that the only obligatory setting for using OpenAI functionality is the OpenAI key. It is not required to choose a model; in fact, choosing a model here will limit the functionality to that one model server-wide. Use this setting only for troubleshooting; otherwise, it is best to let the user (and Omniscope) decide which model to use in a given context.


The OpenAI key and the other settings are server-wide, which means that everyone with access to the server will be able to use the AI integration with this specific API key.


If Diagnostic mode is enabled, additional verbose information will be printed to the AI Interpreter and main logs. It should generally not be necessary to turn it on unless you are asked to do so, for example during troubleshooting.





b) Folder configuration


It is also possible to specify OpenAI keys (and model overrides) for folders and their subfolders. To do so, click on "Edit Permissions" in the 3-dots menu within the File list page (see screenshot).



Scrolling down in the dialog that opens reveals an input for an OpenAI API key, as well as another input to fix the model. Each project in this folder and its subfolders will use the API key and the model, if specified.





c) AI Chat Dialog


The third way to set an OpenAI API key is directly inside the AI chat. Even if no server-wide or folder-wide OpenAI settings are defined, it is possible to use the AI Block by configuring the API key directly in the chat settings. Please note that the key is stored inside the block, so if you share the workflow with another person, they will be able to use the block, and therefore the API key, possibly incurring costs.






Azure OpenAI


Disclaimer

In its current form, the AI Block utilises Azure ChatGPT, an OpenAI product hosted on Microsoft Azure servers, which means that certain data, such as the data schema, field names, and the results of certain analyses, are transmitted to OpenAI and Microsoft. If you work with sensitive data, please make sure this poses no problems. In the future, we will support AI services hosted directly on your machines so that you can also work safely with sensitive data.


API key

In order to utilise Azure OpenAI functionality within Omniscope, you need 4 things:

  1. An Azure subscription and access to the Azure OpenAI portal
  2. An Azure OpenAI API key
  3. An Azure OpenAI endpoint
  4. A deployed model


Azure Portal


In the Azure Portal, locate Azure OpenAI services and create a new service.



In the service, locate Keys and Endpoint, and set up your keys and the corresponding endpoint. The endpoint will be used later within Omniscope to point the AI to the right deployment.



Now locate Model deployments and click on the button "Manage Deployments".



Click on "Create new deployment" in order to create an OpenAI-based model to use within Omniscope.




Finally, select a model to deploy and give it a name. It is this deployment name that you need to configure in the Omniscope Azure integration settings.
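
At this point you can optionally sanity-check the deployment outside Omniscope. Below is a minimal sketch using the official openai Python package; the key, endpoint, API version, and deployment name are placeholders for the values from your own resource.

```python
# Minimal sketch to confirm an Azure OpenAI deployment responds.
# Assumes the official "openai" Python package (v1+) is installed.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your-azure-openai-key>",
    api_version="2024-02-01",  # placeholder: use a version your resource supports
    azure_endpoint="https://<your-resource>.openai.azure.com",
)

# Note: "model" takes the deployment name you chose above,
# not the name of the underlying base model.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```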

Omniscope Azure deployment configuration


Once the Azure deployment is fully configured, you can use it within Omniscope in three ways:



a) Global server configuration


In the AI-integration section of the admin menu you can specify an Azure OpenAI API key, an endpoint, a deployed model, and some advanced settings specifying how many retries Omniscope should perform in case recoverable problems occur during a chat with ChatGPT.





Please note that for the Azure integration to work properly, all three Azure OpenAI settings are obligatory.


The Azure OpenAI key and the other settings are server-wide, which means that everyone with access to the server will be able to use the AI integration with this specific API key, endpoint, and model.




b) Folder configuration


It is also possible to specify Azure OpenAI settings for folders and their subfolders. To do so, click on "Edit Permissions" in the 3-dots menu within the File list page (see screenshot).



Scrolling down in the dialog that opens reveals inputs for an Azure OpenAI API key, the endpoint, and the model. All Azure settings need to be configured for the integration to work.



c) AI Chat Dialog


The third way to set up Azure is directly inside the AI chat. In the chat configuration, you can set an Azure API key. Note that you cannot specify an endpoint or a deployed model; these must be defined either in the admin configuration or the folder settings.





Local LLMs


Omniscope provides preliminary support for connecting to local LLMs. A local LLM is one that resides within your organisation, whether directly on your computer, on a local company server, or in the cloud. The advantage of running your own local LLM is that you are in complete control of costs and the security of your data. No data will leave your company's premises, so to speak, as opposed to services such as Azure or OpenAI, for which data is transmitted to their data centres.


Current support

Current support for local LLMs is limited to those which follow OpenAI's API schema and specification. This means the local LLM server must accept and respond to the API calls Omniscope performs in exactly the same way the OpenAI service does.

Luckily, OpenAI's API schema is followed by many services and models, so it is easy to find one that supports it. As an example, we have experimented with Llamafile and local.ai. The models made available by these tools support chatting with your data; however, as a current limitation, they do not support some advanced OpenAI functionality out of the box, which is why the "build and execute" button in the AI Block chat dialog will not work with them.
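
To illustrate what "follows OpenAI's API schema" means in practice, here is a minimal sketch (outside Omniscope) in which the standard openai Python client talks to a local server simply by overriding the base URL. The address and model name are placeholders for whatever your own server exposes.

```python
# Minimal sketch: the same "openai" client, pointed at a local
# OpenAI-compatible server instead of api.openai.com.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # placeholder: your local server's address
    api_key="not-needed",  # many local servers ignore the key, but the client requires one
)

response = client.chat.completions.create(
    model="<local-model-name>",  # placeholder: a model your server serves
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```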




Omniscope Local LLM configuration


Once your local server is set up, you usually need to know two things:

  • The address of the server
  • The name of the model



a) Global server configuration


In the AI-integration section of the admin menu you can specify the address of the server (i.e. the endpoint) and a model name. The model name depends on the model(s) your LLM server supports; these are usually mentioned in its documentation.
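
If the documentation is unclear, many OpenAI-compatible servers also answer the standard model-listing call, so you can ask the server itself which names it accepts. A small sketch, assuming the openai Python package and a placeholder address:

```python
# Query an OpenAI-compatible local server for the model names it serves.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
for model in client.models.list():
    print(model.id)  # each id is a candidate model name to enter in Omniscope
```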








b) Folder configuration


Just as with the OpenAI and Azure services, you can also specify a local LLM in the folder settings. Similarly, you can specify an endpoint address as well as a model name. This local model will then be available in all projects in the same folder or any of its subfolders.


In the example we are pointing Omniscope to Llamafile running on a VM in our network, using the Bakllava LLM.









