Configure LLM model

Witty needs a valid LLM model. This section describes how to add an LLM configuration.

LLM structure

An LLM configuration is composed of these fields:

  • provider: LLM provider;
  • api_key: API key of the provider;
  • endpoint: URL where the LLM model is hosted;
  • api_version: API version defined by the provider;
  • model: LLM model name;
  • deployment: deployment name; it may differ from model.

Here's an example of an LLM configuration:

{
  "provider": "azure_openai",
  "api_key": "xxx",
  "endpoint": "https://xxx.cognitiveservices.azure.com/",
  "api_version": "2025-01-01-preview",
  "model": "gpt-4o",
  "deployment": "gpt-4o"
}
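
The same structure can be modeled in application code. Below is a minimal sketch using a Python dataclass; the LLMConfig class name is illustrative and not part of Witty:

from dataclasses import dataclass, asdict
import json

@dataclass
class LLMConfig:
    provider: str     # LLM provider, e.g. "azure_openai"
    api_key: str      # API key of the provider
    endpoint: str     # URL where the LLM model is hosted
    api_version: str  # API version defined by the provider
    model: str        # LLM model name
    deployment: str   # deployment name; may differ from model

config = LLMConfig(
    provider="azure_openai",
    api_key="xxx",
    endpoint="https://xxx.cognitiveservices.azure.com/",
    api_version="2025-01-01-preview",
    model="gpt-4o",
    deployment="gpt-4o",
)

# Serialize to the JSON body expected by the configuration API
print(json.dumps(asdict(config), indent=2))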

Interacting with the LLM

Currently, the following APIs are available to interact with the LLM configuration (a usage sketch in Python follows the list):

  • GET /witty/v1/llm/config: retrieve the LLM configuration;
  • POST /witty/v1/llm/config: create or edit an LLM configuration. The body is a JSON object in the LLM structure seen above;
  • POST /witty/v1/llm/chat: chat with the LLM. The body is a JSON object with this format:
{
  "query": "Some text"
}
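
As an illustration, here is a minimal sketch of calling these endpoints with Python's requests library. The base URL http://localhost:8080 is an assumption; replace it with the address of your Witty instance:

import requests

BASE_URL = "http://localhost:8080"  # assumption: replace with your Witty instance

# Create or update the LLM configuration
config = {
    "provider": "azure_openai",
    "api_key": "xxx",
    "endpoint": "https://xxx.cognitiveservices.azure.com/",
    "api_version": "2025-01-01-preview",
    "model": "gpt-4o",
    "deployment": "gpt-4o",
}
resp = requests.post(f"{BASE_URL}/witty/v1/llm/config", json=config)
resp.raise_for_status()

# Retrieve the current LLM configuration
current = requests.get(f"{BASE_URL}/witty/v1/llm/config")
print(current.json())

# Chat with the configured LLM
answer = requests.post(f"{BASE_URL}/witty/v1/llm/chat", json={"query": "Some text"})
print(answer.json())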

Supported models

Currently, the Witty microservice has been tested against the following models and providers:

LLM Provider    Model
azure_openai    gpt-35-turbo
azure_openai    gpt-4.5-preview
azure_openai    gpt-4o
azure_openai    o1
azure_openai    o3-mini

The o1-mini model is currently not supported due to an OpenAI limitation.