# LiteLLM Compatible Backend

LiteLLM is a unified interface for various LLM providers, offering a consistent API across different model providers. Supported model providers can be found at <https://docs.litellm.ai/docs/providers>.

LiteLLM-compatible backends are initialized using string identifiers for models in the format `{provider}/{model}`, such as:

* `openai/gpt-4o-mini`
* `anthropic/claude-3.5-sonnet`
* `gemini/gemini-1.5-flash`

In the code, these models are initialized and stored as string identifiers in the `llms` array:

```python
# During initialization
if config.backend == "google":
    config.backend = "gemini"  # Convert backend name for compatibility

prefix = f"{config.backend}/"
if not config.name.startswith(prefix):
    self.llms.append(prefix + config.name)
```
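The prefixing logic above can be sketched as a standalone helper; the function name `to_litellm_id` is illustrative, not part of the AIOS API:

```python
def to_litellm_id(backend: str, name: str) -> str:
    """Build a LiteLLM model identifier from an AIOS backend config."""
    # Map AIOS backend names to LiteLLM provider prefixes
    if backend == "google":
        backend = "gemini"
    prefix = f"{backend}/"
    # Avoid double-prefixing names that already carry a provider
    return name if name.startswith(prefix) else prefix + name
```

For example, `to_litellm_id("google", "gemini-1.5-flash")` yields `"gemini/gemini-1.5-flash"`, while a name that already carries its provider prefix is left unchanged.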

**Standard Text Input**

For standard text generation, LiteLLM backends process requests using the `completion()` function:

```python
completion_kwargs = {
    "messages": messages,
    "temperature": temperature,
    "max_tokens": max_tokens
}

completed_response = completion(model=model, **completion_kwargs)
return completed_response.choices[0].message.content, True
```

The system passes the messages array, temperature, and max\_tokens parameters to the completion function.

**Tool Calls**

When processing requests with tools, the tool definitions are added to the completion parameters:

```python
if tools:
    tools = slash_to_double_underscore(tools)  # Replace "/" in tool names with "__"
    completion_kwargs["tools"] = tools
    
completed_response = completion(model=model, **completion_kwargs)
completed_response = decode_litellm_tool_calls(completed_response)
return completed_response, True
```

{% hint style="warning" %}
Note that some providers, such as OpenAI and Anthropic, do not allow the "/" character in tool names. The [`slash_to_double_underscore`](https://github.com/agiresearch/AIOS/blob/main/aios/llm_core/utils.py) function is therefore required to convert the tool names before the request is sent.
{% endhint %}

The [`decode_litellm_tool_calls()`](https://github.com/agiresearch/AIOS/blob/main/aios/llm_core/utils.py) function processes the raw response to extract and format the tool calls.
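The name conversion described in the hint above can be approximated as follows; the actual `slash_to_double_underscore` in AIOS may differ in detail, and the tool schema here is illustrative:

```python
def slash_to_double_underscore(tools: list) -> list:
    """Sanitize tool names so providers that reject "/" accept them."""
    for tool in tools:
        fn = tool.get("function", {})
        if "name" in fn:
            # e.g. "demo/agent/search" -> "demo__agent__search"
            fn["name"] = fn["name"].replace("/", "__")
    return tools
```

A symmetric decoding step (mapping `__` back to `/`) is then needed when extracting tool calls from the response, which is part of what `decode_litellm_tool_calls()` handles.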

**JSON-Formatted Responses**

For JSON-formatted responses, the adapter sets the `format` parameter and, when a schema is supplied, forwards it as `response_format`:

```python
if message_return_type == "json":
    completion_kwargs["format"] = "json"
    if response_format:
        completion_kwargs["response_format"] = response_format
```
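Putting the three cases together, the assembly of `completion_kwargs` can be sketched as a pure function; the name `build_completion_kwargs` and its signature are illustrative, not the AIOS API:

```python
def build_completion_kwargs(messages, temperature, max_tokens,
                            tools=None, message_return_type=None,
                            response_format=None):
    """Assemble the keyword arguments passed to litellm's completion()."""
    kwargs = {
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    if tools:
        # Tool names are assumed to be sanitized already
        kwargs["tools"] = tools
    if message_return_type == "json":
        kwargs["format"] = "json"
        if response_format:
            kwargs["response_format"] = response_format
    return kwargs
```

The resulting dictionary is then expanded into the call as `completion(model=model, **completion_kwargs)`.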
