# LLM Core API

The LLM Core API is built around three components: the `LLMQuery` class, the `LLMResponse` class, and the `send_request` function. A query to an LLM is described by the `LLMQuery` class:

```python
from typing import Any, Dict, List, Literal, Optional, Union

from pydantic import Field

# Query is the base query class provided by AIOS.
class LLMQuery(Query):
    query_class: str = "llm"
    llms: Optional[List[Dict[str, Any]]] = Field(default=None)  # candidate models (see below)
    messages: List[Dict[str, Union[str, Any]]]  # chat history, e.g. [{"role": ..., "content": ...}]
    tools: Optional[List[Dict[str, Any]]] = Field(default_factory=list)  # tool schemas for "tool_use"
    action_type: Literal["chat", "tool_use", "operate_file"] = Field(default="chat")
    message_return_type: Literal["text", "json"] = Field(default="text")  # desired reply format
    response_format: Optional[Dict[str, Any]] = Field(default=None)  # optional structured-output spec

    class Config:
        arbitrary_types_allowed = True
```
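
As a concrete illustration, a chat query asking for a JSON reply could be constructed as follows (the message content and model entry are illustrative, and the messages are assumed to follow the usual role/content chat convention):

```python
query = LLMQuery(
    llms=[{"name": "gpt-4o-mini", "backend": "openai"}],  # optional; omit to use the default model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List three prime numbers."},
    ],
    action_type="chat",              # plain chat, no tool use
    message_return_type="json",      # ask for the reply in JSON form
)
```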

The LLM's response is returned as an `LLMResponse`, which extends the base `Response` class:

```python
# Response is the base response class provided by AIOS.
class LLMResponse(Response):
    response_class: str = "llm"
    response_message: Optional[str] = None  # the model's textual reply
    tool_calls: Optional[List[Dict[str, Any]]] = None  # tool invocations requested by the model
    finished: bool = False  # whether generation completed
    error: Optional[str] = None  # error message, if the request failed
    status_code: int = 200  # HTTP-style status of the request

    class Config:
        arbitrary_types_allowed = True
```
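
Putting the two together, a request/response round trip might look like the sketch below, using the query constructed above. The exact signature and return type of `send_request` (here assumed to take an agent name and a query, and to return an `LLMResponse`) should be checked against your AIOS version:

```python
# Assumption: send_request(agent_name, query) returns an LLMResponse.
response = send_request(agent_name="demo_agent", query=query)

if response.error is not None:
    print(f"Request failed ({response.status_code}): {response.error}")
elif response.tool_calls:
    # The model chose to invoke tools rather than answer directly.
    for call in response.tool_calls:
        print("Tool call:", call)
else:
    print("Reply:", response.response_message)
```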

### Common rules for LLM Core APIs

When calling the LLM Core APIs, you can specify which LLM to use with the `llms` parameter:

```python
llms = [
    {
        "name": "gpt-4o-mini",  # Model name
        "backend": "openai"     # Backend provider
    }
]
```

{% hint style="warning" %}
If you pass multiple LLMs as backends, AIOS automatically applies LLM routing to select one model from your list for each query. The selection balances each model's capability on the task against the cost of using it. This feature is detailed in [LLM routing](https://docs.aios.foundation/aios-docs/aios-kernel/llm-cores/llm-routing).
{% endhint %}
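
For example, passing two candidate models lets the router pick between them per query (both entries below are illustrative; any model/backend pair supported by your AIOS deployment works):

```python
llms = [
    {"name": "gpt-4o-mini", "backend": "openai"},
    {"name": "claude-3-5-haiku-20241022", "backend": "anthropic"},  # illustrative second candidate
]
```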
