LLM Core API

API for calling LLM Core(s)

The LLM Core API relies on three components: the LLMQuery class, the LLMResponse class, and the send_request function.

from typing import Any, Dict, List, Literal, Optional, Union

from pydantic import Field

class LLMQuery(Query):  # Query is the framework's base query class
    query_class: str = "llm"
    # Which model(s) to route to; each entry names a model and its backend.
    llms: Optional[List[Dict[str, Any]]] = Field(default=None)
    # Conversation history in the role/content message format.
    messages: List[Dict[str, Union[str, Any]]]
    # Tool schemas made available to the model for tool_use actions.
    tools: Optional[List[Dict[str, Any]]] = Field(default_factory=list)
    action_type: Literal["chat", "tool_use", "operate_file"] = Field(default="chat")
    message_return_type: Literal["text", "json"] = Field(default="text")
    # Optional structured-output format forwarded to the backend.
    response_format: Optional[Dict[str, Any]] = Field(default=None)

    class Config:
        arbitrary_types_allowed = True
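
For example, a simple chat query can be built directly from the fields above (action_type and message_return_type are shown explicitly even though they match the defaults):

query = LLMQuery(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the plot of Hamlet in one sentence."},
    ],
    action_type="chat",
    message_return_type="text",
)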

The LLM's response follows the LLMResponse structure, which extends the base Response class:

class LLMResponse(Response):  # Response is the framework's base response class
    response_class: str = "llm"
    # The model's text output, if any.
    response_message: Optional[str] = None
    # Tool invocations requested by the model for tool_use actions.
    tool_calls: Optional[List[Dict[str, Any]]] = None
    # Whether the request ran to completion.
    finished: bool = False
    # Error description when the call fails.
    error: Optional[str] = None
    # HTTP-style status code for the call.
    status_code: int = 200

    class Config:
        arbitrary_types_allowed = True
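
A caller can branch on these fields when processing a result. The helper below is an illustrative sketch, not part of the API:

def handle_response(response: LLMResponse) -> str:
    # Surface failures reported by the core.
    if response.error or response.status_code != 200:
        raise RuntimeError(f"LLM call failed: {response.error}")
    # For tool_use actions, tool_calls carries the requested invocations.
    if response.tool_calls:
        print(f"Model requested {len(response.tool_calls)} tool call(s)")
    return response.response_message or ""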

Common rules for LLM Core APIs

When leveraging the LLM Core APIs, you can specify which LLM to use with the llms parameter:

llms = [
    {
        "name": "gpt-4o-mini",  # Model name
        "backend": "openai"     # Backend provider
    }
]
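
Putting the pieces together, a complete request that pins the model through llms could look like the sketch below; the exact send_request signature varies by deployment, so treat the call shown here as an assumption to verify against your setup:

# Hypothetical usage; send_request's signature is assumed, not confirmed.
query = LLMQuery(
    llms=[{"name": "gpt-4o-mini", "backend": "openai"}],
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    action_type="chat",
)
response = send_request(query)  # assumed to return an LLMResponse
print(response.response_message)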
