# LLM Core(s)

AIOS wraps different LLMs as LLM Cores to provide a unified interface for routing requests to both cloud LLM APIs and locally hosted LLMs.

AIOS supports three main categories of backends:

1. LiteLLM compatible backends (cloud and local)
2. vLLM backends (local)
3. Huggingface backends (local)

Each backend type handles different input scenarios, including standard text generation, tool calling, and JSON-formatted responses. This document explains how each backend processes these inputs and the implementation details.
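To make the three input scenarios concrete, the sketch below assembles backend-agnostic requests in an OpenAI-style chat format (the format LiteLLM and vLLM accept). This is illustrative only: the helper name `build_request` and the exact field names are assumptions, not AIOS's actual request schema.

```python
def build_request(messages, tools=None, response_format=None):
    """Assemble a backend-agnostic request dict (hypothetical helper)."""
    request = {"messages": messages}
    if tools is not None:
        # Tool schemas are passed through; LiteLLM/vLLM forward them
        # natively, while a Huggingface core must encode them itself.
        request["tools"] = tools
    if response_format is not None:
        # e.g. {"type": "json_object"} to request JSON-formatted output
        request["response_format"] = response_format
    return request

# The three input scenarios every backend must handle:
standard = build_request([{"role": "user", "content": "Hello"}])

tool_call = build_request(
    [{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
)

json_mode = build_request(
    [{"role": "user", "content": "List three colors as JSON"}],
    response_format={"type": "json_object"},
)
```

Each backend then translates this shared shape into whatever its underlying API expects.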

| Feature        | LiteLLM Compatible                 | vLLM                               | Huggingface                              |
| -------------- | ---------------------------------- | ---------------------------------- | ---------------------------------------- |
| Standard Input | Uses completion function           | Uses OpenAI client                 | Uses generate method                     |
| Tool Calls     | Native support via tools parameter | Native support via tools parameter | Uses message merging and custom decoding |
| JSON Responses | Uses format="json"                 | Uses format="json"                 | Uses message merging                     |
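The "message merging" the table mentions for Huggingface backends can be sketched as follows: since there is no native `tools` parameter, tool schemas are folded into the message list as text, and the model's raw output is later parsed with custom decoding. The helper below is a hypothetical illustration of the idea, not AIOS's actual implementation.

```python
import json

def merge_tools_into_messages(messages, tools):
    """Fold tool schemas into the prompt for backends without a native
    `tools` parameter (hypothetical sketch of 'message merging')."""
    tool_text = "You can call these tools:\n" + json.dumps(tools, indent=2)
    # Prepend a system message describing the tools; recovering the tool
    # call from the model's raw text is then left to custom decoding.
    return [{"role": "system", "content": tool_text}] + list(messages)

merged = merge_tools_into_messages(
    [{"role": "user", "content": "Search for AIOS"}],
    [{"name": "web_search", "parameters": {"query": "string"}}],
)
```

The same merging trick covers JSON responses: the formatting instruction is written into the prompt rather than passed as a `format="json"` parameter.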

Details for each backend are documented below:

* [LiteLLM compatible backends](https://docs.aios.foundation/aios-docs/aios-kernel/llm-cores/litellm-compatible-backend)
* [vLLM backends](https://docs.aios.foundation/aios-docs/aios-kernel/llm-cores/vllm-backend)
* [Huggingface local backends](https://docs.aios.foundation/aios-docs/aios-kernel/llm-cores/hugging-face-backend)
