LiteLLM provides a unified interface to various LLM providers, exposing a consistent API across all of them. The list of supported providers is available at https://docs.litellm.ai/docs/providers.
LiteLLM-compatible backends are initialized using string identifiers for models in the format {provider}/{model}, such as:
openai/gpt-4o-mini
anthropic/claude-3.5-sonnet
gemini/gemini-1.5-flash
In the code, these models are initialized and stored as string identifiers in the llms array:
```python
# During initialization
if config.backend == "google":
    config.backend = "gemini"  # Convert backend name for compatibility
prefix = f"{config.backend}/"
if not config.name.startswith(prefix):
    self.llms.append(prefix + config.name)
```
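The prefixing logic above can be sketched as a standalone helper. The function name `build_model_id` is illustrative, not from the source:

```python
def build_model_id(backend: str, name: str) -> str:
    # "google" is normalized to LiteLLM's "gemini" provider prefix
    if backend == "google":
        backend = "gemini"
    prefix = f"{backend}/"
    # Avoid double-prefixing names that already include the provider
    return name if name.startswith(prefix) else prefix + name

print(build_model_id("google", "gemini-1.5-flash"))    # gemini/gemini-1.5-flash
print(build_model_id("openai", "openai/gpt-4o-mini"))  # openai/gpt-4o-mini
```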
Standard Text Input
For standard text generation, LiteLLM backends process requests using the completion() function:
The system passes the messages array, temperature, and max_tokens parameters to the completion function.
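A minimal sketch of how such a request might be assembled, assuming one of the model identifiers above; `messages`, `temperature`, and `max_tokens` are standard LiteLLM `completion()` parameters, while the concrete values here are illustrative:

```python
# Build the keyword arguments passed to litellm.completion()
model = "openai/gpt-4o-mini"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize LiteLLM in one sentence."},
]
params = {
    "model": model,
    "messages": messages,
    "temperature": 0.7,
    "max_tokens": 256,
}
# The actual call and response handling (requires provider credentials):
# response = litellm.completion(**params)
# text = response.choices[0].message.content
print(sorted(params))
```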
Tool Calls
When processing requests with tools, the tool definitions are added to the completion parameters:
Note that some providers, such as OpenAI and Anthropic, do not allow the "/" character in tool names. Tool names are therefore converted with the slash_to_double_underscore function before the tool definitions are passed to the provider.
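A plausible sketch of that conversion; the implementation below is inferred from the function's name, and the tool definition is a hypothetical example:

```python
def slash_to_double_underscore(name: str) -> str:
    # Providers such as OpenAI and Anthropic reject "/" in tool names,
    # so each slash is replaced with a double underscore.
    return name.replace("/", "__")

# Hypothetical tool definition whose original name contains a slash
tool = {
    "type": "function",
    "function": {
        "name": slash_to_double_underscore("filesystem/read_file"),
        "description": "Read a file from disk.",
        "parameters": {"type": "object", "properties": {}},
    },
}
print(tool["function"]["name"])  # filesystem__read_file
```

The converted names must be mapped back (double underscore to slash) when dispatching the model's tool calls to the original tools.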