llm_call_tool: Direct Tool Invocation

Explicitly instructs the language model to invoke the specified tools when responding to the user input.

def llm_call_tool(
    agent_name: str, 
    messages: List[Dict[str, Any]], 
    tools: List[Dict[str, Any]], 
    base_url: str = aios_kernel_url,
    llms: List[Dict[str, Any]] = None
) -> LLMResponse

Parameters:

  • agent_name: Identifier for the agent making the request

  • messages: List of message dictionaries, each with role and content keys

  • tools: List of tool specifications, each defining a name, description, and JSON Schema parameters

  • base_url: Base URL of the AIOS kernel API (defaults to aios_kernel_url)

  • llms: Optional list of LLM configurations (see the sketch after this list)
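
The llms parameter is not exercised in the full example below, so here is a minimal sketch of passing an explicit model configuration. The name and backend keys are an assumption about the configuration format and may differ in your AIOS kernel setup.

# Hypothetical sketch: pin the call to a specific model via llms.
# The "name" and "backend" keys are assumed, not documented here.
forecast_tool = {
    "name": "weather_service/get_forecast",
    "description": "Get weather forecast for a location",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"]
    }
}

response = llm_call_tool(
    "weather_agent",
    messages=[{"role": "user", "content": "Any rain expected in Boston?"}],
    tools=[forecast_tool],
    llms=[{"name": "gpt-4o-mini", "backend": "openai"}]
)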

Returns:

  • LLMResponse object containing tool calls and their results

Example:

# Use weather tool to get forecast
response = llm_call_tool(
    "weather_agent",
    messages=[
        {"role": "user", "content": "What's the weather like in New York today?"}
    ],
    tools=[{
        "name": "weather_service/get_forecast",
        "description": "Get weather forecast for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "units": {"type": "string", "enum": ["metric", "imperial"]}
            },
            "required": ["location"]
        }
    }]
)

# Print the model's reply produced after the tool call
print(response["response"]["response_message"])
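Because llm_call_tool goes through the AIOS kernel over HTTP, the nested keys may be absent if the call fails. The guard below only relies on the access pattern shown above and is an illustrative sketch rather than documented behavior.

# Defensive read of the reply; only the "response" / "response_message"
# keys already used above are assumed.
try:
    print(response["response"]["response_message"])
except (KeyError, TypeError):
    print("No response_message found in the kernel reply")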
