llm_chat_with_tool_call_output: Chat with Tool Integration

Allows the language model to decide which tools to use based on the conversation context.
def llm_chat_with_tool_call_output(
    agent_name: str,
    messages: List[Dict[str, Any]],
    tools: List[Dict[str, Any]],
    base_url: str = aios_kernel_url,
    llms: List[Dict[str, Any]] = None
) -> LLMResponse
Parameters:
agent_name: Identifier for the agent making the request
messages: List of message dictionaries
tools: List of available tools and their specifications
base_url: API endpoint URL
llms: Optional list of LLM configurations
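The llms argument lets a caller pin the request to specific model backends. The exact configuration keys are not documented on this page, so the sketch below is only an assumption about their shape:

# Hypothetical llms configuration; treat the key names ("name", "backend")
# as assumptions, since the accepted schema is not documented here.
llms = [
    {"name": "gpt-4o-mini", "backend": "openai"},
    {"name": "llama3:8b", "backend": "ollama"},
]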
Returns:
LLMResponse object containing tool calls made by the model
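The precise fields of LLMResponse are not spelled out here; judging from the access pattern in the example below, the result can be indexed like a dictionary and carries tool calls as a JSON-encoded string. A sketch of the assumed shape:

# Assumed shape of the result, inferred from the example below;
# fields beyond "tool_calls" are not documented on this page.
assumed_response = {
    "response": {
        # JSON string: decode with json.loads() before use
        "tool_calls": '[{"name": "scholar_search", "parameters": {"query": "transformers"}}]'
    }
}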
Example:
import json

# Chat that may trigger tool use when needed
response = llm_chat_with_tool_call_output(
    "research_assistant",
    messages=[
        {"role": "system", "content": "Help the user with research tasks."},
        {"role": "user", "content": "I need to find recent papers about transformer architectures."}
    ],
    tools=[{
        "name": "scholar_search",
        "description": "Search for academic papers on a topic",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string"},
                "year_start": {"type": "integer"},
                "max_results": {"type": "integer"}
            },
            "required": ["query"]
        }
    }]
)
# Check if tool calls were made (returned as a JSON-encoded string)
tool_calls = json.loads(response["response"]["tool_calls"])
if tool_calls:
    for tool_call in tool_calls:
        print(f"Tool: {tool_call['name']}")
        print(f"Parameters: {tool_call['parameters']}")