llm_chat_with_json_output: Structured JSON Responses

Gets structured JSON responses from the language model according to a specified schema.
def llm_chat_with_json_output(
    agent_name: str,
    messages: List[Dict[str, Any]],
    base_url: str = aios_kernel_url,
    llms: List[Dict[str, Any]] = None,
    response_format: Dict[str, Dict] = None
) -> LLMResponse
Parameters:
agent_name
: Identifier for the agent making the request
messages
: List of message dictionaries
base_url
: API endpoint URL
llms
: Optional list of LLM configurations
response_format
: JSON schema specifying the required output format
Returns:
LLMResponse
object containing the structured JSON response
Example:
# Extract keywords from a text
response = llm_chat_with_json_output(
    "content_analyzer",
    messages=[
        {"role": "system", "content": "Extract key information from the text."},
        {"role": "user", "content": "AIOS is a new operating system for AI agents."}
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {
                "keywords": {
                    "type": "array",
                    "items": {"type": "string"}
                },
                "summary": {"type": "string"}
            },
            "required": ["keywords", "summary"]
        }
    }
)
print(response["response"]["response_message"])  # JSON string containing keywords and summary
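Since response_message is a JSON string rather than a parsed object, callers typically decode it with json.loads and check it against the schema's required fields before use. The sketch below assumes a hypothetical response_message that follows the schema above; real model output can deviate, so the required-field check guards against missing keys.

```python
import json

# Hypothetical response_message (assumption: the model honored the schema;
# actual output may vary and should be validated before use).
response_message = (
    '{"keywords": ["AIOS", "operating system", "AI agents"],'
    ' "summary": "AIOS is an OS for AI agents."}'
)

# Decode the JSON string into a Python dict.
data = json.loads(response_message)

# Defensively verify the schema's required fields are present.
missing = [key for key in ("keywords", "summary") if key not in data]
if missing:
    raise ValueError(f"Model output missing required fields: {missing}")

print(data["keywords"])  # list of extracted keyword strings
print(data["summary"])   # short summary string
```

Validating required fields at the call site keeps downstream code from failing on partially conformant model output.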