AIOS Docs
llm_chat_with_json_output

llm_chat_with_json_output: Structured JSON Responses

Gets structured JSON responses from the language model according to a specified schema.

def llm_chat_with_json_output(
    agent_name: str,
    messages: List[Dict[str, Any]],
    base_url: str = aios_kernel_url,
    llms: Optional[List[Dict[str, Any]]] = None,
    response_format: Optional[Dict[str, Dict]] = None
) -> LLMResponse

Parameters:

  • agent_name: Identifier for the agent making the request

  • messages: List of message dictionaries, each with "role" and "content" keys

  • base_url: URL of the AIOS kernel endpoint (defaults to aios_kernel_url)

  • llms: Optional list of LLM backend configurations

  • response_format: JSON schema specifying the required structure of the output

Returns:

  • LLMResponse object containing the structured JSON response

Example:

# Extract keywords from a text
response = llm_chat_with_json_output(
    "content_analyzer",
    messages=[
        {"role": "system", "content": "Extract key information from the text."},
        {"role": "user", "content": "AIOS is a new operating system for AI agents."}
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {
                "keywords": {
                    "type": "array",
                    "items": {"type": "string"}
                },
                "summary": {"type": "string"}
            },
            "required": ["keywords", "summary"]
        }
    }
)
print(response["response"]["response_message"])  # JSON string containing keywords and summary
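The structured response arrives as a JSON string, so it typically needs to be parsed before use. A minimal sketch, using a hypothetical payload for illustration (the actual content depends on the model's response):

```python
import json

# Hypothetical response_message, shaped like the schema above —
# in practice this would come from response["response"]["response_message"].
response_message = (
    '{"keywords": ["AIOS", "operating system", "AI agents"], '
    '"summary": "AIOS is an operating system for AI agents."}'
)

# Parse the JSON string into a Python dict for downstream use.
data = json.loads(response_message)
print(data["keywords"])  # list of extracted keywords
print(data["summary"])   # one-line summary
```

Because the schema marks both fields as required, downstream code can rely on `keywords` and `summary` being present once the JSON parses successfully.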