# `llm_chat_with_json_output`

#### `llm_chat_with_json_output`: Structured JSON Responses

Requests a structured JSON response from the language model, shaped according to a caller-supplied JSON schema.

```python
def llm_chat_with_json_output(
    agent_name: str,
    messages: List[Dict[str, Any]],
    base_url: str = aios_kernel_url,
    llms: Optional[List[Dict[str, Any]]] = None,
    response_format: Optional[Dict[str, Dict]] = None
) -> LLMResponse
```

**Parameters:**

* `agent_name`: Identifier for the agent making the request
* `messages`: List of message dictionaries, each with `role` and `content` keys
* `base_url`: API endpoint URL; defaults to the AIOS kernel URL
* `llms`: Optional list of LLM configurations
* `response_format`: Dictionary containing the JSON schema that the output must conform to

**Returns:**

* `LLMResponse` object containing the structured JSON response

**Example:**

```python
# Extract keywords from a text
response = llm_chat_with_json_output(
    "content_analyzer",
    messages=[
        {"role": "system", "content": "Extract key information from the text."},
        {"role": "user", "content": "AIOS is a new operating system for AI agents."}
    ],
    response_format={
        "type": "json_object",
        "schema": {
            "type": "object",
            "properties": {
                "keywords": {
                    "type": "array",
                    "items": {"type": "string"}
                },
                "summary": {"type": "string"}
            },
            "required": ["keywords", "summary"]
        }
    }
)
print(response["response"]["response_message"])  # JSON string containing keywords and summary
```
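The `response_message` field arrives as a JSON string, so it must be parsed before use. A minimal sketch of that step, using a hard-coded literal in place of a live response (the string below is illustrative, not real model output):

```python
import json

# Hypothetical value of response["response"]["response_message"],
# standing in for a live call to llm_chat_with_json_output.
raw_message = (
    '{"keywords": ["AIOS", "operating system", "AI agents"],'
    ' "summary": "AIOS is a new operating system for AI agents."}'
)

try:
    # Parse the JSON string into a Python dict.
    data = json.loads(raw_message)
except json.JSONDecodeError:
    # Fall back to an empty result if the model emitted invalid JSON.
    data = {"keywords": [], "summary": ""}

keywords = data["keywords"]  # list of strings, per the schema above
summary = data["summary"]    # single string, per the schema above
print(keywords, summary)
```

Even when a schema is supplied in `response_format`, guarding the `json.loads` call is prudent, since backends may enforce the schema only loosely.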
