How to Develop Agents

To develop new agents that run on AIOS, the platform provides the native Agent-SDK as well as various adapters that adapt different agent frameworks to AIOS.

The table below lists the agent frameworks that have been integrated and the modules integrated for each; details can be found in later sections.

Agent Framework     Integrated Module
AutoGen             LLM
Open-Interpreter    LLM
MetaGPT             LLM

Before developing agents, you must organize the agent files according to the following structure.

Agent Structure

First, let's look at how to organize your agent's files. Every agent needs three mandatory components:

author/
└── agent_name/
    ├── entry.py               # Your agent's main logic
    ├── config.json            # Configuration and metadata
    └── meta_requirements.txt  # Additional dependencies

For example, if your name is demo_author and you're building a demo_agent that searches and summarizes articles, your folder structure would look like this:

demo_author/
└── demo_agent/
    ├── entry.py
    ├── config.json
    └── meta_requirements.txt

Note: If your agent needs any libraries beyond AIOS's built-in ones, make sure to list them in meta_requirements.txt. Apart from the above three files, you can have any other files in your folder.
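This layout can be verified programmatically. The sketch below is a hypothetical helper (not part of the AIOS SDK) that checks an agent folder for the three mandatory files:

```python
import os
import tempfile

# Hypothetical helper (not part of the AIOS SDK): verifies that an agent
# folder contains the three mandatory files described above.
REQUIRED_FILES = ["entry.py", "config.json", "meta_requirements.txt"]

def missing_agent_files(agent_dir):
    """Return the mandatory files missing from agent_dir."""
    return [name for name in REQUIRED_FILES
            if not os.path.isfile(os.path.join(agent_dir, name))]

# Build the demo_author/demo_agent layout in a temporary directory.
root = tempfile.mkdtemp()
agent_dir = os.path.join(root, "demo_author", "demo_agent")
os.makedirs(agent_dir)
for name in REQUIRED_FILES:
    open(os.path.join(agent_dir, name), "w").close()

missing = missing_agent_files(agent_dir)
print(missing)  # []
```

An empty list means the folder satisfies the required structure; any extra files in the folder are ignored by the check, matching the note above.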

A minimal example of an agent is shown below:

from cerebrum.llm.apis import llm_chat
from cerebrum.utils.communication import aios_kernel_url

class TestAgent:
    def __init__(self, agent_name):
        self.agent_name = agent_name
        self.messages = []

    def run(self, task_input):
        # Record the incoming task as a user message.
        self.messages.append({"role": "user", "content": task_input})

        # Send the conversation to the AIOS kernel through the LLM Core API.
        response = llm_chat(
            agent_name=self.agent_name,
            messages=self.messages,
            base_url=aios_kernel_url
        )

        final_result = response["response"]["response_message"]
        return final_result

[Important]: Rules that must be strictly followed in entry.py:

  • The agent class must have a run method as the main entry point to be executed.

  • All interactions with the AIOS kernel must go through the send_request API; otherwise, the services provided by AIOS will not be available to the agent.

Configure the agent

Your agent needs a config.json file that describes its functionality. Here's what it should include:

{
   "name": "demo_agent",
   "description": [
      "Demo agent that can help search AIOS-related papers"
   ],
   "tools": [
      "demo_author/arxiv"
   ],
   "meta": {
      "author": "demo_author",
      "version": "0.0.1",
      "license": "CC0"
   },
   "build": {
      "entry": "entry.py",
      "module": "DemoAgent"
   }
}
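A quick sanity check on the configuration can catch missing fields before the agent is loaded. The snippet below is a hypothetical check (not part of the AIOS SDK) that parses the example config and verifies its top-level keys:

```python
import json

# Hypothetical check (not part of the AIOS SDK): parse the example
# config.json above and verify the top-level fields are present.
config_text = """
{
   "name": "demo_agent",
   "description": ["Demo agent that can help search AIOS-related papers"],
   "tools": ["demo_author/arxiv"],
   "meta": {"author": "demo_author", "version": "0.0.1", "license": "CC0"},
   "build": {"entry": "entry.py", "module": "DemoAgent"}
}
"""

config = json.loads(config_text)
required_top = {"name", "description", "tools", "meta", "build"}
missing = required_top - config.keys()
print(sorted(missing))  # []
```

An empty result means every expected field is present; the "build" section tells the loader which file and class to instantiate.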

When setting up your agent, you'll need to specify which tools it will use.

View tools available in the AIOS Tool Hub:

list-toolhub-tools

View tools available locally:

list-local-tools

To load a tool from ToolHub in your code:

from cerebrum.interface import AutoTool
tool = AutoTool.from_preloaded("example/arxiv", local=False)

To load a local tool in your code:

from cerebrum.tool import AutoTool
tool = AutoTool.from_preloaded("google/google_search", local=True)
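The from_preloaded calls above follow a registry-style lookup: a tool identifier of the form "author/name" maps to a tool class. The sketch below is a simplified, hypothetical illustration of that pattern (the SimpleAutoTool and ArxivTool names are invented, not the real AutoTool implementation):

```python
# Simplified, hypothetical sketch of a from_preloaded-style registry;
# the real AutoTool lives in the Cerebrum SDK and differs in detail.
class ArxivTool:
    def run(self, query):
        return f"searching arxiv for: {query}"

# Map "author/name" identifiers to tool classes.
TOOL_REGISTRY = {"example/arxiv": ArxivTool}

class SimpleAutoTool:
    @staticmethod
    def from_preloaded(tool_id, local=False):
        # Look up the tool class by its identifier and instantiate it.
        return TOOL_REGISTRY[tool_id]()

tool = SimpleAutoTool.from_preloaded("example/arxiv", local=False)
print(tool.run("AIOS"))  # searching arxiv for: AIOS
```

The real SDK additionally distinguishes ToolHub tools from local tools via the local flag, as shown in the snippets above.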

The agent class must inherit from BaseAgent, defined at https://github.com/agiresearch/Cerebrum/blob/main/cerebrum/agents/base.py.

To use these tools in your agent, use the AutoTool interface provided by the AIOS SDK to load them within your agent logic.

If you would like to create new tools, you can either integrate the tool within your agent code or follow the tool examples in the tool folder to develop standalone tools. The detailed instructions are in How to Develop Tools.
