How to Develop Agents

To develop new agents that run on AIOS, AIOS provides a native Agent-SDK as well as adapters that integrate agents built with other agent frameworks into AIOS.

The table below lists the agent frameworks that have been integrated and the modules integrated for each framework; the details can be found in later sections.

Agent Framework       Integrated Module
AutoGen               LLM
Open-Interpreter      LLM
MetaGPT               LLM

Before developing agents, you need to organize your agent files according to the following structure.

Agent Structure

First, let's look at how to organize your agent's files. Every agent needs three mandatory components:

author/
└── agent_name/
      ├── entry.py        # Your agent's main logic
      ├── config.json     # Configuration and metadata
      └── meta_requirements.txt  # Additional dependencies

For example, if your name is demo_author and you're building a demo_agent that searches and summarizes articles, your folder structure would look like this:

demo_author/
   └── demo_agent/
         ├── entry.py
         ├── config.json
         └── meta_requirements.txt

Note: If your agent needs any libraries beyond AIOS's built-in ones, list them in meta_requirements.txt. Apart from the above three files, you can include any other files in your folder.
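For example, a meta_requirements.txt for an agent that fetches and parses web pages might look like the following (the packages listed here are purely illustrative; list whatever extra dependencies your own agent needs):

requests
beautifulsoup4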

A minimal example of an agent is shown below:

from cerebrum.llm.apis import LLMQuery, LLMResponse
from cerebrum.utils.communication import send_request, aios_kernel_url

class TestAgent:
    def __init__(self, agent_name):
        self.agent_name = agent_name
        self.messages = []

    def run(self, task_input):
        # Record the incoming task as a user message
        self.messages.append({"role": "user", "content": task_input})

        # Send an LLM query to the AIOS kernel and wait for its response
        response = send_request(
            agent_name=self.agent_name,
            query=LLMQuery(messages=self.messages),
            base_url=aios_kernel_url
        )

        final_result = response["response"]["response_message"]
        return final_result
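Assuming the AIOS kernel is running locally, the agent above can be instantiated and run as follows (the agent name and task string are only illustrative):

agent = TestAgent("demo_author/demo_agent")
result = agent.run("Summarize recent papers about LLM-based agent operating systems.")
print(result)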

[Important]: Rules to be strictly followed in entry.py

Configure the agent

Your agent needs a config.json file that describes its functionality. It should include the agent's name and description, the tools it uses, author metadata, and a build section whose entry field points to the agent's entry file and whose module field names the agent class defined in that file:

{
   "name": "demo_agent",
   "description": [
      "Demo agent that can help search AIOS-related papers"
   ],
   "tools": [
      "demo_author/arxiv"
   ],
   "meta": {
      "author": "demo_author",
      "version": "0.0.1",
      "license": "CC0"
   },
   "build": {
      "entry": "agent.py",
      "module": "DemoAgent"
   }
}

When setting up your agent, you'll need to specify which tools it will use.

View tools available in the AIOS Tool Hub:

list-toolhub-tools

View tools available locally:

list-local-tools

To use these tools in your agent, use the AutoTool class provided by the AIOS SDK to load them within your agent logic.

To load a tool from ToolHub in your code:

from cerebrum.interface import AutoTool
tool = AutoTool.from_preloaded("example/arxiv", local=False)

To load a local tool in your code:

from cerebrum.tool import AutoTool
tool = AutoTool.from_preloaded("google/google_search", local=True)
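Once a tool is loaded, you can call it from your agent logic. The sketch below assumes the loaded tool exposes a run(params) method that accepts a dictionary of parameters, following the Cerebrum tool examples; the tool identifier and parameter names are illustrative and depend on the specific tool:

from cerebrum.interface import AutoTool

# Load the arxiv tool from ToolHub and query it (hypothetical parameters)
tool = AutoTool.from_preloaded("demo_author/arxiv", local=False)
results = tool.run({"query": "AIOS: LLM agent operating system"})
print(results)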

If you would like to create new tools, you can either integrate the tool within your agent code or follow the tool examples in the tool folder to develop standalone tools. The detailed instructions are in How to develop new tools.
