
Environment Variables Configuration

The configuration file is located at the following path, relative to your AIOS installation directory:

aios/config/config.yaml

AIOS supports several API integrations that require configuration. You can manage them with the following commands (a typical workflow is sketched after this list):

  • aios env list: Show current environment variables, or show available API keys if no variables are set

  • aios env set: Set environment variables interactively

  • aios refresh: Refresh AIOS configuration.

    • Reloads the configuration from aios/config/config.yaml.

    • Reinitializes all components without restarting the server.

    • The server must be running.
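
For example, a typical configuration round-trip might look like the following; the commands are the ones listed above, and the comments describe what each step does:

# Inspect the environment variables AIOS currently sees
aios env list

# Set or update an API key interactively
aios env set

# With the AIOS server running, reload aios/config/config.yaml and
# reinitialize all components without restarting the server
aios refresh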

When no environment variables are set, the following supported API keys will be shown (a shell example for setting them follows the list):

  • OPENAI_API_KEY: OpenAI API key for accessing OpenAI services

  • GEMINI_API_KEY: Google Gemini API key for accessing Google's Gemini services

  • DEEPSEEK_API_KEY: Deepseek API key for accessing Deepseek services

  • ANTHROPIC_API_KEY: Anthropic API key for accessing Anthropic Claude services

  • GROQ_API_KEY: Groq API key for accessing Groq services

  • HF_AUTH_TOKEN: HuggingFace authentication token for accessing models

  • HF_HOME: Optional path to store HuggingFace models
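
If you prefer environment variables over editing the configuration file, the variables above can be exported in your shell before starting AIOS. A minimal sketch, assuming a POSIX shell (all values and paths are placeholders):

export OPENAI_API_KEY="your-openai-key"
export GEMINI_API_KEY="your-gemini-key"
export HF_AUTH_TOKEN="your-huggingface-token"
export HF_HOME="/path/to/huggingface/cache"  # optional model storage location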

To obtain these API keys:

  • OpenAI API: Visit https://platform.openai.com/api-keys

  • Google Gemini API: Visit https://makersuite.google.com/app/apikey

  • Deepseek API: Visit https://api-docs.deepseek.com/

  • Anthropic Claude API: Visit https://console.anthropic.com/settings/keys

  • Groq API: Visit https://console.groq.com/keys

  • HuggingFace Token: Visit https://huggingface.co/settings/tokens

API Keys:

openai: "your-openai-key"
gemini: "your-gemini-key"
deepseek: "your-deepseek-key"
groq: "your-groq-key"
anthropic: "your-anthropic-key"
huggingface:
  auth_token: "your-huggingface-token"
  cache: "optional-path"

Model Settings:

The parameters required to set up each backend are listed below:

Backend Type    Required Parameters
openai          name, backend
anthropic       name, backend
google          name, backend
ollama          name, backend, hostname
vllm            name, backend, hostname
huggingface     name, backend, max_gpu_memory, eval_device

Examples of how to set up models on each backend are shown below:

# LLM Configuration
llms:
  models:
    # OpenAI backend
    # - name: "gpt-4o-mini"
    #   backend: "openai"

    # Google Models
    # - name: "gemini-1.5-flash"
    #   backend: "google"

    # Anthropic Models
    # - name: "claude-3-opus"
    #   backend: "anthropic"

    # Ollama backend
    # - name: "qwen2.5:7b"
    #   backend: "ollama"
    #   hostname: "http://localhost:11434" # Make sure to run ollama server

    # HuggingFace backend
    # - name: "meta-llama/Llama-3.1-8B-Instruct"
    #   backend: "huggingface"
    #   max_gpu_memory: {0: "48GB"}  # GPU memory allocation
    #   eval_device: "cuda:0"  # Device for model evaluation
    
    # vLLM Models
    # To use vllm as backend, you need to install vllm and run the vllm server https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
    # An example command to run the vllm server is:
    # vllm serve meta-llama/Llama-3.2-3B-Instruct --port 8091
    # - name: "meta-llama/Llama-3.1-8B-Instruct"
    #   backend: "vllm"
    #   hostname: "http://localhost:8091"

  log_mode: "console"
  use_context_manager: false
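
All of the model entries above are commented out. As a minimal sketch, enabling a single OpenAI-backed model (the model name here is only an example) would look like this:

# Minimal llms section with one model enabled
llms:
  models:
    - name: "gpt-4o-mini"
      backend: "openai"
  log_mode: "console"
  use_context_manager: false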

Memory Settings:

log_mode: "console" # can be "console" or "file"

Storage Settings:

root_dir: "root"
use_vector_db: true

Scheduler Settings:

log_mode: "console" # can be "console" or "file"