
IMPORTANT

AI Assist Note (Knowledge Heritage): This document is part of the "Sovereign Reality" documentation.

  • @docs ARCHITECTURE:Documentation
  • Failure Path: Information drift, legacy terminology, or documentation mismatch.
  • Telemetry Link: Cross-reference with execution/parity_guard.py results.

AI Assist Note

Automated governance and architectural tracking.

🔍 Debugging & Observability

Traceability via parity_guard.py.

🚀 Tadpole OS: Getting Started Guide

Intelligence Level: Super AI-Awakened (Level 5)
Status: Verified Production-Ready
Version: 1.1.13
Last Hardened: 2026-04-17 (Alignment Patch)
Classification: Sovereign


🛡️ Sovereign Configuration (The Zero-Secrets Handshake)

To ensure your instance of Tadpole OS remains private and sovereign, you must supply your own API keys. Never commit your keys to version control.

1. Initialize Your Environment

  1. Copy the template: cp .env.example .env
  2. Open .env and generate a unique NEURAL_TOKEN (e.g., openssl rand -hex 32). This token secures the connection between your browser and the engine.
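For reference, both steps as shell commands run from the repository root (paste the generated hex string into NEURAL_TOKEN in .env):

bash
# Copy the template, then generate a token for the browser-to-engine handshake
cp .env.example .env
openssl rand -hex 32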

2. Supply Your Provider Keys

Add your provider API keys to the corresponding variables in .env; a minimal sketch follows.
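The sketch below assumes the variable names listed in the Environment Security table later in this guide; GROQ_API_KEY is an assumed name, so confirm it against your .env.example:

bash
# .env (illustrative values; never commit this file)
NEURAL_TOKEN=<hex from: openssl rand -hex 32>
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GROQ_API_KEY=gsk_...                 # assumed variable name; check .env.example
OLLAMA_HOST=http://localhost:11434   # only needed if you run local models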

3. Local-First (Zero Cost) Option

If you prefer not to use external APIs, install Ollama and set PRIVACY_MODE=true. This forces the engine to use local models for all reasoning tasks.
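A minimal local-first sketch, assuming a default Ollama install and the qwen3.5:9b tag used in the model registry table later in this guide:

bash
# Pull a local model, then force the engine to use local reasoning only
ollama pull qwen3.5:9b
echo "PRIVACY_MODE=true" >> .env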

IMPORTANT

Your keys are only stored in .env. When you save provider configurations in the UI, the engine automatically sanitizes them and only stores metadata (URLs, model names) in the repo-committed JSON files.


πŸ—οΈ Hardware Requirements (Scaling Spec) ​

Tadpole OS is optimized for low-footprint Rust execution. Requirements scale linearly with agent count and mission complexity.

| Tier | Agents | Clusters | Min RAM | vCPU | Deployment |
|------|--------|----------|---------|------|------------|
| Micro (Demo) | 1-2 | 1 | 1 GB | 1 | Hybrid |
| Standard (Bunker) | 2-9 | 1-2 | 2 GB | 2 | Hybrid |
| Cluster Max | 10-25 | 4+ | 4 GB | 4 | Hybrid |
| ROBUST (PRO) | 25+ | Full | 8 GB+ | 4-8 | Full Remote |

TIP

Robust Recommendation: For voice-heavy missions with real-time audio and heavy fetch_url research, an 8 GB / 4-vCPU instance keeps context-bus latency negligible and allows full remote rebuilds without OOMs.


Step 1: Connect to the Sovereign Dashboard

  1. Open Tadpole OS in your browser:
    • Local Dev: http://localhost:5173 (Vite 6 + React 19)
    • Production (Bunker): http://<bunker-ip>:8000 (Axum 0.8)
  2. Go to ⚙️ System Configuration from the sidebar.
  3. Under Engine Connection, verify the URL (TadpoleOSUrl) is set to your engine endpoint and the Neural Engine Access Token (formerly Neural Token) matches your .env value.
  4. Click Save Changes; the dashboard auto-reconnects immediately using the Lazy Singleton Socket protocol.
  5. In the Multi-Tab Bar, you can now open additional operational contexts (Missions, Hierarchy, etc.) without losing your current view.
  6. The top-tier PageHeader should show 🟢 ONLINE and display real-time engine telemetry.

TIP

Multi-Monitor Setup: Tadpole OS supports State-Preserved Detachment. Click the External Link icon on any tab or high-fidelity components like the Swarm Pulse Visualizer to "pop out" that context into a dedicated portal window. This uses a shared JS heap for zero-latency cross-window synchronization.

Development Note: For reliable database persistence on Windows, ensure DATABASE_URL is an absolute path (e.g., sqlite:D:\TadpoleOS-Dev\tadpole.db).


Step 2: Unlock the Neural Vault & Add Your Groq Provider

The Neural Vault is an encrypted store for your API keys. You must unlock it before configuring providers.

  1. Go to 🧠 AI Provider Manager from the sidebar
  2. You'll see the NEURAL VAULT lock screen
  3. Enter a master password (this encrypts your keys locally) → click Commit Authorization
  4. You'll now see the Provider Cards section

Adding Groq as a Provider

  1. If Groq is already listed, click the Edit (pencil) icon on the Groq card
  2. If not listed, click + ADD PROVIDER at the bottom:
    • Name: Groq
    • Icon: ⚡ (or any emoji)
    • Click Create
  3. On the Groq provider card:
    • API Key: Paste your Groq API key
    • Base URL: https://api.groq.com/openai/v1
    • Protocol: OpenAI (OpenRT); Groq uses an OpenAI-compatible API
  4. Click Save on the provider card.
  5. Test Trace (Handshake): Click the "Test Trace" button to perform a real-time connectivity handshake. This verifies your API Key, Endpoint, and Protocol are valid before deployment.
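To sanity-check the key and base URL outside the UI first, you can hit Groq's OpenAI-compatible model listing directly (a quick sketch; substitute your key for $GROQ_API_KEY):

bash
# A 200 response with a JSON list of model IDs confirms the key, endpoint, and protocol
curl -s https://api.groq.com/openai/v1/models \
  -H "Authorization: Bearer $GROQ_API_KEY"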

TIP

The vault auto-locks after inactivity. Your key is encrypted with your master password and stored in the browser; it never leaves your machine.


Step 3: Deployment with Sovereign Starter Kits (Optional)

For rapid SME deployment, Tadpole OS allows you to choose a Sovereign Starter Kit (Marketing, Customer Success, Finance). See the full Starter Kits Guide for the current built-in kits and install paths.


Step 4: Local Intelligence (Local LLMs via Ollama)

  1. Follow the Qwen3.5-9B Local Integration Guide for detailed setup.
  2. Once configured, you can add local models similarly to the steps below.

Step 5: Add Models to the Registry

Still on the 🧠 AI Provider Manager page, scroll down to the Model Registry section.

  1. Click + ADD MODEL
  2. Fill in:
    • Model Name: llama-3.3-70b-versatile
    • Provider: Select Groq from the dropdown
    • RPM (optional): e.g., 30; prevents exceeding Groq's free-tier rate limits
    • TPM (optional): e.g., 14000; the engine will throttle automatically
  3. Click the ✓ checkmark to save

Recommended models to add:

| Model Name | Provider | Best For |
|------------|----------|----------|
| llama-3.3-70b-versatile | Groq | General tasks, tool calling |
| llama-3.1-8b-instant | Groq | Fast responses, simple tasks |
| qwen3.5:9b | Ollama | Local power, high logic fidelity |

Repeat for each model you want available.


Step 6: Configure an Agent Node

  1. Go to πŸ›οΈ Agent Hierarchy Layer from the sidebar
  2. You'll see the Neural Command Hierarchy β€” your agent org chart
  3. Click on any agent card (e.g., Nexus, Cipher, etc.)
  4. The Agent Config Panel slides open on the right

In the Config Panel:

  1. Identity Section (top):

    • Name: Give it a descriptive name (e.g., Research Bot)
    • Role: Select from the dropdown (e.g., Researcher, Engineer, Analyst)
  2. Cognition Tab (MCP Tools & Skills):

    • Skills & Workflows: Toggle standard skills like web_search or code_execute.
    • MCP Tools: Select external tools from the high-density grid. These are specifically designed for Model Context Protocol integration.
    • Model Settings: Configure model, provider, and temperature for each slot.
  3. Voice & Governance Tabs:

    • Voice: Configure TTS/STT identity.
    • Governance: Toggle the Requires Oversight flag (Junior Agent mode) and set persistent USD budget caps for this specific node.
  4. Click 💾 SAVE CONFIG at the bottom.

IMPORTANT

The save pushes your config to the Rust backend, so it persists across devices and restarts. You'll see Capability Badges appear on the agent's card in the Agent Manager, showing the count of assigned tools.


Step 7: Bulk Capability Assignment (Skills Hub)

Instead of configuring agents one-by-one, you can assign tools to multiple agents simultaneously.

  1. Go to πŸ› οΈ Skills & Workflows from the sidebar.
  2. Select any Skill, Workflow, or MCP Tool.
  3. Click the "Assign to Agents" button in the details panel.
  4. Select all agents you want to receive this capability.
  5. Click Commit Assignments. The engine will bulk-sync the configurations live.

Step 8: Importing External Capabilities (.md)

Tadpole OS allows you to rapidly build your agent's library by importing existing documentation.

  1. Go to πŸ› οΈ Skills & Workflows.
  2. Click the "Import .md" button in the header.
  3. Select a .md file containing a skill or workflow definition.
  4. Review the Import Preview to verify the parsed logic and ID.
  5. Click Confirm Import. The capability is now available in your User Registry for assignment.

Step 9: Create and Execute a Mission

  1. Go to 🎯 Mission Management from the sidebar
  2. Click + NEW MISSION in the top-right of the cluster sidebar
  3. Fill in:
    • Mission Name: e.g., Market Research Sprint
    • Department: Select the relevant department (e.g., Research)
  4. Click Create

Assign Agents to the Mission:

  1. Select your new mission in the sidebar (it'll highlight)
  2. In the Available Agents pool on the right, click + Assign next to each agent you want on this mission
  3. Hierarchical Recruitment: High-level agents (Alphas) can recruit ephemeral sub-agents. The engine uses modular specialists from runner/mission_tools.rs to delegate tasks with strategic context handoffs.
  4. Parallel Swarming (PERF-06): Tadpole OS utilizes FuturesUnordered to parallelize tool calls. Recruitment of multiple specialists happens simultaneously, reducing swarm startup latency by up to 80%.
  5. Swarm Pulse Visualizer: Toggle the "Neural Map" icon on the mission dashboard to see a real-time Force-Graph visualization of cluster connectivity, featuring 10Hz binary telemetry pulses.
  6. Neural Swarm Optimization: When you type an objective, the engine proactively suggests mission-specific templates via the Template Discovery Hub.
  7. Recursion Guard: To prevent circular token-burn, the engine enforces a maximum Swarm Depth of 5 (managed in AppState).
  8. Mission Analysis (Agent 99): Toggle the "Analysis" switch next to the Run button to trigger a post-mission debrief powered by LanceDB vector synthesis.

Step 10: Workspace & Cluster Management

Tadpole OS allows you to organize your swarm into Mission Clusters.

  • Defaults: The engine starts with 4 predefined clusters (Strategic Command, Strategic Ops, Core Intelligence, Applied Growth).
  • Custom Scaling: You can create new clusters or retire existing ones from the 🎯 Missions page.
  • Persistence: Agent roles and system configurations are stored in the backend SQLite database (tadpole.db). Logical mission clusters are managed by the frontend in LocalStorage.
  • Physical Sandboxes: Each cluster maps to a dedicated directory in the backend ./workspaces/{clusterId} folder, ensuring file isolation.

Step 11: External Adapters & Workspace Tools

The engine can now connect to your local environment and external services.

Workspace File Operations

Agents with matching skills can read and write files within their cluster sandbox:

  • read_file: Read a file from the workspace (e.g., load a spec document).
  • write_file: Write a file to the workspace (e.g., save generated code).
  • list_files: List files in a workspace directory.
  • delete_file: Delete a file (requires Oversight Gate approval).

Files are stored under workspaces/<cluster-id>/ on the server. Each cluster is fully isolated.
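A quick way to inspect a sandbox from the server side (the directory name is whatever cluster ID the mission runs under):

bash
# List all cluster sandboxes, then the files agents have written into one of them
ls workspaces/
ls workspaces/<cluster-id>/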

Local Markdown Vault (Obsidian)

Enabled agents can now use the archive_to_vault tool.

  1. Create a vault/ directory in the server-rs root.
  2. Agents will automatically append findings to files in this directory when requested.

Discord Notifications

  1. Add DISCORD_WEBHOOK="your_webhook_url" to your .env file.
  2. Use the notify_discord tool from an agent to alert your team.
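To confirm the webhook is valid before an agent relies on it, you can post to it directly; this is the standard Discord webhook API, and the message text is arbitrary:

bash
# DISCORD_WEBHOOK is the URL you added to .env
curl -X POST "$DISCORD_WEBHOOK" \
  -H "Content-Type: application/json" \
  -d '{"content": "Tadpole OS webhook test"}'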

Environment Security (.env)

Ensure your .env file in the root directory contains:

| Variable | Description | Requirement |
|----------|-------------|-------------|
| DATABASE_URL | Path to tadpole.db | Absolute path REQUIRED on Windows (e.g., D:\TadpoleOS-Dev\tadpole.db) |
| AUDIT_PRIVATE_KEY | Ed25519 private key (hex) | REQUIRED for production. Enables non-repudiation and tamper-evident logging. |
| NEURAL_TOKEN | Engine access token for WebSocket/API access | Required in production; the engine panics if it is not set. |
| MERKLE_AUDIT_ENABLED | Toggle tamper-evident cryptographic logging | Default: true |
| RESOURCE_GUARD_ENABLED | Toggle real-time RAM/CPU monitoring | Default: true |
| SANDBOX_AWARENESS | Enable Docker/K8s detection status | Default: true |
| LIFECYCLE_HOOKS_ENABLED | Toggle pre/post execution hooks | Default: true |
| ANTHROPIC_API_KEY | Claude provider key | No |
| OPENAI_API_KEY | OpenAI provider key | No |
| OLLAMA_HOST | Local LLM endpoint | Default: http://localhost:11434 |
| DISCORD_WEBHOOK | Discord notification URL | Required only for the notify_discord tool |
| TADPOLE_NULL_PROVIDERS | Forces graceful provider degradation | Dev/Test only |
| SME_SYNC_INTERVAL_MINS | Ingestion Worker sync frequency (minutes) | Default: 30 |

Standardized Observability (HATEOAS)

All resource endpoints in Tadpole OS implement the HATEOAS pattern. Responses include a _links object, enabling self-discovery of related actions. Error responses strictly follow RFC 9457 (Problem Details) for consistent machine-readable debugging.
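As an illustration of what that looks like on the wire, the sketch below fetches a resource and prints its _links object; the route and link relations shown are assumptions, while type, title, status, and detail are the members defined by RFC 9457:

bash
# Inspect the hypermedia links on a resource (requires jq)
curl -s http://localhost:8000/v1/agents/1 \
  -H "Authorization: Bearer YOUR_NEURAL_TOKEN" | jq '._links'
# A failing request returns a Problem Details body instead, shaped like:
# { "type": "...", "title": "...", "status": 4xx, "detail": "..." }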


Step 12: SME Data Intelligence (Connectors & Workflows)

Tadpole OS includes a 4-phase data intelligence layer for SME onboarding.

Phase 1: Hybrid RAG

The Neural Memory engine (memory.rs) automatically combines vector similarity with keyword proximity scoring for higher-fidelity context retrieval. This is transparent; no configuration is required.

Phase 2: Data Connectors (Background Sync)

  1. Go to πŸ›οΈ Agent Hierarchy Layer β†’ select an agent β†’ open the Memory tab.
  2. In the Connector Config section, click + Add Source.
  3. Set Type to fs (file system) and URI to the directory to watch (e.g., /data/business-docs/).
  4. Click Save. The Ingestion Worker will begin crawling this directory at the interval set by SME_SYNC_INTERVAL_MINS.
  5. Monitor sync status (idle/syncing/error) in the Memory Section UI.

TIP

The Ingestion Worker uses a SyncManifest to track file modification times. Only new or changed files are re-embedded, minimizing compute costs.

Phase 3: Deterministic SOP Workflows

  1. Create a markdown file in data/workflows/ on the server (e.g., data/workflows/onboarding.md).
  2. Format it with numbered steps; each step becomes a discrete agent turn.
  3. Assign the workflow to an agent via the Cognition tab in the Agent Config Panel.
  4. When the agent receives a mission, the SOP Engine will execute each step in guaranteed order.
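A minimal example SOP file in that numbered-step format (the step contents are illustrative; the tool names in parentheses are the workspace and notification tools described above):

bash
mkdir -p data/workflows
cat > data/workflows/onboarding.md <<'EOF'
# Client Onboarding SOP
1. Read the client brief from the workspace (read_file).
2. Draft a kickoff summary and save it as kickoff_summary.md (write_file).
3. Notify the team channel that onboarding is ready for review (notify_discord).
EOF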

Phase 4: Document Parsing

The Data Connectors automatically use the Layout-Aware Parser (parser.rs) for all ingested files. Supported formats: .txt, .md, .csv, .pdf (text-layer). Documents are chunked with 25% overlap for optimal embedding quality.


Step 13: Send a Task & Get Results

Option A: From the Terminal Bar

The terminal bar is at the bottom of every page.

  1. Click the terminal input field
  2. Type a command:
    /send Research Bot Analyze the top 3 competitors in the AI agent space and summarize their pricing models
    Format: /send <agent-name> <your task message>
  3. Press Enter
  4. Watch the System Log on the dashboard; you'll see:
    • 📡 Task dispatched to Research Bot
    • Live agent status updates
    • The final response from the LLM

Option B: Command Palette (Global Nav)

  1. Press Cmd+K (Mac) or Ctrl+K (Windows) anywhere.
  2. Search for an Agent, Cluster, or Directive.
  3. Select an agent to instantly focus them in the chat interface.

Option C: From the OPS Dashboard

  1. Go to the Dashboard (home page)
  2. The Live Agent Status cards show real-time activity
  3. The System Log captures all responses and events.
  4. Discover Nodes: Click the "Discover Nodes" button in the Infra section to scan your local network for secondary Bunker nodes. Discovered nodes will automatically appear in your dashboard for unified oversight.

Option D: Neural Sync (Voice-to-Swarm)

  1. Go to 🎙️ Voice Interface from the sidebar
  2. Select Target: Choose an Agent (e.g., Agent of Nine) or a Mission Cluster.
  3. Click Start Sync → speak your high-level objective clearly.
  4. Hands-Free Response: The speaker icon activates automatically. The agent (typically Agent of Nine) will transcribe your intent via Groq Whisper and then synthesize a strategic confirmation back to you via OpenAI TTS.
  5. Click End Sync once the verbal handshake is complete.

Useful Terminal Commands

| Command | What it does |
|---------|--------------|
| /send <agent> <message> | Send a task to a specific agent |
| /pause <agent> | Pause a running agent |
| /resume <agent> | Resume a paused agent |
| /status | Show all agent statuses |
| /swarm status | Inventory mission clusters |
| /clear | Clear the system log |

🛠️ Maintenance & Integrity (Python Environment)

For advanced users and AI agents, the execution/ directory contains standardized tools for maintaining the "Intelligence Grade" of the codebase:

| Script | Purpose | Protocol |
|--------|---------|----------|
| python execution/verify_all.py | Full System Audit | Performs pre-flight checks on the engine & services |
| python execution/parity_guard.py | Integrity Gate | Ensures documentation matches backend routing |
| python execution/scout.py | System Search | High-fidelity recursive search with relative pathing |

NOTE

All scripts in execution/ follow a strict [OK] / [FAIL] machine-readable reporting protocol for autonomous agents.
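A typical pre-flight run from the repository root; the bracketed lines below illustrate the reporting format rather than verbatim output:

bash
# Audit the engine and services, then verify docs match backend routing
python execution/verify_all.py && python execution/parity_guard.py
# [OK]   lines indicate passing checks; any [FAIL] line should block deployment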


Troubleshooting

| Issue | Fix |
|-------|-----|
| Dashboard shows OFFLINE | Check that the engine is running (npm run engine or the Docker container is up) |
| Agent returns no response | Verify the model exists in the Model Registry and the API key is valid |
| Neural Vault won't unlock | The vault creates a new encryption key on first use; use any password. If locked out, use Emergency Vault Reset at the bottom of the unlock screen. |
| Model dropdown is empty | Go to 🧠 AI Provider Manager → unlock the vault → add models to the registry |
| Agent config doesn't save | Check the browser console; the engine must be online for persistence to work |
| Tool calling fails (Groq) | The engine includes Self-Healing Retries for Groq. Malformed tool syntax is automatically corrected in a second pass. |
| Agent is slow / rate limited | The engine enforces the RPM/TPM limits set on the model. The agent waits for the quota window to reset rather than dropping requests. |
| NEURAL_TOKEN panic on start | A NEURAL_TOKEN env var is required for the engine to start. Set it in your .env file and make the dashboard token match it. |
| Workspace file access denied | The agent tried to access a path outside its sandbox. Check the cluster_id mapping and ensure there is no path traversal in the filename. |

🐸 Starter Swarm Configuration (Quick-Deploy)

This section provides a ready-to-use 3-agent swarm with concrete settings optimized for Groq's free tier. Follow this to have a working hierarchical swarm in under 5 minutes.

Agent Roster

| Agent | ID | Role | Model | Provider | Temperature | Budget | Skills |
|-------|----|------|-------|----------|-------------|--------|--------|
| Agent of Nine | 1 | CEO | llama-3.3-70b-versatile | Groq | 0.8 | $2.00 | issue_alpha_directive, web_search |
| Tadpole | 2 | COO (Alpha) | llama-3.3-70b-versatile | Groq | 0.6 | $3.00 | web_search, write_file, read_file, spawn_subagent |
| Elon | 3 | CTO (Specialist) | llama-3.1-8b-instant | Groq | 0.3 | $1.00 | code_execute, write_file, read_file, list_files |

TIP

Why these settings?

  • Agent of Nine uses high temperature (0.8) for creative strategic thinking and delegation.
  • Tadpole (Alpha) gets medium temperature (0.6) for balanced coordination and the spawn_subagent skill for recruitment.
  • Elon (Specialist) runs a fast, cheap model at low temperature (0.3) for precise code execution.
  • Budget caps prevent runaway token spend on Groq's free tier.

Rate Limits (Groq Free Tier Safe)

Set these in the 🧠 Providers → Model Registry:

| Model | RPM | TPM |
|-------|-----|-----|
| llama-3.3-70b-versatile | 30 | 14000 |
| llama-3.1-8b-instant | 30 | 14000 |

Applying via the UI

  1. Go to πŸ›οΈ Agent Hierarchy Layer β†’ click each agent card
  2. Set the Model, Temperature, and Budget as shown above
  3. Expand Skills & Workflows β†’ toggle the listed skills for each agent
  4. Click πŸ’Ύ SAVE CONFIG for each agent

Applying via API (curl)

If you prefer programmatic setup, here are the exact payloads:

Configure Agent of Nine (CEO):

bash
curl -X PUT http://localhost:8000/v1/agents/1 \
  -H "Authorization: Bearer YOUR_NEURAL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model_id": "llama-3.3-70b-versatile",
    "provider": "groq",
    "temperature": 0.8,
    "budget_usd": 2.0,
    "skills": ["issue_alpha_directive", "web_search"]
  }'

Configure Tadpole (Alpha/COO):

bash
curl -X PUT http://localhost:8000/v1/agents/2 \
  -H "Authorization: Bearer YOUR_NEURAL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model_id": "llama-3.3-70b-versatile",
    "provider": "groq",
    "temperature": 0.6,
    "budget_usd": 3.0,
    "skills": ["web_search", "write_file", "read_file", "spawn_subagent"]
  }'

Configure Elon (Specialist/CTO):

bash
curl -X PUT http://localhost:8000/v1/agents/3 \
  -H "Authorization: Bearer YOUR_NEURAL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "model_id": "llama-3.1-8b-instant",
    "provider": "groq",
    "temperature": 0.3,
    "budget_usd": 1.0,
    "skills": ["code_execute", "write_file", "read_file", "list_files"]
  }'

NOTE

Replace YOUR_NEURAL_TOKEN with the value from your .env file. There is no built-in development token anymore.


🎯 Showcase Mission: "Competitive Intelligence Swarm"

This mission demonstrates the full power of hierarchical swarming: strategic delegation, parallel research, file collaboration, and synthesis. Run this after applying the Starter Swarm configuration above.

Mission Overview

┌──────────────────────────────────────────────────────────┐
│  YOU (Overlord)                                          │
│  "Analyze the top 3 AI agent frameworks and write        │
│   a competitive brief with code comparison."             │
└─────────────────────┬────────────────────────────────────┘
                      │ Neural Handoff
┌─────────────────────▼────────────────────────────────────┐
│  Agent of Nine (CEO) - Depth 0                           │
│  Refines intent → issues alpha directive to Tadpole      │
└─────────────────────┬────────────────────────────────────┘
                      │ issue_alpha_directive
┌─────────────────────▼────────────────────────────────────┐
│  Tadpole (Alpha/COO) - Depth 1                           │
│  Decomposes into parallel research tasks                 │
│  ├─ spawn_subagent("researcher_a") → CrewAI analysis     │
│  ├─ spawn_subagent("researcher_b") → AutoGen analysis    │
│  └─ Assigns Elon to code comparison                      │
└──────┬──────────────┬───────────────┬────────────────────┘
       │              │               │ (Parallel)
   ┌───▼────┐   ┌─────▼─────┐    ┌────▼─────────────────┐
   │Rsrchr A│   │ Rsrchr B  │    │  Elon (CTO) - D2     │
   │CrewAI  │   │ AutoGen   │    │  code_execute +      │
   │research│   │ research  │    │  write_file          │
   └───┬────┘   └─────┬─────┘    └────┬─────────────────┘
       │              │               │
       └──────────────┼───────────────┘
                      │ Results flow up
┌─────────────────────▼────────────────────────────────────┐
│  Tadpole (Alpha) - Synthesis                             │
│  Merges all findings → write_file("competitive_brief.md")│
└──────────────────────────────────────────────────────────┘

Step 1: Dispatch the Mission

From the Terminal Bar:

/send Agent of Nine Analyze the top 3 AI agent frameworks (CrewAI, AutoGen, LangGraph). For each, research their architecture, pricing, and developer experience. Then have our CTO write a Python code comparison showing how each framework defines a simple 2-agent team. Synthesize everything into a competitive_brief.md in our workspace.

Or via curl:

bash
curl -X POST http://localhost:8000/v1/agents/1/tasks \
  -H "Authorization: Bearer YOUR_NEURAL_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Analyze the top 3 AI agent frameworks (CrewAI, AutoGen, LangGraph). For each, research their architecture, pricing, and developer experience. Then have our CTO write a Python code comparison showing how each framework defines a simple 2-agent team. Synthesize everything into a competitive_brief.md in our workspace.",
    "provider": "groq",
    "model_id": "llama-3.3-70b-versatile",
    "budget_usd": 2.0
  }'

Step 2: Watch the Swarm Execute

Open these dashboard views to observe the swarm in real-time:

| View | What You'll See |
|------|-----------------|
| 🏛️ Hierarchy | Agent status lights change: idle → thinking → active as each node activates |
| 🎯 Missions | The cluster sidebar shows task assignments and handoff chains |
| 📊 OPS Dashboard | Live token burn, cost tracking, and the System Log streaming agent outputs |
| 🔒 Oversight | If write_file or delete_file triggers, you'll see approval requests here |

What Happens Under the Hood

  1. Agent of Nine receives the prompt, applies strategic reasoning, and fires issue_alpha_directive to Tadpole with a refined, tactical breakdown.
  2. Tadpole decomposes the directive into 3 parallel tasks:
    • Spawns Researcher A (ephemeral) → searches for CrewAI architecture and pricing
    • Spawns Researcher B (ephemeral) → searches for AutoGen architecture and pricing
    • Sends a direct task to Elon → write Python code comparing all 3 frameworks
  3. Parallel Swarming (PERF-06) kicks in: all 3 sub-tasks execute concurrently via FuturesUnordered.
  4. As results flow back, Tadpole's synthesis turn merges them and calls write_file("competitive_brief.md") to save the final deliverable.
  5. Cost and token metrics are tracked per-agent in real-time on the OPS Dashboard.

Expected Output

After ~30-60 seconds (depending on Groq load), you'll find:

  • workspaces/<cluster-id>/competitive_brief.md: the final synthesized report
  • System Log entries showing the full delegation chain with swarm lineage breadcrumbs
  • Per-agent cost breakdown on each hierarchy node card
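Once the log shows the final write_file call, you can pull the deliverable straight from the server (the cluster directory depends on which cluster ran the mission):

bash
# Locate and read the synthesized brief
find workspaces -name competitive_brief.md
cat workspaces/<cluster-id>/competitive_brief.md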

Scaling This Pattern

| Adjustment | How |
|------------|-----|
| Add more researchers | Give Tadpole more budget and increase swarmDepth |
| Use multiple providers | Assign Claude to Agent of Nine, Groq to specialists |
| Enable voice dispatch | Use 🎙️ Standups → Neural Sync instead of typing |
| Auto-approve safe tools | Set autoApproveSafeSkills: true in Oversight Settings |
| Save as a template | Use "Promote to Role" on your configured agents |

Step 14: Performance Analysis & Real-time Telemetry

Tadpole OS provides "Top Tier" observability into swarm health and technical performance.

1. Real-time Telemetry (Swarm Visualizer)

The Engine Dashboard features a high-performance Swarm Visualizer (God View):

  • Binary Swarm Pulse: Driven by a 10Hz MessagePack stream (0x02 header) for sub-millisecond state parity.
  • Topology Map: Visualizes the swarm as a 2D force-graph, showing agent status and recruitment relationships.
  • Detach & Recall: Pop the visualizer into a dedicated window for persistent oversight during deep-context missions.
  • Fiscal Burn: Real-time USD/token tracking via the TPM indicator.
  • Swarm Density: Monitor agent instantiation relative to system capacity.

2. Performance Analysis (Benchmarks)

  1. Go to 📊 Performance Analysis from the sidebar.
  2. Timeline View: Review historical benchmark results (latency, throughput, status).
  3. Comparison Tool: Select any two runs to calculate performance deltas.
    • Example: Compare "Current" vs "Baseline" to identify code regressions or provider latency spikes.
  4. Target Enforcement: Metrics are color-coded against the technical specifications in Benchmark_Spec.md.

Step 15: The Swarm Template Ecosystem

Instead of manually configuring agents, Tadpole OS allows you to instantly download full, industry-specific agent swarms.

  1. Go to ⚙️ System Configuration from the sidebar.
  2. Scroll down to the Template Ecosystem panel and click Open Template Store.
  3. Discover: Browse the store or use the fuzzy search to find templates tailored to your industry (e.g., "Legal Contract Review", "Healthcare Patient Intake").
  4. Install: Click Install Swarm. The engine uses native Git Cloning to securely fetch the template from the central Tadpole repository and unpacks it into your data/swarm_config directory.
  5. Sapphire Shield Approval: If the downloaded template requests powerful execution skills (like Shell Access or API payments), the engine will freeze initialization and require you (the Overlord) to manually approve the skills, ensuring Zero-Trust security.

Once installed, the Rust engine "hot-loads" the template from your local /data/swarm_config/ directory, and your new specialized agents will immediately appear in the 🏛️ Hierarchy.
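A quick check that a kit landed where the engine expects it; the layout inside each template directory is not specified here:

bash
# Each installed template should appear as an entry under data/swarm_config/
ls data/swarm_config/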


Architecture Quick Reference

┌─────────────────────────────────────┐
│         Your Browser (React)        │
│  Dashboard │ Hierarchy │ Missions   │
│  Providers │ Oversight │ Settings   │
└───────────────┬─────────────────────┘
                │ HTTP + WebSocket
┌───────────────▼─────────────────────┐
│     Rust Engine (Axum) /v1 API      │
│  Agent Registry │ Task Router       │
│  Oversight Gate │ Persistence       │
└───────────────┬─────────────────────┘
                │ API Calls
┌───────────────▼─────────────────────┐
│        LLM Provider (Groq)          │
│  llama-3.3-70b │ mixtral-8x7b       │
└─────────────────────────────────────┘

Sovereign Intelligence Architecture.