marcusquinn/aidevops

Built by marcusquinn • 80 stars

What is marcusquinn/aidevops?

Vibe-Coding is easy. DevOps is hard. OpenCode & Git token-efficient AI agent automation for your app, business, and personal development. Opinionated tools, services, CLI & API stack for speed, security…

How to use marcusquinn/aidevops?

1. Install a compatible MCP client (like Claude Desktop).
2. Open your configuration settings.
3. Add marcusquinn/aidevops using the following command: npx @modelcontextprotocol/marcusquinn-aidevops
4. Restart the client and verify the new tools are active.
šŸ›”ļø Scoped (Restricted)
npx @modelcontextprotocol/marcusquinn-aidevops --scope restricted
🔓 Unrestricted Access
npx @modelcontextprotocol/marcusquinn-aidevops

Key Features

Native MCP Protocol Support
Real-time Tool Activation & Execution
Verified Standard Implementation
Secure Resource & Context Handling

Optimized Use Cases

Extending AI models with custom local capabilities
Automating system workflows via natural language
Connecting external data sources to LLM context windows

marcusquinn/aidevops FAQ

Q: Is marcusquinn/aidevops safe?

A: Yes, marcusquinn/aidevops follows the standardized Model Context Protocol security patterns and only executes tools with explicit user-granted permissions.

Q: Is marcusquinn/aidevops up to date?

A: marcusquinn/aidevops is currently active in the registry, with 80 stars on GitHub reflecting community support.

Q: Are there any limits for marcusquinn/aidevops?

A: Usage limits depend on the specific implementation of the MCP server and your system resources. Refer to the official documentation below for technical details.

Official Documentation

View on GitHub

AI DevOps Framework

aidevops.sh — An OpenCode plugin and AI operations platform for launching and managing development, business, marketing, and creative projects. 13 specialist AI agents handle the automatable work across every domain so your time is preserved for real-world discovery and decisions that AI cannot yet reach.

Recommended setup: OpenCode + Claude models (Anthropic). All features, agents, and workflows are designed and tested for OpenCode first. Claude models (haiku, sonnet, opus) deliver the best results across all agent tiers.

"Scope a mission to redesign the landing pages — break it into milestones, dispatch workers in parallel, validate each milestone, and track budget across the whole project"

One conversation, autonomous project delivery — with enterprise-level security & quality-control.

Founded by Marcus Quinn on 9th November 2025 to help anyone level-up their AI & Open-Source game.

The Philosophy

Maximum value for your time and money. aidevops is built on these principles:

  • Autonomous orchestration - An AI supervisor runs every 2 minutes, dispatching parallel workers, merging PRs, detecting stuck processes, and advancing multi-day missions — no human babysitting required
  • Multi-domain agents - 13 specialist agents (code, automation, SEO, marketing, content, legal, sales, research, video, business, accounts, social media, health) with 900+ subagents loaded on demand
  • Multi-model safety - High-stakes operations (force push, production deploy, data migration) are verified by a second cross-provider model before execution — different providers have different failure modes, so correlated hallucinations are rare
  • Resource efficiency - Cost-aware model routing (local → haiku → flash → sonnet → pro → opus), project-type bundles that auto-configure quality gates and model tiers, budget tracking with burn-rate analysis
  • Self-healing - When something breaks, diagnose the root cause, create tasks, and fix it. Every error is a live test case for a permanent solution
  • Self-improving - When patterns of failure or inefficiency emerge, improve the framework itself. Session mining extracts learnings from past sessions automatically
  • Gap awareness - Every session is an opportunity to identify what's missing — gaps in automation, documentation, coverage, or processes — and create tasks to fill them
  • Git-first workflow - Protected branches, PR reviews, quality gates before merge. Sane vibe-coding through structure
  • Parallel agents - Multiple AI sessions running full Ralph loops on separate branches via git worktrees
  • Progressive discovery - /slash commands and @subagent mentions load knowledge into context only when needed
  • Open-source ready - Contribute to any project the same way you work on your own. Clone a repo, develop solutions to issues locally, and submit pull requests — the same full-loop workflow works everywhere
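
The cost-aware routing tier order above (local → haiku → flash → sonnet → pro → opus) can be sketched as a simple escalation function. The tier order comes from this document; the task classes and their mapping are invented purely for illustration — the actual routing logic is not shown here.

```shell
# Hypothetical tier router: pick the cheapest model tier for a task class.
# Tier order is from the framework description; task classes are assumptions.
route_model() {
  case "$1" in
    triage)    echo "local"  ;;  # free, offline
    simple)    echo "haiku"  ;;
    summarise) echo "flash"  ;;
    implement) echo "sonnet" ;;
    review)    echo "pro"    ;;
    *)         echo "opus"   ;;  # complex reasoning falls through to the top tier
  esac
}
```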

The result: an AI operations platform that manages projects across every business domain — absorbing everything automatable so you can focus on what matters.

Built on proven patterns: aidevops implements industry-standard agent design patterns - including multi-layer action spaces, context isolation, and iterative execution loops.

Why This Framework?

Beyond single-task AI. Most AI coding tools handle one conversation, one repo, one task at a time. aidevops manages your entire operation — dispatching parallel AI agents across multiple repos, routing tasks to domain-specialist agents, and running autonomously for days on multi-milestone projects.

What makes it different:

  • Autonomous supervisor - AI pulse runs every 2 minutes: merges ready PRs, dispatches workers, kills stuck processes, advances missions, triages quality findings — no human in the loop
  • Cross-domain intelligence - 13 agents spanning code, automation, business, marketing, legal, sales, content, video, research, SEO, social media, health, and accounts — each with domain expertise and specialist subagents
  • Multi-model safety - Destructive operations verified by a second AI model from a different provider before execution
  • 30+ service integrations - Hosting, Git platforms, DNS, security, monitoring, deployment, payments, communications
  • Mission orchestration - Multi-day autonomous projects broken into milestones with validation, budget tracking, and automatic advancement

License: MIT

<!-- AI-CONTEXT-START -->

Quick Reference

  • Purpose: AI-assisted DevOps automation framework
  • Install: npm install -g aidevops && aidevops update
  • Entry: aidevops CLI, ~/.aidevops/agents/AGENTS.md
  • Stack: Bash scripts, TypeScript (Bun), MCP servers

Key Commands

  • aidevops init - Initialize in any project
  • aidevops update - Update framework
  • aidevops auto-update - Automatic update polling (enable/disable/status)
  • aidevops secret - Manage secrets (gopass encrypted, AI-safe)
  • aidevops security - Full security assessment (posture, secrets, supply chain)
  • /onboarding - Interactive setup wizard (in AI assistant)

Agent Structure

  • 13 primary agents (Build+, Automate, SEO, Marketing, etc.) with specialist @subagents on demand
  • 900+ subagent markdown files organized by domain
  • 390+ helper scripts in .agents/scripts/
  • 69 slash commands for common workflows
<!-- AI-CONTEXT-END -->

Enterprise-Grade Quality & Security

Comprehensive DevOps framework with tried-and-tested service integrations, popular and trusted MCP servers, and enterprise-grade code quality monitoring and recommendations.

Security Notice

This framework provides agentic AI assistants with powerful infrastructure access. Use responsibly.

  • Capabilities: Execute commands, access credentials, modify infrastructure, interact with APIs
  • Your responsibility: Use trusted AI providers, rotate credentials regularly, monitor activity

Security Commands

aidevops security              # Run ALL checks (posture + hygiene + supply chain)
aidevops security posture      # Interactive security posture setup (gopass, gh auth, SSH)
aidevops security status       # Combined posture + hygiene summary
aidevops security scan         # Secret hygiene & supply chain scan only
aidevops security scan-pth     # Python .pth file audit (supply chain attack vector)
aidevops security scan-secrets # Plaintext credential locations only
aidevops security scan-deps    # Unpinned dependency check
aidevops security check        # Per-repo security posture assessment
aidevops security dismiss <id> # Dismiss a security advisory after taking action

Running aidevops security with no arguments is the single command that covers everything — user security posture, plaintext secret detection, supply chain IoC scanning, and active advisories.

Security advisories are delivered via aidevops update and shown in the session greeting until dismissed. The scanner never exposes secret values — only file locations and key names. All remediation commands should be run in a separate terminal, not inside AI chat sessions.
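
As a sketch of that privacy rule — reporting only file locations and key names, never values — a scanner core might look like the following. The patterns and output format are illustrative assumptions, not the actual aidevops scanner implementation.

```shell
# Hypothetical secret-hygiene scan: print "file:KEY_NAME" for credential-like
# assignments. The value after '=' is deliberately never printed.
scan_plaintext_keys() {
  local file="$1"
  grep -Eo '^[A-Z0-9_]*(KEY|TOKEN|SECRET|PASSWORD)[A-Z0-9_]*=' "$file" 2>/dev/null \
    | sed -e "s|^|$file:|" -e 's|=$||'
}
```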

Supply chain hardening: All Python dependencies are pinned to exact versions (==) to prevent malicious package upgrades. The .pth file auditor detects known supply chain attack indicators (e.g., the LiteLLM March 2026 PyPI compromise).
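
A minimal version of the unpinned-dependency rule can be sketched like this — it only illustrates the `==` pinning convention; the real `aidevops security scan-deps` logic is not shown here.

```shell
# Illustrative check: report requirements.txt lines that are not pinned with '=='.
# Comments and blank lines are ignored.
find_unpinned() {
  grep -Ev '^[[:space:]]*(#|$)' "$1" | grep -Ev '=='
}
```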

Quick Start

Installation Options

npm (recommended - verified provenance):

npm install -g aidevops && aidevops update

Note: npm suppresses postinstall output. The && aidevops update deploys agents to ~/.aidevops/agents/. The CLI will remind you if agents need updating.

Bun (fast alternative):

bun install -g aidevops && aidevops update

Homebrew (macOS/Linux):

brew install marcusquinn/tap/aidevops && aidevops update

Direct from source (aidevops.sh):

bash <(curl -fsSL https://aidevops.sh/install)

Manual (git clone):

git clone https://github.com/marcusquinn/aidevops.git ~/Git/aidevops
~/Git/aidevops/setup.sh

That's it! The setup script will:

  • Clone/update the repo to ~/Git/aidevops
  • Deploy agents to ~/.aidevops/agents/
  • Install the aidevops CLI command
  • Configure your AI assistants automatically
  • Offer to install Oh My Zsh (optional, opt-in) for enhanced shell experience
  • Guide you through recommended tools (Tabby, Zed, Git CLIs)
  • Ensure all PATH and alias changes work in both bash and zsh

New users: Start OpenCode and type /onboarding to configure your services interactively. OpenCode is the recommended tool for aidevops - all features, agents, and workflows are designed and tested for it first. The onboarding wizard will:

  • Explain what aidevops can do
  • Ask about your work to give personalized recommendations
  • Show which services are configured vs need setup
  • Guide you through setting up each service with links and commands

After installation, use the CLI:

aidevops status           # Check what's installed
aidevops doctor           # Detect duplicate installs and PATH conflicts
aidevops update           # Update framework + check registered projects
aidevops auto-update      # Manage automatic update polling (every 10 min)
aidevops init             # Initialize aidevops in any project
aidevops features         # List available features
aidevops repos            # List/add/remove registered projects
aidevops detect           # Scan for unregistered aidevops projects
aidevops upgrade-planning # Upgrade TODO.md/PLANS.md to latest templates
aidevops update-tools     # Check and update installed tools
aidevops uninstall        # Remove aidevops

Project tracking: When you run aidevops init, the project is automatically registered in ~/.config/aidevops/repos.json. Running aidevops update checks all registered projects for version updates.

Use aidevops in Any Project

Initialize aidevops features in any git repository:

cd ~/your-project
aidevops init                         # Enable all features
aidevops init planning                # Enable only planning
aidevops init planning,time-tracking  # Enable specific features

This creates:

  • .aidevops.json - Configuration with enabled features
  • .agents symlink → ~/.aidevops/agents/
  • TODO.md - Quick task tracking with time estimates
  • todo/PLANS.md - Complex execution plans
  • .beads/ - Task graph database (if beads enabled)

Available features: planning, git-workflow, code-quality, time-tracking, beads

Upgrade Planning Files

When aidevops templates evolve, upgrade existing projects to the latest format:

aidevops upgrade-planning           # Interactive upgrade with backup
aidevops upgrade-planning --dry-run # Preview changes without modifying
aidevops upgrade-planning --force   # Skip confirmation prompt

This preserves your existing tasks while adding TOON-enhanced parsing, dependency tracking, and better structure.

Automatic detection: aidevops update now scans all registered projects for outdated planning templates (comparing TOON meta version numbers) and offers to upgrade them in-place with backups.
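
The version comparison behind that detection can be sketched as follows. The `version:` field name and file layout here are assumptions for illustration, not the actual TOON meta format, and the comparison relies on GNU `sort -V`.

```shell
# Extract the first "version: X.Y" value from a planning file (assumed field name).
meta_version() {
  grep -m1 -Eo 'version: *[0-9]+(\.[0-9]+)*' "$1" | grep -Eo '[0-9]+(\.[0-9]+)*'
}

# True when installed version sorts below latest (GNU sort -V = version sort).
needs_upgrade() {
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
}
```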

Task Graph Visualization with Beads

Beads provides task dependency tracking and graph visualization:

aidevops init beads              # Enable beads (includes planning)

Task Dependencies:

- [ ] t001 First task
- [ ] t002 Second task blocked-by:t001
- [ ] t001.1 Subtask of t001

| Syntax | Meaning |
|---|---|
| blocked-by:t001 | Task waits for t001 to complete |
| blocks:t002 | This task blocks t002 |
| t001.1 | Subtask of t001 (hierarchical) |
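
To illustrate how the blocked-by: syntax resolves to ready tasks, here is a sketch parser — not the actual Beads sync code. A task counts as ready when it has no blocked-by: tag, or its blocker is already checked off.

```shell
# Illustrative: print IDs of unchecked tasks whose blockers (if any) are done.
ready_tasks() {
  local todo="$1"
  grep -E '^- \[ \] ' "$todo" | while read -r line; do
    blocker=$(printf '%s\n' "$line" | grep -Eo 'blocked-by:[a-z0-9.]+' | cut -d: -f2)
    if [ -z "$blocker" ] || grep -qE "^- \[x\] $blocker " "$todo"; then
      printf '%s\n' "$line" | awk '{print $4}'   # field 4 is the task ID
    fi
  done
}
```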

Commands:

| Command | Purpose |
|---|---|
| /ready | Show tasks with no open blockers |
| /list-verify | List verification queue (pending, passed, failed) |
| /sync-beads | Sync TODO.md/PLANS.md with Beads graph |
| bd list | List all tasks in Beads |
| bd ready | Show ready tasks (Beads CLI) |
| bd graph <id> | Show dependency graph for an issue |

Architecture: aidevops markdown files (TODO.md, PLANS.md) are the source of truth. Beads syncs from them for visualization.

Optional Viewers: Beyond the bd CLI, there are community viewers for richer visualization:

  • beads_viewer (Python TUI) - PageRank, critical path analysis
  • beads-ui (Web) - Live updates in browser
  • bdui (React/Ink TUI) - Modern terminal UI
  • perles (Rust TUI) - BQL query language

See .agents/tools/task-management/beads.md for complete documentation and installation commands.

Your AI assistant now has agentic access to 30+ service integrations.

OpenCode Anthropic OAuth (Built-in)

OpenCode includes Anthropic OAuth authentication natively — no API key needed. OAuth is covered by your Claude Pro/Max subscription at zero additional cost.

Authenticate via the pool (recommended):

aidevops model-accounts-pool add anthropic
# Opens browser OAuth flow — no API key required
# Restart OpenCode after adding

Or via the OpenCode TUI:

Open OpenCode → Ctrl+A → Select Anthropic → Login with Claude.ai → follow browser OAuth flow.

Note: opencode auth login prompts for an API key, not OAuth. Use the commands above for subscription-based OAuth access.

Benefits:

  • Zero cost for Claude Pro/Max subscribers (covered by subscription)
  • Automatic token refresh — no manual re-authentication needed
  • Multiple accounts — add more accounts to the pool for automatic rotation when one hits rate limits
  • Beta features enabled — extended thinking modes and latest features

Cursor Models via Pool Proxy

Access Cursor Pro models (Composer 2, Claude 4.6 Opus/Sonnet, GPT-5.x, Gemini 3.1 Pro) in OpenCode through a local gRPC proxy that translates OpenAI-compatible requests to Cursor's protobuf/HTTP2 protocol.

Setup:

# Add your Cursor account to the pool (reads from local Cursor IDE)
oauth-pool-helper.sh add cursor

# Restart OpenCode — Cursor models appear in Ctrl+T model picker

How it works:

  • Reads Cursor credentials from the local IDE state database
  • Starts a gRPC proxy that speaks Cursor's native protocol (not the cursor-agent CLI)
  • Discovers available models via gRPC and registers them as an OpenCode provider
  • Supports true streaming, tool calling, and automatic token refresh
  • Falls back gracefully if no Cursor accounts are in the pool

Benefits:

  • Zero additional cost for Cursor Pro subscribers
  • True streaming — responses stream as they arrive (not buffered)
  • Tool calling — Cursor's native MCP tool protocol works through the proxy
  • Model discovery — automatically detects all models available to your account
  • Pool rotation — multiple accounts with LRU rotation and 429 failover

Google AI Pool (Gemini CLI / Vertex AI)

Use your Google AI Pro, AI Ultra, or Workspace subscription for Gemini models. Tokens are injected as ADC bearer tokens that Gemini CLI, Vertex AI SDK, and the Gemini API pick up automatically.

Setup:

# Add your Google account to the pool (browser OAuth flow)
aidevops model-accounts-pool add google

# Restart OpenCode — token is injected as GOOGLE_OAUTH_ACCESS_TOKEN

Supported plans:

  • Google AI Pro (~$25/mo) — daily Gemini CLI limits
  • Google AI Ultra (~$65/mo) — higher daily limits
  • Google Workspace with Gemini add-on — enterprise daily limits

Isolation guarantee: Google auth failures never affect Anthropic/OpenAI/Cursor providers. A Google 429 or auth error only puts the Google pool into cooldown.

GitHub AI Agent Integration

Enable AI-powered issue resolution directly from GitHub. Comment /oc fix this on any issue and the AI creates a branch, implements the fix, and opens a PR.

Security-first design - The workflow includes:

  • Trusted users only (OWNER/MEMBER/COLLABORATOR)
  • ai-approved label required on issues before AI processing
  • Prompt injection pattern detection
  • Audit logging of all invocations
  • 15-minute timeout and rate limiting

Quick setup:

# 1. Install the OpenCode GitHub App
# Visit: https://github.com/apps/opencode-agent

# 2. Add API key secret
# Repository → Settings → Secrets → ANTHROPIC_API_KEY

# 3. Create required labels
gh label create "ai-approved" --color "0E8A16" --description "Issue approved for AI agent"
gh label create "security-review" --color "D93F0B" --description "Requires security review"

The secure workflow is included at .github/workflows/opencode-agent.yml.

Usage:

| Context | Command | Result |
|---|---|---|
| Issue (with ai-approved label) | /oc fix this | Creates branch + PR |
| Issue | /oc explain this | AI analyzes and replies |
| PR | /oc review this PR | Code review feedback |
| PR Files tab | /oc add error handling here | Line-specific fix |

See .agents/tools/git/opencode-github-security.md for the full security documentation.

Supported AI tool: OpenCode is the recommended and tested AI coding tool for aidevops. All features, agents, and workflows are designed and tested for OpenCode first. We recommend Claude models (Anthropic) for the best results across all agent tiers -- haiku for triage, sonnet for implementation, opus for complex reasoning.

Recommended stack:

  • OpenCode - The recommended AI coding agent. Powerful agentic TUI/CLI with native MCP support, Tab-based agent switching, LSP integration, plugin ecosystem, and excellent DX. All aidevops features are designed and tested for OpenCode first.
  • OpenCode Zen - Free tier of OpenCode with included models. Start working with AI straight away at no cost -- no API keys or subscriptions required.
  • Claude (Anthropic) - Our most-used and tested model provider. Claude haiku, sonnet, and opus deliver the best results across all aidevops agent tiers and workflows. Recommended for users who want the highest quality output.
  • Tabby - Recommended terminal. Colour-coded Profiles per project/repo, auto-syncs tab title with git repo/branch.
  • Zed - Recommended editor. High-performance with AI integration (use with the OpenCode Agent Extension).

Troubleshooting Auth

If you see "Anthropic Key Missing", "OpenAI Key Missing", or the model stops responding, run these commands from any terminal — no working model session required.

Step 1 — Check pool health

aidevops model-accounts-pool status       # counts: available / rate-limited / auth-error
aidevops model-accounts-pool check        # live token validity test per account

Step 2 — Fix based on what you see

| Symptom | Command |
|---|---|
| Account shows rate-limited | aidevops model-accounts-pool rotate anthropic |
| All accounts in cooldown | aidevops model-accounts-pool reset-cooldowns |
| Account shows auth-error | aidevops model-accounts-pool add anthropic (re-auth) |
| Pool is empty (no accounts) | aidevops model-accounts-pool add anthropic |
| Recently re-authed, still broken | aidevops model-accounts-pool assign-pending anthropic |
| Google Gemini CLI rate-limited | aidevops model-accounts-pool rotate google |
| Google token expired | aidevops model-accounts-pool add google (re-auth) |

Step 3 — If still broken, re-add the account

aidevops model-accounts-pool add anthropic     # Claude Pro/Max — opens browser OAuth
aidevops model-accounts-pool add openai        # ChatGPT Plus/Pro
aidevops model-accounts-pool add cursor        # Cursor Pro (reads from local IDE)
aidevops model-accounts-pool add google        # Google AI Pro/Ultra/Workspace — browser OAuth
aidevops model-accounts-pool import claude-cli # Import from existing Claude CLI auth

Restart OpenCode after any add, rotate, or reset-cooldowns to pick up the new credentials.

Full command reference

aidevops model-accounts-pool status            # Pool health at a glance
aidevops model-accounts-pool list              # Per-account detail + expiry
aidevops model-accounts-pool check             # Live API validity test
aidevops model-accounts-pool rotate [provider] # Switch to next available account NOW
aidevops model-accounts-pool reset-cooldowns   # Clear all rate-limit cooldowns
aidevops model-accounts-pool assign-pending <p> # Assign stranded pending token
aidevops model-accounts-pool remove <p> <email> # Remove an account

Note: reset-cooldowns clears cooldowns in the pool file. If OpenCode is already running, the in-memory token endpoint cooldown is only cleared when OpenCode restarts or when you use the /model-accounts-pool reset-cooldowns slash command inside an active session.

If you prefer guided help: Open OpenCode with a free model (OpenCode Zen includes free models that don't require any API key or subscription) and run the auth troubleshooting agent by typing:

@auth-troubleshooting

The agent contains the full recovery flow and symptom table. Free models work fine for this — no paid subscription needed.

Terminal Tab Title Sync

Your terminal tab/window title automatically shows repo/branch context when working in git repositories. This helps identify which codebase and branch you're working on across multiple terminal sessions.

Supported terminals: Tabby, cmux, iTerm2, Kitty, Alacritty, WezTerm, Hyper, and most xterm-compatible terminals.

How it works: The pre-edit-check.sh script's primary role is enforcing git workflow protection (blocking edits on main/master branches). As a secondary, non-blocking action, it updates the terminal title via escape sequences. No configuration needed - it's automatic.

Example format: {repo}/{branch-type}/{description}

See .agents/tools/terminal/terminal-title.md for customization options.
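
The mechanism is the standard xterm OSC 0 title escape sequence. A minimal sketch — not the actual pre-edit-check.sh code, and the title-building helper here is hypothetical:

```shell
# Build a "{repo}/{branch}" title string (branch already includes type/description,
# e.g. "feat/login-page"), then emit the xterm OSC 0 set-title sequence.
format_title() {
  printf '%s/%s' "$1" "$2"
}
set_title() {
  printf '\033]0;%s\007' "$(format_title "$1" "$2")"
}
```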

Companion tool:

  • claude-code CLI - Called from within OpenCode for sub-tasks and headless dispatch

Collaborator compatibility: Projects initialized with aidevops init include pointer files (.cursorrules, .windsurfrules, etc.) that reference AGENTS.md, helping collaborators using other editors find project context. aidevops does not install into or configure those tools.

Repo courtesy files: aidevops init scaffolds standard repo files if they don't exist: README.md, LICENCE (MIT), CHANGELOG.md, CONTRIBUTING.md, SECURITY.md, CODE_OF_CONDUCT.md. Author name and email are auto-detected from git config. Existing files are never overwritten.

Core Capabilities

AI-First Infrastructure Management:

  • SSH server access, remote command execution, API integrations
  • DNS management, application deployment, email monitoring
  • Git platform management, domain purchasing, setup automation
  • WordPress management, credential security, code auditing

Autonomous Orchestration:

  • Pulse supervisor - Autonomous AI supervisor runs every 2 minutes via launchd — merges ready PRs, dispatches workers, kills stuck processes, detects orphaned PRs, syncs TODO state with GitHub, triages quality findings, and advances missions. No human in the loop
  • Missions - Multi-day autonomous projects: /mission scopes a high-level goal into milestones and features. The pulse dispatches workers, validates milestones, tracks budget, and advances through the project automatically (mission-dashboard-helper.sh)
  • Multi-model verification - Destructive operations (force push, production deploy, data migration) are verified by a second AI model from a different provider before execution. Different providers have different failure modes, so correlated hallucinations are rare
  • Supervisor - SQLite state machine dispatches tasks to parallel AI agents with retry cycles, batch management, and cron scheduling
  • Runners - Named headless agent instances with persistent identity, instructions, and memory namespaces
  • /runners command - Batch dispatch from task IDs, PR URLs, or descriptions with concurrency control and progress monitoring
  • Mailbox - SQLite-backed inter-agent messaging for coordination across parallel sessions
  • Worktree isolation - Each agent works on its own branch in a separate directory, no merge conflicts
  • Budget tracking - Append-only cost log (budget-tracker-helper.sh) with burn-rate analysis and /budget-analysis command for model routing decisions
  • Observability - LLM request capture plugin (observability.mjs) for cost tracking, performance analysis, and debugging
  • Rate limits - Per-provider rate limit configuration (rate-limits.json) with throttle-risk warnings

Project Intelligence:

  • Bundles - Project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. 7 built-in bundles (web-app, library, cli-tool, content-site, infrastructure, agent, schema) with auto-detection from marker files (bundle-helper.sh)
  • TTSR rules - Soft rule engine (ttsr-rule-loader.sh) with .agents/rules/ directory for AI output correction (e.g., no-edit-on-main, no-glob-for-discovery)
  • Cross-review - /cross-review dispatches the same prompt to multiple AI models in parallel, diffs results, and optionally auto-scores via a judge model
  • Local models - Run AI models locally via llama.cpp for free, private, offline inference (local-model-helper.sh) with HuggingFace GGUF model management
  • Tech stack lookup - /tech-stack detects technology stacks of URLs or finds sites using specific technologies (Wappalyzer, httpx, nuclei, BuiltWith)
  • IP reputation - ip-reputation-helper.sh checks IP addresses against multiple reputation databases (Spamhaus, ProxyCheck, AbuseIPDB) before VPS purchase or deployment

Conversational Memory & Entity System:

  • Entity memory - Cross-channel relationship continuity (entity-helper.sh): people, agents, and services tracked across Matrix, SimpleX, email, and CLI with versioned profiles
  • Conversational memory - Per-conversation context management (conversation-helper.sh): idle detection, immutable summaries, tone profile extraction
  • Three-layer architecture - Layer 0 (immutable raw log), Layer 1 (tactical summaries), Layer 2 (strategic entity profiles) in shared SQLite

Communications:

  • SimpleX bot - Channel-agnostic gateway with SimpleX Chat as first adapter for AI agent dispatch (simplex-bot/)
  • Matterbridge - Multi-platform chat bridge connecting 20+ platforms including Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP (matterbridge-helper.sh)
  • Localdev - Local development environment manager with dnsmasq, Traefik, mkcert for production-like .local domains with HTTPS (localdev-helper.sh)

MCP Toolkit:

  • MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers (mcporter npm package)
  • OpenAPI Search - Search and explore any OpenAPI specification via MCP (zero install, Cloudflare Worker)
  • Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens

Unified Interface:

  • Standardized commands across all providers
  • Automated SSH configuration and multi-account support for all services
  • Security-first design with comprehensive logging, code quality reviews, and continual feedback-based improvement

Quality Control & Monitoring:

  • Multi-Platform Analysis: SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Gemini Code Assist, Snyk
  • Performance Auditing: PageSpeed Insights, Lighthouse, WebPageTest, Core Web Vitals (/performance command)
  • SEO Toolchain: 40+ SEO subagents including Semrush, Ahrefs, ContentKing, Screaming Frog, Bing Webmaster Tools, Rich Results Test, programmatic SEO, analytics tracking, schema validation, content analysis, keyword mapping, and AI readiness
  • SEO Debugging: Open Graph validation, favicon checker, social preview testing
  • Email Deliverability: SPF/DKIM/DMARC/MX validation, blacklist checking
  • Uptime Monitoring: Updown.io integration for website and SSL monitoring

Imported Skills

aidevops includes curated skills imported from external sources. Skills support automatic update tracking:

| Skill | Source | Description |
|---|---|---|
| cloudflare-platform | dmmulroy/cloudflare-skill | 60 Cloudflare products: Workers, Pages, D1, R2, KV, Durable Objects, AI, networking, security |
| heygen | heygen-com/skills | AI avatar video creation API: avatars, voices, video generation, streaming, webhooks |
| remotion | remotion-dev/skills | Programmatic video creation with React, animations, rendering |
| video-prompt-design | snubroot/Veo-3-Meta-Framework | AI video prompt engineering: 7-component meta prompt framework for Veo 3 |
| animejs | animejs.com | JavaScript animation library patterns and API (via Context7) |
| caldav-calendar | ClawdHub | CalDAV calendar sync via vdirsyncer + khal (iCloud, Google, Fastmail, Nextcloud) |
| proxmox-full | ClawdHub | Complete Proxmox VE hypervisor management via REST API |

CLI Commands:

aidevops skill add <owner/repo>    # Import a skill from GitHub
aidevops skill add clawdhub:<slug> # Import a skill from ClawdHub
aidevops skill list                # List imported skills
aidevops skill check               # Check for upstream updates
aidevops skill update [name]       # Update specific or all skills
aidevops skill scan [name]         # Security scan skills (Cisco Skill Scanner)
aidevops skill remove <name>       # Remove an imported skill

Skills are registered in ~/.aidevops/agents/configs/skill-sources.json with upstream tracking for update detection.

Security Scanning:

Imported skills are automatically security-scanned using Cisco Skill Scanner when installed. Scanning runs on both initial import and updates -- pulling a new version of a skill triggers the same security checks as the first import. CRITICAL/HIGH findings block the operation; MEDIUM/LOW findings warn but allow. Telemetry is disabled - no data is sent to third parties.

When a VirusTotal API key is configured (aidevops secret set VIRUSTOTAL_MARCUSQUINN), an advisory second layer scans file hashes against 70+ AV engines and checks domains/URLs referenced in skill content. VT scans are non-blocking -- the Cisco scanner remains the security gate.

| Scenario | Security scan runs? | CRITICAL/HIGH blocks? |
|---|---|---|
| aidevops skill add <source> | Yes | Yes |
| aidevops skill update [name] | Yes | Yes |
| aidevops skill add <source> --force | Yes | Yes |
| aidevops skill add <source> --skip-security | Yes (reports only) | No (warns) |
| aidevops skill scan [name] | Yes (standalone) | Report only |

The --force flag only controls file overwrite behavior (replacing an existing skill without prompting). To bypass security blocking, use --skip-security explicitly -- this separation ensures that routine updates and re-imports never silently skip security checks.
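
That separation can be expressed as a small decision function. The severity levels come from this document; the function itself is an illustrative sketch, not the actual CLI implementation.

```shell
# Illustrative gate: returns 0 (block) only for CRITICAL/HIGH findings when
# --skip-security was NOT passed. --force never reaches this decision at all.
should_block() {
  local severity="$1" skip="$2"   # skip = "1" when --skip-security was passed
  case "$severity" in
    CRITICAL|HIGH) [ "$skip" = "1" ] && return 1 || return 0 ;;
    *) return 1 ;;                # MEDIUM/LOW warn but never block
  esac
}
```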

Scan results are logged to .agents/SKILL-SCAN-RESULTS.md automatically on each batch scan and skill import, providing a transparent audit trail of security posture over time.

Browse community skills: skills.sh | ClawdHub | Specification: agentskills.io

Reference:

Agent Sources (Private Repos)

Sync agents from private Git repositories into the framework. Private repos keep their own agents, helper scripts, and slash commands — aidevops sources sync deploys them alongside the core agents.

aidevops sources add ~/Git/my-private-agents     # Register a local repo
aidevops sources add-remote git@github.com:u/r.git  # Clone and register a remote repo
aidevops sources list                             # List configured sources
aidevops sources sync                             # Sync all sources
aidevops sources remove my-private-agents         # Remove a source

How it works: Private repos contain a .agents/ directory with agent subdirectories. Agents with mode: primary in their frontmatter are symlinked to the agents root for auto-discovery as primary agent tabs. Markdown files with agent: frontmatter are deployed as /slash commands. All sources sync automatically during aidevops update.
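As a sketch, the per-file deployment decision during a sync might look like this (the frontmatter field names mode: and agent: are from the text above; the classification labels are illustrative):

```shell
# Sketch of the per-file deployment decision during a sources sync.
# Frontmatter fields (mode:, agent:) are from the docs; the labels
# returned here are illustrative, not the helper's real output.
classify_agent_file() {
  file="$1"
  if grep -q '^mode: primary' "$file"; then
    echo "symlink-as-primary-agent"    # auto-discovered as a primary agent tab
  elif grep -q '^agent:' "$file"; then
    echo "deploy-as-slash-command"
  else
    echo "leave-in-place"
  fi
}
```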

Reference: .agents/aidevops/agent-sources.md

Agent Design Patterns

aidevops implements proven agent design patterns identified by Lance Martin (LangChain).

| Pattern | Description | aidevops Implementation |
|---|---|---|
| Give Agents a Computer | Filesystem + shell for persistent context | ~/.aidevops/.agent-workspace/, 390+ helper scripts |
| Multi-Layer Action Space | Few tools, push actions to computer | Per-agent MCP filtering (~12-20 tools each) |
| Knowledge Graph Routing | Indexed, cross-referenced agents instead of isolated skills | subagent-index.toon maps 900+ agents by domain, purpose, and dependency — agents discover related context through the graph, not just their own file |
| Progressive Disclosure | Load context on-demand | Subagent routing with content summaries, YAML frontmatter, read-on-demand |
| Offload Context | Write results to filesystem | .agent-workspace/work/[project]/ for persistence |
| Cache Context | Prompt caching for cost | Stable instruction prefixes |
| Isolate Context | Sub-agents with separate windows | Subagent files with specific tool permissions |
| Multi-Agent Orchestration | Coordinate parallel agents | TOON mailbox, agent registry, supervisor dispatch |
| Compaction Resilience | Preserve context across compaction | OpenCode plugin injects dynamic state at compaction time |
| Ralph Loop | Iterative execution until complete | /full-loop, full-loop-helper.sh |
| Evolve Context | Learn from sessions | /remember, /recall with SQLite FTS5 + opt-in semantic search |
| Pattern Tracking | Learn what works/fails | /patterns command, memory-helper.sh |
| Token-Efficient Serialisation | Minimise context overhead for structured data | TOON format — 20-60% token reduction vs JSON/YAML for agent indexes, registries, and data exchange |
| Cost-Aware Routing | Match model to task complexity | model-routing.md with 7-tier guidance, /route command |
| Model Comparison | Compare models side-by-side | /compare-models (live data), /compare-models-free (offline) |
| Response Scoring | Evaluate actual model outputs | /score-responses with structured criteria |

Key insight: Context is a finite resource with diminishing returns. aidevops treats every token as precious - loading only what's needed, when it's needed.

See .agents/aidevops/architecture.md for detailed implementation notes and references.

Multi-Agent Orchestration

Run multiple AI agents in parallel on separate branches, coordinated through a lightweight mailbox system. Each agent works independently in its own git worktree while the supervisor manages task distribution and status reporting.

Architecture:

Supervisor (pulse loop)
ā”œā”€ā”€ Agent Registry (TOON format - who's active, what branch, idle/busy)
ā”œā”€ā”€ Mailbox System (SQLite WAL-mode, indexed queries)
│   ā”œā”€ā”€ task_assignment → worker inbox
│   ā”œā”€ā”€ status_report → coordinator outbox
│   └── broadcast → all agents
└── Model Routing (tier-based: haiku/sonnet/opus/flash/pro)

Key components:

| Component | Script | Purpose |
|---|---|---|
| Mailbox | mail-helper.sh | SQLite-backed inter-agent messaging (send, check, broadcast, archive) |
| Supervisor | supervisor-helper.sh | Autonomous multi-task orchestration with SQLite state machine, batches, retry cycles, cron scheduling, auto-pickup from TODO.md |
| Registry | mail-helper.sh register | Agent registration with role, branch, worktree, heartbeat |
| Model routing | model-routing.md, /route | Cost-aware 7-tier routing guidance (local/haiku/flash/sonnet/pro/opus/grok) |
| Budget tracking | budget-tracker-helper.sh | Append-only cost log for model routing decisions |
| Observability | observability.mjs plugin | LLM request capture for cost tracking and performance analysis |

How it works:

  1. Each agent registers on startup (mail-helper.sh register --role worker)
  2. Supervisor runs periodic pulses (supervisor-helper.sh pulse)
  3. Pulse collects status reports, dispatches queued tasks to idle workers
  4. Agents send completion reports back via mailbox
  5. SQLite WAL mode + busy_timeout handles concurrent access (79x faster than previous file-based system)
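A minimal sketch of that SQLite setup (the table name and columns here are illustrative, not the real mailbox schema):

```shell
# Illustrative mailbox setup: WAL journal + busy_timeout for safe
# concurrent access by parallel agents. Schema is hypothetical.
db=$(mktemp -u).db
sqlite3 "$db" <<'SQL'
PRAGMA journal_mode=WAL;
PRAGMA busy_timeout=5000;
CREATE TABLE messages (
  id        INTEGER PRIMARY KEY,
  recipient TEXT NOT NULL,
  kind      TEXT NOT NULL,  -- task_assignment | status_report | broadcast
  body      TEXT
);
CREATE INDEX idx_recipient ON messages(recipient);
INSERT INTO messages (recipient, kind, body)
  VALUES ('worker-1', 'task_assignment', 'fix failing CI');
SQL
sqlite3 "$db" "SELECT body FROM messages WHERE recipient = 'worker-1';"
```

WAL mode lets readers proceed while a writer holds the lock, and busy_timeout makes contending writers wait rather than fail immediately, which is what makes concurrent agent access safe.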

Compaction plugin (.agents/plugins/opencode-aidevops/): When OpenCode compacts context (at ~200K tokens), the plugin injects current session state - agent registry, pending mailbox messages, git context, and relevant memories - ensuring continuity across compaction boundaries.

Custom system prompt (.agents/prompts/build.txt): Based on upstream OpenCode with aidevops-specific overrides for tool preferences, professional objectivity, and per-model reinforcements for weaker models.

Subagent index (.agents/subagent-index.toon): Compressed TOON routing table listing all agents, subagents, workflows, and scripts with model tier assignments - enables fast agent discovery without loading full markdown files.

Autonomous Orchestration & Parallel Agents

Why this matters: Long-running tasks -- batch PR reviews, multi-site audits, large refactors, multi-day feature projects -- are where AI agents deliver the most value. Instead of babysitting one task at a time, the supervisor dispatches work to parallel agents, each in its own git worktree, with automatic retry, progress tracking, and batch completion reporting.

Pulse Supervisor: Autonomous AI Operations

The pulse is the heartbeat of aidevops — an autonomous AI supervisor that runs every 2 minutes via launchd. There is no human at the terminal. It manages the entire development pipeline across all repos registered with pulse: true.

What it does each cycle:

| Phase | Action |
|---|---|
| Capacity check | Circuit breaker, dynamic worker slots calculated from available RAM |
| Merge ready PRs | Green CI + no blocking reviews → squash merge (free — no worker slot needed) |
| Fix failing PRs | Dispatch a worker to fix CI failures or address review feedback |
| Detect stuck work | PRs open 6+ hours with no activity → flag or close and re-file |
| Dispatch workers | Route open issues to available worker slots, respecting priority and blocked-by: dependencies |
| Advance missions | Check active multi-day missions, dispatch features, validate milestones, track budget |
| Triage quality | Read daily quality sweep findings (ShellCheck, SonarCloud, Codacy, CodeRabbit), create issues for actionable findings |
| Sync TODOs | Create GitHub issues for unsynced TODO entries, commit ref changes |
| Kill stuck workers | Workers running 3+ hours with no PR are killed to free slots |
| Detect orphaned PRs | Open PRs with no active worker and no activity for 6+ hours are flagged for re-dispatch |

Operational intelligence:

  • Struggle-ratio — computes messages / max(1, commits) for each active worker. High ratio (>30) with >30 min elapsed and zero commits flags the worker as "struggling". Ratio >50 after 1 hour flags "thrashing". Informational signal — the supervisor LLM decides the action (kill, wait, re-dispatch with more context)
  • Circuit breaker — prevents cascading failures by tracking success/failure rates and tripping when error rate exceeds threshold
  • Dynamic concurrency — worker slot count adapts to available RAM, not a hardcoded constant
  • Stale assignment recovery — tasks assigned to workers that died (no active process, no PR, 3+ hours stale) are automatically unassigned and made available for re-dispatch
  • Priority ordering — green PRs (free merge) > failing PRs (closer to done) > high-priority/bug issues > active mission features > product repos > smaller tasks > oldest
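The struggle-ratio heuristic above can be sketched directly (thresholds taken from the text; remember the real supervisor treats the label as an advisory signal for the LLM, not a mechanical trigger):

```shell
# Struggle-ratio sketch: messages / max(1, commits), with the
# thresholds described above. Output is an advisory label only.
classify_worker() {
  messages="$1"; commits="$2"; elapsed_min="$3"
  denom=$commits
  [ "$denom" -lt 1 ] && denom=1
  ratio=$(( messages / denom ))
  if [ "$ratio" -gt 50 ] && [ "$elapsed_min" -gt 60 ]; then
    echo "thrashing"
  elif [ "$ratio" -gt 30 ] && [ "$elapsed_min" -gt 30 ] && [ "$commits" -eq 0 ]; then
    echo "struggling"
  else
    echo "ok"
  fi
}
```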

The pulse is an LLM, not a script. It reads issue bodies, assesses context, and uses judgment. When it encounters something unexpected — an issue body that says "completed", a task with no clear description, a label that doesn't match reality — it handles it the way a competent human manager would.

# Pulse runs automatically via launchd (every 2 minutes)
# Manual trigger:
opencode run "/pulse"

See: .agents/scripts/commands/pulse.md for the full supervisor specification.

Missions: Multi-Day Autonomous Projects

Missions are the highest-level orchestration primitive — autonomous multi-day projects that break a high-level goal into milestones, features, and validation criteria. The pulse supervisor advances them automatically.

# Scope a mission interactively
/mission "Redesign the landing pages for mobile-first with A/B testing"

How missions work:

  1. /mission scopes the goal into milestones with features and acceptance criteria
  2. Each feature becomes a TODO entry tagged mission:mNNN with a GitHub issue
  3. The pulse dispatches features as regular workers (respecting MAX_WORKERS)
  4. When all features in a milestone complete, the pulse dispatches a validation worker to verify integration
  5. Passed milestones advance automatically — the next milestone's features are dispatched
  6. Budget tracking pauses the mission if any category exceeds the alert threshold (default 80%)

Two execution modes:

| Mode | Workflow | Best for |
|---|---|---|
| Full | Worktree + PR per feature, standard review flow | Production code, collaborative projects |
| POC | Direct commits, skip ceremony | Prototypes, experiments, proof-of-concept |

Mission state is tracked in a JSON file committed to the repo. Each pulse cycle reads the state, acts on it, and commits updates — so any session (or the next pulse) can pick up where the last one left off.
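For illustration, a pulse cycle reading that state might look like this (the field names are assumptions; the real mission schema is defined by the orchestrator):

```shell
# Hypothetical mission-state read; field names are illustrative,
# not the orchestrator's real schema.
state='{"mission":"m001","milestone":2,"features_done":["f1","f2"],"budget_used_pct":41}'
printf '%s' "$state" |
  jq -r '"\(.mission): milestone \(.milestone), \(.features_done | length) features done, budget \(.budget_used_pct)%"'
```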

See: .agents/workflows/mission-orchestrator.md for the full orchestrator specification, .agents/scripts/commands/dashboard.md for the mission progress dashboard.

Multi-Model Verification: Cross-Provider Safety

High-stakes operations are verified by a second AI model from a different provider before execution. This catches single-model hallucinations before destructive operations cause irreversible damage.

When verification triggers:

| Risk Level | Examples | Action |
|---|---|---|
| Critical | git push --force to main, DROP DATABASE, production deploy | Blocked unless second model agrees |
| High | Force push to feature branch, data migration, secret exposure | Warned, verification recommended |
| Medium | Bulk file deletion, config changes | Logged |
| Low | Normal edits, test runs | No verification |

How it works:

  1. pre-edit-check.sh screens operations against the high-stakes taxonomy
  2. For critical/high operations, verify-operation-helper.sh sends the operation context to a second model (different provider than the primary)
  3. The verifier independently assesses whether the operation is safe
  4. On disagreement, the operation is blocked (critical) or warned (high)
  5. All verification decisions are logged for audit
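Steps 3-4 reduce to a small decision rule. This is a sketch only; the safe/unsafe verdict strings are illustrative, not the helper's actual output format:

```shell
# Sketch of the block/warn decision for steps 3-4 above.
# Verdict strings are illustrative, not the helper's real output.
decide() {
  risk="$1"; verifier_verdict="$2"
  if [ "$verifier_verdict" = "safe" ]; then
    echo "proceed"
  elif [ "$risk" = "critical" ]; then
    echo "block"
  else
    echo "warn"
  fi
}
```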

Why cross-provider? Same-provider models share training data and failure modes. A Claude hallucination is unlikely to be reproduced by Gemini or GPT, and vice versa. The verification uses the cheapest model tier (haiku-equivalent) — cost is minimal per check.

Configuration: Per-repo via .agents/reference/high-stakes-operations.md. Opt-out with VERIFY_ENABLED=false (not recommended).

See: .agents/tools/verification/parallel-verify.md for the verification agent specification.

Project Bundles: Auto-Configuration

Bundles are project-type presets that auto-configure model tiers, quality gates, and agent routing per repo. Instead of manually configuring each project, bundles detect what kind of project you're working on and apply sensible defaults.

Built-in bundles:

| Bundle | Auto-detected by | Model default | Quality gates | Agent routing |
|---|---|---|---|---|
| web-app | package.json + framework markers | sonnet | Full (lint, test, build, a11y) | Build+ default |
| library | package.json with main/exports | sonnet | Full + API docs check | Build+ default |
| cli-tool | bin field in package.json | sonnet | ShellCheck, test | Build+ default |
| content-site | CMS markers, wp-config.php | haiku | Lighthouse, SEO | Marketing for content tasks |
| infrastructure | Dockerfile, terraform/, ansible/ | sonnet | ShellCheck, security scan | Build+ default |
| agent | AGENTS.md, .agents/ | opus | Agent review, prompt quality | Build+ default |

Resolution priority: Explicit bundle field in repos.json > .aidevops.json project config > auto-detection from marker files.
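The priority chain amounts to a first-non-empty lookup. A sketch (the fallback value here is an assumption, not documented behaviour):

```shell
# Sketch of the bundle resolution order: explicit repos.json entry,
# then .aidevops.json project config, then marker-file auto-detection.
resolve_bundle() {
  for candidate in "$1" "$2" "$3"; do
    if [ -n "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  echo "unknown"   # assumed result when nothing matches
}
```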

CLI:

bundle-helper.sh detect <repo-path>    # Auto-detect bundle type
bundle-helper.sh resolve <repo-path>   # Show resolved config (with overrides)
bundle-helper.sh show <bundle-name>    # Show bundle defaults
bundle-helper.sh list                  # List all available bundles

See: .agents/bundles/ for bundle definitions, .agents/scripts/bundle-helper.sh for the CLI.

Parallel Agents & Headless Dispatch

Run multiple AI sessions concurrently with isolated contexts. Named runners provide persistent agent identities with their own instructions and memory.

| Feature | Description |
|---|---|
| Headless dispatch | opencode run for one-shot tasks, opencode serve + --attach for warm server |
| Runners | Named agent instances with per-runner AGENTS.md, config, and run logs (runner-helper.sh) |
| Session management | Resume sessions with -s <id> or -c, fork with SDK |
| Memory namespaces | Per-runner memory isolation with shared access when needed |
| SDK orchestration | @opencode-ai/sdk for TypeScript parallel dispatch via Promise.all |
| Matrix integration | Chat-triggered dispatch via self-hosted Matrix (optional) |

# Create a named runner
runner-helper.sh create code-reviewer --description "Reviews code for security and quality"

# Dispatch a task (one-shot)
runner-helper.sh run code-reviewer "Review src/auth/ for vulnerabilities"

# Dispatch against warm server (faster, no MCP cold boot)
opencode serve --port 4096 &
runner-helper.sh run code-reviewer "Review src/auth/" --attach http://localhost:4096

# Parallel dispatch via CLI
opencode run --attach http://localhost:4096 --title "Review" "Review src/auth/" &
opencode run --attach http://localhost:4096 --title "Tests" "Generate tests for src/utils/" &
wait

# List runners and status
runner-helper.sh list
runner-helper.sh status code-reviewer

Architecture:

OpenCode Server (opencode serve)
ā”œā”€ā”€ Session 1 (runner/code-reviewer)
ā”œā”€ā”€ Session 2 (runner/seo-analyst)
└── Session 3 (scheduled-task)
         ↑
    HTTP API / SSE Events
         ↑
ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”
│  Dispatch Layer │ ← runner-helper.sh, cron, Matrix bot, SDK
ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜

Example runner templates: code-reviewer, seo-analyst - copy and customize for your own runners.

Matrix bot dispatch (optional): Bridge Matrix chat rooms to runners for chat-triggered AI. Each room maintains persistent conversation context via SQLite -- on idle timeout, the session is compacted (summarised) and stored, so the next message resumes with full context.

# Setup Matrix bot (interactive wizard)
matrix-dispatch-helper.sh setup

# Map rooms to runners (each room = separate session)
matrix-dispatch-helper.sh map '!dev-room:server' code-reviewer
matrix-dispatch-helper.sh map '!seo-room:server' seo-analyst

# Start bot (daemon mode)
matrix-dispatch-helper.sh start --daemon

# In Matrix room: "!ai Review src/auth.ts for security issues"

# Manage sessions
matrix-dispatch-helper.sh sessions list
matrix-dispatch-helper.sh sessions stats

See: headless-dispatch.md for full documentation including parallel vs sequential decision guide, SDK examples, CI/CD integration, and custom agent configuration. matrix-bot.md for Matrix bot setup including Cloudron Synapse guide and session persistence.

Self-Improving Agent System

Agents that learn from experience and contribute improvements:

| Phase | Description |
|---|---|
| Review | Analyze memory for success/failure patterns (memory-helper.sh) |
| Refine | Generate and apply improvements to agents |
| Test | Validate in isolated OpenCode sessions |
| PR | Contribute to community with privacy filtering |

Safety guardrails:

  • Worktree isolation for all changes
  • Human approval required for PRs
  • Mandatory privacy filter (secretlint + pattern redaction)
  • Dry-run default, explicit opt-in for PR creation
  • Audit log to memory

Agent Testing Framework

Test agent behavior through isolated AI sessions with automated validation:

# Create a test suite
agent-test-helper.sh create my-tests

# Run tests (auto-detects claude or opencode CLI)
agent-test-helper.sh run my-tests

# Quick single-prompt test
agent-test-helper.sh run-one "What tools do you have?" --expect "bash"

# Before/after comparison for agent changes
agent-test-helper.sh baseline my-tests   # Save current behavior
# ... modify agents ...
agent-test-helper.sh compare my-tests    # Detect regressions

Test suites are JSON files with prompts and validation rules (expect_contains, expect_not_contains, expect_regex, min_length, max_length). Results are saved for historical tracking.
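A single rule check is straightforward; here is a sketch of expect_contains (the other rule types follow the same shape):

```shell
# Sketch of the expect_contains validation rule from the list above.
check_expect_contains() {
  response="$1"; expected="$2"
  case "$response" in
    *"$expected"*) echo "pass" ;;
    *)             echo "fail" ;;
  esac
}
```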

See: agent-testing.md subagent for full documentation and example test suites.

Voice Bridge - Talk to Your AI Agent

Speak naturally to your AI coding agent and hear it respond. The voice bridge connects your microphone to OpenCode via a fast local pipeline -- ask questions, give instructions, execute tasks, all by voice.

Mic → Silero VAD → Whisper MLX (1.4s) → OpenCode (4-6s) → Edge TTS (0.4s) → Speaker

Round-trip: ~6-8 seconds on Apple Silicon. The agent can edit files, run commands, create PRs, and confirm what it did -- all via voice.

Quick start:

# Start a voice conversation (installs deps automatically)
voice-helper.sh talk

# Choose engines and voice
voice-helper.sh talk whisper-mlx edge-tts en-GB-SoniaNeural
voice-helper.sh talk whisper-mlx macos-say    # Offline mode

# Utilities
voice-helper.sh devices      # List audio input/output devices
voice-helper.sh voices       # List available TTS voices
voice-helper.sh benchmark    # Test STT/TTS/LLM speeds
voice-helper.sh status       # Check component availability

Features:

| Feature | Details |
|---|---|
| Swappable STT | whisper-mlx (fastest on Apple Silicon), faster-whisper (CPU) |
| Swappable TTS | edge-tts (best quality), macos-say (offline), facebookMMS (local) |
| Voice exit | Say "that's all", "goodbye", "all for now" to end naturally |
| STT correction | LLM sanity-checks transcription errors before acting (e.g. "test.txte" → "test.txt") |
| Task execution | Full tool access -- edit files, git operations, run commands |
| Session handback | Conversation transcript output on exit for calling agent context |
| TUI compatible | Graceful degradation when launched from AI tool's Bash (no tty) |

How it works: The bridge uses opencode run --attach to connect to a running OpenCode server for low-latency responses (~4-6s vs ~30s cold start). It automatically starts opencode serve if not already running.

Requirements: Apple Silicon Mac (for whisper-mlx), Python 3.10+, internet (for edge-tts). The voice helper installs Python dependencies automatically into the S2S venv.

Speech-to-Speech Pipeline (Advanced)

For advanced use cases (custom LLMs, server/client deployment, multi-language, phone integration), the full huggingface/speech-to-speech pipeline is also available:

speech-to-speech-helper.sh setup              # Install pipeline
speech-to-speech-helper.sh start --local-mac  # Run on Apple Silicon
speech-to-speech-helper.sh start --cuda       # Run on NVIDIA GPU
speech-to-speech-helper.sh start --server     # Server mode (remote clients)

Supported languages: English, French, Spanish, Chinese, Japanese, Korean (auto-detect or fixed).

Additional voice methods:

| Method | Description |
|---|---|
| VoiceInk + Shortcut | macOS: transcription → OpenCode API → response |
| iPhone Shortcut | iOS: dictate → HTTP → speak response |
| Pipecat STS | Full voice pipeline: Soniox STT → AI → Cartesia TTS |

See: speech-to-speech.md for full component options, CLI parameters, and integration patterns (Twilio phone, video narration, voice-driven DevOps).

Scheduled Agent Tasks

Cron-based agent dispatch for automated workflows:

# Example: Daily SEO report at 9am
0 9 * * * ~/.aidevops/agents/scripts/runner-helper.sh run "seo-analyst" "Generate daily SEO report"

See: TODO.md tasks t109-t118 for implementation status.

Requirements

Recommended Hardware

aidevops itself is lightweight (shell scripts + markdown), but AI model workloads benefit from capable hardware:

| Tier | Machine | CPU | RAM | GPU | Best For |
|---|---|---|---|---|---|
| Minimum | Any modern laptop | 4+ cores | 8GB | None | Framework only, cloud AI APIs |
| Recommended | Mac Studio / desktop | Apple M1+ or 8+ cores | 16GB+ | MPS (Apple) or NVIDIA 8GB+ | Local voice, browser automation, dev servers |
| Power User | Workstation | 8+ cores | 32GB+ | NVIDIA 24GB+ VRAM | Full voice pipeline, local LLMs, parallel agents |
| Server | Cloud GPU | Any | 16GB+ | A100 / H100 | Production voice, multi-user, batch processing |

Cloud GPU providers for on-demand GPU access: NVIDIA Cloud, Vast.ai, RunPod, Lambda. See .agents/tools/infrastructure/cloud-gpu.md for the full deployment guide (SSH setup, Docker, model caching, cost optimization).

Note: Most aidevops features (infrastructure management, SEO, code quality, Git workflows) require no GPU. GPU is only needed for local AI model inference (voice pipeline, vision models, local LLMs).

Software Dependencies

# Install dependencies (auto-detected by setup.sh)
brew install sshpass jq curl mkcert dnsmasq fd ripgrep  # macOS
sudo apt-get install sshpass jq curl dnsmasq fd-find ripgrep  # Ubuntu/Debian

# Generate SSH key
ssh-keygen -t ed25519 -C "your-email@domain.com"

File Discovery Tools

AI agents use fast file discovery tools for efficient codebase navigation:

| Tool | Purpose | Speed |
|---|---|---|
| fd | Fast file finder (replaces find) | ~10x faster |
| ripgrep | Fast content search (replaces grep) | ~10x faster |

Both tools respect .gitignore by default and are written in Rust for maximum performance.

Preference order for file discovery:

  1. git ls-files '*.md' - Instant, git-tracked files only
  2. fd -e md - Fast, respects .gitignore
  3. rg --files -g '*.md' - Fast, respects .gitignore
  4. Built-in glob tools - Fallback when bash unavailable
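The chain above can be sketched as a single fallback function (find stands in for the built-in glob tools, which are not shell commands):

```shell
# Fallback chain for listing markdown files, in the preference
# order described above. find stands in for built-in glob tools.
list_md_files() {
  if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
    git ls-files '*.md'
  elif command -v fd >/dev/null 2>&1; then
    fd -e md
  elif command -v rg >/dev/null 2>&1; then
    rg --files -g '*.md'
  else
    find . -name '*.md'
  fi
}
```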

The setup script offers to install these tools automatically.

Comprehensive Service Coverage

Infrastructure & Hosting

  • Hostinger: Shared hosting, domains, email
  • Hetzner Cloud: VPS servers, networking, load balancers
  • Closte: Managed hosting, application deployment
  • Coolify Enhanced with CLI: Self-hosted PaaS with CLI integration
  • Cloudron Enhanced with packaging guide: Server and app management platform with custom app packaging support
  • Vercel Enhanced with CLI: Modern web deployment platform with CLI integration
  • AWS: Cloud infrastructure support via standard protocols
  • DigitalOcean: Cloud infrastructure support via standard protocols

Domain & DNS

Development & Git Platforms with CLI Integration

  • GitHub Enhanced with CLI: Repository management, actions, API, GitHub CLI (gh) integration
  • GitLab Enhanced with CLI: Self-hosted and cloud Git platform with GitLab CLI (glab) integration
  • Gitea Enhanced with CLI: Lightweight Git service with Gitea CLI (tea) integration
  • Agno: Local AI agent operating system for DevOps automation
  • Pandoc: Document conversion to markdown for AI processing

AI Orchestration Frameworks

  • Langflow: Visual drag-and-drop builder for AI workflows (MIT, localhost:7860)
  • CrewAI: Multi-agent teams with role-based orchestration (MIT, localhost:8501)
  • AutoGen: Microsoft's agentic AI framework with MCP support (MIT, localhost:8081)

Video Creation

  • Remotion: Programmatic video creation with React - animations, compositions, media handling, captions
  • Video Prompt Design: AI video prompt engineering using the 7-component meta prompt framework for Veo 3 and similar models
  • MuAPI: Multimodal AI API for image/video/audio/VFX generation, workflows, agents, music (Suno), and lip-sync - unified creative orchestration platform
  • yt-dlp: YouTube video/audio/playlist/channel downloads, transcript extraction, and local file audio conversion via ffmpeg

WordPress Development

  • LocalWP: WordPress development environment with MCP database access
  • MainWP: WordPress site management dashboard

Git CLI Enhancement Features:

  • .agents/scripts/github-cli-helper.sh: Advanced GitHub repository, issue, PR, and branch management
  • .agents/scripts/gitlab-cli-helper.sh: Complete GitLab project, issue, MR, and branch management
  • .agents/scripts/gitea-cli-helper.sh: Full Gitea repository, issue, PR, and branch management

Security & Code Quality

  • gopass: GPG-encrypted secret management with AI-native wrapper (aidevops secret) - subprocess injection + output redaction keeps secrets out of AI context
  • Vaultwarden: Password and secrets management
  • SonarCloud: Security and quality analysis (A-grade ratings)
  • CodeFactor: Code quality metrics (A+ score)
  • Codacy: Multi-tool analysis (0 findings)
  • CodeRabbit: AI-powered code reviews
  • Snyk: Security vulnerability scanning
  • Socket: Dependency security and supply chain protection
  • Sentry: Error monitoring and performance tracking
  • Cisco Skill Scanner: Security scanner for AI agent skills (prompt injection, exfiltration, malicious code)
  • VirusTotal: Advisory threat intelligence via VT API v3 -- file hash scanning (70+ AV engines), domain/URL reputation checks for imported skills
  • Secretlint: Detect exposed secrets in code
  • OSV Scanner: Google's vulnerability database scanner
  • Qlty: Universal code quality platform (70+ linters, auto-fixes)
  • Gemini Code Assist: Google's AI-powered code completion and review

AI Prompt Optimization

  • Augment Context Engine: Semantic codebase retrieval with deep code understanding
  • Repomix: Pack codebases into AI-friendly context (80% token reduction with compress mode)
  • DSPy: Framework for programming with language models
  • DSPyGround: Interactive playground for prompt optimization
  • TOON Format: Token-Oriented Object Notation - 20-60% token reduction for LLM prompts

Document Processing & OCR

  • Document Creation Agent (document-creation-helper.sh): Unified document format conversion, template-based creation, and OCR for scanned PDFs/images. Routes to the best available tool (pandoc, odfpy, LibreOffice, Tesseract, EasyOCR, GLM-OCR) based on format pair and availability. Supports 13+ formats (ODT, DOCX, PDF, MD, HTML, EPUB, PPTX, ODP, XLSX, ODS, RTF, CSV, TSV).
  • LibPDF: PDF form filling, digital signatures (PAdES B-B/T/LT/LTA), encryption, merge/split, text extraction
  • MinerU: Layout-aware PDF-to-markdown/JSON conversion with OCR (109 languages), formula-to-LaTeX, and table extraction (53k+ stars, AGPL-3.0)
  • Unstract: LLM-powered structured data extraction from unstructured documents (PDF, images, DOCX)
  • GLM-OCR: Local OCR via Ollama - purpose-built for document text extraction (tables, forms, complex layouts) with zero cloud dependency

PDF/OCR Tool Selection:

| Need | Tool | Why |
|---|---|---|
| Format conversion | Document Creation Agent | Auto-selects best tool, 13+ formats |
| Complex PDF to markdown | MinerU | Layout-aware, formulas, tables, 109-language OCR |
| Quick text extraction | GLM-OCR | Local, fast, no API keys, privacy-first |
| Structured JSON output | Unstract | Schema-based extraction, complex documents |
| Screen/window OCR | Peekaboo + GLM-OCR | peekaboo image --analyze --model ollama/glm-ocr |
| PDF text extraction | LibPDF | Native PDF parsing, no AI needed |
| Simple format conversion | Pandoc | Lightweight, broad format support |
| Scanned PDF OCR | Document Creation Agent | Auto-detects, routes to Tesseract/EasyOCR/GLM-OCR |

Quick start:

# Document creation agent
document-creation-helper.sh status                          # Check available tools
document-creation-helper.sh install --standard              # Install core tools
document-creation-helper.sh convert report.pdf --to odt     # Convert formats
document-creation-helper.sh convert scan.pdf --to md --ocr  # OCR scanned PDF
document-creation-helper.sh template draft --type letter     # Generate template

# GLM-OCR direct
ollama pull glm-ocr
ollama run glm-ocr "Extract all text" --images /path/to/document.png

See .agents/tools/ocr/glm-ocr.md for batch processing, PDF workflows, and Peekaboo integration.

Communications

  • Twilio: SMS, voice calls, WhatsApp, phone verification (Verify API), call recording & transcription
  • Telfon: Twilio-powered cloud phone system with iOS/Android/Chrome apps for end-user calling interface
  • Matrix: Self-hosted chat with bot integration for AI runner dispatch (matrix-dispatch-helper.sh)
  • SimpleX Chat: Privacy-first messaging with AI bot gateway for agent dispatch (simplex-bot/)
  • Matterbridge: Multi-platform chat bridge connecting 20+ platforms (Matrix, Discord, Telegram, Slack, IRC, WhatsApp, XMPP) with SimpleX adapter (matterbridge-helper.sh)

Animation & Video

  • Anime.js: Lightweight JavaScript animation library for CSS, SVG, DOM attributes, and JS objects
  • Remotion: Programmatic video creation with React - create videos using code with 29 specialized rule files
  • Video Prompt Design: Structured prompt engineering for AI video generation (Veo 3, 7-component framework, character consistency, audio design)

Voice AI

  • Voice Bridge: Talk to your AI coding agent via speech -- Silero VAD → Whisper MLX → OpenCode → Edge TTS (~6-8s round-trip)
  • Speech-to-Speech: Open-source modular voice pipeline (VAD → STT → LLM → TTS) with local GPU and cloud GPU deployment
  • Pipecat: Real-time voice agent framework with Soniox STT, Cartesia TTS, and multi-LLM support

Performance & Monitoring

  • PageSpeed Insights: Website performance auditing
  • Lighthouse: Comprehensive web app analysis
  • WebPageTest: Real-world performance testing from 40+ global locations with filmstrip, waterfall, and Core Web Vitals
  • Updown.io: Website uptime and SSL monitoring

AI & Documentation

  • Context7: Real-time documentation access for libraries and frameworks
  • Context7 CLI mode: npx ctx7 setup --opencode --cli for docs lookup without MCP transport (useful fallback in shell-first workflows)
  • Local Models: Run AI models locally via llama.cpp for free, private, offline inference with HuggingFace GGUF model management (local-model-helper.sh)

Local Development

  • Localdev: Local development environment manager with dnsmasq, Traefik, and mkcert for production-like .local domains with HTTPS on port 443 (localdev-helper.sh)

MCP Integrations

Model Context Protocol servers for real-time AI assistant integration. The framework configures these MCPs for OpenCode (TUI, Desktop, and Extension for Zed/VSCode).

All Supported MCPs (20 available)

MCP packages are installed globally via bun install -g for instant startup (no npx registry lookups). Run setup.sh or aidevops update-tools to update to latest versions.

| MCP | Purpose | Tier | API Key Required |
|---|---|---|---|
| Augment Context Engine | Semantic codebase retrieval | Global | Yes (Augment account) |
| Claude Code MCP | Claude as sub-agent | Global | No |
| Amazon Order History | Order data extraction | Per-agent | No |
| Chrome DevTools | Browser debugging & automation | Per-agent | No |
| Context7 | Library documentation lookup | Per-agent | No |
| Docker MCP | Container management | Per-agent | No |
| Google Analytics | Analytics data | Per-agent | Yes (Google API) |
| Google Search Console | Search performance data | Per-agent | Yes (Google API) |
| Grep by Vercel | GitHub code search | Per-agent | No |
| LocalWP | WordPress database access | Per-agent | No (local) |
| macOS Automator | macOS automation | Per-agent | No |
| Playwriter | Browser with extensions | Per-agent | No |
| QuickFile | Accounting API | Per-agent | Yes |
| Repomix | Codebase packing for AI context | Per-agent | No |
| Sentry | Error tracking | Per-agent | Yes |
| shadcn | UI component library | Per-agent | No |
| Socket | Dependency security | Per-agent | No |
| Unstract | Document data extraction | Per-agent | Yes |
| OpenAPI Search | Search and explore any OpenAPI spec | Per-agent | No |
| Cloudflare Code Mode | Full Cloudflare API (2,500+ endpoints via 2 tools) | Per-agent | Yes (Cloudflare) |

Tier explanation:

  • Global - Tools always available (loaded into every session)
  • Per-agent - Tools disabled globally, enabled per-agent via config (zero context overhead when unused)
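In `opencode.json` terms, the per-agent tier means the MCP entry is declared but left disabled globally, and the subagent that needs it re-enables it in its own configuration. A hedged sketch, reusing the MCP entry shape shown later in this document (the `sentry-mcp` command name is illustrative, not the framework's exact binary):

```json
{
  "mcp": {
    "sentry": {
      "type": "local",
      "command": ["sentry-mcp"],
      "enabled": false
    }
  }
}
```

With `"enabled": false` at the top level, the server contributes zero context tokens until a subagent that lists it is invoked.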

Performance optimization: MCP packages are installed globally via bun install -g for instant startup (~0.1s vs 2-3s with npx). The framework uses a tiered loading strategy: global MCPs load eagerly at startup, while per-agent MCPs load on-demand when their subagent is invoked. This significantly reduces OpenCode startup time.

SEO Integrations (curl subagents - no MCP overhead)

These use direct API calls via curl, avoiding MCP server startup entirely:

| Integration | Purpose | API Key Required |
|---|---|---|
| Ahrefs | SEO analysis & backlinks | Yes |
| DataForSEO | SERP, keywords, backlinks, on-page | Yes |
| Serper | Google Search API (web, images, news) | Yes |
| Semrush | Domain analytics, keywords, backlinks, competitor research | Yes |
| ContentKing | Real-time SEO monitoring, change tracking, issues | Yes |
| WebPageTest | Real-world performance testing from 40+ global locations | Yes |
| Hostinger | Hosting management | Yes |
| NeuronWriter | Content optimization & NLP analysis | Yes |
| Outscraper | Google Maps & business data extraction | Yes |

By Category

Context & Codebase:

  • Augment Context Engine - Semantic codebase retrieval with deep code understanding
  • llm-tldr - Semantic code analysis with 95% token savings (details below)
  • Context7 - Real-time documentation access for thousands of libraries
  • Repomix - Pack codebases into AI-friendly context
  • OpenAPI Search - Search and explore any OpenAPI specification (zero install, Cloudflare Worker)
  • MCPorter - Discover, call, compose, and generate CLIs/typed clients for MCP servers

Browser Automation (8 tools + anti-detect stack, benchmarked):

  • Playwright - Fastest engine (0.9s form fill), parallel contexts, extensions, proxy (auto-installed)
  • playwright-cli - Microsoft official CLI for AI agents, --session isolation, built-in tracing
  • dev-browser - Persistent profile, stays logged in, ARIA snapshots, pairs with DevTools
  • agent-browser - CLI/CI/CD, --session parallel, ref-based element targeting, iOS Simulator support (macOS)
  • Crawl4AI - Bulk extraction, arun_many parallel (1.7x), LLM-ready markdown
  • WaterCrawl - Self-hosted crawling with web search, sitemap generation, JS rendering, proxy support
  • Playwriter - Your browser's extensions/passwords/proxy, already unlocked
  • Stagehand - Natural language automation, self-healing selectors
  • Chrome DevTools MCP - Companion: Lighthouse, network throttling, CSS coverage (pairs with any tool)
  • Cloudflare Browser Rendering - Server-side web scraping
  • Peekaboo - macOS screen capture and GUI automation (pixel-accurate captures, AI vision analysis)
  • Sweet Cookie - Browser cookie extraction for API calls without launching a browser
  • Anti-Detect Stack (details):
    • Camoufox (4.9k stars) - Firefox anti-detect, C++ fingerprint injection, WebRTC/Canvas/WebGL spoofing
    • rebrowser-patches (1.2k stars) - Chromium CDP leak prevention, automation signal removal
    • Multi-profile management - Persistent/clean/warm/disposable profiles (like AdsPower/GoLogin)
    • Proxy integration - Residential, SOCKS5, VPN per profile with geo-targeting

SEO & Research:

  • Google Search Console - Search performance insights (MCP)
  • Grep by Vercel - Search code snippets across GitHub repositories (MCP)
  • Ahrefs - SEO analysis, backlink research, keyword data (curl subagent)
  • DataForSEO - Comprehensive SEO data APIs (curl subagent)
  • Serper - Google Search API (curl subagent)
  • SEO Audit - Comprehensive technical SEO auditing: crawlability, indexation, Core Web Vitals, on-page optimization, E-E-A-T signals (imported skill from marketingskills)
  • Keyword Research - Strategic keyword research with SERP weakness detection (via DataForSEO + Serper + Ahrefs)
  • Site Crawler - Screaming Frog-like SEO auditing: broken links, redirects, meta issues, structured data
  • Domain Research - DNS intelligence via THC (4.51B records) and Reconeer APIs: rDNS, subdomains, CNAMEs
  • NeuronWriter - Content optimization with NLP analysis, competitor research, and content scoring (curl subagent)

Data Extraction:

  • Outscraper - Google Maps, business data, reviews extraction (curl subagent)
  • curl-copy - Authenticated scraping via DevTools "Copy as cURL" (no browser automation needed)

Performance & Security:

  • PageSpeed Insights - Website performance auditing
  • Snyk - Security vulnerability scanning
  • Cloudflare Code Mode - Full Cloudflare API (2,500+ endpoints) via 2 tools in ~1,000 tokens (DNS, WAF, DDoS, R2, Workers, Zero Trust)
  • IP Reputation - Multi-provider IP reputation checking (Spamhaus, ProxyCheck, AbuseIPDB) for VPS/proxy vetting (ip-reputation-helper.sh)

WordPress & Development:

CRM & Marketing:

  • FluentCRM - WordPress CRM: contacts, tags, lists, campaigns, automations, smart links, webhooks

Accounts & Finance:

Document Processing & OCR:

  • LibPDF - PDF form filling, digital signatures, encryption, merge/split (via helper script)
  • Unstract - LLM-powered structured data extraction from PDFs, images, DOCX (MCP)
  • GLM-OCR - Local OCR via Ollama for document text extraction (subagent)

Quick Setup

# Install all MCP integrations
bash .agents/scripts/setup-mcp-integrations.sh all

# Install specific integration
bash .agents/scripts/setup-mcp-integrations.sh stagehand          # JavaScript version
bash .agents/scripts/setup-mcp-integrations.sh stagehand-python   # Python version
bash .agents/scripts/setup-mcp-integrations.sh stagehand-both     # Both versions
bash .agents/scripts/setup-mcp-integrations.sh chrome-devtools

OpenCode LSP Configuration

OpenCode includes built-in LSP servers for 35+ languages. For aidevops projects that use Markdown and TOON extensively, add these optional LSP servers to your opencode.json for real-time diagnostics during editing:

{
  "lsp": {
    "markdownlint": {
      "command": ["markdownlint-language-server", "--stdio"],
      "extensions": [".md"]
    },
    "toon-lsp": {
      "command": ["toon-lsp"],
      "extensions": [".toon"]
    }
  }
}

Install the servers:

npm install -g markdownlint-language-server  # Markdown diagnostics
cargo install toon-lsp                        # TOON syntax validation

These catch formatting and syntax issues during editing, reducing preflight/postflight fix cycles.

Browser Automation

8 browser tools + anti-detect stack + device emulation, benchmarked and integrated for AI-assisted web automation, dev testing, mobile/responsive testing, data extraction, and bot detection evasion. Agents automatically select the optimal tool based on task requirements.

Performance Benchmarks

Tested on macOS ARM64, all headless, warm daemon:

| Test | Playwright | playwright-cli | dev-browser | agent-browser | Crawl4AI | Playwriter | Stagehand |
|---|---|---|---|---|---|---|---|
| Navigate + Screenshot | 1.43s | ~1.9s | 1.39s | 1.90s | 2.78s | 2.95s | 7.72s |
| Form Fill (4 fields) | 0.90s | ~1.4s | 1.34s | 1.37s | N/A | 2.24s | 2.58s |
| Data Extraction (5 items) | 1.33s | ~1.5s | 1.08s | 1.53s | 2.53s | 2.68s | 3.48s |
| Multi-step (click + nav) | 1.49s | ~2.0s | 1.49s | 3.06s | N/A | 4.37s | 4.48s |
| Parallel (3 sessions) | 1.6s | ~2.0s | N/A | 2.0s | 3.0s | N/A | Slow |

Feature Matrix

| Feature | Playwright | playwright-cli | dev-browser | agent-browser | Crawl4AI | Playwriter | Stagehand |
|---|---|---|---|---|---|---|---|
| Headless | Yes | Yes (default) | Yes | Yes (default) | Yes | No (your browser) | Yes |
| Proxy/VPN | Full | No | Via args | No | Full | Your browser | Via args |
| Extensions | Yes (persistent) | No | Yes (profile) | No | No | Yes (yours) | Possible |
| Password managers | Partial (needs unlock) | No | Partial | No | No | Yes (unlocked) | No |
| Device emulation | Full (100+ devices) | No | No | No | No | No | Via Playwright |
| Parallel sessions | 5 ctx/2.1s | `--session` | Shared | 3 sess/2.0s | `arun_many` 1.7x | Shared | Per-instance |
| Session persistence | `storageState` | Profile dir | Profile dir | state save/load | `user_data_dir` | Your browser | Per-instance |
| Tracing | Full API | Built-in CLI | Via Playwright | Via Playwright | No | Via CDP | Via Playwright |
| Natural language | No | No | No | No | LLM extraction | No | Yes |
| Self-healing | No | No | No | No | No | No | Yes |
| iOS Simulator | No | No | No | Yes (macOS) | No | No | No |
| Maintainer | Microsoft | Microsoft | Community | Vercel | Community | Community | Browserbase |

Tool Selection

| Need | Tool | Why |
|---|---|---|
| Fastest automation | Playwright | 0.9s form fill, parallel contexts |
| AI agent (CLI) | playwright-cli | Microsoft official, `--session` isolation, built-in tracing |
| Stay logged in | dev-browser | Profile persists across restarts |
| Your extensions/passwords | Playwriter | Already unlocked in your browser |
| Bulk extraction | Crawl4AI | Purpose-built, parallel, LLM-ready output |
| Self-hosted crawling | WaterCrawl | Docker deployment, web search, sitemap generation |
| CLI/CI/CD | playwright-cli or agent-browser | No server needed, `--session` isolation |
| iOS mobile testing | agent-browser | Real Safari in iOS Simulator (macOS only) |
| Unknown pages | Stagehand | Natural language, self-healing |
| Performance debugging | Chrome DevTools MCP | Companion tool, pairs with any browser |
| Mobile/tablet emulation | Playwright | 100+ device presets, viewport, touch, geolocation, locale |
| Authenticated one-off scrape | curl-copy | DevTools "Copy as cURL" → paste to terminal/AI |
| Bot detection evasion | Anti-detect stack | Camoufox (full) or rebrowser-patches (quick) |
| Multi-account | Browser profiles | Persistent fingerprint + proxy per account |

AI Page Understanding

Agents use lightweight methods instead of expensive vision API calls:

| Method | Speed | Token Cost | Use For |
|---|---|---|---|
| ARIA snapshot | ~0.01s | 50-200 tokens | Forms, navigation, interactive elements |
| Text extraction | ~0.002s | Text length | Reading content |
| Element scan | ~0.002s | ~20/element | Form filling, clicking |
| Screenshot | ~0.05s | ~1K tokens (vision) | Visual debugging only |

See .agents/tools/browser/browser-automation.md for the full decision tree and browser-benchmark.md for reproducible benchmark scripts.

Device Emulation

Test responsive layouts and mobile-specific behavior using Playwright's built-in device emulation. Supports 100+ device presets with viewport, user agent, touch events, device scale factor, geolocation, locale/timezone, permissions, color scheme, offline mode, and network throttling.

Common device presets:

| Device | Viewport | Scale | Touch |
|---|---|---|---|
| iPhone 15 | 393x852 | 3 | Yes |
| iPad Pro 11 | 834x1194 | 2 | Yes |
| Pixel 7 | 412x915 | 2.625 | Yes |
| Galaxy S9+ | 320x658 | 4.5 | Yes |
| Desktop Chrome | 1280x720 | 1 | No |

Emulation capabilities:

| Feature | Example |
|---|---|
| Device presets | `devices['iPhone 13']` - viewport, UA, touch, scale |
| Viewport/HiDPI | `viewport: { width: 2560, height: 1440 }, deviceScaleFactor: 2` |
| Geolocation | `geolocation: { longitude: -74.006, latitude: 40.7128 }` |
| Locale/timezone | `locale: 'de-DE', timezoneId: 'Europe/Berlin'` |
| Color scheme | `colorScheme: 'dark'` |
| Offline mode | `offline: true` |
| Permissions | `permissions: ['geolocation', 'notifications']` |
| Network throttling | CDP-based Slow 3G / Fast 3G emulation |

Recipes included: Responsive breakpoint testing, multi-device parallel testing, touch gesture testing, geolocation-dependent features, dark mode visual regression, and network condition emulation.
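As a sketch of how these options combine, the dict below mirrors the iPhone 15 preset row above using Playwright's Python API naming (snake_case). In real code you would start from `playwright.devices["iPhone 15"]` rather than hand-writing the preset; only the table values here come from this document, the rest is illustrative:

```python
# Preset values mirroring the iPhone 15 row above; in real Playwright code,
# use playwright.devices["iPhone 15"] instead of writing these by hand.
iphone_15 = {
    "viewport": {"width": 393, "height": 852},
    "device_scale_factor": 3,
    "is_mobile": True,
    "has_touch": True,
}

# All emulation options combine into a single new_context() call.
context_options = {
    **iphone_15,
    "locale": "de-DE",
    "timezone_id": "Europe/Berlin",
    "geolocation": {"longitude": -74.006, "latitude": 40.7128},
    "permissions": ["geolocation"],
    "color_scheme": "dark",
}

# Then, with a launched browser:
#   context = browser.new_context(**context_options)
#   page = context.new_page()
```

Because everything is set at context creation, parallel contexts can each emulate a different device in the same browser process.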

See .agents/tools/browser/playwright-emulation.md for complete documentation with code examples.

Anti-Detect Browser

Open-source alternative to AdsPower, GoLogin, and OctoBrowser for multi-account automation and bot detection evasion.

Architecture:

Layer 4: CAPTCHA Solving    → CapSolver (existing)
Layer 3: Network Identity   → Proxies (residential/SOCKS5/VPN per profile)
Layer 2: Browser Identity   → Camoufox (C++ fingerprint injection)
Layer 1: Automation Stealth → rebrowser-patches (CDP leak prevention)
Layer 0: Browser Engine     → Playwright (existing)

Profile Types:

| Type | Cookies | Fingerprint | Use Case |
|---|---|---|---|
| Persistent | Saved | Fixed per profile | Account management, stay logged in |
| Clean | None | Random each launch | Scraping, one-off tasks |
| Warm | Saved | Fixed | Pre-warmed accounts (browsing history) |
| Disposable | None | Random | Single-use, maximum anonymity |

Quick Start:

# Setup
anti-detect-helper.sh setup

# Create profile with proxy
anti-detect-helper.sh profile create "my-account" --type persistent --os macos

# Launch (Camoufox with auto-generated fingerprint)
anti-detect-helper.sh launch --profile "my-account" --headless

# Test detection (BrowserScan, SannyBot)
anti-detect-helper.sh test --profile "my-account"

# Warm up profile with browsing history
anti-detect-helper.sh warmup "my-account" --duration 30m

Engine Selection:

| Engine | Stealth Level | Speed | Best For |
|---|---|---|---|
| Camoufox (Firefox) | High (C++ level) | Medium | Full anti-detect, fingerprint rotation |
| rebrowser-patches (Chromium) | Medium (CDP patches) | Fast | Quick stealth on existing Playwright code |

See .agents/tools/browser/anti-detect-browser.md for the full decision tree and subagent index.

Repomix - AI Context Generation

Repomix packages your codebase into AI-friendly formats for sharing with AI assistants. This framework includes optimized Repomix configuration for consistent context generation.

Why Repomix?

| Use Case | Tool | When to Use |
|---|---|---|
| Interactive coding | Augment Context Engine | Real-time semantic search during development |
| Share with external AI | Repomix | Self-contained snapshot for ChatGPT, Claude web, etc. |
| Architecture review | Repomix (compress) | 80% token reduction, structure only |
| CI/CD integration | GitHub Action | Automated context in releases |

Quick Usage

# Pack current repo with configured defaults
npx repomix

# Compress mode (~80% smaller, structure only)
npx repomix --compress

# Or use the helper script
.agents/scripts/context-builder-helper.sh pack      # Full context
.agents/scripts/context-builder-helper.sh compress  # Compressed

Configuration Files

| File | Purpose |
|---|---|
| `repomix.config.json` | Default settings (style, includes, security) |
| `.repomixignore` | Additional exclusions beyond .gitignore |
| `repomix-instruction.md` | Custom AI instructions included in output |

Key Design Decisions

  • No pre-generated files: Outputs are generated on-demand to avoid staleness
  • Inherits .gitignore: Security patterns automatically respected
  • Secretlint enabled: Scans for exposed credentials before output
  • Symlinks excluded: Avoids duplicating .agents/ content

MCP Integration

Repomix runs as an MCP server for direct AI assistant integration:

{
  "repomix": {
    "type": "local",
    "command": ["repomix", "--mcp"],
    "enabled": true
  }
}

Install globally first: bun install -g repomix (done automatically by setup.sh)

See .agents/tools/context/context-builder.md for complete documentation.

Augment Context Engine - Semantic Codebase Search

Augment Context Engine provides semantic codebase retrieval - understanding your code at a deeper level than simple text search. It's the recommended tool for real-time interactive coding sessions.

Why Augment Context Engine?

| Feature | grep/glob | Augment Context Engine |
|---|---|---|
| Text matching | Exact patterns | Semantic understanding |
| Cross-file context | Manual | Automatic |
| Code relationships | None | Understands dependencies |
| Natural language | No | Yes |

Use it to:

  • Find related code across your entire codebase
  • Understand project architecture quickly
  • Discover patterns and implementations
  • Get context-aware code suggestions

Quick Setup

# 1. Install Auggie CLI (requires Node.js 22+)
npm install -g @augmentcode/auggie@prerelease

# 2. Authenticate (opens browser)
auggie login

# 3. Verify installation
auggie token print

MCP Integration

Add to your AI assistant's MCP configuration:

OpenCode (~/.config/opencode/opencode.json):

{
  "mcp": {
    "augment-context-engine": {
      "type": "local",
      "command": ["auggie", "--mcp"],
      "enabled": true
    }
  }
}

claude-code CLI:

claude mcp add-json auggie-mcp --scope user '{"type":"stdio","command":"auggie","args":["--mcp"]}'

Verification

Test with this prompt:

What is this project? Please use the codebase retrieval tool to get the answer.

The AI should provide a semantic understanding of your project architecture.

Repomix vs Augment Context Engine

| Use Case | Tool | When to Use |
|---|---|---|
| Interactive coding | Augment Context Engine | Real-time semantic search during development |
| Share with external AI | Repomix | Self-contained snapshot for ChatGPT, Claude web, etc. |
| Architecture review | Repomix (compress) | 80% token reduction, structure only |
| CI/CD integration | Repomix GitHub Action | Automated context in releases |

See .agents/tools/context/augment-context-engine.md for complete documentation.

llm-tldr - Semantic Code Analysis

llm-tldr extracts code structure and semantics, saving ~95% tokens compared to raw code. From the Continuous-Claude project.

# Install
pip install llm-tldr

# CLI usage
tldr tree ./src                    # File structure with line counts
tldr structure src/auth.py         # Code skeleton (classes, functions)
tldr context src/auth.py           # Full semantic analysis
tldr search "authentication" ./src # Semantic code search
tldr impact src/auth.py validate   # What would change affect?

MCP Integration:

{
  "llm-tldr": {
    "command": "tldr-mcp",
    "args": ["--project", "${workspaceFolder}"]
  }
}

| Feature | Token Savings | Use Case |
|---|---|---|
| Structure extraction | 90% | Understanding code layout |
| Context analysis | 95% | Comprehensive code understanding |
| Semantic search | N/A | Finding code by meaning |
| Impact analysis | N/A | Change risk assessment |

See .agents/tools/context/llm-tldr.md for complete documentation.

Cross-Tool Compatibility

Agent Skills Standard

aidevops implements the Agent Skills standard for cross-tool compatibility. Skills are auto-discovered by compatible AI assistants.

Generated SKILL.md files in ~/.aidevops/agents/ provide skill metadata following the Agent Skills standard. These are discoverable by any compatible tool.

Claude Code Plugin Marketplace

aidevops is registered as a Claude Code plugin marketplace. Install with two commands:

/plugin marketplace add marcusquinn/aidevops
/plugin install aidevops@aidevops

This installs the complete framework: 13 primary agents, 900+ subagents, and 390+ helper scripts.

Importing External Skills

Import skills from GitHub or ClawdHub using the aidevops skill CLI:

# Import from GitHub (auto-detects format)
aidevops skill add owner/repo

# Import from ClawdHub (skill registry with vector search)
aidevops skill add clawdhub:caldav-calendar
aidevops skill add https://clawdhub.com/owner/slug

# More examples
aidevops skill add anthropics/skills/pdf           # Specific skill from multi-skill repo
aidevops skill add vercel-labs/agent-skills         # All skills from a repo
aidevops skill add expo/skills --name expo-dev      # Custom name
aidevops skill add owner/repo --dry-run             # Preview without changes

Supported sources:

  • GitHub repos (owner/repo or full URL) — fetched via git clone
  • ClawdHub (clawdhub:slug or full URL) — fetched via Playwright browser automation

Supported formats:

  • SKILL.md - Agent Skills standard (preferred)
  • AGENTS.md - Claude Code agents format
  • .cursorrules - Cursor rules format (auto-converted)
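For orientation, a minimal SKILL.md skeleton in the Agent Skills style (YAML frontmatter plus markdown instructions; the field values below are hypothetical examples, not a real skill from the registry):

```markdown
---
name: caldav-calendar
description: Read and write CalDAV calendar events from the CLI
---

# CalDAV Calendar

Instructions the agent loads when this skill is invoked, including any
helper scripts or API conventions the skill relies on.
```

`aidevops skill add` converts whichever of the three formats it finds into the framework's subagent format with YAML frontmatter.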

Features:

  • Auto-detection of skill format and category placement
  • Conflict detection with merge/replace/rename options
  • Upstream commit tracking for update detection (aidevops skill check)
  • Conversion to aidevops subagent format with YAML frontmatter
  • Registry stored in .agents/configs/skill-sources.json
  • Telemetry disabled (no data sent to skills.sh or other services)

How it differs from npx add-skill:

| | `aidevops skill add` | `npx add-skill` |
|---|---|---|
| Target | Converts to aidevops format in `.agents/` | Copies SKILL.md to agent-specific dirs |
| Tracking | Git commit-based upstream tracking | Lock file with content hashes |
| Telemetry | Disabled | Sends anonymous install counts |
| Scope | OpenCode-first | 22+ agents |
| Updates | `aidevops skill check` (GitHub API) | `npx skills check` (Vercel API) |

See .agents/scripts/add-skill-helper.sh for implementation details.

AI Agents & Subagents

Agents are specialized AI personas with focused knowledge and tool access. Instead of giving your AI assistant access to everything at once (which wastes context tokens), agents provide targeted capabilities for specific tasks.

Call them in your AI assistant conversation with a simple @mention.

How Agents Work

| Concept | Description |
|---|---|
| Main Agent | Domain-focused assistant (e.g., WordPress, SEO, DevOps) |
| Subagent | Specialized assistant for specific services (invoked with @mention) |
| MCP Tools | Only loaded when relevant agent is invoked (saves tokens) |

Main Agents

Primary agents as registered in subagent-index.toon (13 total). MCPs are loaded on-demand per subagent, not per primary agent:

| Name | File | Purpose | Model Tier |
|---|---|---|---|
| Build+ | build-plus.md | Enhanced Build with context tools (default agent) | opus |
| Automate | automate.md | Scheduling, dispatch, monitoring, background orchestration | sonnet |
| Accounts | accounts.md | Financial operations | opus |
| Business | business.md | Company orchestration via AI runners | sonnet |
| Content | content.md | Content creation workflows | opus |
| Health | health.md | Health and wellness | opus |
| Legal | legal.md | Legal compliance | opus |
| Marketing | marketing.md | Marketing strategy, email campaigns, paid ads, CRO | opus |
| Research | research.md | Research and analysis tasks | gemini/grok |
| Sales | sales.md | Sales operations and CRM pipeline | opus |
| SEO | seo.md | SEO optimization and analysis | opus |
| Social-Media | social-media.md | Social media management | opus |
| Video | video.md | AI video generation and prompt engineering | opus |

Specialist subagents (@plan-plus, @aidevops, @wordpress, Build-Agent, Build-MCP, etc.) live under tools/ or as mode: subagent files and are invoked via @mention when domain expertise is needed. See subagent-index.toon for the full listing.

Example Subagents with MCP Integration

These are examples of subagents that have supporting MCPs enabled. See .agents/ for the full list of 900+ subagents organized by domain.

| Agent | Purpose | MCPs Enabled |
|---|---|---|
| @hostinger | Hosting, WordPress, DNS, domains | hostinger-api |
| @hetzner | Cloud servers, firewalls, volumes | hetzner-* (multi-account) |
| @wordpress | Local dev, MainWP management | localwp, context7 |
| @seo | Search Console, keyword research, domain intelligence | gsc, ahrefs, serper, context7 |
| @dataforseo | SERP, keywords, backlinks, on-page analysis | (curl subagent) |
| @domain-research | DNS intelligence: rDNS, subdomains, CNAMEs (THC + Reconeer) | (API-based) |
| @serper | Google Search API (web, images, news, places) | serper |
| @list-keys | List all configured API keys and storage locations | (read-only) |
| @code-standards | Quality standards reference, compliance checking | context7 |
| @browser-automation | Testing, scraping, DevTools | chrome-devtools, context7 |
| @performance | Core Web Vitals, network analysis, accessibility | chrome-devtools |
| @git-platforms | GitHub, GitLab, Gitea | gh_grep, context7 |
| @sentry | Error monitoring, Next.js SDK setup | sentry |
| @socket | Dependency security scanning | socket |
| @security-analysis | AI-powered vulnerability detection (OSV, Ferret, git history) | osv-scanner, gemini-cli-security |
| @secretlint | Detect exposed secrets in code | (Docker-based) |
| @snyk | Security vulnerability scanning | (API-based) |
| @auditing | Code auditing services and security analysis | (API-based) |
| @agent-review | Session analysis, agent improvement (under build-agent/) | (read/write only) |

Setup for OpenCode

# Install aidevops agents for OpenCode
.agents/scripts/generate-opencode-agents.sh

# Check status
.agents/scripts/generate-opencode-agents.sh  # Shows status after generation

Claude Marketplace

aidevops is available in the Claude marketplace:

/plugin marketplace add marcusquinn/aidevops
/plugin install aidevops-all@aidevops

Agent Skills (SKILL.md): Auto-discovered from ~/.aidevops/agents/ after running setup.sh. Compatible with any tool that supports the Agent Skills standard.

Continuous Improvement with @agent-review

End every session by calling @agent-review to analyze what worked and what didn't:

@agent-review analyze this session and suggest improvements to the agents used

The review agent will:

  1. Identify which agents were used
  2. Evaluate missing, incorrect, or excessive information
  3. Suggest specific improvements to agent files
  4. Generate ready-to-apply edits
  5. Optionally compose a PR to contribute improvements back to aidevops

This creates a feedback loop:

Session → @agent-review → Improvements → Better Agents → Better Sessions
                ↓
         PR to aidevops repo (optional)

Contributing improvements:

@agent-review create a PR for improvement #2

The agent will create a branch, apply changes, and submit a PR to marcusquinn/aidevops with a structured description. Your real-world usage helps improve the framework for everyone.

Code quality learning loop:

The @code-quality agent also learns from issues. After fixing violations from SonarCloud, Codacy, ShellCheck, etc., it analyzes patterns and updates framework guidance to prevent recurrence:

Quality Issue → Fix Applied → Pattern Identified → Framework Updated → Issue Prevented

Slash Commands (OpenCode)

Slash commands provide quick access to common workflows directly from the OpenCode prompt. Type / to see available commands.

Available Commands

Planning & Task Management:

| Command | Purpose |
|---|---|
| /list-todo | List tasks with sorting, filtering, and grouping |
| /save-todo | Save discussion as task or plan (auto-detects complexity) |
| /plan-status | Check status of plans in TODO.md and todo/PLANS.md |
| /create-prd | Create a Product Requirements Document for complex features |
| /generate-tasks | Generate implementation tasks from a PRD |
| /log-time-spent | Log time spent on a task for tracking |
| /ready | Show tasks with no open blockers (Beads integration) |
| /sync-beads | Sync TODO.md/PLANS.md with Beads task graph |
| /remember | Store knowledge for cross-session recall |
| /recall | Search memories from previous sessions |

Plans are tracked in TODO.md (all tasks) and todo/PLANS.md (complex execution plans). Task dependencies are visualized with Beads.

/list-todo options:

| Option | Example | Purpose |
|---|---|---|
| --priority | `/list-todo -p` | Sort by priority (high → low) |
| --estimate | `/list-todo -e` | Sort by time estimate (shortest first) |
| --tag | `/list-todo -t seo` | Filter by tag |
| --owner | `/list-todo -o marcus` | Filter by assignee |
| --estimate | `/list-todo --estimate "<2h"` | Filter by estimate range |
| --group-by | `/list-todo -g tag` | Group by tag, owner, status, or estimate |
| --plans | `/list-todo --plans` | Include full plan details |
| --compact | `/list-todo --compact` | One-line per task |

Time Tracking: Tasks support time estimates as active session time (excluding AFK gaps) with the format ~4h started:2025-01-15T10:30Z. The session-time-helper.sh analyses real session data to calibrate estimates vs actuals.
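A sketch of parsing that annotation, using the `~4h started:...` format stated above (the regex and function are illustrative, not the framework's own parser):

```python
import re
from datetime import datetime, timezone

# Parse a task annotation like "~4h started:2025-01-15T10:30Z".
TASK_META = re.compile(
    r"~(?P<est>\d+(?:\.\d+)?)(?P<unit>[mh])\s+started:(?P<start>\S+)"
)

def parse_task_meta(text: str) -> dict:
    """Extract estimate (in hours) and start timestamp from a task line."""
    m = TASK_META.search(text)
    if not m:
        return {}
    hours = float(m.group("est")) * (1.0 if m.group("unit") == "h" else 1 / 60)
    started = datetime.strptime(
        m.group("start"), "%Y-%m-%dT%H:%MZ"
    ).replace(tzinfo=timezone.utc)
    return {"estimate_hours": hours, "started": started}

meta = parse_task_meta("Refactor auth ~4h started:2025-01-15T10:30Z")
print(meta["estimate_hours"])  # → 4.0
```

Comparing `estimate_hours` against summed active-session time is the kind of calibration the helper script performs.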

Risk Levels: Tasks support risk:low/med/high, indicating the level of human oversight needed:

| Risk | Oversight | Example |
|---|---|---|
| low | Autonomous | Docs, formatting, simple refactors |
| med | Supervised | Feature implementation, API changes |
| high | Engaged | Security, data migrations, infrastructure |

Configure time tracking per-repo via .aidevops.json.

Development Workflow (typical order):

| Command | Purpose |
|---|---|
| /context | Build AI context with Repomix for complex tasks |
| /feature | Start a new feature branch workflow |
| /bugfix | Start a bugfix branch workflow |
| /hotfix | Start an urgent hotfix workflow |
| /linters-local | Run local linting (shfmt, ShellCheck, secretlint) |
| /code-audit-remote | Run remote auditing (CodeRabbit, Codacy, SonarCloud) |
| /code-standards | Check against documented quality standards |
| /code-simplifier | Simplify and refine code for clarity and maintainability |
| /testing-setup | Interactive per-repo testing infrastructure setup with bundle-aware defaults |
| /list-keys | List all configured API keys and their storage locations |
| /performance | Web performance audit (Core Web Vitals, Lighthouse, PageSpeed) |
| /pr | Unified PR workflow (orchestrates all checks) |
| /cross-review | Dispatch prompt to multiple AI models, diff results, auto-score |
| /tech-stack | Detect technology stacks of URLs or find sites using specific technologies |
| /mission | Scope a high-level goal into milestones and features for autonomous execution |
| /budget-analysis | Analyze AI model spend, burn rate, and cost optimization opportunities |

Content Workflow:

| Command | Purpose |
|---|---|
| /humanise | Remove AI writing patterns, make text sound human |

Media:

| Command | Purpose |
|---|---|
| /yt-dlp | Download YouTube videos/audio/playlists, extract transcripts, convert local files |

SEO Workflow:

| Command | Purpose |
|---|---|
| /keyword-research | Seed keyword expansion with volume, CPC, difficulty |
| /autocomplete-research | Google autocomplete long-tail discovery |
| /keyword-research-extended | Full SERP analysis with weakness detection |
| /webmaster-keywords | Keywords from GSC + Bing for your verified sites |
| /neuronwriter | Content optimization with NLP term recommendations and scoring |
| /seo-export | Export SEO data from GSC, Bing, Ahrefs, DataForSEO to TOON format |
| /seo-analyze | Analyze exported data for quick wins, striking distance, low CTR |
| /seo-opportunities | Combined export + analysis workflow |
| /seo-audit | Comprehensive SEO audit: technical, on-page, content, E-E-A-T |

SEO Debugging & Auditing (subagents in seo/):

| Subagent | Purpose |
|---|---|
| @seo-audit | Comprehensive SEO audit: technical, on-page, content quality, E-E-A-T |
| @debug-opengraph | Validate Open Graph meta tags, preview social sharing |
| @debug-favicon | Validate favicon setup across platforms (ico, apple-touch, manifest) |

Email Deliverability:

| Command | Purpose |
|---|---|
| /email-health-check | Check SPF/DKIM/DMARC/MX records and blacklist status |

Release Workflow (in order):

| Command | Purpose |
|---|---|
| /preflight | Run quality checks before release |
| /changelog | Update CHANGELOG.md with recent changes |
| /version-bump | Bump version following semver |
| /release | Full release workflow (bump, tag, GitHub release, auto-changelog) |
| /postflight | Verify release health after deployment |

Auto-Task Completion: The release workflow automatically marks tasks as complete when commit messages reference them (e.g., Closes t037, Fixes t042). Tasks in TODO.md are updated with completed: timestamps.
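A sketch of that commit-message matching (the Closes/Fixes keywords come from the example above; the regex and task-ID pattern are illustrative, not the framework's implementation):

```python
import re

# Match references like "Closes t037" or "Fixes t042" in commit messages.
TASK_REF = re.compile(r"\b(?:Closes|Fixes)\s+(t\d+)", re.IGNORECASE)

def referenced_tasks(commit_messages: list[str]) -> set[str]:
    """Collect task IDs that release commits mark as complete."""
    return {
        task_id
        for msg in commit_messages
        for task_id in TASK_REF.findall(msg)
    }

done = referenced_tasks([
    "feat: add SERP export (Closes t037)",
    "fix: retry on rate limit. Fixes t042",
    "chore: bump deps",
])
print(sorted(done))  # → ['t037', 't042']
```

Each matched ID would then have its TODO.md entry stamped with a `completed:` timestamp.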

Documentation:

| Command | Purpose |
|---|---|
| /readme | Create or update README.md (supports `--sections` for partial updates) |

Meta/Improvement:

| Command | Purpose |
|---|---|
| /add-skill | Import external skills from GitHub repos (SKILL.md, AGENTS.md, .cursorrules) |
| /agent-review | Analyze session and suggest agent improvements |
| /session-review | Review session for completeness and capture learnings |
| /full-loop | End-to-end development loop (task → preflight → PR → postflight → deploy) |
| /preflight-loop | Run preflight checks iteratively until all pass |
| /runners | Batch dispatch tasks to parallel agents (task IDs, PR URLs, or descriptions) |
| /log-issue-aidevops | Report issues with aidevops (gathers diagnostics, checks duplicates, creates GitHub issue) |

AI Model Comparison:

| Command | Purpose |
|---|---|
| /compare-models | Compare AI models by pricing, context, capabilities (with live web data) |
| /compare-models-free | Compare AI models using offline embedded data only (no web fetches) |
| /score-responses | Score and compare actual model responses with structured criteria |
| /route | Suggest optimal model tier for a task description |

Ralph Loop - Iterative AI Development

The Ralph Loop (named after Ralph Wiggum's persistent optimism) enables autonomous iterative development. The AI keeps working on a task until it's complete, automatically resolving issues that arise.

How it works:

Task → Implement → Check → Fix Issues → Re-check → ... → Complete
         ↑                    ↓
         ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ (loop until done)

Usage:

# Run quality checks iteratively until all pass
/preflight-loop --auto-fix --max-iterations <MAX_ITERATIONS>

# Or run linters directly
.agents/scripts/linters-local.sh

Note: Store any API credentials securely via environment variables or .env files (never commit credentials to version control).

Key features:

  • Automatic issue detection and resolution
  • Configurable max iterations (prevents infinite loops)
  • Works with any quality check (linting, tests, builds)
  • Detailed logging of each iteration

See .agents/workflows/ralph-loop.md for the full workflow guide.

Full Loop - End-to-End Development Automation

The Full Loop chains all development phases into a single automated workflow:

Task Development → Preflight → PR Create → PR Review → Postflight → Deploy

Usage:

# Start a full development loop
/full-loop "Implement feature X with tests"

# With options
/full-loop "Fix bug Y" --max-task-iterations 30 --skip-postflight

Options:

| Option | Description |
|---|---|
| `--max-task-iterations N` | Max iterations for task (default: 50) |
| `--skip-preflight` | Skip preflight checks |
| `--skip-postflight` | Skip postflight monitoring |
| `--no-auto-pr` | Pause for manual PR creation |

The loop pauses for human input at merge approval, rollback decisions, and scope changes.

See .agents/scripts/commands/full-loop.md for complete documentation.

Git Worktrees - Parallel Branch Development

Work on multiple branches simultaneously without stashing or switching. Each branch gets its own directory.

Recommended: Worktrunk (wt) - Git worktree management with shell integration, CI status, and PR links:

# Install (macOS/Linux)
brew install max-sixty/worktrunk/wt && wt config shell install
# Restart your shell for shell integration to take effect

# Create worktree + cd into it
wt switch -c feature/my-feature

# Create worktree + start any AI CLI (-x runs command after switch)
wt switch -c -x claude feature/ai-task

# List worktrees with CI status and PR links
wt list

# Merge + cleanup (squash/rebase options)
wt merge

Fallback (no dependencies):

~/.aidevops/agents/scripts/worktree-helper.sh add feature/my-feature
# Creates: ~/Git/{repo}-feature-my-feature/ (cd there manually)
~/.aidevops/agents/scripts/worktree-helper.sh list
~/.aidevops/agents/scripts/worktree-helper.sh clean

Benefits:

  • Run tests on one branch while coding on another
  • Compare implementations side-by-side
  • No context switching or stash management
  • Each AI session can work on a different branch
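
Both Worktrunk and the fallback helper wrap plain `git worktree`. If you want to see what happens under the hood, the raw commands look like this (demonstrated in a throwaway repo):

```shell
# The raw git worktree commands that the tooling above wraps.
cd "$(mktemp -d)"
git init -q myrepo && cd myrepo
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "init"

git worktree add ../myrepo-feature-x -b feature/x   # new branch, own directory
git worktree list                                   # main checkout plus the worktree
git worktree remove ../myrepo-feature-x             # clean up once merged
```

The branch itself survives `worktree remove`; only the extra checkout directory is deleted.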

Worktree-first workflow: The pre-edit check enforces worktrees as the default when creating branches, keeping your main directory on main. This prevents uncommitted changes from blocking branch switches and ensures parallel sessions don't inherit the wrong branch state.

See .agents/workflows/worktree.md for the complete guide and .agents/tools/git/worktrunk.md for Worktrunk documentation.

Session Management - Parallel AI Sessions

Spawn new AI sessions for parallel work or fresh context. The framework detects natural session completion points and suggests next steps.

Completion signals:

| Signal | Meaning |
|---|---|
| PR merged | Branch work complete |
| Release published | Version shipped |
| All tests passing | Quality gates satisfied |
| User says "done" | Explicit completion |

Spawning options:

# Background session (same terminal)
opencode --non-interactive --prompt "Continue with feature X" &

# New terminal tab (macOS)
osascript -e 'tell application "Terminal" to do script "cd ~/Git/project && opencode"'

# Worktree-based (isolated branch) - recommended
wt switch -c -x opencode feature/next-feature  # Worktrunk: create + start AI CLI
# Or fallback:
# ~/.aidevops/agents/scripts/worktree-helper.sh add feature/next-feature
# cd ~/Git/{repo}-feature-next-feature && opencode

Session handoff pattern:

When ending a session, the AI provides a continuation prompt for the next session:

## Continuation Prompt
[Copy this to start a new session with full context]

See .agents/workflows/session-manager.md for the complete guide.

Cross-Session Memory System

"Compound, then clear" - Sessions should build on each other. The memory system stores knowledge, patterns, and learnings for future sessions using SQLite FTS5 for fast full-text search, with opt-in semantic similarity search via vector embeddings.
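The FTS5 approach can be illustrated directly with the sqlite3 CLI (the schema here is illustrative, not the actual memory.db layout):

```shell
# Illustrative FTS5 usage: full-text indexing and MATCH queries.
db="$(mktemp -d)/memory-demo.db"
sqlite3 "$db" "CREATE VIRTUAL TABLE memories USING fts5(type, content, tags);"
sqlite3 "$db" "INSERT INTO memories VALUES
  ('WORKING_SOLUTION', 'Fixed CORS with nginx headers', 'cors,nginx');"
hit=$(sqlite3 "$db" "SELECT content FROM memories WHERE memories MATCH 'cors';")
echo "$hit"
```

FTS5 tokenizes and indexes every column at insert time, so MATCH queries stay fast as the memory store grows, without a separate indexing step.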

Slash commands:

| Command | Purpose |
|---|---|
| /remember {content} | Store a memory with AI-assisted categorization |
| /recall {query} | Search memories by keyword |
| /recall --recent | Show the 10 most recent memories |
| /recall --auto-only | Search only auto-captured memories |
| /recall --stats | Show memory statistics |
| /memory-log | Show recent auto-captured memories |
| /patterns {task} | Show success/failure patterns for a task type |
| /route {task} | Suggest optimal model tier for a task |

Memory types:

| Type | Use For |
|---|---|
| WORKING_SOLUTION | Fixes that worked |
| FAILED_APPROACH | What didn't work (avoid repeating) |
| CODEBASE_PATTERN | Project conventions |
| USER_PREFERENCE | Developer preferences |
| TOOL_CONFIG | Tool setup notes |
| DECISION | Process, workflow, or policy choices (naming, release cadence, branching) |
| CONTEXT | Background info |
| ARCHITECTURAL_DECISION | System-level architecture (service boundaries, data flow, tech stack) |
| ERROR_FIX | Bug fixes and patches |
| OPEN_THREAD | Unresolved questions or follow-ups |
| SUCCESS_PATTERN | Approaches that consistently work |
| FAILURE_PATTERN | Approaches that consistently fail |

Semantic search (opt-in):

# Enable semantic similarity search (~90MB model download)
memory-embeddings-helper.sh setup

# Search by meaning, not just keywords
memory-helper.sh recall "how to optimize queries" --semantic

Pattern tracking:

# Record what worked (via memory system)
memory-helper.sh store --type SUCCESS_PATTERN --content "Structured debugging found root cause" --tags "bugfix,sonnet"

# Record what failed
memory-helper.sh store --type FAILURE_PATTERN --content "Blind refactor without tests caused regressions" --tags "refactor"

# Get suggestions for a new task
memory-helper.sh recall "refactor auth middleware" --semantic

Auto-capture: AI agents automatically store memories using the --auto flag when they detect working solutions, failed approaches, or decisions. Privacy filters strip <private> tags and reject secret patterns.
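A sketch of that filtering behaviour (the regexes are illustrative; the real filters live in the memory scripts):

```shell
# Illustrative privacy filter: strip <private> spans, reject secret-like tokens.
filter_memory() {
  local content
  content=$(printf '%s' "$1" | sed -E 's|<private>[^<]*</private>||g')
  if printf '%s' "$content" | grep -qE 'ghp_[A-Za-z0-9]{8,}|AKIA[0-9A-Z]{16}'; then
    echo "rejected: secret pattern detected" >&2
    return 1
  fi
  printf '%s\n' "$content"
}

kept=$(filter_memory "Fix worked <private>on client X</private> with nginx")
filter_memory "token is ghp_abcdefgh1234" >/dev/null 2>&1 && ok=yes || ok=no
```

Redaction happens before storage, so private fragments never reach memory.db, and content that looks like a credential is rejected outright rather than redacted.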

CLI usage:

# Store a memory
~/.aidevops/agents/scripts/memory-helper.sh store --type "WORKING_SOLUTION" --content "Fixed CORS with nginx headers" --tags "cors,nginx"

# Store auto-captured memory (from AI agent)
~/.aidevops/agents/scripts/memory-helper.sh store --auto --content "Fixed CORS with nginx headers" --type WORKING_SOLUTION

# Recall memories (keyword search - default)
~/.aidevops/agents/scripts/memory-helper.sh recall "cors"

# Recall memories (semantic similarity - opt-in)
~/.aidevops/agents/scripts/memory-helper.sh recall "cors" --semantic

# Show auto-capture log
~/.aidevops/agents/scripts/memory-helper.sh log

# View statistics (includes auto-capture counts)
~/.aidevops/agents/scripts/memory-helper.sh stats

# Maintenance
~/.aidevops/agents/scripts/memory-helper.sh validate   # Check for stale entries
~/.aidevops/agents/scripts/memory-helper.sh prune      # Remove stale memories

# Namespaces (per-runner memory isolation)
~/.aidevops/agents/scripts/memory-helper.sh --namespace my-runner store --content "Runner learning"
~/.aidevops/agents/scripts/memory-helper.sh --namespace my-runner recall "query" --shared
~/.aidevops/agents/scripts/memory-helper.sh namespaces  # List all namespaces

Storage: ~/.aidevops/.agent-workspace/memory/memory.db (+ optional embeddings.db for semantic search, namespaces/ for per-runner isolation)

See .agents/memory/README.md for complete documentation.

Installation

Slash commands are automatically installed by setup.sh for both OpenCode and Claude Code:

# OpenCode commands deployed to:
~/.config/opencode/command/

# Claude Code commands deployed to:
~/.claude/commands/

# Regenerate commands manually:
.agents/scripts/generate-opencode-commands.sh
.agents/scripts/generate-claude-commands.sh

Both generators read from the same source (.agents/scripts/commands/*.md), ensuring command parity across tools.

Usage

In OpenCode or Claude Code, type the command at the prompt:

/preflight
/release minor
/feature add-user-authentication

Commands invoke the corresponding workflow subagent with appropriate context.


Agent Lifecycle (Three Tiers)

User-created agents survive aidevops update. Agents progress through tiers as they mature:

| Tier | Location | Purpose | Survives Update |
|---|---|---|---|
| Draft | ~/.aidevops/agents/draft/ | R&D, experimental, auto-created by orchestration tasks | Yes |
| Custom | ~/.aidevops/agents/custom/ | User's permanent private agents | Yes |
| Shared | .agents/ in repo | Open-source, distributed to all users | Managed by repo |

Promotion workflow: Draft agents that prove useful can be promoted to custom (private) or shared (open-source via PR). Orchestration agents can create drafts in draft/ for reusable parallel processing context.

Creating Custom Agents

Create a markdown file in ~/.config/opencode/agent/ (OpenCode) or reference in your AI's system prompt:

---
description: Short description of what this agent does
mode: subagent
temperature: 0.2
tools:
  bash: true
  specific-mcp_*: true
---

# Agent Name

Detailed instructions for the agent...

See .agents/opencode-integration.md for complete documentation.


Usage Examples

Server Management

# List all servers across providers
./.agents/scripts/servers-helper.sh list

# Connect to specific servers
./.agents/scripts/hostinger-helper.sh connect example.com
./.agents/scripts/hetzner-helper.sh connect main web-server

# Execute commands remotely
./.agents/scripts/hostinger-helper.sh exec example.com "uptime"

Monitoring & Uptime (Updown.io)

# List all monitors
./.agents/scripts/updown-helper.sh list

# Add a new website check
./.agents/scripts/updown-helper.sh add https://example.com "My Website"

Domain & DNS Management

# Purchase and configure domain
./.agents/scripts/spaceship-helper.sh purchase example.com
./.agents/scripts/dns-helper.sh cloudflare add-record example.com A 192.168.1.1

# Check domain availability
./.agents/scripts/101domains-helper.sh check-availability example.com

Strategic Keyword Research

# Basic keyword research with volume, CPC, difficulty
./.agents/scripts/keyword-research-helper.sh research "seo tools" --limit 20

# Google autocomplete long-tail discovery
./.agents/scripts/keyword-research-helper.sh autocomplete "how to" --provider both

# Extended research with SERP weakness detection
./.agents/scripts/keyword-research-helper.sh extended "keywords" --quick

# Competitor keyword research
./.agents/scripts/keyword-research-helper.sh extended --competitor ahrefs.com --limit 50

# Keyword gap analysis (find keywords competitor ranks for but you don't)
./.agents/scripts/keyword-research-helper.sh extended --gap semrush.com,ahrefs.com

# Domain research (all keywords a domain ranks for)
./.agents/scripts/keyword-research-helper.sh extended --domain example.com --limit 100

Features:

  • 6 Research Modes: Keyword expansion, autocomplete, domain research, competitor research, keyword gap, extended SERP analysis
  • 17 SERP Weaknesses: Low domain score, no backlinks, thin content, UGC-heavy, non-HTTPS, and more
  • KeywordScore Algorithm: 0-100 score based on weakness count, volume, and difficulty
  • Multi-Provider: DataForSEO (primary), Serper (autocomplete), Ahrefs (domain ratings)
  • Locale Support: US/UK/CA/AU/DE/FR/ES with saved preferences
  • Output Formats: Markdown tables (TUI) and CSV export to ~/Downloads

Quality Control & Performance

# Run quality analysis with auto-fixes
bash .agents/scripts/qlty-cli.sh check 10
bash .agents/scripts/qlty-cli.sh fix

# Run chunked Codacy analysis for large repositories
bash .agents/scripts/codacy-cli-chunked.sh quick    # Fast analysis
bash .agents/scripts/codacy-cli-chunked.sh chunked # Full analysis

# AI coding assistance
bash .agents/scripts/ampcode-cli.sh scan ./src
bash .agents/scripts/continue-cli.sh review

# Audit website performance
./.agents/scripts/pagespeed-helper.sh wordpress https://example.com

Documentation & Resources

Wiki Guides:

Agent Guides (in .agents/):

  • API Integrations - Service APIs
  • Browser Automation - 8 tools + anti-detect stack: decision tree, parallel, extensions, fingerprinting
  • Device Emulation - Mobile/tablet testing: 100+ device presets, viewport, geolocation, locale, dark mode
  • Anti-Detect Browser - Multi-profile management, fingerprint rotation, proxy integration
  • Web Performance - Core Web Vitals, network dependencies, accessibility (Chrome DevTools MCP)
  • PageSpeed - Lighthouse CLI and PageSpeed Insights API
  • Pandoc - Document format conversion
  • Security - Enterprise security standards

Provider-Specific Guides: Hostinger, Hetzner, Cloudflare, WordPress, Git platforms, Vercel CLI, Coolify CLI, and more in .agents/

Architecture

aidevops/
ā”œā”€ā”€ setup.sh                       # Main setup script
ā”œā”€ā”€ AGENTS.md                      # AI agent guidance (dev)
ā”œā”€ā”€ .agents/                        # Agents and documentation
│   ā”œā”€ā”€ AGENTS.md                  # User guide (deployed to ~/.aidevops/agents/)
│   ā”œā”€ā”€ *.md                       # 13 primary agents
│   ā”œā”€ā”€ scripts/                   # 390+ helper scripts
│   ā”œā”€ā”€ tools/                     # Cross-domain utilities (video, browser, git, etc.)
│   ā”œā”€ā”€ services/                  # External service integrations
│   └── workflows/                 # Development process guides
ā”œā”€ā”€ configs/                       # Configuration templates
ā”œā”€ā”€ ssh/                           # SSH key management
└── templates/                     # Reusable templates

Configuration & Setup

aidevops is configured through two JSONC/JSON files that control framework behaviour, model routing, quality gates, and user preferences. Both are optional; sensible defaults apply out of the box.

| File | Purpose | CLI |
|---|---|---|
| ~/.config/aidevops/config.jsonc | Framework config: updates, models, safety, quality, orchestration, paths | aidevops config |
| ~/.config/aidevops/settings.json | User preferences: onboarding, UI, model routing defaults | settings-helper.sh |

Quick examples:

# Disable automatic updates
aidevops config set updates.auto_update false

# Use opus-tier verification for destructive operations
aidevops config set safety.verification_tier opus

# View all current config values
aidevops config list

# Validate config against schema
aidevops config validate

Precedence: Environment variables (AIDEVOPS_*) > config file > built-in defaults. See the full configuration reference for every option, type, default, and example.
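That precedence rule maps directly onto shell default-expansion (variable names here are illustrative, not the framework's actual internals):

```shell
# Precedence for one setting: AIDEVOPS_* env var > config file > built-in default.
unset AIDEVOPS_AUTO_UPDATE               # demo: start with no env override
default_value=true                       # built-in default
config_value=false                       # as if read from config.jsonc
effective="${AIDEVOPS_AUTO_UPDATE:-${config_value:-$default_value}}"
# effective is "false": the config value wins while the env var is unset

AIDEVOPS_AUTO_UPDATE=true
overridden="${AIDEVOPS_AUTO_UPDATE:-$config_value}"
# overridden is "true": the env var takes priority once set
```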

Service Credentials

Service-specific credentials (hosting, DNS, Git platforms, etc.) use a separate template system:

# 1. Copy and customize configuration templates
cp configs/hostinger-config.json.txt configs/hostinger-config.json
cp configs/hetzner-config.json.txt configs/hetzner-config.json
# Edit with your actual credentials

# 2. Test connections
./.agents/scripts/servers-helper.sh list

# 3. Install MCP integrations (optional)
bash .agents/scripts/setup-mcp-integrations.sh all

See .agents/reference/configuration.md for details on the template system and credential security.

Security & Best Practices

Credential Management:

  • Store API tokens in separate config files (never hardcode)
  • Use Ed25519 SSH keys (modern, secure, fast)
  • Set proper file permissions (600 for configs)
  • Regular key rotation and access audits
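The 600 permission rule translates to a one-line chmod (the path in the comment is an example from this repo; a temp file stands in for it here):

```shell
# Lock a config file down to owner read/write only (mode 0600).
f=$(mktemp)                      # stand-in for e.g. configs/hostinger-config.json
chmod 600 "$f"
perms=$(ls -l "$f" | cut -c1-10) # first ls column shows the mode bits
echo "$perms"
```

Mode 600 means the owning user can read and write the file while group members and everyone else get no access at all, which is the right posture for anything holding API tokens.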

Multi-Tenant Credential Storage

Manage multiple accounts/clients per service with isolated credential sets:

# Create a new tenant
credential-helper.sh create client-acme

# Switch active tenant
credential-helper.sh switch client-acme

# Set credentials for current tenant
credential-helper.sh set GITHUB_TOKEN ghp_xxx

# Per-project override (gitignored)
echo "client-acme" > .aidevops-tenant

# Export for scripts
source <(credential-helper.sh export)

Resolution priority: Project .aidevops-tenant → Global active tenant → Default
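A sketch of that resolution order (the global active-tenant path is illustrative; the real logic lives in credential-helper.sh):

```shell
# Tenant resolution: project file, then global active tenant, then default.
resolve_tenant() {
  if [ -f .aidevops-tenant ]; then
    cat .aidevops-tenant                      # 1. per-project override
  elif [ -f "$HOME/.aidevops/active-tenant" ]; then
    cat "$HOME/.aidevops/active-tenant"       # 2. global active tenant (illustrative path)
  else
    echo default                              # 3. fallback
  fi
}

cd "$(mktemp -d)"
echo client-acme > .aidevops-tenant
tenant=$(resolve_tenant)
```

Because the per-project file wins, checking out a client repo is enough to switch credentials without touching global state.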

See .agents/tools/credentials/multi-tenant.md for complete documentation.

Quality Assurance:

  • Multi-platform analysis (SonarCloud, CodeFactor, Codacy, CodeRabbit, Qlty, Snyk, Gemini Code Assist)
  • Automated security monitoring and vulnerability detection

Contributing & License

Contributing:

  1. Fork the repository
  2. Create feature branch
  3. Add provider support or improvements
  4. Test with your infrastructure
  5. Submit pull request

License: MIT License - see the LICENSE file for details. Created by Marcus Quinn. Copyright Ā© 2025-2026.


What This Framework Achieves

For You:

  • Autonomous project management — dispatch a mission and let AI agents handle milestones, validation, and delivery across days
  • Cross-domain operations — code, automation, business, marketing, legal, sales, content, video, research, SEO, social media, health, and accounts managed through one platform
  • Multi-model safety — destructive operations verified by a second AI provider before execution
  • Enterprise-grade quality — multi-platform analysis, automated security monitoring, continual improvement loops
  • Infrastructure management — 30+ service integrations with standardized commands across all providers

For Your AI Agents:

  • Autonomous supervisor — pulse runs every 2 minutes, merging PRs, dispatching workers, killing stuck processes, advancing missions
  • Operational intelligence — struggle-ratio detection, orphaned PR recovery, circuit breaker, dynamic concurrency
  • Cost-aware routing — 7-tier model selection (local → haiku → flash → sonnet → pro → opus → grok) with budget tracking
  • Progressive context — 900+ subagents loaded on demand, project bundles auto-configuring quality gates and model tiers
  • Self-improving — session mining extracts learnings, quality findings auto-create tasks, patterns feed back into agent prompts

Get Started:

# npm (recommended)
npm install -g aidevops && aidevops update

# Bun (fast alternative)
bun install -g aidevops && aidevops update

# Homebrew
brew install marcusquinn/tap/aidevops && aidevops update

# Direct from source
bash <(curl -fsSL https://aidevops.sh/install)

An AI operations platform for launching and managing projects across every business domain — from code to content, infrastructure to invoicing.


Manual Config

{
  "mcpServers": {
    "marcusquinn-aidevops": {
      "command": "npx",
      "args": ["marcusquinn-aidevops"]
    }
  }
}