# Kilo Code Agents Reference

This file configures AI agent behavior for the APAW project, a self-improving code pipeline with Gitea logging.

## Pipeline Workflow

The main workflow is `/pipeline`; use it to process issues through all agents automatically.

```text
User: /pipeline 42
Agent: Runs full pipeline for issue #42 with Gitea logging
```

## Commands (Slash Commands)

| Command | Description | Usage |
| --- | --- | --- |
| `/pipeline <issue>` | Run full agent pipeline for an issue | `/pipeline 42` |
| `/status <issue>` | Check pipeline status for an issue | `/status 42` |
| `/evaluate <issue>` | Generate performance report | `/evaluate 42` |
| `/plan` | Creates detailed task plans | `/plan feature X` |
| `/ask` | Answers codebase questions | `/ask how does auth work` |
| `/debug` | Analyzes and fixes bugs | `/debug error in login` |
| `/code` | Quick code generation | `/code add validation` |
| `/research [topic]` | Run research and self-improvement | `/research multi-agent` |

## Pipeline Agents (Subagents)

These agents are invoked automatically by `/pipeline` or manually via `@mention`:

### Core Development

| Agent | Role | When Invoked |
| --- | --- | --- |
| @RequirementRefiner | Converts vague ideas and bug reports into strict User Stories with acceptance-criteria checklists | Issue status: `new` |
| @HistoryMiner | Analyzes git history to find duplicates and past solutions, preventing regressions and duplicate work | Status: `planned` |
| @SystemAnalyst | Designs technical specifications, data schemas, and API contracts before implementation | Status: `researching` |
| @SdetEngineer | Writes tests following TDD methodology | Status: `designed` |
| @LeadDeveloper | Primary code writer for backend and core logic | Status: `testing` |
| @FrontendDeveloper | Handles UI implementation with multimodal capabilities | When UI work is needed |
| @BackendDeveloper | Backend specialist for Node.js | When backend work is needed |
| @GoDeveloper | Go backend specialist for Gin, Echo, APIs, and database integration | When a Go backend is needed |
| @DevopsEngineer | DevOps specialist for Docker, Kubernetes, CI/CD pipeline automation, and infrastructure management | When deployment/infra work is needed |

### Quality Assurance

| Agent | Role | When Invoked |
| --- | --- | --- |
| @CodeSkeptic | Adversarial code reviewer | Status: `implementing` |
| @TheFixer | Iteratively fixes bugs based on specific error reports and test failures | When review fails |
| @PerformanceEngineer | Reviews code for performance issues | After @CodeSkeptic |
| @SecurityAuditor | Scans for security vulnerabilities, OWASP Top 10, dependency CVEs, and hardcoded secrets | After @PerformanceEngineer |
| @VisualTester | Visual regression testing agent that compares screenshots and detects UI differences using pixelmatch and image diffs | When the UI changes |

### DevOps & Infrastructure

| Agent | Role | When Invoked |
| --- | --- | --- |
| @devops-engineer | Docker/Swarm/K8s deployment | When deployment is needed |
| @security-auditor | Container security scan | After deployment config |

### Cognitive Enhancement

| Agent | Role | When Invoked |
| --- | --- | --- |
| @Planner | Advanced task planner using Chain of Thought, Tree of Thoughts, and Plan-Execute-Reflect | Complex tasks |
| @Reflector | Self-reflection agent using the Reflexion pattern; learns from mistakes | After each agent |
| @MemoryManager | Manages agent memory systems: short-term (context), long-term (vector store), and episodic (experiences) | Context management |

### Meta & Process

| Agent | Role | When Invoked |
| --- | --- | --- |
| @Orchestrator | Main dispatcher | Manages all agent routing |
| @ReleaseManager | Manages git operations, semantic versioning, branching, and deployments | Status: `releasing` |
| @Evaluator | Scores agent effectiveness after task completion for continuous improvement | Status: `evaluated` |
| @PromptOptimizer | Improves agent system prompts based on performance failures | When score < 7 |
| @ProductOwner | Manages issue checklists and status labels, tracks progress, and coordinates with human users | Manages issues |
| @AgentArchitect | Creates, modifies, and reviews agents, workflows, and skills based on capability-gap analysis | When gaps are identified |
| @CapabilityAnalyst | Analyzes task requirements against available agents, workflows, and skills | When starting a new task |
| @WorkflowArchitect | Creates and maintains workflow definitions with complete architecture, Gitea integration, and quality gates | When a new workflow is needed |
| @MarkdownValidator | Validates and corrects Markdown descriptions for Gitea issues | Before issue creation |

## Status Labels

The pipeline uses Gitea labels to track progress:

- `status: new` → `status: planned` → `status: researching` → ...
- Agents add and remove labels automatically
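
The label progression above can be sketched as an ordered list with a lookup helper. This is an illustrative sketch only; the exact label set is inferred from the "When Invoked" columns in the agent tables and may not match the real pipeline configuration:

```typescript
// Ordered status labels, inferred from the agent tables above (assumption).
const STATUS_ORDER = [
  'status: new',
  'status: planned',
  'status: researching',
  'status: designed',
  'status: testing',
  'status: implementing',
  'status: releasing',
  'status: evaluated',
]

// Return the next status label in the pipeline, or undefined at the end
// (or when the current label is unknown).
function nextStatus(current: string): string | undefined {
  const i = STATUS_ORDER.indexOf(current)
  return i >= 0 ? STATUS_ORDER[i + 1] : undefined
}
```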

## Performance Logging

Each agent logs its results to Gitea issue comments:

```markdown
## ✅ lead-developer completed

**Score**: 8/10
**Duration**: 1.2h
**Files**: src/auth.ts, src/user.ts

### Notes
- Clean implementation
- Follows existing patterns
- Tests passing
```
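
A helper that renders this comment format could look like the following sketch; the function name and signature are hypothetical, not part of the project's API:

```typescript
// Hypothetical formatter producing the completion comment shown above.
function formatCompletionComment(
  agent: string,
  score: number,
  durationHours: number,
  files: string[],
  notes: string[]
): string {
  return [
    `## ✅ ${agent} completed`,
    '',
    `**Score**: ${score}/10`,
    `**Duration**: ${durationHours}h`,
    `**Files**: ${files.join(', ')}`,
    '',
    '### Notes',
    ...notes.map((note) => `- ${note}`),
  ].join('\n')
}
```

The result can then be posted as an issue comment through whatever Gitea client the pipeline uses.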

## Efficiency Tracking

Scores are saved to `.kilo/logs/efficiency_score.json`:

```json
{
  "version": "1.0",
  "history": [
    {
      "issue": 42,
      "date": "2024-01-02T10:00:00Z",
      "agents": {
        "lead-developer": 8,
        "code-skeptic": 7,
        "the-fixer": 9
      },
      "iterations": 2,
      "duration_hours": 1.5
    }
  ]
}
```
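
The schema above can be described with TypeScript interfaces. These are illustrative type definitions based on the sample JSON, not types exported by the project:

```typescript
// Illustrative types mirroring the assumed shape of efficiency_score.json.
interface EfficiencyRun {
  issue: number
  date: string                    // ISO 8601 timestamp
  agents: Record<string, number>  // agent name -> score (1-10)
  iterations: number
  duration_hours: number
}

interface EfficiencyLog {
  version: string
  history: EfficiencyRun[]
}

// Example value matching the sample JSON above:
const example: EfficiencyLog = {
  version: '1.0',
  history: [
    {
      issue: 42,
      date: '2024-01-02T10:00:00Z',
      agents: { 'lead-developer': 8, 'code-skeptic': 7, 'the-fixer': 9 },
      iterations: 2,
      duration_hours: 1.5,
    },
  ],
}
```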

## Manual Agent Invocation

Use the Task tool to invoke a subagent directly:

```text
Task tool with:
  subagent_type: "lead-developer"
  prompt: "Implement authentication for issue #42"
```

Or via `@mention`:

```text
@lead-developer implement authentication flow
```

## Environment Variables

Required for Gitea integration:

```text
GITEA_API_URL=https://git.softuniq.eu/api/v1
GITEA_TOKEN=your-token-here
```
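
A small fail-fast check for these variables can be sketched as follows; the helper is hypothetical and not part of the project:

```typescript
// Hypothetical helper: read a required environment variable or fail loudly.
function requireEnv(name: string): string {
  const value = process.env[name]
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`)
  }
  return value
}

// Usage (with the variables from the block above exported):
// const apiUrl = requireEnv('GITEA_API_URL')
// const token = requireEnv('GITEA_TOKEN')
```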

## Self-Improvement Cycle

1. The pipeline runs for each issue
2. The Evaluator scores each agent (1-10)
3. Low scores (< 7) trigger the prompt optimizer
4. The prompt optimizer analyzes failures and improves prompts
5. New prompts are saved to `.kilo/agents/`
6. The next run uses the improved prompts
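
The threshold check in step 3 can be sketched as a pure function; the name and signature are illustrative only:

```typescript
// Illustrative check: which agents scored below the optimization threshold?
function agentsNeedingOptimization(
  scores: Record<string, number>,
  threshold = 7
): string[] {
  return Object.entries(scores)
    .filter(([, score]) => score < threshold)
    .map(([agent]) => agent)
}
```

With scores of 8 for `lead-developer` and 6 for `code-skeptic`, only `code-skeptic` would be routed to the prompt optimizer.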

## Architecture Files

| File | Purpose |
| --- | --- |
| `AGENTS.md` | This file; the main configuration |
| `.kilo/agents/*.md` | Agent definitions with prompts |
| `.kilo/commands/*.md` | Workflow commands |
| `.kilo/rules/*.md` | Custom rules loaded globally |
| `.kilo/skills/` | Skill modules |
| `src/kilocode/` | TypeScript API for programmatic use |

## Skills Reference

### Containerization Skills

| Skill | Purpose | Location |
| --- | --- | --- |
| docker-compose | Multi-container orchestration | `.kilo/skills/docker-compose/` |
| docker-swarm | Production cluster deployment | `.kilo/skills/docker-swarm/` |
| docker-security | Container security hardening | `.kilo/skills/docker-security/` |
| docker-monitoring | Container monitoring and logging | `.kilo/skills/docker-monitoring/` |

### Node.js Skills

| Skill | Purpose | Location |
| --- | --- | --- |
| nodejs-express-patterns | Express routing and middleware | `.kilo/skills/nodejs-express-patterns/` |
| nodejs-auth-jwt | JWT authentication | `.kilo/skills/nodejs-auth-jwt/` |
| nodejs-security-owasp | OWASP security | `.kilo/skills/nodejs-security-owasp/` |

### Database Skills

| Skill | Purpose | Location |
| --- | --- | --- |
| postgresql-patterns | PostgreSQL patterns | `.kilo/skills/postgresql-patterns/` |
| sqlite-patterns | SQLite patterns | `.kilo/skills/sqlite-patterns/` |
| clickhouse-patterns | ClickHouse patterns | `.kilo/skills/clickhouse-patterns/` |

### Go Skills

| Skill | Purpose | Location |
| --- | --- | --- |
| go-modules | Go modules management | `.kilo/skills/go-modules/` |
| go-concurrency | Goroutines and channels | `.kilo/skills/go-concurrency/` |
| go-testing | Go testing patterns | `.kilo/skills/go-testing/` |
| go-security | Go security patterns | `.kilo/skills/go-security/` |

### Process Skills

| Skill | Purpose | Location |
| --- | --- | --- |
| planning-patterns | CoT/ToT planning | `.kilo/skills/planning-patterns/` |
| memory-systems | Memory management | `.kilo/skills/memory-systems/` |
| tool-use | Tool usage patterns | `.kilo/skills/tool-use/` |
| research-cycle | Self-improvement cycle | `.kilo/skills/research-cycle/` |

## Using the TypeScript API

```typescript
import {
  PipelineRunner,
  createPipelineRunner,
  GiteaClient,
  decideRouting
} from './src/kilocode/index.js'

const runner = await createPipelineRunner({
  giteaToken: process.env.GITEA_TOKEN
})

await runner.run({ issueNumber: 42 })
```

## Code Style

- Use TypeScript for new files
- Follow existing patterns
- Write tests before code (TDD)
- Keep functions under 50 lines
- Use early returns
- No comments unless explicitly requested
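
The early-return rule can be illustrated with a small guard clause; the function below is a hypothetical example, not project code:

```typescript
// Early return keeps the happy path unindented (hypothetical example).
function displayName(users: Map<string, string>, id: string): string {
  const name = users.get(id)
  if (!name) return 'unknown' // guard clause: bail out early instead of nesting
  return name.trim()
}
```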