feat(gns2): Gitea-Nervous-System v2.0 - distributed agent state machine
- Add GNS-2 label taxonomy (66 labels) with semantic routing
- Enable Tier 2 agents (capability-analyst, agent-architect, evaluator) for self-cascade
- GNS agent protocol: checkpoint v2 in issue body, machine-readable event footers
- Extend GiteaClient: checkpoint CRUD, event parsing, assignee/lock control, triggered-issue polling
- Rewrite PipelineRunner as PollingSupervisor: reactive instead of active dispatch
- Security: circuit breakers (is_locked), budget governance, depth limits
- Scripts: init-gns-labels.py, validate-gns-agents.py
- Milestone #67 + 7 phase issues (#99-#105) tracking evolution

Refs: Milestone #67, Issues #99-#105
**Agent Architect** (agent definition under `.kilo/agents/`, modified — new version):

````markdown
@@ -2,7 +2,7 @@
name: Agent Architect
mode: subagent
model: ollama-cloud/kimi-k2.6:cloud
description: Creates, modifies, and reviews new agents, workflows, and skills based on capability gap analysis. Tier 2 meta-agent with self-cascade enabled.
color: "#8B5CF6"
permission:
  read: allow
@@ -13,25 +13,56 @@
  grep: allow
  task:
    "*": deny
    "markdown-validator": allow
    "capability-analyst": allow
    "orchestrator": allow
---

# Agent Architect

## Role

Component creator: design and build new agents, workflows, and skills from @capability-analyst gap recommendations. Tier 2 meta-agent with self-cascade enabled.

## Tier

Tier 2 (Meta / Self-Cascade Enabled)

- `max_cascade_depth: 2`
- Can spawn `markdown-validator` and `capability-analyst` as subagents
- Must log all cascade calls in GNS_EVENT footer
- Must read and update checkpoint on every entry/exit

## GNS-2 Protocol

### On Entry (MANDATORY)

1. Read issue body from Gitea API
2. Parse `## GNS Checkpoint` YAML block
3. Verify `checkpoint.budget.remaining > estimated_cost`
4. Verify `checkpoint.depth < 2` (max for Tier 2)
5. Read all comments for capability-analyst gap analysis
6. Read timeline for state-change events

### During Work

- Analyze gap from @capability-analyst recommendation
- Check existing capabilities for overlap
- Design component (agent/workflow/skill)
- Create file with valid YAML frontmatter — **color must be double-quoted**: `"#RRGGBB"`
- Update AGENTS.md + capability-index.yaml
- If validation needed: spawn `markdown-validator` subagent, log in cascade table
- If review needed: spawn `capability-analyst` subagent, log in cascade table

### On Exit (MANDATORY)

1. Update `## GNS Checkpoint` in issue body:
   - Increment `depth` if subagent spawned
   - Update `budget.consumed` and `budget.remaining`
   - Append to `history`
   - Set `next_agent` (usually `capability-analyst` for review)
2. Update labels: add `phase::*`, `agent::*`, `budget::*` as appropriate
3. Update assignee: hand off to `next_agent`
4. Post comment with structured report + GNS_EVENT footer

## Delegates

| Agent | When |
|-------|------|
| markdown-validator | Validate new component frontmatter |
| capability-analyst | Review created component |

## File Locations
@@ -43,12 +74,13 @@
| Rules | `.kilo/rules/{name}.md` |

## Creation Process

1. Read gap from Gitea checkpoint + comments
2. Check existing capabilities for overlap
3. Design component (agent/workflow/skill)
4. Create file with valid YAML frontmatter
5. Update AGENTS.md + capability-index.yaml
6. If validation needed: spawn `markdown-validator`
7. Set `next_agent` for handoff

## Validation Checklist

- [ ] No duplicates with existing components
@@ -61,5 +93,31 @@
- [ ] task permissions use deny-by-default
- [ ] Integration points correct
- [ ] Index files updated
- [ ] GNS checkpoint updated in issue body

## GNS Event Footer Template

```markdown
---
<!-- GNS_EVENT: {
  "type": "subagent_result",
  "agent": "agent-architect",
  "invocation_id": "arch-{issue}-{seq}",
  "parent_id": "{parent_invocation}",
  "depth": {depth},
  "budget": {"before": {before}, "consumed": {consumed}, "remaining": {remaining}},
  "state_changes": {
    "labels_add": ["{phase_label}"],
    "labels_remove": ["{old_phase_label}"],
    "assignee": "{next_agent}",
    "is_locked": false
  },
  "cascade_log": [
    {"agent": "markdown-validator", "task": "validate frontmatter", "tokens": {tokens}, "verdict": "pass"}
  ],
  "next_agent": "{next_agent}",
  "estimated_next_tokens": {estimate},
  "timestamp": "{iso8601}"
} -->
```

<gitea-commenting required="true" skill="gitea-commenting" />
````
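The mandatory entry checks (steps 2–4 of the On Entry protocol) are mechanical enough to sketch in Python. This is an illustration, not code from the commit: the `entry_check` helper and its exceptions are hypothetical, while the checkpoint regex mirrors `CHECKPOINT_PATTERN` from `scripts/validate-gns-agents.py` and the field names follow Checkpoint Schema v2.

```python
import re
import yaml  # PyYAML, same dependency the validator script uses

# Same pattern as CHECKPOINT_PATTERN in scripts/validate-gns-agents.py
CHECKPOINT_RE = re.compile(r"## GNS Checkpoint\s*```yaml\s*(.*?)```", re.DOTALL)

def entry_check(issue_body: str, estimated_cost: int, max_depth: int = 2) -> dict:
    """Parse the checkpoint block from an issue body and enforce the
    budget and depth gates an agent must pass on entry."""
    m = CHECKPOINT_RE.search(issue_body)
    if not m:
        raise ValueError("no GNS Checkpoint block in issue body")
    cp = yaml.safe_load(m.group(1))["checkpoint"]
    if cp["budget"]["remaining"] <= estimated_cost:
        # Protocol: add budget::exhausted, pause, request human approval
        raise RuntimeError("budget insufficient for this invocation")
    if cp["depth"] >= max_depth:
        # Protocol: add cascade::depth-exceeded, lock issue
        raise RuntimeError("cascade depth limit reached")
    return cp
```

A Tier 2 agent would call this once per invocation and abort (with the appropriate labels) if it raises.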
**Capability Analyst** (agent definition under `.kilo/agents/`, modified — new version):

````markdown
@@ -1,5 +1,5 @@
---
description: Analyzes task requirements against available agents, workflows, and skills. Identifies gaps and recommends new components. Tier 2 meta-agent with self-cascade enabled.
mode: subagent
model: ollama-cloud/glm-5.1
color: "#6366F1"
@@ -13,26 +13,51 @@
  task:
    "*": deny
    "agent-architect": allow
    "history-miner": allow
    "orchestrator": allow
---

# Capability Analyst

## Role

Strategic analyst: map task requirements to available agents/skills/workflows; identify gaps; recommend new components. Tier 2 meta-agent with self-cascade enabled.

## Tier

Tier 2 (Meta / Self-Cascade Enabled)

- `max_cascade_depth: 2`
- Can spawn `history-miner` and `agent-architect` as subagents
- Must log all cascade calls in GNS_EVENT footer
- Must read and update checkpoint on every entry/exit

## GNS-2 Protocol

### On Entry (MANDATORY)

1. Read issue body from Gitea API
2. Parse `## GNS Checkpoint` YAML block
3. Verify `checkpoint.budget.remaining > estimated_cost`
4. Verify `checkpoint.depth < 2` (max for Tier 2)
5. Read all comments to understand previous agent conclusions
6. Read timeline for state-change events

### During Work

- Parse task into functional + non-functional requirements
- Inventory: scan `.kilo/agents/`, `.kilo/commands/`, `.kilo/skills/`
- Classify gaps: critical (no tool), partial (incomplete), integration (tools don't connect), skill (domain knowledge missing)
- If git history needed: spawn `history-miner` subagent, log in cascade table
- If spec design needed: spawn `agent-architect` subagent, log in cascade table
- Recommend: new agent, new workflow, enhance existing, or new skill

### On Exit (MANDATORY)

1. Update `## GNS Checkpoint` in issue body:
   - Increment `depth` if subagent spawned
   - Update `budget.consumed` and `budget.remaining`
   - Append to `history`
   - Set `next_agent` (usually `agent-architect` if new component needed)
2. Update labels: add `phase::*`, `agent::*`, `budget::*` as appropriate
3. Update assignee: hand off to `next_agent`
4. Post comment with structured report + GNS_EVENT footer

## Output Format

<analysis agent="capability-analyst">
<requirements><!-- functional and non-functional breakdown --></requirements>
<existing><!-- agents, workflows, skills with relevance --></existing>
@@ -44,6 +69,32 @@
## Handoff

1. Ensure all requirements mapped
2. Classify gaps correctly
3. If new component needed: set `next_agent: agent-architect`
4. If no gaps found: set `next_agent: orchestrator` with `phase::awaiting-review`

## GNS Event Footer Template

```markdown
---
<!-- GNS_EVENT: {
  "type": "subagent_result",
  "agent": "capability-analyst",
  "invocation_id": "cap-{issue}-{seq}",
  "parent_id": "{parent_invocation}",
  "depth": {depth},
  "budget": {"before": {before}, "consumed": {consumed}, "remaining": {remaining}},
  "state_changes": {
    "labels_add": ["{phase_label}"],
    "labels_remove": ["{old_phase_label}"],
    "assignee": "{next_agent}",
    "is_locked": false
  },
  "cascade_log": [
    {"agent": "history-miner", "task": "git search", "tokens": {tokens}, "verdict": "pass"}
  ],
  "next_agent": "{next_agent}",
  "estimated_next_tokens": {estimate},
  "timestamp": "{iso8601}"
} -->
```

<gitea-commenting required="true" skill="gitea-commenting" />
````
**Evaluator** (agent definition under `.kilo/agents/`, modified — new version):

````markdown
@@ -1,5 +1,5 @@
---
description: Scores agent effectiveness after task completion for continuous improvement. Tier 2 meta-agent with self-cascade enabled.
mode: subagent
model: ollama-cloud/glm-5.1
variant: thinking
@@ -21,22 +21,47 @@
# Evaluator

## Role

Performance scorer: objectively evaluate each agent's effectiveness after issue completion. Tier 2 meta-agent with self-cascade enabled.

## Tier

Tier 2 (Meta / Self-Cascade Enabled)

- `max_cascade_depth: 2`
- Can spawn `prompt-optimizer` and `product-owner` as subagents
- Must log all cascade calls in GNS_EVENT footer
- Must read and update checkpoint on every entry/exit

## GNS-2 Protocol

### On Entry (MANDATORY)

1. Read issue body from Gitea API
2. Parse `## GNS Checkpoint` YAML block
3. Verify `checkpoint.budget.remaining > estimated_cost`
4. Verify `checkpoint.depth < 2` (max for Tier 2)
5. Read all comments to reconstruct agent timeline
6. Read timeline for state-change events
7. Load `.kilo/logs/efficiency_score.json` for historical comparison

### During Work

- Score objectively based on metrics, not feelings
- Count iterations: how many fix loops were needed
- Measure efficiency: time to completion
- Identify patterns: recurring issues across runs
- Be constructive: focus on improvement, not blame
- If any score < 7: set `next_agent: prompt-optimizer`
- If process improvement needed: set `next_agent: product-owner`

### On Exit (MANDATORY)

1. Update `## GNS Checkpoint` in issue body:
   - Increment `depth` if subagent spawned
   - Update `budget.consumed` and `budget.remaining`
   - Append to `history`
   - Set `next_agent` (usually `prompt-optimizer` if low scores)
2. Update labels: add `phase::*`, `agent::*`, `budget::*` as appropriate
3. Update assignee: hand off to `next_agent`
4. Post comment with structured report + GNS_EVENT footer
5. Update `.kilo/logs/efficiency_score.json`

## Output Format

<eval agent="evaluator">
<timeline><!-- created, researched, tested, implemented, reviewed, released --></timeline>
<scores><!-- table: agent, score/10, notes --></scores>
@@ -55,8 +80,34 @@
| 1-2 | Failed, critical problems |

## Handoff

1. If any score < 7: set `next_agent: prompt-optimizer`, `phase::refining-prompt`
2. If process improvement needed: set `next_agent: product-owner`
3. Update `.kilo/logs/efficiency_score.json`
4. Document all findings in Gitea comment

## GNS Event Footer Template

```markdown
---
<!-- GNS_EVENT: {
  "type": "subagent_result",
  "agent": "evaluator",
  "invocation_id": "eval-{issue}-{seq}",
  "parent_id": "{parent_invocation}",
  "depth": {depth},
  "budget": {"before": {before}, "consumed": {consumed}, "remaining": {remaining}},
  "state_changes": {
    "labels_add": ["{phase_label}"],
    "labels_remove": ["{old_phase_label}"],
    "assignee": "{next_agent}",
    "is_locked": false
  },
  "cascade_log": [
    {"agent": "prompt-optimizer", "task": "optimize prompts", "tokens": {tokens}, "verdict": "pass"}
  ],
  "next_agent": "{next_agent}",
  "estimated_next_tokens": {estimate},
  "timestamp": "{iso8601}"
} -->
```

<gitea-commenting required="true" skill="gitea-commenting" />
````
**`.kilo/rules/gns-agent-protocol.md`** (new file, 168 lines):

````markdown
# GNS-2 Agent Protocol

Rules for all agents participating in the Gitea-Nervous-System v2.0 distributed workflow.

## Core Principle

Gitea is the shared brain. Every agent reads state from Gitea on entry and writes state back on exit. No agent holds exclusive state in RAM.

## Entry Protocol

Every agent MUST execute on entry:

1. **Read Issue**: `GET /repos/{owner}/{repo}/issues/{number}`
2. **Parse Checkpoint**: Extract YAML block from issue body
3. **Check Budget**: Verify `checkpoint.budget.remaining > estimated_cost`
4. **Check Depth**: Verify `checkpoint.depth < max_depth` from cascade label
5. **Read Timeline**: `GET /issues/{number}/timeline` for recent events
6. **Read Comments**: `GET /issues/{number}/comments` for agent messages

## Execution Protocol

During work:

1. **Atomic Tasks**: One clear deliverable per invocation
2. **Token Budget**: Stop and report if approaching limit
3. **Subagent Calls** (Tier 2+ only): Check budget and depth before spawning
4. **State Changes**: Update labels, assignee, milestone via API

## Exit Protocol

Every agent MUST execute before terminating:

1. **Write Result Comment**: Structured markdown with machine-readable footer
2. **Update Checkpoint**: Patch issue body with new checkpoint YAML
3. **Update Labels**: Reflect new phase, quality, budget state
4. **Set Assignee**: Hand off to next agent or self
5. **Log Cascade**: If subagents were spawned, include cascade table

## Comment Format

```markdown
## 🔄 {agent-name} | phase:{phase} | depth:{depth}

**Event Type**: {subagent_result|state_change|budget_update|security_alert|checkpoint}
**Parent**: {parent_invocation_id}
**Invocation**: {invocation_id}
**Budget**: {before} → {consumed} → {remaining}

### Action Taken
{description}

### Result
```json
{result_json}
```

### Next Decision
**Recommended next**: @{agent-name}
**Rationale**: {why}
**Estimated tokens**: {number}
**Budget remaining**: {number}

### Cascade Log (if any)
| Agent | Task | Result | Tokens | Verdict |
|-------|------|--------|--------|---------|
| {agent} | {task} | {result} | {tokens} | ✅/❌ |

### State Changes
- Labels add: {list}
- Labels remove: {list}
- Assignee: {name}
- Milestone: {id}

---
<!-- GNS_EVENT: {machine_readable_json} -->
```

## Machine-Readable Footer

```html
<!-- GNS_EVENT: {
  "type": "subagent_result|state_change|budget_update|security_alert|checkpoint",
  "agent": "agent-name",
  "invocation_id": "cap-042-003",
  "parent_id": "orch-042-001",
  "depth": 1,
  "budget": {"before": 5000, "consumed": 1200, "remaining": 3800},
  "state_changes": {
    "labels_add": ["phase::drafting-spec"],
    "labels_remove": ["phase::gathering-evidence"],
    "assignee": "agent-architect",
    "milestone": null,
    "is_locked": false
  },
  "cascade_log": [
    {"agent": "history-miner", "task": "git search", "tokens": 1200, "verdict": "pass"}
  ],
  "next_agent": "agent-architect",
  "estimated_next_tokens": 3000,
  "timestamp": "2026-05-08T20:00:00Z"
} -->
```

## Checkpoint Schema v2

```yaml
checkpoint:
  version: 2
  issue: {number}
  phase: {phase_name}
  depth: {current_depth}
  last_agent: {agent_name}
  last_invocation: {invocation_id}
  budget:
    total: {allocated}
    consumed: {used}
    remaining: {left}
  state:
    labels: [{list}]
    assignee: {agent_name}
    milestone: {milestone_id}
  history:
    - {agent: name, invocation: id, action: description}
  next_agent: {agent_name}
  next_estimated_tokens: {number}
  created_at: {ISO8601}
```

## Budget Governance

- Agent MUST check `checkpoint.budget.remaining` before any subagent call
- Subagent call rejected if `estimated_cost > remaining * 0.5`
- Budget exhaustion → add label `budget::exhausted`, pause, request human approval
- Agent MUST update `consumed` and `remaining` in checkpoint after completion

## Depth Governance

- `cascade::depth-0`: Leaf agents, no subagent calls
- `cascade::depth-1`: One level of subagent calls
- `cascade::depth-2`: Two levels of subagent calls
- `cascade::depth-n`: Unlimited (orchestrator only)
- Depth exceeded → add label `cascade::depth-exceeded`, lock issue

## Security Rules

- Agent MUST NOT modify `.kilo/` files without `permission::evolve-system`
- Agent MUST NOT call subagents not in `allowed_subagents` list
- Agent MUST NOT exceed `max_cascade_depth`
- Violation → add label `permission::violation`, `is_locked = true`

## Recovery

If agent crashes or orchestrator restarts:

1. Read issue body → parse checkpoint
2. Read timeline → reconstruct events since last checkpoint
3. Read comments → parse GNS_EVENT footers
4. Resume from `next_agent` in checkpoint
5. No state lost — everything is in Gitea

## Prohibited Actions

- DO NOT hold state in RAM without writing to Gitea
- DO NOT skip comment footer
- DO NOT skip checkpoint update
- DO NOT exceed budget or depth limits
- DO NOT modify checkpoint version
- DO NOT hardcode APAW in API calls
````
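The machine-readable footer and the budget-governance rule above can be exercised with a few lines of Python. This is a sketch, not code from the commit: `parse_event` and `may_spawn_subagent` are hypothetical helper names; the regex matches `EVENT_PATTERN` in `scripts/validate-gns-agents.py`, and the 0.5 factor is the rule stated under Budget Governance.

```python
import json
import re

# Same pattern as EVENT_PATTERN in scripts/validate-gns-agents.py
EVENT_RE = re.compile(r"<!-- GNS_EVENT:\s*(.*?)\s*-->", re.DOTALL)

def parse_event(comment_body: str):
    """Extract the machine-readable GNS_EVENT footer from a comment, if present."""
    m = EVENT_RE.search(comment_body)
    return json.loads(m.group(1)) if m else None

def may_spawn_subagent(remaining: int, estimated_cost: int) -> bool:
    """Budget governance: reject a subagent call whose estimated cost
    exceeds half of the remaining budget."""
    return estimated_cost <= remaining * 0.5
```

A supervisor replaying the issue's comments would call `parse_event` on each body and skip comments that return `None`.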
**`scripts/init-gns-labels.py`** (new file, 117 lines):

```python
#!/usr/bin/env python3
"""
GNS-2 Label Initialization Script
Idempotent creation of Gitea labels for GNS-2 semantic routing.
"""
import base64
import json
import os
import urllib.error
import urllib.request

GITEA_API = os.environ.get('GITEA_API_URL', 'https://git.softuniq.eu/api/v1')
REPO = os.environ.get('GITEA_REPO', 'UniqueSoft/APAW')
# Credentials are read from the environment, never hardcoded.
USER = os.environ['GITEA_USER']
PASS = os.environ['GITEA_PASS']


def api(path, data=None, method='GET'):
    url = f"{GITEA_API}/repos/{REPO}{path}"
    headers = {'Content-Type': 'application/json'}
    req = urllib.request.Request(
        url,
        data=json.dumps(data).encode() if data else None,
        headers=headers,
        method=method
    )
    # Basic Auth
    creds = base64.b64encode(f"{USER}:{PASS}".encode()).decode()
    req.add_header('Authorization', f'Basic {creds}')
    try:
        with urllib.request.urlopen(req) as r:
            return json.loads(r.read())
    except urllib.error.HTTPError as e:
        body = e.read().decode()
        print(f"  HTTP {e.code}: {body}")
        return None


LABELS = [
    # Phase labels
    {"name": "phase::gathering-evidence", "color": "c2e0c6", "description": "Agent is gathering data"},
    {"name": "phase::drafting-spec", "color": "0052cc", "description": "Agent is drafting specification"},
    {"name": "phase::refining-prompt", "color": "fbca04", "description": "Agent is refining prompts"},
    {"name": "phase::awaiting-review", "color": "d93f0b", "description": "Agent awaits review"},
    {"name": "phase::executing", "color": "0e8a16", "description": "Agent is executing task"},
    {"name": "phase::verifying", "color": "5319e7", "description": "Agent is verifying results"},
    # Agent labels
    {"name": "agent::orchestrator", "color": "7C3AED", "description": "Owned by orchestrator"},
    {"name": "agent::capability-analyst", "color": "6366F1", "description": "Owned by capability-analyst"},
    {"name": "agent::agent-architect", "color": "10B981", "description": "Owned by agent-architect"},
    {"name": "agent::lead-developer", "color": "DC2626", "description": "Owned by lead-developer"},
    {"name": "agent::code-skeptic", "color": "059669", "description": "Owned by code-skeptic"},
    {"name": "agent::the-fixer", "color": "D97706", "description": "Owned by the-fixer"},
    {"name": "agent::evaluator", "color": "8B5CF6", "description": "Owned by evaluator"},
    {"name": "agent::history-miner", "color": "6B7280", "description": "Owned by history-miner"},
    {"name": "agent::system-analyst", "color": "2563EB", "description": "Owned by system-analyst"},
    {"name": "agent::sdet-engineer", "color": "0891B2", "description": "Owned by sdet-engineer"},
    # Budget labels
    {"name": "budget::sufficient", "color": "0e8a16", "description": "Token budget sufficient"},
    {"name": "budget::warning", "color": "fbca04", "description": "Token budget low"},
    {"name": "budget::exhausted", "color": "b60205", "description": "Token budget exhausted"},
    # Permission labels
    {"name": "permission::read-only", "color": "cfd3d7", "description": "Read-only access"},
    {"name": "permission::write-code", "color": "0052cc", "description": "Can write code"},
    {"name": "permission::write-config", "color": "5319e7", "description": "Can write config"},
    {"name": "permission::evolve-system", "color": "b60205", "description": "Can evolve system"},
    {"name": "permission::violation", "color": "b60205", "description": "Security violation"},
    # Cascade labels
    {"name": "cascade::depth-0", "color": "cfd3d7", "description": "No subagent calls"},
    {"name": "cascade::depth-1", "color": "c2e0c6", "description": "1-level subagent calls"},
    {"name": "cascade::depth-2", "color": "0052cc", "description": "2-level subagent calls"},
    {"name": "cascade::depth-n", "color": "5319e7", "description": "Unlimited subagent calls"},
    {"name": "cascade::depth-exceeded", "color": "b60205", "description": "Depth limit exceeded"},
    # Quality labels
    {"name": "quality::pass", "color": "0e8a16", "description": "Quality check passed"},
    {"name": "quality::fail", "color": "b60205", "description": "Quality check failed"},
    {"name": "quality::needs-fix", "color": "fbca04", "description": "Needs fixes"},
    {"name": "quality::blocked", "color": "d73a4a", "description": "Blocked by quality"},
    # Evolution labels
    {"name": "evolution::model-change", "color": "8B5CF6", "description": "Model change evolution"},
    {"name": "evolution::new-agent", "color": "10B981", "description": "New agent evolution"},
    {"name": "evolution::new-skill", "color": "2563EB", "description": "New skill evolution"},
    {"name": "evolution::new-workflow", "color": "7C3AED", "description": "New workflow evolution"},
    {"name": "evolution::prompt-opt", "color": "D97706", "description": "Prompt optimization evolution"},
    # Memory labels
    {"name": "memory::checkpoint", "color": "0052cc", "description": "Checkpoint stored"},
    {"name": "memory::stale", "color": "fbca04", "description": "Checkpoint stale"},
    {"name": "memory::fresh", "color": "0e8a16", "description": "Checkpoint fresh"},
    {"name": "memory::recoverable", "color": "c2e0c6", "description": "Checkpoint recoverable"},
]


def main():
    print("GNS-2 Label Initialization")
    print(f"Target: {REPO}")
    print()

    existing = api("/labels")
    existing_names = {l['name'] for l in (existing or [])}
    print(f"Existing labels: {len(existing_names)}")

    created = 0
    skipped = 0
    for label in LABELS:
        if label['name'] in existing_names:
            print(f"  SKIP: {label['name']}")
            skipped += 1
            continue
        result = api("/labels", label, 'POST')
        if result:
            print(f"  CREATE: {label['name']} ({label['color']})")
            created += 1
        else:
            print(f"  FAIL: {label['name']}")

    print()
    print(f"Done: {created} created, {skipped} skipped")
    print(f"Total labels: {len(existing_names) + created}")


if __name__ == '__main__':
    main()
```
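Every label in the list above follows a `{category}::{value}` naming convention. A small sanity check could guard the list before posting; `invalid_labels` is a hypothetical helper, not part of the commit, and the category set is taken from the comments in the list.

```python
import re

# Categories as they appear in the LABELS list above (assumed complete)
LABEL_RE = re.compile(
    r"^(phase|agent|budget|permission|cascade|quality|evolution|memory)::[a-z0-9-]+$"
)

def invalid_labels(labels):
    """Return names from a list of {"name": ...} dicts that do not follow
    the GNS-2 {category}::{value} convention."""
    return [l["name"] for l in labels if not LABEL_RE.match(l["name"])]
```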
74
scripts/validate-gns-agents.py
Normal file
74
scripts/validate-gns-agents.py
Normal file
@@ -0,0 +1,74 @@
|
|||||||
|
#!/usr/bin/env python3
"""
GNS-2 Agent Protocol Validator
Validates that agents follow Gitea-Nervous-System v2.0 protocol.
"""
import re
import sys
import yaml
import glob

CHECKPOINT_PATTERN = re.compile(r'## GNS Checkpoint\s*```yaml\s*(.*?)```', re.DOTALL)
EVENT_PATTERN = re.compile(r'<!-- GNS_EVENT:\s*(.*?)\s*-->', re.DOTALL)


def validate_agent_file(path):
    with open(path) as f:
        content = f.read()

    errors = []
    agent_name = path.split('/')[-1].replace('.md', '')

    # Check frontmatter
    if not content.startswith('---'):
        errors.append('Missing YAML frontmatter')
    else:
        parts = content.split('---')
        if len(parts) >= 2:
            try:
                fm = yaml.safe_load(parts[1])
                if not fm.get('description'):
                    errors.append('Missing description in frontmatter')
                if 'mode' not in fm:
                    errors.append('Missing mode in frontmatter')
                if 'task' not in str(fm.get('permission', {})):
                    errors.append('Missing task permission')
            except Exception as e:
                errors.append(f'Invalid YAML frontmatter: {e}')

    # Check GNS protocol sections
    if 'GNS Checkpoint' not in content:
        errors.append('Missing GNS Checkpoint section')
    if 'GNS_EVENT' not in content:
        errors.append('Missing GNS_EVENT footer example')
    if 'gns-agent-protocol' not in content.lower() and 'GNS' not in content:
        errors.append('Agent not updated for GNS-2 protocol')

    return errors


def main():
    print("GNS-2 Agent Protocol Validator")
    print()

    all_valid = True
    for path in glob.glob('.kilo/agents/*.md'):
        errors = validate_agent_file(path)
        agent_name = path.split('/')[-1].replace('.md', '')

        if errors:
            print(f"❌ {agent_name}: {len(errors)} errors")
            for err in errors:
                print(f"  - {err}")
            all_valid = False
        else:
            print(f"✅ {agent_name}")

    print()
    if all_valid:
        print("All agents pass GNS-2 validation")
        return 0
    else:
        print("Some agents need GNS-2 protocol update")
        return 1


if __name__ == '__main__':
    sys.exit(main())
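The two markers the validator's regexes look for can be exercised directly. The sample issue body and event payload below are illustrative, not a fixed schema; the field names (`phase`, `depth`, `agent`, `next_agent`) follow the protocol used elsewhere in this commit:

```python
import json
import re

# The two GNS-2 markers the validator checks for: a checkpoint block in the
# issue body, and a machine-readable event footer in an issue comment.
CHECKPOINT_PATTERN = re.compile(r'## GNS Checkpoint\s*```yaml\s*(.*?)```', re.DOTALL)
EVENT_PATTERN = re.compile(r'<!-- GNS_EVENT:\s*(.*?)\s*-->', re.DOTALL)

body = "## GNS Checkpoint\n```yaml\nphase: implementing\ndepth: 2\n```"
comment = '<!-- GNS_EVENT: {"agent": "lead-developer", "next_agent": "code-skeptic"} -->'

# Checkpoint: the YAML payload is captured as group 1.
checkpoint_yaml = CHECKPOINT_PATTERN.search(body).group(1)
# Event footer: the payload is JSON, so it parses straight into a dict.
event = json.loads(EVENT_PATTERN.search(comment).group(1))
print(checkpoint_yaml.strip())
print(event["next_agent"])
```

Because events live in HTML comments, they stay invisible in the rendered issue while remaining trivially machine-readable for the supervisor.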
@@ -86,6 +86,8 @@ export interface Issue {
   created_at: string
   updated_at: string
   html_url?: string
+  is_locked?: boolean
+  milestone?: Milestone | null
 }
 
 export interface CreateIssueOptions {
@@ -517,8 +519,192 @@ export class GiteaClient {
     )
   }
 
-  async setIssueMilestone(issueNumber: number, milestoneId: number | null): Promise<Issue> {
-    return this.updateIssue(issueNumber, { milestone: milestoneId ?? 0 })
+  // ==================== Issue Assignees ====================
+
+  async getAssignee(issueNumber: number): Promise<string | null> {
+    const issue = await this.getIssue(issueNumber)
+    return issue.assignees && issue.assignees.length > 0 ? issue.assignees[0].login : null
+  }
+
+  async setAssignee(issueNumber: number, assignee: string | null): Promise<Issue> {
+    return this.updateIssue(issueNumber, { assignees: assignee ? [assignee] : [] })
+  }
+
+  // ==================== Issue Lock / Circuit Breaker ====================
+
+  async lockIssue(issueNumber: number): Promise<Issue> {
+    return this.updateIssue(issueNumber, { is_locked: true })
+  }
+
+  async unlockIssue(issueNumber: number): Promise<Issue> {
+    return this.updateIssue(issueNumber, { is_locked: false })
+  }
+
+  async isLocked(issueNumber: number): Promise<boolean> {
+    const issue = await this.getIssue(issueNumber)
+    return issue.is_locked || false
+  }
+
+  // ==================== GNS-2 Checkpoint Protocol ====================
+
+  private CHECKPOINT_PATTERN = /## GNS Checkpoint\s*```yaml\s*([\s\S]*?)```/
+
+  async getCheckpoint(issueNumber: number): Promise<any | null> {
+    const issue = await this.getIssue(issueNumber)
+    const match = this.CHECKPOINT_PATTERN.exec(issue.body)
+    if (!match) return null
+    try {
+      // Simple YAML-like parsing - in production use a YAML parser
+      const yaml = match[1]
+      const lines = yaml.split('\n').filter(l => l.trim() && !l.trim().startsWith('#'))
+      const result: any = {}
+      let current: any = result
+      const indentStack: { obj: any; indent: number }[] = [{ obj: result, indent: -1 }]
+
+      for (const line of lines) {
+        const indent = line.search(/\S/)
+        const trimmed = line.trim()
+        const [key, ...valParts] = trimmed.split(':')
+        const val = valParts.join(':').trim()
+
+        while (indentStack.length > 1 && indent <= indentStack[indentStack.length - 1].indent) {
+          indentStack.pop()
+        }
+        current = indentStack[indentStack.length - 1].obj
+
+        if (val === '') {
+          // Nested object
+          const newObj: any = {}
+          current[key.trim()] = newObj
+          indentStack.push({ obj: newObj, indent })
+        } else if (val.startsWith('[') && val.endsWith(']')) {
+          // Array
+          current[key.trim()] = val.slice(1, -1).split(',').map(s => s.trim())
+        } else if (val === 'true' || val === 'false') {
+          current[key.trim()] = val === 'true'
+        } else if (!isNaN(Number(val))) {
+          current[key.trim()] = Number(val)
+        } else {
+          current[key.trim()] = val
+        }
+      }
+      return result
+    } catch {
+      return null
+    }
+  }
+
+  async updateCheckpoint(issueNumber: number, checkpoint: any): Promise<Issue> {
+    const issue = await this.getIssue(issueNumber)
+    const yamlBlock = `## GNS Checkpoint\n\`\`\`yaml\n${this.toYaml(checkpoint)}\n\`\`\``
+
+    let newBody: string
+    if (this.CHECKPOINT_PATTERN.test(issue.body)) {
+      newBody = issue.body.replace(this.CHECKPOINT_PATTERN, yamlBlock)
+    } else {
+      newBody = issue.body + '\n\n' + yamlBlock
+    }
+
+    return this.updateIssue(issueNumber, { body: newBody })
+  }
+
+  private toYaml(obj: any, indent = 0): string {
+    const spaces = ' '.repeat(indent)
+    let result = ''
+    for (const [key, val] of Object.entries(obj)) {
+      if (val === null || val === undefined) {
+        result += `${spaces}${key}:\n`
+      } else if (Array.isArray(val)) {
+        if (val.length === 0) {
+          result += `${spaces}${key}: []\n`
+        } else {
+          result += `${spaces}${key}:\n`
+          for (const item of val) {
+            if (typeof item === 'object') {
+              result += `${spaces}- ${this.toYaml(item, indent + 1).trimStart()}`
+            } else {
+              result += `${spaces}- ${item}\n`
+            }
+          }
+        }
+      } else if (typeof val === 'object') {
+        result += `${spaces}${key}:\n`
+        result += this.toYaml(val, indent + 1)
+      } else {
+        result += `${spaces}${key}: ${val}\n`
+      }
+    }
+    return result
+  }
+
+  async clearCheckpoint(issueNumber: number): Promise<Issue> {
+    const issue = await this.getIssue(issueNumber)
+    const newBody = issue.body.replace(this.CHECKPOINT_PATTERN, '')
+    return this.updateIssue(issueNumber, { body: newBody })
+  }
+
+  // ==================== GNS-2 Event Parsing ====================
+
+  private GNS_EVENT_PATTERN = /<!-- GNS_EVENT:\s*({[\s\S]*?})\s*-->/g
+
+  async getGNSEvents(issueNumber: number): Promise<any[]> {
+    const comments = await this.getComments(issueNumber)
+    const events: any[] = []
+
+    for (const comment of comments) {
+      let match
+      while ((match = this.GNS_EVENT_PATTERN.exec(comment.body)) !== null) {
+        try {
+          events.push(JSON.parse(match[1]))
+        } catch {
+          // skip malformed events
+        }
+      }
+    }
+
+    return events
+  }
+
+  async getLastGNSEvent(issueNumber: number): Promise<any | null> {
+    const events = await this.getGNSEvents(issueNumber)
+    return events.length > 0 ? events[events.length - 1] : null
+  }
+
+  // ==================== Polling: Triggered Issues ====================
+
+  async getTriggeredIssues(options?: {
+    labels?: string[]
+    assignee?: string
+    milestone?: number
+    updated_after?: string
+    is_locked?: boolean
+  }): Promise<Issue[]> {
+    const params = new URLSearchParams()
+    params.set('state', 'open')
+
+    if (options?.labels) {
+      params.set('labels', options.labels.join(','))
+    }
+    if (options?.assignee) {
+      params.set('assignee', options.assignee)
+    }
+    if (options?.milestone) {
+      params.set('milestone', String(options.milestone))
+    }
+    if (options?.updated_after) {
+      params.set('since', options.updated_after)
+    }
+
+    const issues = await this.request<Issue[]>(
+      'GET',
+      `/repos/${encodeURIComponent(this.owner)}/${encodeURIComponent(this.repo)}/issues?${params.toString()}`
+    )
+
+    if (options?.is_locked !== undefined) {
+      return issues.filter(i => (i.is_locked || false) === options.is_locked)
+    }
+
+    return issues
   }
 }
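The checkpoint round-trip above (the private `toYaml` on write, the indentation-stack parser in `getCheckpoint` on read) can be sketched in Python. This is a sketch of the same YAML subset, not the production path; the TS code itself notes that a real YAML parser should be used. One asymmetry carries over from the TS pair: `toYaml` emits block-style lists (`- item`) while the parser only reads inline `[a, b]` arrays, so lists do not round-trip.

```python
def parse_checkpoint_yaml(text):
    """Mirror of getCheckpoint's YAML subset: nested objects by indentation,
    inline [a, b] arrays, booleans, and numbers."""
    result = {}
    stack = [(result, -1)]  # (object, indent) pairs, like indentStack in the TS code
    for line in text.split('\n'):
        if not line.strip() or line.strip().startswith('#'):
            continue
        indent = len(line) - len(line.lstrip())
        key, _, val = line.strip().partition(':')
        val = val.strip()
        while len(stack) > 1 and indent <= stack[-1][1]:
            stack.pop()
        current = stack[-1][0]
        if val == '':
            current[key] = {}          # nested object opens a new scope
            stack.append((current[key], indent))
        elif val.startswith('[') and val.endswith(']'):
            current[key] = [s.strip() for s in val[1:-1].split(',')]
        elif val in ('true', 'false'):
            current[key] = val == 'true'
        else:
            try:
                current[key] = float(val) if '.' in val else int(val)
            except ValueError:
                current[key] = val     # plain string
    return result


def to_yaml(obj, indent=0):
    """Mirror of the private toYaml serializer (scalars, dicts, lists)."""
    spaces = ' ' * indent
    out = ''
    for key, val in obj.items():
        if val is None:
            out += f"{spaces}{key}:\n"
        elif isinstance(val, bool):  # before the generic branch: YAML-style lowercase
            out += f"{spaces}{key}: {str(val).lower()}\n"
        elif isinstance(val, list):
            out += f"{spaces}{key}: []\n" if not val else \
                f"{spaces}{key}:\n" + ''.join(f"{spaces}- {i}\n" for i in val)
        elif isinstance(val, dict):
            out += f"{spaces}{key}:\n" + to_yaml(val, indent + 1)
        else:
            out += f"{spaces}{key}: {val}\n"
    return out


# Round-trip holds for scalar/nested-dict checkpoints (field names illustrative).
cp = {'phase': 'implementing', 'depth': 2, 'budget': {'remaining': 3}}
assert parse_checkpoint_yaml(to_yaml(cp)) == cp
```

Storing the checkpoint in the issue body this way keeps all agent state inside Gitea itself, which is what lets the supervisor stay stateless between polls.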
@@ -1,68 +1,42 @@
 // kilocode_change - integrated module
-// Pipeline runner - orchestrates agent workflow with Gitea logging
+// Pipeline runner - GNS-2 Polling Supervisor for distributed agent workflow
 
 import type { AgentRole } from "./index"
-import { decideRouting, formatAgentTag, type IssueContext, type RoutingDecision } from "./router"
-import { type IssueStatus } from "./workflow"
-import {
-  saveEfficiencyScore,
-  type EfficiencyScore,
-  hasLowScore,
-  findPromptOptimizationTargets
-} from "./prompt-loader"
-import {
-  calculateOverallScore,
-  generateRecommendations,
-  type AgentPerformance,
-  type EvaluationResult
-} from "./evaluator"
 import {
   GiteaClient,
   logPipelineStep,
   logAgentPerformance,
   detectRepository
 } from "./gitea-client"
-import * as fs from "fs"
-import * as path from "path"
 
 export interface PipelineConfig {
   giteaToken?: string
   giteaApiUrl?: string
   efficiencyThreshold?: number
   autoLog?: boolean
+  pollIntervalMs?: number
 }
 
 export interface PipelineRunOptions {
   issueNumber: number
-  initialStatus?: IssueStatus
-  files?: string[]
-  testResults?: { passed: number; failed: number }
+  milestone?: number
 }
 
 export interface PipelineResult {
   success: boolean
-  finalAgent: AgentRole | null
+  finalAgent: string | null
   finalStatus: string
-  agentsUsed: AgentRole[]
+  agentsUsed: string[]
   totalSteps: number
   errors: string[]
 }
 
-export interface Checkpoint {
-  issueNumber: number
-  phase: string
-  agentName: string
-  filesModified: string[]
-  status: string
-  timestamp: string
-  nextAgent: string | null
-}
-
-export class PipelineRunner {
+export class PollingSupervisor {
   private client: GiteaClient
   private efficiencyThreshold: number
   private autoLog: boolean
   private initialized: boolean = false
+  private pollInterval: number
 
   constructor(config: PipelineConfig = {}) {
     this.client = new GiteaClient({
@@ -71,6 +45,7 @@ export class PipelineRunner {
     })
     this.efficiencyThreshold = config.efficiencyThreshold ?? 7
     this.autoLog = config.autoLog ?? true
+    this.pollInterval = config.pollIntervalMs ?? 30000 // 30 seconds
   }
 
   async initialize(): Promise<void> {
@@ -81,240 +56,236 @@ export class PipelineRunner {
     this.initialized = true
   }
 
-  async run(options: PipelineRunOptions): Promise<PipelineResult> {
+  /**
+   * GNS-2 Polling Supervisor
+   *
+   * Instead of actively dispatching agents in a while-loop,
+   * the supervisor periodically polls Gitea for issues that
+   * need attention based on labels, assignees, and comments.
+   */
+  async supervise(options: PipelineRunOptions): Promise<PipelineResult> {
     await this.initialize()
 
-    const agentsUsed: AgentRole[] = []
+    const agentsUsed: string[] = []
     const errors: string[] = []
-    let currentStatus: IssueStatus = options.initialStatus ?? "new"
-    let currentAgent: AgentRole | null = null
     let steps = 0
-    const maxSteps = 20 // Prevent infinite loops
+    const maxSteps = 100 // Safety limit
 
-    let ctx: IssueContext = await this.buildIssueContext(options)
+    // Main polling loop
     while (steps < maxSteps) {
       steps++
 
-      const decision = decideRouting(ctx)
-
-      if (!decision.nextAgent) {
-        break
-      }
-
-      currentAgent = decision.nextAgent
-      agentsUsed.push(currentAgent)
-
-      if (this.autoLog) {
-        await logPipelineStep(
-          this.client,
-          options.issueNumber,
-          `${formatAgentTag(currentAgent)}`,
-          "started",
-          decision.instructions
-        )
-      }
-
-      currentStatus = decision.status as IssueStatus
-      await this.client.setStatus(options.issueNumber, currentStatus)
-
-      ctx = await this.buildIssueContext(options)
+      // Check if the issue is locked (circuit breaker)
+      const isLocked = await this.client.isLocked(options.issueNumber)
+      if (isLocked) {
+        await this.logEvent(options.issueNumber, '🔒', 'Issue locked by circuit breaker. Manual review required.')
+        return {
+          success: false,
+          finalAgent: null,
+          finalStatus: 'blocked',
+          agentsUsed,
+          totalSteps: steps,
+          errors: [...errors, 'Issue locked by circuit breaker']
+        }
+      }
+
+      // Get current issue state
+      const issue = await this.client.getIssue(options.issueNumber)
+      const checkpoint = await this.client.getCheckpoint(options.issueNumber)
+      const lastEvent = await this.client.getLastGNSEvent(options.issueNumber)
+
+      // Check if the workflow is complete
+      if (issue.state === 'closed') {
+        return {
+          success: errors.length === 0,
+          finalAgent: lastEvent?.agent || null,
+          finalStatus: 'completed',
+          agentsUsed,
+          totalSteps: steps,
+          errors,
+        }
+      }
+
+      // Check budget exhaustion
+      if (checkpoint?.budget?.remaining !== undefined && checkpoint.budget.remaining <= 0) {
+        await this.client.addLabels(options.issueNumber, ['budget::exhausted'])
+        await this.client.lockIssue(options.issueNumber)
+        await this.logEvent(options.issueNumber, '💰', 'Budget exhausted. Issue locked.')
+        return {
+          success: false,
+          finalAgent: lastEvent?.agent || null,
+          finalStatus: 'budget_exhausted',
+          agentsUsed,
+          totalSteps: steps,
+          errors: [...errors, 'Budget exhausted']
+        }
+      }
+
+      // Determine the next action based on issue state
+      const nextAction = await this.determineNextAction(issue, checkpoint, lastEvent)
+
+      if (nextAction.type === 'invoke_agent') {
+        const agentName = nextAction.agent!
+        if (!agentsUsed.includes(agentName)) {
+          agentsUsed.push(agentName)
+        }
+
+        await this.logEvent(
+          options.issueNumber,
+          '🚀',
+          `Invoking ${agentName} (depth: ${checkpoint?.depth || 0}, budget: ${checkpoint?.budget?.remaining || 'unknown'})`
+        )
+
+        // Update the assignee to the target agent.
+        await this.client.setAssignee(options.issueNumber, agentName)
+
+        // In GNS-2, the agent itself reads the issue and acts; the
+        // supervisor only marks that the agent has been triggered, and
+        // the agent responds by posting a comment.
+
+      } else if (nextAction.type === 'wait') {
+        // Wait for the agent to respond
+        await new Promise(resolve => setTimeout(resolve, this.pollInterval))
+        continue
+
+      } else if (nextAction.type === 'stuck') {
+        // Issue hasn't been updated in a while
+        await this.logEvent(options.issueNumber, '⏰', 'Process appears stuck. Last activity older than threshold.')
+        errors.push('Process stuck')
+
+      } else if (nextAction.type === 'complete') {
+        return {
+          success: errors.length === 0,
+          finalAgent: lastEvent?.agent || null,
+          finalStatus: 'completed',
+          agentsUsed,
+          totalSteps: steps,
+          errors,
+        }
+      }
+
+      // Wait before the next poll
+      await new Promise(resolve => setTimeout(resolve, this.pollInterval))
     }
 
     return {
-      success: errors.length === 0,
-      finalAgent: currentAgent,
-      finalStatus: currentStatus,
+      success: false,
+      finalAgent: null,
+      finalStatus: 'max_steps_reached',
       agentsUsed,
       totalSteps: steps,
-      errors,
+      errors: [...errors, `Max steps (${maxSteps}) reached`],
     }
   }
 
-  private async buildIssueContext(options: PipelineRunOptions): Promise<IssueContext> {
-    const issue = await this.client.getIssue(options.issueNumber)
-    const comments = await this.client.getComments(options.issueNumber)
-
-    return {
-      status: issue.labels.find(l => l.name.startsWith("status:"))?.name.replace("status: ", "") ?? "new",
-      labels: issue.labels.map(l => l.name),
-      checklists: this.parseChecklists(issue.body),
-      comments: comments.map(c => c.body),
-      files: options.files ?? [],
-      testResults: options.testResults,
-    }
-  }
+  /**
+   * Determine what to do next based on issue state
+   */
+  private async determineNextAction(
+    issue: any,
+    checkpoint: any | null,
+    lastEvent: any | null
+  ): Promise<{ type: 'invoke_agent' | 'wait' | 'stuck' | 'complete'; agent?: string }> {
+    const now = new Date()
+    const lastUpdated = new Date(issue.updated_at)
+    const minutesSinceUpdate = (now.getTime() - lastUpdated.getTime()) / 60000
+
+    // If the issue was updated less than a minute ago, give the agent time to respond
+    if (minutesSinceUpdate < 1) {
+      return { type: 'wait' }
+    }
+
+    // If no checkpoint exists, this is a new issue
+    if (!checkpoint) {
+      return { type: 'invoke_agent', agent: 'requirement-refiner' }
+    }
+
+    // If the last event specifies next_agent, invoke them
+    if (lastEvent?.next_agent) {
+      // Check whether the next agent has already responded
+      const comments = await this.client.getComments(issue.number)
+      const hasResponded = comments.some(
+        c => c.user?.login === lastEvent.next_agent ||
+          c.body.includes(`## 🔄 ${lastEvent.next_agent}`)
+      )
+
+      if (!hasResponded) {
+        return { type: 'invoke_agent', agent: lastEvent.next_agent }
+      }
+    }
+
+    // Check status labels for routing
+    const statusLabels = issue.labels.filter((l: any) => l.name.startsWith('status::'))
+    const status = statusLabels[0]?.name.replace('status::', '') || 'new'
+
+    // Map status to agent (fallback when checkpoint/event doesn't specify)
+    const statusToAgent: Record<string, string> = {
+      'new': 'requirement-refiner',
+      'planned': 'history-miner',
+      'researching': 'system-analyst',
+      'designed': 'sdet-engineer',
+      'testing': 'lead-developer',
+      'implementing': 'code-skeptic',
+      'reviewing': 'performance-engineer',
+      'fixing': 'the-fixer',
+      'releasing': 'release-manager',
+      'evaluated': 'evaluator',
+      'completed': 'orchestrator',
+    }
+
+    const nextAgent = statusToAgent[status]
+    if (nextAgent && status !== 'completed') {
+      return { type: 'invoke_agent', agent: nextAgent }
+    }
+
+    // If completed, mark as complete
+    if (status === 'completed') {
+      return { type: 'complete' }
+    }
+
+    // If stuck for more than 10 minutes
+    if (minutesSinceUpdate > 10) {
+      return { type: 'stuck' }
+    }
+
+    return { type: 'wait' }
+  }
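The status-label fallback in `determineNextAction` is a plain table lookup. A minimal Python restatement (agent names copied from the `statusToAgent` map above):

```python
# Fallback routing: map a status:: label to the next agent to invoke.
# Mirrors the statusToAgent table in determineNextAction.
STATUS_TO_AGENT = {
    'new': 'requirement-refiner',
    'planned': 'history-miner',
    'researching': 'system-analyst',
    'designed': 'sdet-engineer',
    'testing': 'lead-developer',
    'implementing': 'code-skeptic',
    'reviewing': 'performance-engineer',
    'fixing': 'the-fixer',
    'releasing': 'release-manager',
    'evaluated': 'evaluator',
    'completed': 'orchestrator',
}


def route(labels):
    """Pick the first status:: label and return the agent to invoke,
    or None when the issue is completed (terminal state)."""
    status_labels = [l for l in labels if l.startswith('status::')]
    status = status_labels[0][len('status::'):] if status_labels else 'new'
    if status == 'completed':
        return None  # supervisor reports 'complete' instead of invoking
    return STATUS_TO_AGENT.get(status)


print(route(['status::implementing', 'memory::fresh']))
```

This fallback only fires when neither the checkpoint nor the last GNS_EVENT footer names a `next_agent`, so explicit agent handoffs always take precedence over label-derived routing.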
 
-  private parseChecklists(body: string): { completed: number; total: number } {
-    const lines = body.split("\n")
-    const checkItems = lines.filter(l => l.match(/- \[[ x]\]/i))
-    const completed = checkItems.filter(l => l.match(/- \[x\]/i)).length
-
-    return { completed, total: checkItems.length }
-  }
-
-  async logEvaluation(
-    issueNumber: number,
-    performances: AgentPerformance[],
-    iterations: number,
-    durationHours: number
-  ): Promise<void> {
-    await this.initialize()
-
-    const agents: Record<string, number> = {}
-    for (const perf of performances) {
-      agents[perf.agent] = perf.score
-    }
-
-    const result: EvaluationResult = {
-      issue: issueNumber,
-      date: new Date().toISOString(),
-      agents,
-      iterations,
-      duration_hours: durationHours,
-      summary: calculateOverallScore(performances).toString(),
-      recommendations: generateRecommendations({
-        issue: issueNumber,
-        date: new Date().toISOString(),
-        agents,
-        iterations,
-        duration_hours: durationHours,
-        summary: "",
-        recommendations: [],
-      }),
-    }
-
-    await saveEfficiencyScore({
-      issue: result.issue,
-      date: result.date,
-      agents: result.agents,
-      iterations: result.iterations,
-      duration_hours: result.duration_hours,
-    })
-
-    if (this.autoLog) {
-      const overallScore = calculateOverallScore(performances)
-      const scoreEmoji = overallScore >= 8 ? "🟢" : overallScore >= 5 ? "🟡" : "🔴"
-
-      let comment = `## ${scoreEmoji} Pipeline Evaluation Report
-
-**Issue**: #${issueNumber}
-**Overall Score**: ${overallScore}/10
-**Duration**: ${durationHours.toFixed(1)}h
-**Iterations**: ${iterations}
-
-### Agent Scores
-
-| Agent | Score |
-|-------|-------|
-`
-      for (const perf of performances) {
-        const emoji = perf.score >= 8 ? "🟢" : perf.score >= 5 ? "🟡" : "🔴"
-        comment += `| ${emoji} ${perf.agent} | ${perf.score}/10 |\n`
-      }
-
-      if (result.recommendations.length > 0) {
-        comment += `\n### Recommendations\n\n`
-        for (const rec of result.recommendations) {
-          comment += `- ${rec}\n`
-        }
-      }
-
-      await this.client.createComment(issueNumber, { body: comment })
-
-      const lowScorers = performances.filter(p => p.score < this.efficiencyThreshold)
-      if (lowScorers.length > 0) {
-        const targets = lowScorers.map(p => `@${p.agent}`).join(", ")
-        await this.client.createComment(issueNumber, {
-          body: `⚠️ **Prompt Optimization Needed**\n\nThe following agents scored below ${this.efficiencyThreshold}/10: ${targets}\n\nConsider running prompt optimization after this issue is closed.`
-        })
-      }
-    }
-  }
-
-  async checkForDuplicates(issueNumber: number, keywords: string[]): Promise<{
-    hasDuplicates: boolean
-    relatedIssues: number[]
-  }> {
-    await this.initialize()
-
-    const recentComments = await this.client.getComments(issueNumber)
-    const minedIssues: number[] = []
-
-    for (const keyword of keywords) {
-      for (const comment of recentComments) {
-        const matches = comment.body.matchAll(/#(\d+)/g)
-        for (const match of matches) {
-          const num = parseInt(match[1], 10)
-          if (num !== issueNumber && !minedIssues.includes(num)) {
-            minedIssues.push(num)
-          }
-        }
-      }
-    }
-
-    return {
-      hasDuplicates: minedIssues.length > 0,
-      relatedIssues: minedIssues,
-    }
-  }
-
-  async saveCheckpoint(checkpoint: Checkpoint): Promise<void> {
-    // Ensure the checkpoints directory exists
-    const checkpointDir = path.join(process.cwd(), '.kilo', 'logs', 'checkpoints');
-    if (!fs.existsSync(checkpointDir)) {
-      fs.mkdirSync(checkpointDir, { recursive: true });
-    }
-
-    // Save the checkpoint as JSON
-    const filename = `${checkpoint.issueNumber}-${checkpoint.phase}.json`;
-    const filepath = path.join(checkpointDir, filename);
-
-    fs.writeFileSync(filepath, JSON.stringify(checkpoint, null, 2));
-  }
-
-  async loadCheckpoint(issueNumber: number): Promise<Checkpoint | null> {
-    const checkpointDir = path.join(process.cwd(), '.kilo', 'logs', 'checkpoints');
-
-    // Check if directory exists
-    if (!fs.existsSync(checkpointDir)) {
-      return null;
-    }
-
-    // Find the latest checkpoint file for this issue
-    const files = fs.readdirSync(checkpointDir);
-    const issueFiles = files.filter(file =>
-      file.startsWith(`${issueNumber}-`) && file.endsWith('.json')
-    );
-
-    if (issueFiles.length === 0) {
-      return null;
-    }
-
-    // Sort by modification time to get the latest
-    const sortedFiles = issueFiles.sort((a, b) => {
-      const statA = fs.statSync(path.join(checkpointDir, a));
-      const statB = fs.statSync(path.join(checkpointDir, b));
-      return statB.mtime.getTime() - statA.mtime.getTime();
-    });
-
-    const latestFile = sortedFiles[0];
-    const filepath = path.join(checkpointDir, latestFile);
-
-    const content = fs.readFileSync(filepath, 'utf8');
-    return JSON.parse(content) as Checkpoint;
-  }
-
-  async resumeFromCheckpoint(issueNumber: number): Promise<string | null> {
-    const checkpoint = await this.loadCheckpoint(issueNumber);
-    return checkpoint ? checkpoint.nextAgent : null;
-  }
+  /**
+   * Poll multiple issues for a milestone
+   */
+  async superviseMilestone(milestoneId: number): Promise<PipelineResult[]> {
+    await this.initialize()
+
+    const triggered = await this.client.getTriggeredIssues({
+      milestone: milestoneId,
+      labels: ['status::new', 'status::planned', 'status::researching', 'status::designed', 'status::testing'],
+      is_locked: false,
+    })
+
+    const results: PipelineResult[] = []
+    for (const issue of triggered) {
+      const result = await this.supervise({ issueNumber: issue.number, milestone: milestoneId })
+      results.push(result)
+    }
+
+    return results
+  }
+
+  private async logEvent(issueNumber: number, emoji: string, message: string): Promise<void> {
+    if (this.autoLog) {
+      await this.client.createComment(issueNumber, {
+        body: `${emoji} **Supervisor**: ${message}\n\n\`\`\`\nTimestamp: ${new Date().toISOString()}\n\`\`\``
+      })
+    }
+  }
 }
 
-export async function createPipelineRunner(config?: PipelineConfig): Promise<PipelineRunner> {
-  const runner = new PipelineRunner(config)
-  await runner.initialize()
-  return runner
+export async function createPollingSupervisor(config?: PipelineConfig): Promise<PollingSupervisor> {
+  const supervisor = new PollingSupervisor(config)
+  await supervisor.initialize()
+  return supervisor
 }
 
 export { GiteaClient }
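For reference, the guard order in `supervise` (circuit breaker first, then closed-issue check, then budget governance, then routing) reduces to a few sequential checks. This toy restatement uses plain dicts in place of Gitea issue and checkpoint objects:

```python
# Decision order of PollingSupervisor.supervise, simulated on plain dicts:
# 1. circuit breaker (is_locked), 2. closed issue, 3. budget governance,
# 4. fall through to routing (determineNextAction).
def supervisor_step(issue, checkpoint):
    if issue.get('is_locked'):
        return 'blocked'           # circuit breaker: manual review required
    if issue.get('state') == 'closed':
        return 'completed'         # workflow finished
    budget = (checkpoint or {}).get('budget', {})
    if budget.get('remaining') is not None and budget['remaining'] <= 0:
        return 'budget_exhausted'  # lock the issue, stop spending
    return 'route'                 # hand off to next-action routing


print(supervisor_step({'state': 'open', 'is_locked': True}, None))
print(supervisor_step({'state': 'open'}, {'budget': {'remaining': 0}}))
```

Checking the lock before anything else means a tripped breaker halts the loop even if the checkpoint or event stream is corrupted, which is the point of a circuit breaker.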