# Prompt Engineering Rules

Guidelines for crafting effective prompts for AI agents.
## General Principles

### Be Clear and Specific
- Clearly state what you want the agent to do
- Avoid ambiguity and vague instructions
- Include specific file paths, function names, and line numbers when relevant
**Bad:** Fix the code.

**Good:** Fix the bug in the `calculateTotal` function that causes it to return incorrect results when the cart is empty.
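To make the "good" prompt above concrete, here is a minimal sketch of the fix it asks for. The `CartItem` shape is an assumption for illustration; the source only names the `calculateTotal` function.

```typescript
interface CartItem {
  price: number;
  quantity: number;
}

// Hypothetical implementation of the fix the prompt requests:
// an explicit guard makes the empty-cart behavior obvious, and
// reduce's initial value of 0 keeps the sum correct for any cart.
function calculateTotal(items: CartItem[]): number {
  if (items.length === 0) {
    return 0; // previously this case returned an incorrect result
  }
  return items.reduce((sum, item) => sum + item.price * item.quantity, 0);
}
```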
### Provide Context

- Use context mentions (`@/path/to/file`) to refer to specific files
- Include relevant code snippets or error messages
- Reference related issues or previous changes

**Good:** `@/src/utils.ts` Refactor the `calculateTotal` function to use async/await instead of callbacks.
### Break Down Tasks
- Divide complex tasks into smaller, well-defined steps
- Use numbered lists for multi-step instructions
- Specify the order of operations
**Good:**
1. First, analyze the current implementation
2. Identify performance bottlenecks
3. Refactor the main loop to use caching
4. Add unit tests for the new implementation
5. Verify the changes don't break existing tests
### Give Examples
- If you have a specific coding style in mind, provide examples
- Show the expected input/output format
- Include code snippets that demonstrate the pattern
### Specify Output Format
- If you need output in a particular format, specify it
- Common formats: JSON, Markdown, TypeScript interfaces
- Include example structure when necessary
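For example, pasting a TypeScript interface into the prompt is an unambiguous way to specify a JSON output format. The fields below are illustrative, not from the source:

```typescript
// Illustrative target shape to include in a prompt, e.g.:
// "Return your analysis as JSON matching this interface."
interface AnalysisResult {
  file: string;                         // path of the analyzed file
  issues: string[];                     // human-readable findings
  severity: "low" | "medium" | "high";  // overall severity
}

// A sample value showing what conforming agent output looks like:
const sample: AnalysisResult = {
  file: "src/utils.ts",
  issues: ["calculateTotal returns an incorrect result for empty carts"],
  severity: "medium",
};
```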
## Thinking vs. Doing
Guide agents through a "think-then-do" process:
### Analyze Phase
Ask the agent to analyze the current code, identify problems, or plan the approach.
**Prompt Pattern:**

```
Analyze the current implementation of [feature].
Identify:
- Potential issues
- Areas for improvement
- Security vulnerabilities
```
### Plan Phase
Have the agent outline the steps it will take to complete the task.
**Prompt Pattern:**

```
Before making any changes, provide a plan:
1. What files will be modified
2. What functions will be added/changed
3. What dependencies are needed
4. What tests should be written
```
### Execute Phase
Instruct the agent to implement the plan, one step at a time.
**Prompt Pattern:**

```
Now implement the plan:
1. Start with [first step]
2. Then [second step]
...
```
### Review Phase
Carefully review the results of each step before proceeding.
**Prompt Pattern:**

```
Review the changes:
- Do they meet the requirements?
- Are there any side effects?
- Do the tests pass?
```
## Custom Instructions

### Global Custom Instructions

Apply to all agents and modes. Place them in `.kilo/rules/global.md` or `AGENTS.md`.
Examples:
- Enforce coding style guidelines
- Specify preferred libraries
- Define project-specific conventions
### Mode-Specific Custom Instructions

Apply only to specific agents. Place them in `.kilo/agents/[agent].md`.
Examples:
- Lead Developer: "Always write tests before code (TDD)"
- Code Skeptic: "Check for security vulnerabilities first"
- Frontend Developer: "Use Tailwind CSS for all styling"
## Handling Ambiguity

### When the Request Is Unclear

The agent should:

- Ask clarifying questions using the `question` tool
- Not make assumptions without user confirmation
- Provide options for the user to choose from
**Example:**

```
I need clarification:
- Option A: Implement as a new module
- Option B: Add to existing service
- Option C: Create a separate microservice
```
### Providing Multiple Options
When presenting choices:
- Use clear, concise labels
- Provide descriptions for each option
- Recommend a default choice
- Allow custom input
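One way to keep such option lists consistent is to model them as data. The structure below is an illustrative sketch, not a documented API:

```typescript
// Hypothetical shape for a set of choices presented to the user.
interface PromptOption {
  label: string;         // clear, concise label
  description: string;   // what choosing this option means
  recommended?: boolean; // marks the suggested default
}

const options: PromptOption[] = [
  { label: "Option A", description: "Implement as a new module", recommended: true },
  { label: "Option B", description: "Add to existing service" },
  { label: "Option C", description: "Create a separate microservice" },
];

// Surface the recommended default; fall back to the first option
// so the user always has a concrete starting point.
const defaultChoice = options.find(o => o.recommended) ?? options[0];
```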
## Providing Feedback

### Rejecting Actions
When the agent proposes an incorrect action:
- Explain why the action is wrong
- Provide the correct approach
- Give an example if helpful
**Example:**

```
This approach won't work because [reason].
Instead, use [correct approach].
Here's an example: [code snippet]
```
### Rewording Requests

If the initial prompt doesn't produce the desired results:
- Be more specific about requirements
- Add constraints or boundaries
- Provide examples of expected output
- Break into smaller sub-tasks
## Prompt Patterns

### Feature Request Pattern
```markdown
## Feature: [Feature Name]

### Requirements
1. [Requirement 1]
2. [Requirement 2]

### Acceptance Criteria
- [ ] [Criterion 1]
- [ ] [Criterion 2]

### Files to Modify
- `path/to/file1.ts`
- `path/to/file2.ts`

### Constraints
- Use [library/framework]
- Follow [pattern/style]
- Must be backward compatible
```
### Bug Fix Pattern
```markdown
## Bug: [Bug Description]

### Current Behavior
[What's happening now]

### Expected Behavior
[What should happen]

### Steps to Reproduce
1. [Step 1]
2. [Step 2]

### Files Involved
- `path/to/file.ts` (line X)

### Error Message
[Stack trace or error message]
```
### Refactoring Pattern
```markdown
## Refactor: [What to Refactor]

### Current Implementation
[Brief description or code snippet]

### Target Implementation
[What it should become]

### Reason
[Why this refactoring is needed]

### Files to Update
- [File list]

### Requirements
- Maintain existing functionality
- Keep tests passing
- Update documentation
```
## Examples

### Good Prompts
**Feature Implementation:**

```
@/src/components/Button.tsx Refactor the Button component to support three variants: primary, secondary, and danger. Use the design tokens from @/src/styles/tokens.ts.
```

**Bug Fix:**

```
@problems Fix the TypeError in calculateTotal when the cart is empty. The function should return 0 for empty arrays.
```

**Architecture:**

```
Create a new service for handling user notifications. Follow the pattern in @/src/services/EmailService.ts. The service should support email, SMS, and push notifications.
```
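The architecture prompt works because it points the agent at an existing pattern to imitate. The actual shape of `EmailService` is not given in the source; below is a sketch of what such a pattern might look like, assuming a shared notifier interface:

```typescript
// Assumed pattern for @/src/services/EmailService.ts: each channel
// implements the same interface, so the new channels the prompt asks
// for (SMS, push) can follow the existing service mechanically.
interface Notifier {
  send(recipient: string, message: string): Promise<boolean>;
}

class EmailService implements Notifier {
  async send(recipient: string, message: string): Promise<boolean> {
    console.log(`email to ${recipient}: ${message}`);
    return true;
  }
}

// A new channel written "following the pattern" of EmailService:
class SmsService implements Notifier {
  async send(recipient: string, message: string): Promise<boolean> {
    console.log(`sms to ${recipient}: ${message}`);
    return true;
  }
}
```

Because every channel shares the `Notifier` interface, calling code can treat them interchangeably.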
### Bad Prompts

**Too Vague:** "Fix the button." (What's wrong with it?)

**No Context:** "Write some Python code." (What should it do?)

**Too Broad:** "Fix everything." (What is "everything"?)

**No Constraints:** "Create a new feature." (What feature? For what?)
## Prompt Anti-Patterns

### Avoid These Patterns

- **Kitchen Sink Prompts**
  - Asking for too many things at once
  - Better: Break into multiple focused prompts
- **Missing Constraints**
  - Not specifying required libraries or patterns
  - Better: Always mention relevant constraints
- **Assuming Knowledge**
  - Expecting the agent to know project context
  - Better: Provide `@file` references
- **Ignoring Output**
  - Not reviewing the agent's changes
  - Better: Always review and provide feedback
- **No Iteration**
  - Accepting the first result without refinement
  - Better: Iterate on prompts for better results
## Context Mentions

Use context mentions to provide specific file references:

| Mention | Description |
|---|---|
| `@file.js` | Reference a specific file |
| `@/src/utils/` | Reference a directory |
| `@problems` | Reference all errors in the file |
| `@git` | Reference git history |
| `@terminal` | Reference terminal output |
## Iterative Prompting

When initial results aren't perfect:

- **Identify Issues:** What's wrong with the current output?
- **Clarify Requirements:** Add more specific instructions
- **Provide Examples:** Show the expected format or pattern
- **Break Down:** Split complex tasks into smaller ones
- **Refine:** Adjust constraints or add context
## Verification Checklist
After receiving agent output:
- Does it meet all stated requirements?
- Are there any syntax errors?
- Does it follow project conventions?
- Are edge cases handled?
- Is it backward compatible?
- Are tests included/updated?
- Is documentation updated?