# Skills, Prompts & Agents Guide

## Quick Start

You have a powerful system for extending this project. Here's how to use it:

Need to...
- Add OpenAI provider? → Read Skill, Use Prompt, Call Agent
- Optimize latency? → Read Skill, Use Prompt, Call Agent
- Add tests? → Use Agent directly
- Explore code? → Call Explore Agent
- Update docs? → Call mkdocs-content Agent

---

## Understanding the System

### Skills: "How-To Guides"

Skills are detailed guides with best practices and code examples.

Files: .github/skills/[domain]/SKILL.md

Available Skills:
- llm-integration - Add new LLM providers
- performance-optimization - Optimize latency and cost
- testing-strategy - Write robust tests
- observability - Monitor in production
- error-handling - Build resilient systems

When to use:
- Before starting a task in that domain
- To understand best practices
- To avoid common pitfalls
- To learn the patterns

Example:

    Want to add Redis caching?
    → Read: .github/skills/performance-optimization/SKILL.md
    → Section: "Add Redis for L2 Cache"

---

### Prompts: "Step-by-Step Instructions"

Prompts are detailed walkthroughs showing exactly what to do.

Files: /prompts/[action].md

Available Prompts:
- add-provider.md - Add new LLM provider (8 steps)
- add-use-case.md - Create new feature (10 steps)
- optimize-performance.md - Improve latency/cost (8 steps)
- add-observability.md - Add monitoring (10 steps)

When to use:
- When ready to implement
- To follow exact steps
- To create checklists
- To ensure nothing is missed

Example:

    Ready to add OpenAI provider?
    → Read: /prompts/add-provider.md
    → Follow Steps 1-8
    → Check off the checklist

---

### Agents: "AI Specialists"

Agents are autonomous AI assistants that handle complex tasks.

Available Agents:
- llm-integration - Provider setup and configuration
- performance-tuning - Optimization and tuning
- mkdocs-content - Documentation creation
- test-generation - Test writing
- Explore - Codebase exploration

When to use:
- When you want AI to handle complex work
- When you need specific expertise
- When you want multiple changes coordinated
- When you want code generation

Example:

    Complex task: Add multi-model load balancing
    → Use Agent: llm-integration
    → Prompt: "Implement load balancing across OpenAI, Claude, and Ollama"
    → Agent handles: implementation + tests + docs

---

## Workflow: How to Use Together

### Scenario 1: Add OpenAI Provider

    You: "I want to add OpenAI"
      ↓
    1. Read Skill: llm-integration/SKILL.md
       (Learn the provider pattern, cost tracking, multi-provider setup)
      ↓
    2. Read Prompt: prompts/add-provider.md
       (Understand the 8 steps)
      ↓
    3. Option A (DIY): Follow the prompt steps manually
       Option B (Smart): Use an Agent
       → @llm-integration "Add OpenAI GPT-4 provider with cost tracking"
      ↓
    4. Agent creates code + tests + docs
      ↓
    5. You review and merge

### Scenario 2: Reduce Latency from 800ms to <500ms

    You: "Latency is 800ms, need <500ms"
      ↓
    1. Read Skill: performance-optimization/SKILL.md
       (Understand caching, batching, async)
      ↓
    2. Read: optimize-performance.md Step 1 (Measure baseline)
       (Capture current metrics)
      ↓
    3. Read: optimize-performance.md Step 2 (Identify bottleneck)
       (Understand what's slow: AI call? DB? Building context?)
      ↓
    4. Use Agent: @performance-tuning
       "Reduce latency from 800ms to <500ms. Bottleneck is the AI call taking 750ms"
      ↓
    5. Agent implements: L1 cache + possibly async
      ↓
    6. You measure and validate the impact

### Scenario 3: Create New Use Case (Review Sentiment)

    You: "I want to add review sentiment analysis"
      ↓
    1. Read Skill: llm-integration/SKILL.md
       (Refresh provider knowledge)
      ↓
    2. Read: docs/future/use-cases.md → Review Sentiment section
       (Understand scope, architecture, metrics)
      ↓
    3. Read Prompt: prompts/add-use-case.md
       (10-step walkthrough)
      ↓
    4. Option A: Follow steps manually
       Option B: Partially use Agents
       → @test-generation "Create tests for ReviewSentimentService"
       → @mkdocs-content "Document review sentiment use case"
      ↓
    5. You implement the service (most valuable learning)
      ↓
    6. Agents generate tests and docs
      ↓
    7. You optimize with @performance-tuning if needed

---

## Decision Tree

    What do you need to do?
    │
    ├─ Add new LLM provider
    │  ├─ Read: llm-integration/SKILL.md
    │  ├─ Read: prompts/add-provider.md
    │  └─ Use Agent: @llm-integration
    │
    ├─ Improve latency/cost
    │  ├─ Read: performance-optimization/SKILL.md
    │  ├─ Read: prompts/optimize-performance.md
    │  └─ Use Agent: @performance-tuning
    │
    ├─ Create new use case
    │  ├─ Read: docs/future/use-cases.md
    │  ├─ Read: prompts/add-use-case.md
    │  └─ Use Agents: @test-generation, @mkdocs-content
    │
    ├─ Write tests
    │  └─ Use Agent: @test-generation
    │
    ├─ Update documentation
    │  └─ Use Agent: @mkdocs-content
    │
    ├─ Understand codebase
    │  └─ Use Agent: @Explore
    │
    └─ Something else?
       └─ Ask Copilot directly

---

## Quick Reference

### Skills by Domain

| Domain | Skill File | Key Topics |
|--------|------------|------------|
| LLM Providers | llm-integration/SKILL.md | Interface pattern, cost tracking, provider selection |
| Performance | performance-optimization/SKILL.md | Caching (3-level), batching, async |
| Testing | testing-strategy/SKILL.md | Unit vs integration, mocking, fixtures |
| Observability | observability/SKILL.md | Metrics, logging, tracing, alerting |
| Error Handling | error-handling/SKILL.md | Timeouts, retries, circuit breaker |

### Prompts by Task

| Task | Prompt File | Steps |
|------|-------------|-------|
| Add Provider | add-provider.md | 8 steps |
| New Use Case | add-use-case.md | 10 steps |
| Optimize | optimize-performance.md | 8 steps |
| Monitoring | add-observability.md | 10 steps |
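The "interface pattern, cost tracking" row in the Skills by Domain table can be sketched roughly as below. This is a minimal illustration, not the project's actual code: the names `LLMProvider`, `CompletionResult`, and `EchoProvider` (and the per-token prices) are hypothetical stand-ins for whatever the llm-integration skill actually defines.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class CompletionResult:
    text: str
    input_tokens: int
    output_tokens: int


class LLMProvider(ABC):
    """Common interface so callers never depend on one vendor SDK."""

    # Illustrative per-1K-token prices, not real pricing.
    input_cost_per_1k: float = 0.0
    output_cost_per_1k: float = 0.0

    @abstractmethod
    def complete(self, prompt: str) -> CompletionResult:
        ...

    def cost_of(self, result: CompletionResult) -> float:
        # Shared cost-tracking logic lives on the base class.
        return (result.input_tokens / 1000 * self.input_cost_per_1k
                + result.output_tokens / 1000 * self.output_cost_per_1k)


class EchoProvider(LLMProvider):
    """Stand-in for a real provider; an OpenAI or Claude adapter
    would call the vendor SDK inside complete()."""

    input_cost_per_1k = 0.01
    output_cost_per_1k = 0.03

    def complete(self, prompt: str) -> CompletionResult:
        reply = f"echo: {prompt}"
        return CompletionResult(reply, len(prompt.split()), len(reply.split()))


provider: LLMProvider = EchoProvider()
result = provider.complete("hello world")
print(f"cost: ${provider.cost_of(result):.5f}")  # tiny, but tracked per call
```

Because every provider satisfies the same interface, swapping OpenAI for Claude (Scenario 1, or the Provider Swap workflow below) is a config change rather than a rewrite.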
### Agents by Purpose

| Purpose | Agent | Capabilities |
|---------|-------|--------------|
| Provider Setup | llm-integration | Code + tests + docs |
| Optimization | performance-tuning | Caching + batching + async |
| Documentation | mkdocs-content | Create + update structure |
| Test Writing | test-generation | Unit + integration tests |
| Code Search | Explore | Fast codebase exploration |

---

## Best Practices

### ✅ Do

1. Read before implementing
   - Skills → Prompts → Code
   - Understand the "why" before the "how"

2. Use agents for tedious work
   - Code generation
   - Test writing
   - Documentation
   - Lets you focus on design

3. Review agent output
   - Not perfect, but a good starting point
   - Adjust it to your needs
   - Learn from generated code

4. Document decisions
   - Add to the decision matrix
   - Explain trade-offs
   - Help future developers

5. Iterate based on metrics
   - Implement → Measure → Optimize
   - Use skills to guide optimization

### ❌ Don't

1. Skip reading the skill
   - Prompts won't make sense without it
   - You'll miss important best practices
   - You'll repeat mistakes

2. Use the wrong agent
   - performance-tuning won't create docs
   - mkdocs-content won't write code
   - Match the agent to the task

3. Accept agent output blindly
   - Review and validate
   - Test thoroughly
   - Understand what it did

4. Ignore patterns
   - The project has established patterns
   - Skills teach these patterns
   - Follow them for consistency

5. Skip testing
   - AI can help write tests
   - But you should validate
   - Test the tester
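"Test thoroughly" is cheap when the provider is injected: the mocking approach the testing-strategy skill covers might look like this minimal sketch. The `summarize` helper and the prompt format are hypothetical, invented only for illustration; the sketch uses nothing beyond the standard library.

```python
from unittest.mock import Mock


def summarize(provider, text: str) -> str:
    """Hypothetical service helper: the provider is injected, so unit
    tests can swap in a mock instead of a real (slow, paid) LLM call."""
    return provider.complete(f"Summarize: {text}").strip()


# Swap the provider for a Mock: fast, free, deterministic.
mock_provider = Mock()
mock_provider.complete.return_value = "  A short summary.  "

assert summarize(mock_provider, "long review text") == "A short summary."
mock_provider.complete.assert_called_once_with("Summarize: long review text")
print("unit test passed without touching an API")
```

The same pattern applies to tests that @test-generation produces: review that the mock's expectations actually describe the behavior you want, not just whatever the generated code happens to do.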
---

## Common Workflows

### Workflow 1: Add Feature (Balanced)

Goal: Implement new use case with proper setup

    1 hour: Read docs/future/use-cases.md section
    30 min: Read prompts/add-use-case.md
    2 hours: Implement service layer (most learning)
    1 hour: Use @test-generation for tests
    30 min: Use @mkdocs-content for docs
    30 min: Optimize with @performance-tuning if needed

### Workflow 2: Quick Optimization

Goal: Reduce costs, minimal time

    15 min: Ask @performance-tuning: "Cost is $X/day, optimize"
    2 hours: Agent implements caching
    30 min: You measure impact
    15 min: Verify in production

### Workflow 3: Provider Swap

Goal: Switch from OpenAI to Claude

    1 hour: Read llm-integration/SKILL.md
    30 min: @llm-integration "Add Anthropic Claude provider"
    30 min: Update config and tests
    15 min: Run API tests

---

## Learning Path

### Month 1: Fundamentals
- [ ] Read all skill files
- [ ] Read prompts (don't implement yet)
- [ ] Review existing code patterns
- [ ] Understand decision matrices

### Month 2: Implementation
- [ ] Implement one prompt manually (e.g., add provider)
- [ ] Implement one use case (e.g., review sentiment)
- [ ] Use agents for tests and docs
- [ ] Measure and document

### Month 3: Optimization
- [ ] Use performance-tuning agent
- [ ] Implement caching for existing features
- [ ] Add observability hooks
- [ ] Measure business impact

### Month 4: Advanced
- [ ] Implement Tier 2 use case (dynamic pricing, fraud detection)
- [ ] Use multiple agents together
- [ ] Create variations of patterns
- [ ] Teach others

---

## FAQ

Q: Should I read the skill or the prompt first?
A: Always the skill first. It explains the "why"; the prompt shows the "how".

Q: Can I skip reading and just use agents?
A: You can, but you'd miss the learning.
Better: skim the skill → use the agent → review the output → learn.

Q: What if an agent makes a mistake?
A: Review the output, fix it, and continue. Agents are smart but not perfect.

Q: Which agent should I use?
A: Match the agent to the task (see the decision tree). When in doubt, ask @Explore first.

Q: How do I measure whether an optimization worked?
A: Follow prompts/optimize-performance.md Step 8 (Measure Impact): compare metrics before and after.

Q: Can I use agents for something not listed?
A: Yes! Agents are flexible. Try asking and see if they can help.

---

## Next Steps

1. Pick one skill to master (llm-integration recommended)
2. Read the skill file (30 minutes)
3. Read the corresponding prompt (30 minutes)
4. Implement it yourself or let an agent help (1-3 hours)
5. Measure the result (15 minutes)
6. Document what you learned (15 minutes)
7. Pick the next skill and repeat

---

You're now equipped to extend this project in any direction! 🚀