<!--
PREAMBLE: Dev3 Team Shared Context & Standards
Reference patterns from gstack SKILL.md.tmpl structure
- Front matter conventions: YAML metadata with name, description, allowed-tools, triggers, usage
- Template placeholders: {{PREAMBLE}} pattern used across gstack skills
- Scope: dev3-team common paths, team structure, workflow rules, QC standards
-->

# Dev3 Team Shared Context & Standards

## System Paths

### Workspace Root
```
WORKSPACE_ROOT: /home/jay/workspace
```

### Team Directories
```
Team workspace:     /home/jay/workspace/teams/{team_id}/
Memory location:    /home/jay/workspace/memory/
Task reports:       /home/jay/workspace/memory/reports/{task_id}.md
Task completion:    /home/jay/workspace/memory/events/{task_id}.done
Skills directory:   /home/jay/workspace/skills/
Shared resources:   /home/jay/workspace/skills/shared/
```

### QC Tools
```
QC verification:    python3 /home/jay/workspace/teams/dev3/qc/qc_verify.py
QC rules:           /home/jay/workspace/teams/shared/QC-RULES.md
Finish task script: ./finish-task.sh
Task timer:         task-timer command (start/end)
```

---

## Team Structure

### dev1-team
- **Orchestrator**: 아누 (Anu)
- **Backend**: 불칸 (Vulcan)
- **Frontend**: 이리스 (Iris)

### dev2-team
- **Team Lead**: 헤파이스토스 (Hephaestus)
- **Backend**: 아레스 (Ares)
- **Frontend**: 아테나 (Athena)
- **QA/Tester**: 아르테미스 (Artemis)

### dev3-team
- **Team Lead**: 다그다 (Dagda)
- **Backend**: 루 (Lu)
- **Frontend**: 브리짓 (Bridget)
- **UX/UI**: 아네 (Ane)
- **QA/Tester**: 모리건 (Morrigan)

---

## Core Work Rules

### Rule 1: Report Before Done
**Every task completion requires a report in `/home/jay/workspace/memory/reports/{task_id}.md`**
- Document what was accomplished
- Include quantitative evidence (test results, metrics, error counts)
- List files modified
- Provide reproduction steps or verification methods
- **No report = No completion**

### Rule 2: .done File Generation
Create `.done` file ONLY after verification passes:
```bash
python3 /home/jay/workspace/teams/dev3/qc/qc_verify.py --gate {task_id} --report reports/{task_id}.md
touch /home/jay/workspace/memory/events/{task_id}.done
```
- Do NOT create the .done file before the report is written
- Do NOT create the .done file without passing the QC gate
- .done file = formal completion marker
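The ordering constraints above can be enforced mechanically. A minimal sketch of a guard helper follows; the `mark_done.py` script itself is hypothetical, but the paths follow the conventions in this document:

```python
# mark_done.py - hypothetical guard: refuses to create the .done marker
# unless the report already exists (Rule 1 before Rule 2).
import sys
from pathlib import Path

MEMORY = Path("/home/jay/workspace/memory")

def mark_done(task_id: str) -> bool:
    """Create {task_id}.done only if {task_id}.md report exists."""
    report = MEMORY / "reports" / f"{task_id}.md"
    if not report.is_file():
        print(f"REFUSED: no report at {report}")
        return False
    (MEMORY / "events" / f"{task_id}.done").touch()
    print(f"DONE: {task_id}")
    return True

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(0 if mark_done(sys.argv[1]) else 1)
```

Note this only checks that the report file exists; the QC gate in the snippet above must still pass before running it.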

### Rule 3: Fantasy Approval Prohibited
**Never claim completion without quantitative evidence:**
- ❌ "I think this works" → Fantasy Approval
- ❌ "Should be fine" → Fantasy Approval
- ❌ "Probably tested" → Fantasy Approval
- ✅ "Test results: PASS (12 tests)" → Valid evidence
- ✅ "Verification: 0 pyright errors" → Valid evidence
- ✅ "diff before/after shows [specific change]" → Valid evidence

**Rationalization Prevention:**
| Excuse | Reality |
|--------|---------|
| "It's a small change" | Small changes cause big regressions |
| "I manually tested it" | Manual testing is unrepeatable |
| "The tests probably pass" | Run them and show the output |
| "I don't have a test framework" | Create one, or write a manual verification script |
| "This is urgent, skip checks" | Unverified urgent fixes become bigger emergencies |

**Consequence**: Task reported without evidence = Task marked incomplete, reassigned.
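"Write a manual verification script" can be as small as the sketch below. The `add` function and its checks are hypothetical placeholders; the point is producing a quantitative PASS/FAIL summary instead of "should be fine":

```python
# verify_feature.py - hypothetical manual verification script.
# Emits countable evidence suitable for pasting into the task report.

def add(a: int, b: int) -> int:
    """Hypothetical function under verification."""
    return a + b

# Each check is a (description, result) pair.
checks = [
    ("add(2, 3) == 5", add(2, 3) == 5),
    ("add(-1, 1) == 0", add(-1, 1) == 0),
    ("add(0, 0) == 0", add(0, 0) == 0),
]

passed = sum(1 for _, ok in checks if ok)
for name, ok in checks:
    print(f"{'PASS' if ok else 'FAIL'}: {name}")
print(f"Passed: {passed}/{len(checks)} checks")
```

The final line matches the evidence format QC expects ("Passed: X/X").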

---

## QC Standards (Quality Gate Checklist)

### Gate 1: pyright (Type Safety)
```bash
pyright . --warnings
```
- **PASS criteria**: 0 errors (warnings allowed if documented)
- **FAIL**: Any error blocks .done file generation
- **Evidence**: Include full pyright output in report

### Gate 2: pytest (Functional Testing)
```bash
pytest -v
```
- **PASS criteria**: All tests pass (0 failures; no skips without approval)
- **FAIL**: Any failure blocks .done file generation
- **Evidence**: Include the summary line: "Passed: X/X tests"
- **Skipped tests**: Require team lead approval before task completion

### Gate 3: Minimum Issue Finding (Code Review)
**Every task implementation must discover and resolve ≥3 issues:**
- Code quality issues (style, readability, naming)
- Logic issues (edge cases, boundary conditions)
- Architecture issues (coupling, performance, maintainability)
- Security issues (input validation, access control)
- Test gaps (untested code paths, edge cases)

**Evidence format in report:**
```markdown
## Issues Found & Resolved

1. **Issue**: [description] | **Status**: Fixed
   - File: path/to/file:line
   - Fix: [what was changed]
   - Verification: [how confirmed]

2. **Issue**: [description] | **Status**: Fixed
   ...
```

**Minimum requirement**: Report ≥3 issues, or the task is marked incomplete.

### Gate 4: Code Review Verification
Before creating .done file:
- [ ] Read your entire report word-by-word
- [ ] Does it match the actual implementation? (not aspirational, not partial)
- [ ] Are the file paths in the report accurate?
- [ ] Can someone reproduce your verification steps from the report?
- [ ] Is there any "should work" language? (Fix it)

---

## Skill & Tool References

### Systematic Debugging Skill
**Location**: `/home/jay/workspace/skills/systematic-debugging/SKILL.md`
- Apply when: Bug encountered, test failure, unexpected behavior
- Process: Root Cause → Pattern Analysis → Hypothesis → Implementation
- Rule: NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST

### TDD Enforcement Skill
**Location**: `/home/jay/workspace/skills/tdd-enforcement/SKILL.md`
- Apply when: Creating new functions, classes, modules (Lv.2+)
- Cycle: RED (failing test) → GREEN (minimal passing code) → REFACTOR
- Audit trail: tdd_check verifier validates file modification timestamps
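A condensed illustration of the RED → GREEN steps, using a hypothetical `slugify` function (in practice the test lives in its own file and is run, failing, before the implementation exists):

```python
# test_slugify.py - hypothetical TDD example.
# RED: these tests are written first and fail until slugify exists.
import re

def slugify(title: str) -> str:
    # GREEN: minimal implementation that makes the tests pass.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_edge_separators():
    assert slugify("  Trim Me!  ") == "trim-me"
```

REFACTOR then reshapes the implementation while both tests stay green.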

### Verification Before Completion Skill
**Location**: `/home/jay/workspace/skills/verification-before-completion/SKILL.md`
- Apply when: Before .done file creation
- 4 Gates: Runtime verification → Test passage → Report accuracy → Scope check
- Rule: ALL 4 GATES REQUIRED, NO EXCEPTIONS
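The all-gates-required rule amounts to a sequential check that stops at the first failure. In the sketch below the gate callables are placeholders, not the real verifiers:

```python
# run_gates.py - hypothetical sketch: all four gates must pass
# before the .done file may be created.

def run_gates(gates):
    """gates: list of (name, check) pairs; check() returns True on pass."""
    for name, check in gates:
        if not check():
            print(f"FAIL at gate: {name}")
            return False  # stop immediately; no partial credit
        print(f"PASS: {name}")
    return True

# Placeholder checks standing in for the real verifiers.
result = run_gates([
    ("runtime verification", lambda: True),
    ("test passage", lambda: True),
    ("report accuracy", lambda: True),
    ("scope check", lambda: True),
])
print("all gates passed" if result else "blocked")
```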

---

## Workflow Commands

### Task Lifecycle
```bash
# Start task (begins time tracking)
task-timer start {task_id}

# During implementation
# ... write code, commit changes, run tests

# Before completion: Run verification
python3 /home/jay/workspace/teams/dev3/qc/qc_verify.py --gate {task_id}

# After verification passes: Write report
# (example: /home/jay/workspace/memory/reports/{task_id}.md)

# End task (stops time tracking, records completion)
task-timer end {task_id}

# Finalize (creates .done marker)
./finish-task.sh {task_id}
```

### QC Verification Command
```bash
# Verify task passes QC gates before .done creation
python3 /home/jay/workspace/teams/dev3/qc/qc_verify.py --gate {task_id}

# Output: PASS | WARN | FAIL
# PASS/WARN: .done file can be created
# FAIL: Resolve issues, re-run until PASS
```

### Manual Test Example
```bash
# When test framework unavailable, create verification script:
cd /home/jay/workspace
python3 -m pytest {test_file} -v
# OR
npm test {test_file}
# OR
bash verify-{feature}.sh  # custom verification script
```

---

## Common Task Patterns

### Pattern 1: New Feature (Lv.2+)
1. **Write tests first** (TDD-enforcement skill applies)
   - RED: Create test file, run → fails
   - GREEN: Minimal implementation → tests pass
   - REFACTOR: Code quality → tests still pass
2. **Verify before completing** (verification-before-completion skill applies)
   - Runtime: Feature actually works
   - Tests: pytest all pass
   - Review: Find ≥3 issues, fix them
   - Report: Document everything

### Pattern 2: Bug Fix
1. **Investigate systematically** (systematic-debugging skill applies)
   - Root Cause: What's actually broken?
   - Pattern Analysis: Known bug type?
   - Hypothesis: Testable claim about the problem
   - Implementation: Minimal fix (not cosmetic)
2. **Verify after fixing**
   - Regression test: New test that fails without the fix
   - Full test suite: All tests pass
   - Reproduction: Original bug no longer occurs
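Step 2's regression test can be sketched as follows; the `last_index` off-by-one bug is a hypothetical example of a test that fails without the fix:

```python
# test_regression.py - hypothetical regression test for a bug fix.

def last_index(items: list) -> int:
    """Fixed version: index of the last element.

    The buggy version returned len(items), one past the end.
    """
    return len(items) - 1

def test_last_index_regression():
    # Fails against the buggy implementation (len(items) == 3),
    # pinning the fix in place.
    items = ["a", "b", "c"]
    assert last_index(items) == 2
    assert items[last_index(items)] == "c"
```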

### Pattern 3: Refactoring
1. **Scope clearly**: Only files mentioned in task
2. **Preserve behavior**: All tests pass
3. **Measure improvement**: Performance metrics, code duplication reduction, etc.
4. **Report impact**: Before/after metrics, file count, line count change
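For step 3, a small script can capture before/after numbers for the report. The string-joining refactor below is a hypothetical example, and absolute timings will vary by machine:

```python
# measure.py - hypothetical before/after metrics for a refactoring report.
import timeit

def join_concat(parts):
    """Before: repeated string concatenation."""
    s = ""
    for p in parts:
        s += p
    return s

def join_builtin(parts):
    """After: single str.join call."""
    return "".join(parts)

parts = ["x"] * 10_000
before = timeit.timeit(lambda: join_concat(parts), number=100)
after = timeit.timeit(lambda: join_builtin(parts), number=100)

# Behavior preserved: both versions produce identical output.
assert join_concat(parts) == join_builtin(parts)
print(f"before: {before:.3f}s  after: {after:.3f}s")
```

Paste the printed line into the report as the before/after metric.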

---

## Red Flags (Stop & Escalate)

**STOP WORK immediately if any of these occur:**

1. **Incomplete hypothesis testing**
   - Symptom: "I'll just try changing X"
   - Action: Return to systematic-debugging skill Phase 1

2. **Unverified completion claims**
   - Symptom: "This should work"
   - Action: Run tests, show output, or don't claim completion

3. **Scope creep**
   - Symptom: Modifying files outside the task description
   - Action: Document and report to team lead before committing

4. **Skip verification**
   - Symptom: "Tests are slow, I'll verify manually"
   - Action: Run full test suite or explain why it's impossible

5. **Multiple fix attempts failing**
   - Symptom: 3+ fixes, problem persists
   - Action: Use systematic-debugging skill to re-examine architecture

---

## References & Related Documents

- **Full QC Rules**: `/home/jay/workspace/teams/shared/QC-RULES.md`
- **dev3-team Specific**: `/home/jay/workspace/teams/dev3/`
- **gstack Skill Examples**: Reference `/tmp/gstack/` for enterprise-grade patterns (investigate, review, qa skills)

---

**Version**: 1.0 | **Created**: 2026-03-23 | **Author**: Ane (UX/UI, dev3-team)
