# Tutorial: Setting Up a Large Codebase
This tutorial shows how to configure a large codebase (500+ files, multiple domains) for effective agentic AI development — from agent configuration file hierarchies to custom agents.
## Step 1: Audit Your Codebase

Before configuring anything, understand what you’re working with:
```
Use sub-agents to analyze this codebase:

1. How many files and lines of code?
2. What are the main domains/modules?
3. What's the tech stack?
4. How is testing set up?
5. What are the build and dev commands?
6. Any unusual patterns or gotchas?

Save to .sdlc/research/codebase-audit.md
```
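The audit's raw numbers are easy to spot-check by hand. Here is a minimal Node script for counting files and lines per directory tree — a hypothetical helper, not part of any tool, and the skip list is an assumption to adjust for your project:

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Directories that would skew the count; extend for your project.
const SKIP = new Set(["node_modules", ".git", "dist", "build"]);

// Recursively count files and lines under `dir`.
function audit(dir: string): { files: number; lines: number } {
  let files = 0;
  let lines = 0;
  for (const entry of readdirSync(dir)) {
    if (SKIP.has(entry)) continue;
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      const sub = audit(path);
      files += sub.files;
      lines += sub.lines;
    } else {
      files += 1;
      lines += readFileSync(path, "utf8").split("\n").length;
    }
  }
  return { files, lines };
}

// Usage: console.log(audit("src"));
```

Comparing its output against the sub-agents' answer to question 1 is a quick way to catch a hallucinated audit.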
## Step 2: Create the Agent Configuration File Hierarchy
### Root Configuration File (Under 60 Lines)

Create your root agent configuration file (see Tool Configuration Reference for the correct filename for your tool):
```md
# [Project Name]

## Stack
[language], [framework], [ORM], [test framework], [package manager]

## Commands
- Dev: `[command]`
- Test single: `[command] <path>`
- Test all: `[command]`
- Types: `[command]`
- Lint: `[command]`
- Build: `[command]`

## Architecture
- src/[domain1]/ — [purpose]
- src/[domain2]/ — [purpose]
- src/[domain3]/ — [purpose]
- src/shared/ — [purpose]

## Conventions
- [Convention 1 the agent can't infer]
- [Convention 2 that differs from defaults]
- [Convention 3 — project-specific gotcha]

## IMPORTANT
- [Rule that must never be violated]
- [Another critical rule]
```
### Domain-Specific Configuration Files

For each major domain, create a focused configuration file inside that directory:
```md
# API Layer
- All endpoints return { data, error, meta } envelope
- Use zod schemas for request validation
- Auth middleware via route groups
- One file per resource (users.ts, orders.ts)
- Integration tests in __tests__/ (co-located)
```

```md
# Service Layer
- Pure business logic, no HTTP or DB concerns
- Accept and return domain types (not HTTP types)
- Use Result<T, E> for error handling
- One service per domain concept
```

```md
# Repository Layer
- Drizzle ORM for all queries
- One repo per database table
- Never expose raw SQL outside repo layer
- Transactions via transaction() helper
```
## Step 3: Create Skills

Move domain-specific knowledge that isn’t needed every session into skills. See the Tool Configuration Reference for your tool’s skills directory location.
```sh
mkdir -p skills/{api-development,database-ops,deployment,testing}
```

Example skill:
```md
---
name: database-ops
description: Database operations, migrations, and schema management
---

## Migration Workflow
1. Create migration: `pnpm drizzle-kit generate`
2. Apply migration: `pnpm drizzle-kit push`
3. Verify: `pnpm drizzle-kit studio` (opens browser)

## Schema Conventions
- All tables have: id (ULID), createdAt, updatedAt
- Soft deletes via deletedAt column (never hard delete user data)
- Foreign keys always have ON DELETE CASCADE or ON DELETE SET NULL
- Indexes on all foreign key columns and frequently queried fields

## Common Gotchas
- Migrations are immutable once committed to main
- Test DB resets between test files, not between tests
- Use transactions for multi-table operations
```
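The soft-delete convention in the skill above can be illustrated without the ORM. A plain-TypeScript sketch with hypothetical helper names — in the real project this logic would live in the repository layer behind Drizzle:

```typescript
// Audit columns from the schema conventions; `id` is a ULID in practice.
interface Row {
  id: string;
  deletedAt: Date | null;
}

// Soft delete: mark the row instead of removing it (never hard delete).
function softDelete<T extends Row>(row: T, now = new Date()): T {
  return { ...row, deletedAt: now };
}

// Reads must exclude marked rows, since they remain in the table.
function visibleRows<T extends Row>(rows: T[]): T[] {
  return rows.filter((r) => r.deletedAt === null);
}
```

The gotcha this encodes: every read path has to filter on `deletedAt`, which is why the convention belongs in a skill the agent loads whenever it touches the database.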
## Step 4: Create Custom Agents

Store agents in your tool’s agent directory. See the Tool Configuration Reference for the correct location.
```sh
mkdir -p agents
```

Essential agents for large codebases:
```md
---
name: researcher
description: Deep codebase research with structured output
tools: Read, Grep, Glob, Bash
# use a cost-efficient model for this task
---

Research the specified topic thoroughly.

## Output Format

### Overview
[2-3 sentence summary]

### Key Files
| File | Purpose | Lines |

### Architecture
[How the components fit together]

### Patterns
[Design patterns and conventions used]

### Risks & Gotchas
[Things to watch out for]
```

```md
---
name: reviewer
description: Comprehensive code review
tools: Read, Grep, Glob, Bash
# use a cost-efficient model for this task
---

Review the specified code for quality and correctness.

## Checklist
- [ ] Logic errors and edge cases
- [ ] Security (OWASP Top 10)
- [ ] Performance (N+1, memory, indexes)
- [ ] Test coverage
- [ ] Pattern consistency
- [ ] Scope (no unnecessary changes)

Provide file:line references for all findings.
```

```md
---
name: implementer
description: TDD feature implementation
tools: Read, Write, Edit, Bash, Grep, Glob
isolation: worktree
---

Implement the specified feature using TDD.

1. Read the plan/spec if provided
2. Write failing tests first
3. Implement minimum code to pass
4. Run full test suite
5. Run typecheck and lint
6. Commit with descriptive message
```
## Step 5: Set Up Verification Infrastructure

In your agent’s settings/permissions file, configure hooks that run after file edits:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "command": "pnpm tsc --noEmit 2>&1 | head -20"
      }
    ]
  }
}
```
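What that configuration does, sketched in TypeScript under assumed semantics (this is not the tool's actual implementation): after each Edit/Write, run a verification command and cap its output, mirroring the `| head -20` above, so the agent sees failures without flooding its context:

```typescript
import { execSync } from "node:child_process";

// Run a check command and keep only the first `maxLines` of output.
function runHook(command: string, maxLines = 20): string {
  let output: string;
  try {
    output = execSync(command, { encoding: "utf8" });
  } catch (err) {
    // A failing check is the interesting case: capture its output too.
    const e = err as { stdout?: string; stderr?: string };
    output = (e.stdout ?? "") + (e.stderr ?? "");
  }
  return output.split("\n").slice(0, maxLines).join("\n");
}
```

The design point is immediacy: a type error surfaces on the very edit that introduced it, not at the end of a long session.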
### Workflow Artifacts Directory

```sh
mkdir -p .sdlc/{specs,plans,research}/templates
```

Create templates to standardize workflow artifacts.
## Step 6: Configure Permissions

In your agent’s settings/permissions file, allow frequently used, safe commands to reduce prompt fatigue.
## Verification

Test your setup with a small task by asking your agent to research the codebase:
```
Use sub-agents to research how user authentication works in this project.
```

Then switch to planning mode and ask for a plan:
```
I want to add a "forgot password" feature.
Create a plan based on the research.
```

If the agent’s research is accurate and the plan makes sense, your setup is working.