AI Workflow · Module 10
Team AI Standards
"AI amplifies what's already there. Build the standards before AI amplifies the gaps."
The research on AI and teams points to the same finding: AI amplifies existing processes, good and bad. Teams with clear requirements, good architecture, and functional review processes ship faster with AI. Teams with unclear ownership, poor documentation, and inconsistent standards ship faster — directly into more chaos.
AI doesn't fix broken engineering culture. It makes the brokenness more visible, and faster.
This means the most important AI decision for a team isn't which tool to use — it's establishing a shared standard before different developers build incompatible habits. The team that aligns early compounds their advantage. The team that lets everyone do their own thing fragments.
The Problem: Individual Divergence at Scale
When ten developers use AI without a shared standard, you end up with ten different approaches running in the same codebase: different prompting styles, different levels of review rigor, and different ideas about what the AI is allowed to touch.
The fix: establish a Team AI Code of Conduct before these habits form. Changing existing habits is 10× harder than setting norms early.
The Team AI Code of Conduct
A short, practical document every developer signs off on. Not a policy manual — a set of working agreements.
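What such a document can look like in practice (a hypothetical excerpt; the specific agreements are illustrative, not prescriptive):

```markdown
# Team AI Code of Conduct (excerpt)

1. You own what you ship. AI-generated code is reviewed to the same bar
   as hand-written code, and the author is accountable for it.
2. Never paste secrets, credentials, or customer data into a prompt.
3. The AI never touches protected paths (see the shared deny-list).
4. Share prompts that work: add them to the team playbook instead of
   keeping them in your personal chat history.
```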
Shared Configuration: Rules That Enforce Themselves
The most powerful team standard is one that doesn't require humans to remember it. Encode the rules in shared AI configuration (for example, a CLAUDE.md or equivalent rules file checked into the repo) so every developer's assistant applies them automatically:
- Use named exports only
- Error handling via AppError class
- All DB queries parameterized
- No inline styles — CSS Modules
- Tests must cover happy path + 2 edge cases
Alongside the rules, a deny-list of paths the AI must never modify:
- /secrets/
- /migrations/
- *.env*
- /infrastructure/
- /payment-processor/
These files go in version control. New team members get the full AI standards configuration on their first git clone.
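One way to make a rule genuinely self-enforcing is a small check script run from a pre-commit hook or CI, so violations are caught mechanically rather than in review. A minimal TypeScript sketch, assuming the rules and protected paths from the lists above (the function names and regexes are illustrative, not a real tool):

```typescript
// check-ai-rules.ts: a sketch of a self-enforcing team standard.
// Patterns mirror the hypothetical deny-list and rules above.

const FORBIDDEN_PATHS: RegExp[] = [
  /^secrets\//,
  /^migrations\//,
  /\.env/,
  /^infrastructure\//,
  /^payment-processor\//,
];

// True if the AI (or a commit) is touching a protected path.
export function pathIsProtected(path: string): boolean {
  return FORBIDDEN_PATHS.some((re) => re.test(path));
}

// Scan a source file's text for violations of the shared rules.
export function findViolations(source: string): string[] {
  const violations: string[] = [];
  if (/export\s+default/.test(source)) {
    violations.push("use named exports only (found `export default`)");
  }
  if (/style=\{\{/.test(source)) {
    violations.push("no inline styles; use CSS Modules");
  }
  if (/throw\s+new\s+Error\(/.test(source)) {
    violations.push("error handling goes through AppError, not raw Error");
  }
  return violations;
}

// Example: run against a changed file in a pre-commit hook.
const sample = `export default function Badge() {
  return <span style={{ color: "red" }}>hi</span>;
}`;
console.log(pathIsProtected("secrets/api-keys.json")); // → true
console.log(findViolations(sample).length); // → 2
```

The point is not this particular script; it is that a rule encoded in a hook fires every time, while a rule written in a wiki fires only when someone remembers it.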
Prompt Playbooks: Stop Reinventing the Wheel
Every team develops effective prompts over time. Without a system, that knowledge lives in individual chat histories and is lost when someone leaves.
A Prompt Playbook is a shared library of battle-tested prompts for your team's most common tasks.
Example — new API endpoint:
Request body: [schema]
Response: [shape] on success, AppError on failure
Auth: requireAuth middleware (user must be logged in)
Authorization: check that user owns resource [id]
Follows: @src/routes/existing-example.ts pattern

Example — component tests:
Test: renders correctly with default props, handles loading state, handles empty data, handles error state, fires onX callbacks.
Use our @test/helpers/render.tsx (not @testing-library/react directly).
Mock pattern: @test/mocks/[service].ts

Example — security pre-review:
Focus on: input validation, auth/authz gaps, SQL injection, sensitive data exposure.
For each issue: line number + problem + fix.
@[files changed in this PR]
Store playbooks in .claude/commands/ or a shared prompts/ directory in the repo. When a prompt produces great results, add it. When it becomes outdated, update it. This is living documentation.
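Concretely, with Claude Code a playbook entry can be a single markdown file in `.claude/commands/`, where the filename becomes a slash command (this assumes Claude Code's custom-command convention; the filename and `$ARGUMENTS` usage here are illustrative):

```markdown
<!-- .claude/commands/new-endpoint.md: hypothetical playbook entry,
     invoked as /new-endpoint <resource> -->
Create a new API endpoint for: $ARGUMENTS

Request body: [schema]
Response: [shape] on success, AppError on failure
Auth: requireAuth middleware (user must be logged in)
Authorization: check that user owns resource [id]
Follows: @src/routes/existing-example.ts pattern
```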
AI-Enhanced Code Review: What Changes as a Reviewer
When your team uses AI, the code review process changes — not because the bar lowers, but because the focus shifts. Human reviewers concentrate on:
- Architectural fit with the system
- Business logic correctness
- Security reasoning
- Long-term maintainability decisions
- Knowledge transfer (does the team understand this?)

An AI pre-review pass catches the rest before a human ever looks:
- Basic style violations
- Missing error handling
- Performance anti-patterns
- Missing null checks
- Obvious security issues
The result: PR reviews get faster because they're operating on higher-quality code. Human reviewers spend their limited attention on the high-value decisions that require judgment — not the basic issues that an AI pre-review already caught.
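To make the pre-review step repeatable rather than ad hoc, the security prompt from the playbook can be assembled programmatically and piped into whatever AI tool the team uses. A minimal TypeScript sketch (`buildPreReviewPrompt` and the exact template wording are illustrative assumptions):

```typescript
// build-prereview-prompt.ts: assemble the team's security-review
// playbook prompt for the files changed in a PR. In practice the
// file list would come from `git diff --name-only main...HEAD`.

const SECURITY_REVIEW_TEMPLATE = [
  "Focus on: input validation, auth/authz gaps, SQL injection, sensitive data exposure.",
  "For each issue: line number + problem + fix.",
].join("\n");

export function buildPreReviewPrompt(changedFiles: string[]): string {
  // Reference each changed file with the @-mention syntax the playbook uses.
  const fileRefs = changedFiles.map((f) => `@${f}`).join(" ");
  return `${SECURITY_REVIEW_TEMPLATE}\n${fileRefs}`;
}

// Pipe the result into your AI tool of choice before requesting human review.
console.log(buildPreReviewPrompt(["src/routes/users.ts", "src/db/queries.ts"]));
```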
Onboarding New Developers into an AI-First Team
A developer who onboards into an AI-first team with a clear standard gets productive faster — and builds good habits from day one instead of importing bad ones from previous teams.
Next in AI Workflow
Part 11 — The Dependency Trap
Six months of AI coding and your debugging speed has quietly dropped 40%. The Dependency Trap is real — and the developers who catch it early are the ones who stay dangerous in the market.