
Part 3 — The Specification Framework: Write Prompts That Generate Production-Ready Code on the First Try

Vague prompt: 40 minutes of refinement cycles. Precise prompt: production-ready code on the first try. The difference isn't the AI — it's the 5-part specification architecture that eliminates every source of ambiguity before the AI writes a single line.

March 19, 2026
13 min read
Tags: Prompt Engineering · AI Coding · Specification · Developer Productivity · One-Shot Prompts · AI Workflow · Code Quality

AI Workflow · Module 3

The Specification Framework

"Every ambiguity in your prompt is a coin flip on quality."

5 parts in the spec architecture · 2 minutes to write a spec · 20 minutes saved per vague prompt

Most developers assume a detailed prompt takes longer to write. In practice, the opposite holds: a 2-minute specification eliminates the 20-minute refinement loop that follows every vague request.

The paradox of prompting is that investing more upfront saves you far more time overall. When you give the AI every piece of information it needs to succeed, it succeeds. When you leave gaps, the AI fills them with its best guess — and its guesses are based on the most common patterns in training data, not your specific codebase, team conventions, or business requirements.

This article gives you the 5-part Specification Framework that eliminates ambiguity before the AI writes a single line.


The Precision Paradox: Why Vague Prompts Cost More Time

Compare these two prompts for the same task:

❌ Vague Prompt (5 seconds to write)
"Create a user authentication function"
→ Generic bcrypt + JWT (your stack might use argon2 + sessions)
→ No mention of your existing User model
→ Inconsistent error handling format
→ Needs 5–10 refinement prompts
Total time: 35–50 minutes
✅ Specification Prompt (2 minutes to write)
5-part spec (shown below)
→ Matches your stack and conventions
→ Uses your existing User model
→ Handles all edge cases you specified
→ Ready to commit on first output
Total time: 4–6 minutes

The 5-Part Specification Architecture

Every gap you leave in a prompt is a place where the AI guesses. This framework closes every gap.

CONTEXT
What is the environment?
Tech stack, framework version, existing patterns, architecture style. Without this, the AI defaults to the most generic possible implementation.
REQUIREMENTS
What must it do?
Functional behavior, user stories, inputs and outputs. Define the "what" completely. Leave nothing implied.
CONSTRAINTS
How must it be built?
Performance requirements, forbidden patterns, dependencies (add vs avoid), style rules, security policies. This is where most prompts fail — the AI doesn't know your team's rules.
EXAMPLES
What does it look like in practice?
Input → output pairs. One sentence of explanation can't match the precision of two concrete examples. Show, don't just tell.
SUCCESS CRITERIA
How do you know it's done?
What tests must pass, what edge cases must be handled, what quality standards must be met. This tells the AI how to evaluate its own output.
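As a mental model, the five sections can be assembled mechanically. Here is a minimal sketch of that assembly (the `Spec` shape and `buildPrompt` helper are illustrative, not part of any library or tool):

```typescript
// Illustrative only: the five section names come from the framework above;
// the data shape and helper are hypothetical.
interface Spec {
  context: string[];
  requirements: string[];
  constraints: string[];
  examples: string[];
  successCriteria: string[];
}

// Render each section as "TITLE:" followed by a bulleted list,
// in the fixed order the framework prescribes.
function buildPrompt(spec: Spec): string {
  const section = (title: string, items: string[]) =>
    `${title}:\n${items.map((item) => `- ${item}`).join("\n")}`;
  return [
    section("CONTEXT", spec.context),
    section("REQUIREMENTS", spec.requirements),
    section("CONSTRAINTS", spec.constraints),
    section("EXAMPLES", spec.examples),
    section("SUCCESS CRITERIA", spec.successCriteria),
  ].join("\n\n");
}
```

The point is not the helper itself but the discipline it encodes: a prompt is complete only when all five sections have content.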

Full Example: From Vague to Specification

The task: Build a data table component.

Layer 1 — Vague (what most people write):

"Create a data table component."

Layer 2 — With technology:

"Create a data table component using React 18 and TypeScript with TanStack Table v8."

Layer 3 — Add performance constraint:

"...that handles 10,000+ rows without lag, using virtualization."

Layer 4 — Add style rules:

"...following our existing shadcn/ui patterns, no new UI dependencies."

Layer 5 — Full Specification (what actually produces great code):

CONTEXT:
- React 18, TypeScript 5, TanStack Table v8
- Existing project uses shadcn/ui — no new UI library dependencies
- State management via React Query — no Zustand or Redux
- Team convention: named exports only, no default exports

REQUIREMENTS:
- DataTable<T> generic component accepting any row type
- Accepts: columns (ColumnDef<T>[]), data (T[]), optional onRowClick
- Sortable columns (click header to sort, click again to reverse)
- Filterable via a search input above the table (client-side)
- Pagination: 10/25/50 rows per page selector + prev/next buttons
- Loading state: show skeleton rows (5 rows, 3 columns) when isLoading prop is true
- Empty state: show "No results found" message with optional empty action slot

CONSTRAINTS:
- Must handle 10,000+ rows with virtualization (use @tanstack/react-virtual)
- WCAG 2.1 AA compliance — table must be keyboard-navigable
- All copy must be in a strings prop (not hardcoded) for i18n
- Do NOT use any inline styles — CSS Modules only
- Column widths must be configurable per column definition

EXAMPLES:
- <DataTable columns={userColumns} data={users} onRowClick={(row) => navigate(`/users/${row.id}`)} />
- <DataTable columns={orderColumns} data={[]} isLoading={true} strings={{ empty: "No orders yet" }} />

SUCCESS CRITERIA:
- Sorts 10,000 rows in under 100ms
- Passes keyboard navigation test (Tab to search, Tab to headers, Enter to sort)
- Renders skeleton correctly when isLoading
- Empty state renders when data.length === 0 and isLoading is false

That specification took roughly 3 minutes to write. The result is a component that's ready to ship.
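Notice how testable the success criteria are. The loading/empty precedence, for instance, reduces to a tiny pure function. A sketch (the names are illustrative, not from the spec):

```typescript
type TableDisplayState = "loading" | "empty" | "data";

// Encodes the success criteria's precedence: skeleton rows while loading,
// the empty state only when not loading and there are zero rows.
function tableDisplayState(rowCount: number, isLoading: boolean): TableDisplayState {
  if (isLoading) return "loading";
  if (rowCount === 0) return "empty";
  return "data";
}
```

When a success criterion can be written as an assertion like this, the AI can verify its own output against it, and so can your test suite.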


Constraint Layering for Complex Features

When a feature is large, don't write the full specification at once. Build it in layers across a conversation.

Layer-by-Layer: Shopping Cart Feature
Prompt 1
Create the file structure for a shopping cart feature: CartService (logic), useCart hook (state), CartItem and CartSummary components. Stubs only — no implementation.
Prompt 2
Implement CartService.addItem(productId, quantity) using the stub. It must check stock availability via the existing InventoryService, throw CartError with code INSUFFICIENT_STOCK if unavailable.
Prompt 3
Implement useCart hook wrapping CartService. Expose: items, total, itemCount, addItem, removeItem, clearCart. Use React Query for server sync.
Prompt 4
Write unit tests for CartService covering: addItem happy path, INSUFFICIENT_STOCK scenario, removing a non-existent item (should no-op), and clearCart.

Each prompt has one clear scope. The AI keeps the context from previous turns. You remain in architectural control throughout — never handing over the entire feature in one shot.
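To make Prompt 2's contract concrete, here is a sketch of what the AI should produce for `CartService.addItem`. The `InventoryService` interface and its `available` method are assumptions for illustration; the real interface lives in your codebase:

```typescript
// Error type Prompt 2 requires: a code the caller can branch on.
class CartError extends Error {
  constructor(public code: string, message: string) {
    super(message);
    this.name = "CartError";
  }
}

// Hypothetical stand-in for the existing InventoryService referenced in Prompt 2.
interface InventoryService {
  available(productId: string): number;
}

class CartService {
  private items = new Map<string, number>();

  constructor(private inventory: InventoryService) {}

  addItem(productId: string, quantity: number): void {
    const wanted = (this.items.get(productId) ?? 0) + quantity;
    if (this.inventory.available(productId) < wanted) {
      throw new CartError("INSUFFICIENT_STOCK", `Not enough stock for ${productId}`);
    }
    this.items.set(productId, wanted);
  }

  removeItem(productId: string): void {
    // Removing a non-existent item is a no-op, per the test plan in Prompt 4.
    this.items.delete(productId);
  }

  get itemCount(): number {
    return [...this.items.values()].reduce((sum, qty) => sum + qty, 0);
  }
}
```

Because the prompt named the error class, the error code, and the dependency, there is nothing left for the AI to guess.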


Three Advanced Patterns

Once you've mastered the basic specification, add these to your toolkit:

Diff-Based Refinement
For modifying existing code. Specify exactly what changes and — critically — what must NOT change.
"Refactor the error handling in [function] to use our AppError class. Do not change the function signature or the success path logic."
Example-Driven
For data transformations and business logic. Show input → output pairs instead of describing the rule.
"Transform this data:
Input: {raw: '2026-03-19T14:30:00Z'}
Output: {date: 'Mar 19', time: '2:30 PM'}
Handle null → {date: '—', time: '—'}"
Template Instantiation
Reference an existing implementation and specify how to adapt it. Maintains architectural consistency without repeating your conventions.
"Create a NotificationService following the exact pattern of EmailService in @services/email.ts, but for push notifications via Firebase FCM."

The One-Sentence Test

Before sending any prompt, ask yourself: Could two developers interpret this prompt differently and produce incompatible outputs?

If yes — add more specification. If no — you're ready.

The developers who get extraordinary output from AI are not the ones with the best tools. They're the ones who learned that every vague word in a prompt is an open decision handed to the AI. The Specification Framework is how you keep those decisions where they belong — with you.


Next: The Confident Junior — How AI Fails and How to Catch It: understanding AI's failure modes before they make it into your codebase.


Mohamed Hamed

20 years building production systems — the last several deep in AI integration, LLMs, and full-stack architecture. I write what I've actually built and broken. If this was useful, the next one goes to LinkedIn first.

Follow on LinkedIn →