How We Keep 800+ Pages Consistent Across Dozens of Build Sessions
The process that ensures page 1 and page 800 read like they were written by the same person, in the same session, with the same standards.
March 17, 2026 | BuildTech Advisor SEO Engine
Building 800+ unique, quality-scored pages doesn't happen in one sitting. It happens across dozens of Claude Code sessions over 8-16 weeks. Every session starts fresh. Claude doesn't remember yesterday's session. It doesn't automatically know what was built, what's next, or what tone the last 50 pages used.
Without a system, consistency breaks down: tone drifts, sibling pages start overlapping, and each new session begins blind to what the last one built.
The core insight: Consistency doesn't live in Claude's memory. It can't. Memory resets every session. Consistency lives in the files. Three files, read at the start of every session, are the entire quality control system.
The entire 800+ page build is governed by three persistent files. Claude reads them at the start of every session. They never reset. They accumulate knowledge, state, and constraints across every session.
The playbook. What it is: The rules. The operating manual. Never changes session-to-session (unless we improve it). Contains: the full content matrix (18 categories, 15 trades, 50 comparisons), locked prompt templates for every page type, the 10 critical gates, quality scoring checklists, schema templates, internal linking rules, image specs, and the batch workflow. Think of it as the recipe book. Every session follows the same recipe.
The tracker. What it is: The state. Updated at the end of every session. Tells Claude exactly where we are: which pages are built, which are deployed, which failed quality gates, what was built last session, and what's next. Every page has a status (not_started, in_progress, built, deployed), a quality score, word count, and notes. Think of it as the checklist on the kitchen wall. It's the memory between sessions.
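As a rough sketch, one tracker entry might look like the dictionary below. The text specifies the four statuses plus quality score, word count, and notes; the exact field names, the slug, and the validation helper are assumptions for illustration, not the project's actual schema.

```python
# Hypothetical sketch of a single tracking-file entry.
VALID_STATUSES = {"not_started", "in_progress", "built", "deployed"}

page_entry = {
    "slug": "project-management-for-electricians",  # hypothetical slug
    "status": "built",                              # one of VALID_STATUSES
    "quality_score": 84,                            # 0-100; 80+ required
    "word_count": 2150,
    "notes": "Angle: panel schedules + NEC compliance",
}

def entry_is_valid(entry):
    """Cheap sanity check a session could run after updating the tracker."""
    return (
        entry.get("status") in VALID_STATUSES
        and isinstance(entry.get("quality_score"), int)
        and isinstance(entry.get("word_count"), int)
    )

print(entry_is_valid(page_entry))  # True
```

A check like this is cheap insurance: a malformed entry would otherwise silently corrupt the state the next session depends on.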
The built pages. What it is: The output. The actual HTML pages already built. Before writing any new page, Claude reads 3 existing pages from the same cluster. This recalibrates tone, depth, and style to match what's already built. Think of it as tasting the last batch before making the next one. The existing pages ARE the style guide.
Why this works: The playbook provides the rules (constant). The tracker provides the state (updated every session). The built pages provide the calibration (growing over time). Together, they give every session the same starting context, regardless of whether it's session 1 or session 40.
Paul's entire job at the start of every BTA session is to type one sentence:
That single sentence triggers the entire system. Here's exactly what happens:
Step 1: Read the playbook. Claude loads all rules, templates, scoring criteria, matrix definitions, schema templates, and linking rules. This takes ~2 minutes and grounds the entire session in the same standards as every previous session.
Step 2: Read the tracker. Claude sees every page's status: what's built, what's deployed, what failed, what's next. It reads the last_session_notes field, which is a handoff note from the previous session explaining exactly where things stand.
Step 3: Report status. Claude tells Paul: "Phase 2 is in progress. 12 of 18 category pages are complete. Next batch: Safety, BIM, Time Tracking, Daily Reporting, Payroll, Procurement. Ready?" This is based entirely on the tracking file, not memory.
Step 4: Build the batch. Claude selects the cluster, reads 3 existing pages for calibration, and begins building. Every page follows the locked prompt template. Every page gets scored against 10 critical gates and the quality checklist.
Step 5: Update the tracker. Claude updates every page's status, word count, and quality score, and writes a handoff note for the next session. The cycle is complete and ready to repeat.
Paul's total effort per session: Type one sentence to start. Say "go" to confirm the batch. Optionally spot-check a page. That's it. The system runs itself.
Drift is the enemy of a multi-session build. Here are the six mechanisms built into the process that prevent it:
1. Locked Prompt Templates
Every page type (category, trade, matrix, comparison, FAQ) has a fixed prompt template in the playbook. The template specifies word count, heading structure, required sections, entity triple count, FAQ count, linking rules, and CTA language. Claude doesn't freestyle. It follows the template. Session 1 and session 40 use the identical prompt structure.
2. Angle Locking
Before writing any matrix page, its unique angle is defined and recorded in the tracking file. This creates an explicit constraint: "Project Management for Electricians focuses on panel schedule integration and NEC compliance. It must NOT overlap with Project Management for Plumbers, which focuses on permit tracking and parts inventory."
Every new page in a cluster is written with the sibling angles as exclusion rules. This is how 15 "Project Management for [Trade]" pages stay genuinely unique.
3. Calibration Reads
Before writing any new content, Claude reads 3 existing completed pages from a related cluster. This isn't optional. It's a required step in the playbook. Reading existing pages recalibrates tone, depth, level of specificity, and formatting to match what's already been built. It's the equivalent of a writer re-reading their last chapter before writing the next one.
4. Critical Gates (Binary, Not Subjective)
Every page passes through 10 critical gates. These are binary pass/fail checks. Not "does this feel good enough?" but "does this page have exactly one H1 tag?" There is no subjectivity. No judgment calls. No fatigue factor. Gate 7 on page 800 is the same gate as on page 1.
| # | Gate | Failure Trigger |
|---|---|---|
| 1 | H1 present | Zero or more than one H1 tag |
| 2 | Title tag | Missing, empty, or outside 50-60 characters |
| 3 | Schema valid | JSON-LD missing or structurally broken |
| 4 | No duplicate content | Overlaps with sibling page angles |
| 5 | Mobile responsive | Viewport not set or fixed-width elements |
| 6 | Canonical tag | Missing or pointing to wrong URL |
| 7 | HTTPS | Any URL using http:// |
| 8 | NAP consistent | Business name differs from profile |
| 9 | No orphan page | Zero inbound links from other pages |
| 10 | Word count | Below minimum for page type |
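Because each gate is a binary check, a handful of them can be sketched as plain functions over the page's HTML. The regexes and the selection of gates below are illustrative; the project's actual checks and any BTA tooling are not shown here.

```python
import re

def gate_h1(html):
    """Gate 1: exactly one <h1> tag (zero or more than one fails)."""
    return len(re.findall(r"<h1[\s>]", html, re.IGNORECASE)) == 1

def gate_title(html):
    """Gate 2: title tag present, non-empty, and 50-60 characters."""
    m = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return bool(m) and 50 <= len(m.group(1).strip()) <= 60

def gate_https(html):
    """Gate 7: no plain http:// URLs anywhere in the page."""
    return "http://" not in html

def run_gates(html):
    """Run each binary gate; any False means the page fails."""
    gates = {"h1": gate_h1, "title": gate_title, "https": gate_https}
    return {name: check(html) for name, check in gates.items()}
```

The point of expressing gates as code is exactly what the section argues: no judgment calls and no fatigue factor, because the same function evaluates page 1 and page 800.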
5. Quality Scoring (Three Pillars)
Beyond the binary gates, every page is scored across three pillars. The page must score 80+ overall and 60+ on every individual pillar. This catches pages that technically pass all gates but are thin, repetitive, or poorly structured.
Content Quality carries the highest weight because that's where drift actually shows up. Entity density, uniqueness, E-E-A-T signals, direct-answer quality, FAQ depth. These are the factors that separate page 400 from a lazy copy of page 12.
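The two-threshold rule (80+ overall, 60+ on every pillar) can be sketched as a small function. The text only names Content Quality and says it carries the highest weight; the other pillar names and all weights below are assumptions for illustration.

```python
# Hypothetical pillar weights; only "content quality weighs most" is from the text.
WEIGHTS = {"content_quality": 0.5, "technical_seo": 0.25, "structure": 0.25}

def passes_scoring(pillars):
    """80+ weighted overall AND 60+ on every individual pillar."""
    overall = sum(pillars[p] * w for p, w in WEIGHTS.items())
    return overall >= 80 and all(score >= 60 for score in pillars.values())

print(passes_scoring({"content_quality": 90, "technical_seo": 75, "structure": 80}))  # True
print(passes_scoring({"content_quality": 95, "technical_seo": 55, "structure": 90}))  # False
```

The second example is the case the per-pillar floor exists for: an 83.75 overall score that still fails because one pillar sits below 60.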
6. Session Handoff Notes
At the end of every session, Claude writes a handoff note in the tracking file. This is a message from this session's Claude to next session's Claude. It includes: what was built, any issues encountered, any quality concerns, and exactly what should be built next.
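A handoff note covering the four items above might be structured like this sketch. The field names and slugs are hypothetical; in practice the note may simply be free text in the last_session_notes field.

```python
# Hypothetical structured handoff note: what was built, issues,
# quality concerns, and what's next.
handoff_note = {
    "built": ["safety", "bim", "time-tracking"],        # hypothetical slugs
    "issues": "BIM page needed a second pass on schema markup",
    "quality_concerns": None,
    "next_batch": ["daily-reporting", "payroll", "procurement"],
}

def note_is_specific(note):
    """Red-flag check: a good handoff names pages built and what's next."""
    return bool(note.get("built")) and bool(note.get("next_batch"))

print(note_is_specific(handoff_note))  # True
```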
Here's an example of how session 15 (building trade pages) would play out:
Total Paul time: ~5 minutes. Total pages built: 6. Total quality score: 80+ on every page. Tracking file updated for next session.
The system runs itself, but Paul is the quality backstop. Every few sessions, spend 5 minutes on these checks:
Spot-Check a Random Page
Pick any completed page. Skim it. Does the opening feel fresh? Does it cover a distinct angle from its siblings? Are the product references specific (not generic)? If something feels off, flag it. Claude logs the correction and adjusts.
Read the Handoff Notes
The last_session_notes field in the tracking file is Claude's handoff to the next session. If the notes are vague ("built some pages, moving on"), that's a red flag. Good handoff notes are specific: which pages, what scores, any issues, what's next. Tell Claude to be more detailed if they're thin.
Check the Numbers
The tracking file has a summary section:
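The text names total_built and total_failed_gates; the other fields and the comparison helper below are assumptions sketched for illustration.

```python
# Hypothetical summary section from the tracking file.
summary = {
    "total_built": 112,
    "total_deployed": 96,
    "total_failed_gates": 3,
    "avg_quality_score": 86,
}

def health_flags(prev, curr):
    """Compare two sessions' summaries and surface the two warning signs."""
    flags = []
    if curr["total_failed_gates"] > prev["total_failed_gates"]:
        flags.append("failed gates climbing: quality may be slipping")
    if curr["total_built"] <= prev["total_built"]:
        flags.append("total_built flat: something is blocking progress")
    return flags
```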
If total_failed_gates is climbing, quality might be slipping. If total_built isn't growing, something's blocking progress. These numbers tell the story at a glance.
Flag Corrections Early
If Paul notices something ("the trade pages are too technical" or "these comparison pages all start the same way"), he tells Claude. Claude logs the correction in the playbook and adjusts all future pages. One correction in session 10 prevents the same mistake in sessions 11-40.
The one thing that kills the system: Skipping the tracking file update at the end of a session. If Claude doesn't update the tracker, the next session starts blind. Paul should always confirm: "Update the tracker before we end."
At ~3 sessions per week, 5-8 pages per session (15-24 pages/week), here's how the 800+ pages roll out:
| Phase | Pages | Timeline |
|---|---|---|
| Phase 1: Core | 10 | Week 1 (2 sessions) |
| Phase 2: Pillars | 37 | Weeks 2-4 (5-6 sessions) |
| Phase 3: Matrix + Comparisons | 215 | Weeks 4-12 (20-25 sessions) |
| Phase 4: FAQ Multiplication | 548+ | Weeks 8-16 (overlaps Phase 3) |
| Drip Deployment | All | Weeks 4-20 (continuous) |
FAQ pages are shorter (300-500 words) so they build faster: 15-20 per session vs 5-8 for content pages. Phase 4 overlaps Phase 3 because FAQ pages are extracted from pages built in Phase 3.
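As a back-of-envelope check on those rates (this ignores the Phase 3/4 overlap, so calendar time compresses below the worst case):

```python
# Page counts from the phase table: 10 + 37 + 215 content pages, 548 FAQ pages.
content_pages = 10 + 37 + 215
faq_pages = 548

# Rates from the text: 5-8 content pages and 15-20 FAQ pages per session.
best_case_sessions = content_pages / 8 + faq_pages / 20
worst_case_sessions = content_pages / 5 + faq_pages / 15

# ~3 sessions per week.
print(round(best_case_sessions / 3))   # ~20 weeks
print(round(worst_case_sessions / 3))  # ~30 weeks
```

The best case lands right at the 20-week drip-deployment window in the table; the worst case shows why overlapping Phase 4 with Phase 3 matters.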
Three shareable documents cover the entire BTA SEO Engine project:
The bottom line: The proposal says "here's what we're building." The playbook says "here are the exact specifications." This guide says "here's how we deliver 800+ pages without a single one falling through the cracks." Three documents. One system. Zero guesswork.
This document is confidential and was prepared exclusively for the BuildTech Advisor partnership. It contains proprietary processes, systems, and methodologies belonging to PM Consulting Inc. and is not intended for distribution.