PM Consulting Inc. | AI-Employee.ca | ContractorMarketingEngine.ca

Session Management Guide

How We Keep 800+ Pages Consistent Across Dozens of Build Sessions

The process that ensures page 1 and page 800 read like they were written by the same person, in the same session, with the same standards.

March 17, 2026 | BuildTech Advisor SEO Engine

The Problem We're Solving

Building 800+ unique, quality-scored pages doesn't happen in one sitting. It happens across dozens of Claude Code sessions over 8-16 weeks. Every session starts fresh. Claude doesn't remember yesterday's session. It doesn't automatically know what was built, what's next, or what tone the last 50 pages used.

Without a system, this is what breaks: tone drifts from session to session, sibling pages overlap each other's angles, and every new session starts blind about what was built and what comes next.

The core insight: Consistency doesn't live in Claude's memory. It can't. Memory resets every session. Consistency lives in the files. Three files, read at the start of every session, are the entire quality control system.

The Three Files That Run Everything

The entire 800+ page build is governed by three persistent files. Claude reads them at the start of every session. They never reset. They accumulate knowledge, state, and constraints across every session.


The Playbook

~/.claude/playbooks/bta-seo-engine.md

What it is: The rules. The operating manual. Never changes session-to-session (unless we improve it). Contains: the full content matrix (18 categories, 15 trades, 50 comparisons), locked prompt templates for every page type, the 10 critical gates, quality scoring checklists, schema templates, internal linking rules, image specs, and the batch workflow. Think of it as the recipe book. Every session follows the same recipe.
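The playbook itself isn't reproduced in this guide, but based on the contents listed above, its skeleton plausibly looks something like this (section names are illustrative, not a dump of the real file):

```markdown
# BTA SEO Engine Playbook

## Content Matrix           <!-- 18 categories, 15 trades, 50 comparisons -->
## Prompt Templates         <!-- one locked template per page type -->
## Critical Gates           <!-- the 10 binary pass/fail checks -->
## Quality Scoring          <!-- three-pillar checklists and weights -->
## Schema Templates         <!-- JSON-LD blocks per page type -->
## Internal Linking Rules
## Image Specs
## Batch Workflow
```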


The Progress Tracker

~/.claude/tracking/bta-seo-progress.yaml

What it is: The state. Updated at the end of every session. Tells Claude exactly where we are: which pages are built, which are deployed, which failed quality gates, what was built last session, and what's next. Every page has a status (not_started, in_progress, built, deployed), a quality score, word count, and notes. Think of it as the checklist on the kitchen wall. It's the memory between sessions.
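Concretely, a single page entry in the tracker might look like this (a sketch built from the fields described above; exact field names are illustrative):

```yaml
- slug: software/project-management/electricians
  status: built            # not_started | in_progress | built | deployed
  quality_score: 84
  word_count: 1540
  notes: "First pass too generic; rewritten with specific product features"
```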


The Built Pages

~/Sites/buildtechadvisor-seo/

What it is: The output. The actual HTML pages already built. Before writing any new page, Claude reads 3 existing pages from the same cluster. This recalibrates tone, depth, and style to match what's already built. Think of it as tasting the last batch before making the next one. The existing pages ARE the style guide.

Why this works: The playbook provides the rules (constant). The tracker provides the state (updated every session). The built pages provide the calibration (growing over time). Together, they give every session the same starting context, regardless of whether it's session 1 or session 40.

The Quick-Start: One Sentence, Every Session

Paul's entire job at the start of every BTA session is to type one sentence:

Paul: Let's work on BTA SEO. Read the playbook at ~/.claude/playbooks/bta-seo-engine.md
and the tracking file at ~/.claude/tracking/bta-seo-progress.yaml. Tell me what's next.

That single sentence triggers the entire system. Here's exactly what happens:

Step 1: Claude reads the Playbook

Loads all rules, templates, scoring criteria, matrix definitions, schema templates, and linking rules. This takes ~2 minutes and grounds the entire session in the same standards as every previous session.

Step 2: Claude reads the Tracking File

Sees every page's status: what's built, what's deployed, what failed, what's next. Reads the last_session_notes field, which is a handoff note from the previous session explaining exactly where things stand.

Step 3: Claude reports what's next

Tells Paul: "Phase 2 is in progress. 12 of 18 category pages are complete. Next batch: Safety, BIM, Time Tracking, Daily Reporting, Payroll, Procurement. Ready?" This is based entirely on the tracking file, not memory.

Step 4: Paul says "go"

Claude selects the cluster, reads 3 existing pages for calibration, and begins building. Every page follows the locked prompt template. Every page gets scored against 10 critical gates and the quality checklist.

Step 5: Session ends, tracking file updated

Claude updates every page's status, word count, quality score, and writes a handoff note for the next session. The cycle is complete and ready to repeat.

Paul's total effort per session: Type one sentence to start. Say "go" to confirm the batch. Optionally spot-check a page. That's it. The system runs itself.

Six Things That Prevent Drift

Drift is the enemy of a multi-session build. Here are the six mechanisms built into the process that prevent it:

1. Locked Prompt Templates

Every page type (category, trade, matrix, comparison, FAQ) has a fixed prompt template in the playbook. The template specifies word count, heading structure, required sections, entity triple count, FAQ count, linking rules, and CTA language. Claude doesn't freestyle. It follows the template. Session 1 and session 40 use the identical prompt structure.

# Example: Every matrix page uses this exact structure

REQUIREMENTS:
- 1,200-1,800 words
- H1: "Best [Category] Software for [Trade] Contractors"
- Direct-answer paragraph in first 200 words naming top 2-3 options
- Sections: Why [trade] needs specialized [category], Top picks,
  Key features, What to avoid, Integration considerations
- Reference specific products: [PRODUCT LIST]
- 3-5 FAQs unique to this category-trade combination
- 15-25 entity triples woven naturally
- Link UP to parent category and trade pages
- Link ACROSS to sibling matrix pages
- CTA: "Take our free 7-minute AI assessment"
- DO NOT repeat content from parent pages.
  This page covers the INTERSECTION only.

2. Angle Locking

Before writing any matrix page, its unique angle is defined and recorded in the tracking file. This creates an explicit constraint: "Project Management for Electricians focuses on panel schedule integration and NEC compliance. It must NOT overlap with Project Management for Plumbers, which focuses on permit tracking and parts inventory."

Every new page in a cluster is written with the sibling angles as exclusion rules. This is how 15 "Project Management for [Trade]" pages stay genuinely unique.

# In the tracking file:

- slug: software/project-management/electricians
  unique_angle: "Panel schedule integration, NEC compliance,
    service vs project workflow split"
  must_NOT_overlap_with:
    - software/project-management/plumbers # permit tracking, parts inventory
    - software/project-management/hvac # equipment lifecycle, maintenance contracts
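The playbook doesn't say how overlap is measured, but the exclusion rules above could be backstopped mechanically. A naive sketch, assuming raw page text is available, flags sibling pairs whose word-level Jaccard similarity is suspiciously high (the 0.5 threshold is arbitrary):

```python
def jaccard_overlap(text_a: str, text_b: str) -> float:
    """Naive word-level Jaccard similarity between two page bodies."""
    a, b = set(text_a.lower().split()), set(text_b.lower().split())
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_overlapping_siblings(pages: dict[str, str],
                              threshold: float = 0.5) -> list[tuple[str, str]]:
    """Return sibling slug pairs whose bodies look too similar."""
    slugs = sorted(pages)
    return [
        (s1, s2)
        for i, s1 in enumerate(slugs)
        for s2 in slugs[i + 1:]
        if jaccard_overlap(pages[s1], pages[s2]) >= threshold
    ]
```

A real check would weigh rare terms more heavily than common ones, but even this crude version catches a sibling page that reuses another's vocabulary wholesale.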

3. Calibration Reads

Before writing any new content, Claude reads 3 existing completed pages from a related cluster. This isn't optional. It's a required step in the playbook. Reading existing pages recalibrates tone, depth, level of specificity, and formatting to match what's already been built. It's the equivalent of a writer re-reading their last chapter before writing the next one.
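The playbook doesn't specify how the 3 calibration pages are chosen; one plausible policy, sketched here against tracking-file-shaped entries, is to take the highest-scoring completed pages in the same cluster:

```python
def pick_calibration_pages(entries: list[dict], cluster: str, n: int = 3) -> list[str]:
    """Pick up to n completed pages from a cluster, best-scored first.

    `entries` mirrors the tracking file: each has a slug, a status, and a
    quality_score. The best-scored-first selection policy is an assumption.
    """
    done = [
        e for e in entries
        if e["slug"].startswith(cluster) and e["status"] in ("built", "deployed")
    ]
    done.sort(key=lambda e: e["quality_score"], reverse=True)
    return [e["slug"] for e in done[:n]]
```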

4. Critical Gates (Binary, Not Subjective)

Every page passes through 10 critical gates. These are binary pass/fail checks. Not "does this feel good enough?" but "does this page have exactly one H1 tag?" There is no subjectivity. No judgment calls. No fatigue factor. Gate 7 on page 800 is the same gate as on page 1.

#    Gate                   Failure Trigger
1    H1 present             Zero or more than one H1 tag
2    Title tag              Missing, empty, or outside 50-60 characters
3    Schema valid           JSON-LD missing or structurally broken
4    No duplicate content   Overlaps with sibling page angles
5    Mobile responsive      Viewport not set or fixed-width elements
6    Canonical tag          Missing or pointing to wrong URL
7    HTTPS                  Any URL using http://
8    NAP consistent         Business name differs from profile
9    No orphan page         Zero inbound links from other pages
10   Word count             Below minimum for page type
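The real gate runner isn't shown in this guide, but a minimal sketch of what "binary, not subjective" means for a few of the gates might look like this (regex stands in for a proper HTML parser):

```python
import re

def run_gates(html: str) -> dict[str, bool]:
    """Minimal sketch of four of the ten gates as binary pass/fail checks."""
    title = re.search(r"<title>(.*?)</title>", html, re.S)
    return {
        "h1_present": len(re.findall(r"<h1[\s>]", html)) == 1,                 # gate 1
        "title_tag": bool(title) and 50 <= len(title.group(1).strip()) <= 60,  # gate 2
        "canonical_tag": 'rel="canonical"' in html,                            # gate 6
        "https_only": "http://" not in html,                                   # gate 7
    }
```

Every check returns True or False; there is nothing to argue about, which is exactly why the gates behave identically on page 1 and page 800.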

5. Quality Scoring (Three Pillars)

Beyond the binary gates, every page is scored across three pillars. The page must score 80+ overall and 60+ on every individual pillar. This catches pages that technically pass all gates but are thin, repetitive, or poorly structured.

Page Structure: 25% | Content Quality: 40% | SEO Optimization: 35%

Content Quality carries the highest weight because that's where drift actually shows up. Entity density, uniqueness, E-E-A-T signals, direct-answer quality, FAQ depth. These are the factors that separate page 400 from a lazy copy of page 12.
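Under those weights, the pass rule from above (80+ overall and 60+ on every pillar) reduces to a few lines. A sketch, assuming pillar scores on a 0-100 scale:

```python
WEIGHTS = {"page_structure": 0.25, "content_quality": 0.40, "seo_optimization": 0.35}

def score_page(pillars: dict[str, float]) -> tuple[float, bool]:
    """Weighted overall score, plus pass/fail: 80+ overall AND 60+ per pillar."""
    overall = sum(WEIGHTS[p] * pillars[p] for p in WEIGHTS)
    passed = overall >= 80 and all(v >= 60 for v in pillars.values())
    return round(overall, 1), passed
```

Note that the per-pillar floor matters: a page can score 80+ overall while one weak pillar drags it below 60, and it still fails.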

6. Session Handoff Notes

At the end of every session, Claude writes a handoff note in the tracking file. This is a message from this session's Claude to next session's Claude. It includes: what was built, any issues encountered, any quality concerns, and exactly what should be built next.

# Example handoff note in the tracking file:

last_updated: 2026-04-02
last_session_notes: |
  Built 6 category pages (safety through procurement).
  All passed gates. Procurement scored 78 on first pass -
  the integration section was too generic. Rewrote with
  specific ProcurePro and BuildOps features. Now scores 84.
  All 18 categories complete.
  
  Next session: start trade pages. Begin with General
  Contractors and Electricians (highest priority).
  Read the PM category pages as calibration - they set
  the tone well.

What a Typical Session Looks Like

Here's an actual example of how session 15 (building trade pages) would play out:

Paul Does

  • 0:00 - Types the quick-start sentence
  • 0:03 - Claude reports: "18 categories done. Next: trade pages. Starting with GC and Electricians. Ready?"
  • 0:03 - Paul says: "Go"
  • 0:04 to 1:30 - Paul does other work while Claude builds
  • 1:30 - Claude presents 6 completed pages with gate results
  • 1:31 - Paul spot-checks 1 page, looks good
  • 1:32 - "Looks good. Update the tracker."
  • Done.

Claude Does

  • 0:00 - Reads playbook (rules)
  • 0:01 - Reads tracking file (state)
  • 0:02 - Identifies next batch from tracker
  • 0:03 - Reads 3 completed category pages (calibration)
  • 0:04 - Defines unique angles for each trade page
  • 0:05 - Generates entity triples per page
  • 0:06 to 1:25 - Writes 6 pages using locked templates
  • 1:25 - Runs 10 gates on each page
  • 1:28 - Runs quality scoring checklist
  • 1:30 - Presents results
  • 1:32 - Updates tracking file + handoff note

Total Paul time: ~5 minutes. Pages built: 6. Quality scores: 80+ on every page. Tracking file updated for the next session.

What Paul Should Watch For

The system runs itself, but Paul is the quality backstop. Every few sessions, spend 5 minutes on these checks:

Spot-Check a Random Page

Pick any completed page. Skim it. Does the opening feel fresh? Does it cover a distinct angle from its siblings? Are the product references specific (not generic)? If something feels off, flag it. Claude logs the correction and adjusts.

Read the Handoff Notes

The last_session_notes field in the tracking file is Claude's handoff to the next session. If the notes are vague ("built some pages, moving on"), that's a red flag. Good handoff notes are specific: which pages, what scores, any issues, what's next. Tell Claude to be more detailed if they're thin.

Check the Numbers

The tracking file has a summary section:

summary:
  total_planned: 810
  total_built: 147
  total_deployed: 89
  total_failed_gates: 3

If total_failed_gates is climbing, quality might be slipping. If total_built isn't growing, something's blocking progress. These numbers tell the story at a glance.
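The same glance can be scripted against the summary section; a small sketch using the field names above:

```python
def summarize(summary: dict) -> str:
    """One-line progress readout from the tracker's summary section."""
    built_pct = 100 * summary["total_built"] / summary["total_planned"]
    deployed_pct = 100 * summary["total_deployed"] / summary["total_planned"]
    return (f"{summary['total_built']}/{summary['total_planned']} built "
            f"({built_pct:.0f}%), {deployed_pct:.0f}% deployed, "
            f"{summary['total_failed_gates']} failed gates")
```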

Flag Corrections Early

If Paul notices something ("the trade pages are too technical" or "these comparison pages all start the same way"), he tells Claude. Claude logs the correction in the playbook and adjusts all future pages. One correction in session 10 prevents the same mistake in sessions 11-40.

The one thing that kills the system: Skipping the tracking file update at the end of a session. If Claude doesn't update the tracker, the next session starts blind. Paul should always confirm: "Update the tracker before we end."

The Full Build Timeline

At ~3 sessions per week, 5-8 pages per session (15-24 pages/week), here's how the 800+ pages roll out:

Phase                          Pages   Timeline
Phase 1: Core                  10      Week 1 (2 sessions)
Phase 2: Pillars               37      Weeks 2-4 (5-6 sessions)
Phase 3: Matrix + Comparisons  215     Weeks 4-12 (20-25 sessions)
Phase 4: FAQ Multiplication    548+    Weeks 8-16 (overlaps Phase 3)
Drip Deployment                All     Weeks 4-20 (continuous)

FAQ pages are shorter (300-500 words) so they build faster: 15-20 per session vs 5-8 for content pages. Phase 4 overlaps Phase 3 because FAQ pages are extracted from pages built in Phase 3.

~40 total sessions | ~14 weeks to complete | 810+ pages delivered

The Complete Document Set

Three shareable documents cover the entire BTA SEO Engine project: the proposal, the playbook, and this session management guide.

The bottom line: The proposal says "here's what we're building." The playbook says "here are the exact specifications." This guide says "here's how we deliver 800+ pages without a single one falling through the cracks." Three documents. One system. Zero guesswork.

This document is confidential and was prepared exclusively for the BuildTech Advisor partnership. It contains proprietary processes, systems, and methodologies belonging to PM Consulting Inc. and is not intended for distribution.

Paul
PM Consulting Inc. | PMConsulting.ca | paul@pmconsulting.ca | (705) 491-2627