CrawlQStudio

Dogfooding case study · Engineering + brand

How We Rebuilt CrawlQ’s SEO With GraQle.

Eight pull requests. Thirty pages shipped. Two hundred and three thousand Google Search Console impressions preserved. Written as the methodology reference, not a marketing post.

  • 8

    Pull requests shipped across a single sprint — all merged green

  • 30

    Static pages live, from a starting baseline of 6 — a 400% surface expansion

  • 202,947

    Google Search Console impressions preserved via same-URL content rewrites across 6 blog ports

  • 0

    301 redirects on the six highest-impression legacy URLs — same URL, new content, inherited rank

  • +0.21

    Confidence lift on GraQle reasoning after teaching 11 foundation KG nodes (0.57 → 0.78)

  • 100%

    Amplify production build pass rate across 8 merges — zero rollbacks, zero hotfixes

The problem we had to solve

CrawlQ was a five-year-old brand with a loyal user base, a mature knowledge graph, and roughly 176 URLs indexed in Google Search Console. The platform was evolving into CrawlQ Studio — a materially different product with a new narrative (“brand governance for AI content”) replacing the old category framing (“AI market research and content tools”). The new marketing site was launched. The content library was not.

Within a week of the launch, every legacy URL returned a 404. The flagship blog post — a 3,500-word guide on generative AI for market research that had accumulated 128,523 impressions and was ranking for head queries like “ai market research” and “generative ai market research” — was dark. The remaining five Tier-1 and Tier-2 posts contributed another 74,424 impressions. If those URLs stayed dark for more than 90 days, Google would de-index them and the brand would reset its authority to zero on the category terms it had taken five years to earn.

Time pressure was real. The work was not trivial. We needed a repeatable methodology that preserved legacy authority AND projected the new narrative into topical authority for the new category.

The approach

We used GraQle — the open-source knowledge-graph engine that powers CrawlQ Studio’s brand-governance features — as the senior decision maker on every significant choice. The discipline was simple: no code ships without a GraQle recommendation, no recommendation is final without a confidence score, and every shipped outcome teaches the graph for the next PR.

In practice, that turned into four tool patterns applied consistently:

  • graq_context to pull the relevant nodes from the graph before every significant decision — voice rules, ADRs, product features, prior lessons. Grounds every next step in what we already know.
  • graq_reason with debate mode enabled to force multiple reasoning agents to argue a decision and surface contradictions before implementation. This is the step that catches the subtle mistakes — the ones a linter cannot see.
  • graq_review on every material diff — security, correctness, style, tests, performance, concurrency. Catches the common mistakes before the Pull Request opens.
  • graq_learn after every merge — teaching the graph what shipped, which pattern worked, and which lesson to surface the next time a similar problem appears.
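The ship/no-ship discipline behind this loop can be sketched in a few lines. This is an illustrative encoding only: the `Recommendation` type, the field names, and the 0.7 threshold are hypothetical, not GraQle's real API; only the rule itself (no recommendation, no confidence score, or blocking review findings means nothing ships) comes from the text.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    decision: str
    confidence: float        # 0.0-1.0, produced by debate-mode reasoning
    blocking_findings: int   # unresolved review findings on the diff

CONFIDENCE_FLOOR = 0.7       # illustrative threshold, not from the source

def may_ship(rec: Recommendation) -> bool:
    """No code ships without a recommendation, no recommendation counts
    without a confidence score above the floor, and review findings block
    the merge."""
    return rec.confidence >= CONFIDENCE_FLOOR and rec.blocking_findings == 0

print(may_ship(Recommendation("same-URL port", 0.78, 0)))   # True
print(may_ship(Recommendation("premature 301", 0.57, 0)))   # False
```

The two confidence values mirror the 0.57 → 0.78 lift reported in the stats above.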

The eight pull requests

Each PR answered a specific question and taught the graph a specific lesson. Listed below is the delivery log, in order. Every PR description in the repository carries the full methodology for reproducibility.

  1. PR #39 · Foundation

    /content-hub, /products, custom 404, llms.txt, sister-domain footer

  2. PR #40 · Tiered redirects

    21 Tier-A 301s live globally; 8 feature-KG nodes seeded

  3. PR #41 · Flagship port

    /blog/market-research-using-generative-ai/ — 128,523 impressions preserved at same URL

  4. PR #42 · Tier-1 batch

    Gen Z authenticity (22,099) + CrawlQ vs Conductor (18,771)

  5. PR #43 · Tier-2 batch

    Top 10 AI writing (11,410) + content-marketing ROI (11,297) + content automation (9,847)

  6. PR #44 · Pillar hubs

    4 pillar hubs published — full topical authority architecture

  7. PR #45 · Moat activation

    BRAND Score methodology + SCORCH + EU AI Act whitepaper + KG Dataset

  8. PR #46 · Audit fixes

    Nav + Footer + Hero H1 rebuilt; LearnRail homepage section

Three decisions GraQle got right

1. Same-URL ports for the high-impression legacy posts

The conventional playbook says: launch new content, set up 301 redirects from the legacy URLs, accept a transitional impression loss. GraQle’s debate-mode analysis argued differently. Ported content at the same URL is read by Google as a content update, not a URL move. The authority accrued at the original URL transfers to the rewritten post without the 10-20% equity loss that even a well-designed 301 chain introduces.

We preserved all 18 original H3 section IDs byte-for-byte on the flagship post so deep-links from prior backlinks still landed correctly. We preserved the first three sentences of the two highest-impression anchor sections verbatim — GraQle flagged that as the load-bearing content most likely to be quoted in backlinks from external sites. The rest was reframed through the new brand-governance lens.
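A preservation rule like "keep all H3 section IDs byte-for-byte" is cheap to verify mechanically. Here is a minimal sketch using Python's standard-library HTML parser; the two HTML snippets are hypothetical stand-ins, not content from the actual posts.

```python
from html.parser import HTMLParser

class IdCollector(HTMLParser):
    """Collects the id attributes of <h3> tags — the deep-link anchors
    that backlinks from external sites point at."""
    def __init__(self):
        super().__init__()
        self.ids = []
    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            for name, value in attrs:
                if name == "id":
                    self.ids.append(value)

def anchor_ids(html: str) -> list[str]:
    parser = IdCollector()
    parser.feed(html)
    return parser.ids

# Hypothetical before/after: body copy is reframed, anchors are untouched.
legacy = '<h3 id="why-gen-ai">Why</h3><h3 id="methodology">How</h3>'
ported = '<h3 id="why-gen-ai">Why, reframed</h3><h3 id="methodology">How, reframed</h3>'

# Byte-for-byte: same ids, same order, so prior deep-links still land.
assert anchor_ids(ported) == anchor_ids(legacy)
```

A check like this can run in CI on every ported post, so the preservation rule is enforced rather than remembered.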

Outcome: 202,947 impressions preserved at the original URLs. Zero 301 redirects on the six Tier-1 and Tier-2 posts.

2. Tiered redirect strategy

We had a redirect payload of 169 rules ready to apply to CloudFront. GraQle reasoning split them into three tiers: Tier-A for system pages and taxonomy URLs where the new destination was unambiguous (21 rules), Tier-B for blog-post URLs where the decision to port or redirect needed to be made per URL (149 rules), Tier-C for six top-performer URLs that must never be redirected.

Only Tier-A applied immediately. Tier-B was held in a separate JSON file and processed one URL at a time as each blog was ported. Tier-C was explicitly blocked from ever receiving a redirect. This discipline eliminated the single most common failure mode in SEO migrations: premature redirection of a URL before its replacement content is live.

3. Four-pillar topical authority architecture

After porting the legacy content, the question was where to project the new narrative. GraQle synthesised a four-pillar structure: Brand Governance for AI Content (primary), EU AI Act for Marketing Compliance (compliance angle), Brand Intelligence and Market Research (legacy bridge), Brand-Safe Content Generation (implementation). Each pillar cross-links UP to the primary pillar, sideways to exactly one adjacent pillar, and DOWN to all six ported blogs as evidence. The six blogs each link back to the relevant pillar.

This is not a generic hub-and-spoke pattern. It is a four-pillar linking graph where every node has a clear topical role and every link has a justification. Google’s reasonable-surfer model rewards this pattern; generic topic clusters do not.
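The up/sideways/down contract is also checkable. In this sketch the pillar slugs are derived from the names in the text, but the specific adjacency pairings and blog slugs are hypothetical assumptions:

```python
PRIMARY = "brand-governance"
PILLARS = {"brand-governance", "eu-ai-act",
           "brand-intelligence", "brand-safe-generation"}
BLOGS = {f"blog-{i}" for i in range(1, 7)}   # the six ported posts

def check_links(pillar: str, outlinks: set[str]) -> bool:
    """A non-primary pillar must link UP to the primary, SIDEWAYS to
    exactly one other non-primary pillar, and DOWN to all six blogs."""
    if pillar == PRIMARY:
        return BLOGS <= outlinks             # primary still links down
    sideways = outlinks & (PILLARS - {pillar, PRIMARY})
    return PRIMARY in outlinks and len(sideways) == 1 and BLOGS <= outlinks
```

Run against each page's outbound links at build time, this turns the architecture from a diagram into an enforced invariant.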

What went wrong — and what we learned

Nothing catastrophic. Three real mistakes, each of which turned into a KG lesson node for the next similar project.

Stale main branch. After PR #41 merged, the next feature branch was cut from a stale local main. The downstream PR was missing the flagship sitemap entry. GraQle caught it during the review round; we fixed it before merge. Lesson: always git fetch origin; git rebase origin/main before cutting a new feature branch. Now encoded as KG lesson lesson_20260422T085752.

Banned-vocabulary lint. The project has a lint rule that blocks loss-aversion terms (“reduction”, “removed”, “deprecated”) from user-facing strings. PR #43 shipped with the word “reduction” in a FAQ answer. The lint blocked the build. We replaced it with “fewer rework cycles” — semantically identical, not loss-framed. The guard worked exactly as designed, and we added the test case to the guard’s examples.
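A guard of this shape is a few lines of string matching. The sketch below is illustrative (the real project's lint rule is not shown here); the banned list mirrors the three examples in the text.

```python
import re

# Loss-aversion terms blocked from user-facing strings (from the text).
BANNED = {"reduction", "removed", "deprecated"}

def lint_copy(text: str) -> list[str]:
    """Return the banned terms found in a user-facing string, sorted."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return sorted(BANNED & words)

assert lint_copy("Expect a reduction in rework.") == ["reduction"]
assert lint_copy("Expect fewer rework cycles.") == []   # the shipped fix
```

Word-boundary matching via `re.findall` avoids false positives on substrings (e.g. "reproduction" does not contain the standalone word "reduction" once tokenised).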

Content published, link graph not updated. The most expensive mistake. We published 24 new content pages (pillars, moat activation pages, ported blogs) without updating the sitewide Nav and Footer. For a week, Google could not distribute PageRank to any page in the new cluster. PR #46 fixed it — the audit that caught the gap was a GraQle debate-mode reassessment after the seventh PR. The lesson is now lesson_20260422T183227: audit internal linking AFTER publishing content clusters, not before. Future projects will schedule the audit in the sprint plan rather than discover it in week two.

Where CrawlQ and GraQle converge

The methodology documented on this page is not a one-off engineering story. It is the operating model behind how CrawlQ Studio’s BRAND Score works at runtime for every customer. The graph we used to ship 30 pages in eight PRs is structurally the same graph customers’ content is scored against when they generate assets in Studio. We dogfood the architecture we sell.

GraQle is the reasoning engine. CrawlQ Studio is the application layer. Together they make brand-governed AI content operational — not a policy document in a drawer, not a scoring rubric in a deck, but code that runs in production, scores every output, and logs the reasoning chain so the work is defensible.

Future convergence: customers of CrawlQ Studio will increasingly see the underlying graph exposed — workspace-scoped reasoning across their foundation documents, their brand rules, their campaign history, their research archive. The same four tools we used to rebuild our own site (context, reason, review, learn) become the customer’s operating model for running a defensible AI content function.

Try the architecture we use ourselves

Score every AI output against your own documents.

CrawlQ Studio runs on European infrastructure, grounds every output in your foundation documents, and publishes a five-dimension BRAND Score on every generation. Free tier — no credit card to start.

Frequently asked questions

What did GraQle actually do in this rebuild?

GraQle provided four things a human team cannot deliver at the same consistency: (1) knowledge-graph reasoning across a 14,200-node graph that held every product feature, voice rule, ADR, and prior lesson; (2) debate mode, where four independent reasoning agents argue a decision and surface contradictions before code is written; (3) Sentinel review on every diff, catching security and logic issues a linter cannot; (4) continuous teaching, where every shipped outcome feeds new nodes back into the graph so the next PR is smarter. Combined, these converted a three-week project into an eight-PR sprint with zero rollbacks.

Why didn't you just use generic AI tools for this?

Generic AI tools answer from the public internet. GraQle answers from our own knowledge graph — the actual code, the actual ADRs, the actual voice rules, the actual lessons from prior projects. The difference matters when the question is 'which 4-pillar architecture fits our moats?' A generic model guesses; GraQle grounds the answer in the real nodes. On this specific rebuild, GraQle's recommendation for the hybrid same-URL strategy was anchored in a knowledge-graph node titled 'Legacy preservation rule' that we had taught it six months earlier. That rule converted into an 85% impression preservation outcome instead of the sub-80% that generic redirect strategies typically deliver.

How much of the content was AI-generated?

The structure was AI-planned, the schemas were AI-generated, and the cross-linking was AI-validated. The copy was human-edited on every page. GraQle produced scored drafts; senior review adjusted tone, cut jargon, and enforced the brand voice rules before anything shipped. This is brand-governed AI content in practice — not 'AI wrote it and hoped for the best', but 'AI produced the draft, scored it, and a human cleared it to publish'. The same methodology customers use inside CrawlQ Studio.

Is GraQle available to CrawlQ customers?

GraQle is an open-source code-intelligence platform, published separately at graqle.com. It is the reasoning engine that powers the knowledge graph behind CrawlQ Studio's brand-governance features. Customers of CrawlQ Studio inherit the reasoning engine as part of the product; developers can use GraQle directly for their own projects.

Can this methodology be replicated?

Yes — the full methodology is documented in the eight PR descriptions linked from this page. Every PR lists the GraQle tools used, the confidence score, the debate outcome, and the Sentinel findings. The pattern is reproducible by any team with access to GraQle and a willingness to teach the knowledge graph before generating content. The key discipline is teaching the graph the foundation nodes first — features, voice rules, moats, pillars — before asking for recommendations.