CrawlQ Studio

Platform walk-through · Six stages · Four use cases

How CrawlQ Studio actually works, end to end.

Six stages from foundation document to scored published artefact. Four canonical use cases — research, content, compliance, sales — walked through the same pipeline. Every step is inspectable; every output carries its own audit trail.

Six stages, one pipeline

The same pipeline runs for every Campaign, every output, every channel. The shape is invariant; only the scope and the grounding documents change per use case.

Ingest — Brand Memory fills with foundation documents

Positioning statements, persona research, case studies, voice rules, competitive intelligence, product documentation, customer interview transcripts — all flow into the brand knowledge graph as interconnected nodes. This is the single source of truth every subsequent step reads from.

Artefact: Brand Memory knowledge graph with entity-level lineage
Primitive: Brand Memory
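
For the technically minded, here is a minimal sketch of the ingest idea: documents become typed nodes in the graph, and the file each node came from is kept for lineage. The class and field names (BrandMemory, Node, add_document) are illustrative assumptions, not the CrawlQ API.

```python
# Minimal sketch: foundation documents land in Brand Memory as graph nodes.
# Names here are illustrative, not the CrawlQ API.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str          # e.g. "positioning", "persona", "case_study"
    text: str
    source: str        # file the node was ingested from, kept for lineage

@dataclass
class BrandMemory:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (from_id, to_id, relation)

    def add_document(self, node: Node, links_to: list[str] = ()):
        """Store a document as a node and connect it to existing nodes."""
        self.nodes[node.node_id] = node
        for target in links_to:
            self.edges.append((node.node_id, target, "grounds"))

memory = BrandMemory()
memory.add_document(Node("pos-001", "positioning", "We sell governed AI...", "positioning.md"))
memory.add_document(Node("case-017", "case_study", "Acme cut review time 40%...", "acme.pdf"),
                    links_to=["pos-001"])
```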

Scope — Campaign declares the work

A Campaign is the unit of work. It names the target persona, the channel, the stage of the funnel, the KPI, the regulatory regime, and the knowledge-graph filter. Campaigns are first-class objects with their own BRAND Score trend and their own audit log — so one team can run twelve of them in parallel without cross-contamination.

Artefact: Scoped Campaign with knowledge-graph filter and KPI
Primitive: Canvas
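
A hedged sketch of what a Campaign declaration might look like as data. The field names and values are assumptions for illustration, not the platform's schema.

```python
# Illustrative Campaign declaration: persona, channel, funnel stage, KPI,
# regulatory regime, and the knowledge-graph filter it is allowed to read from.
campaign = {
    "name": "q3-enterprise-launch",
    "persona": "VP of Product, mid-market SaaS",
    "channel": "email",
    "funnel_stage": "consideration",
    "kpi": "demo bookings",
    "regulatory_regime": "EU AI Act + GDPR",
    "graph_filter": {"kinds": ["positioning", "case_study"], "tags": ["enterprise"]},
    "brand_score_threshold": 0.85,
}

def in_scope(node: dict, graph_filter: dict) -> bool:
    """A Campaign only reads nodes that pass its filter."""
    return node["kind"] in graph_filter["kinds"]
```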

Generate — Athena answers from the graph

Athena is the research and drafting agent. Every generation reads from the Campaign-scoped subgraph, never from the public internet. Draft research, draft copy, draft competitive analysis, draft campaign briefs — each arrives with the grounding documents attached and the reasoning trace inspectable.

Artefact: Draft output with grounding citations and reasoning trace
Primitive: Athena
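
A minimal sketch of a grounded generation call, assuming the draft is built only from the Campaign-scoped nodes and returned with its citations and prompt attached. `call_model` stands in for whichever model endpoint is configured; none of these names are CrawlQ functions.

```python
# Sketch: the model only ever sees the Campaign-scoped nodes, and the output
# carries its grounding and the prompt that produced it.
def generate_grounded(brief: str, scoped_nodes: list[dict], call_model) -> dict:
    context = "\n\n".join(f"[{n['id']}] {n['text']}" for n in scoped_nodes)
    prompt = (
        "Answer the brief using ONLY the sources below. "
        "Cite source ids in square brackets.\n\n"
        f"SOURCES:\n{context}\n\nBRIEF: {brief}"
    )
    draft = call_model(prompt)
    return {
        "draft": draft,
        "grounding": [n["id"] for n in scoped_nodes],  # what the model was allowed to see
        "prompt": prompt,                              # kept for the reasoning trace
    }

# Stub model so the sketch runs without an external service.
nodes = [{"id": "pos-001", "text": "We sell governed AI for regulated marketing teams."}]
result = generate_grounded("Draft a one-line value proposition.", nodes,
                           lambda p: "Governed AI for regulated teams [pos-001].")
```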

Score — BRAND Score and TRACE gate the output

Every draft is scored on BRAND Score (Fidelity, Reasoning, Audience, Novelty, Deliverability) and on TRACE (Transparency, Reasoning, Auditability, Compliance, Explainability). Outputs above the threshold pass through; outputs below go back through Canvas for another generation pass. The threshold is the brand's published standard, not an internal heuristic.

Artefact: Scored output with per-dimension breakdown and pass/fail gate
Primitive: BRAND Score
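
A small sketch of the gate, assuming each dimension is scored on a 0 to 1 scale and every dimension must clear the published threshold. The dimension names follow the description above; the scoring scale and threshold value are illustrative.

```python
# Illustrative pass/fail gate over per-dimension scores.
BRAND_DIMENSIONS = ["fidelity", "reasoning", "audience", "novelty", "deliverability"]

def gate(scores: dict, threshold: float = 0.85) -> tuple[bool, list[str]]:
    """Return (passed, failing_dimensions); a draft passes only if every dimension clears the bar."""
    failing = [d for d in BRAND_DIMENSIONS if scores.get(d, 0.0) < threshold]
    return (not failing, failing)

passed, failing = gate({"fidelity": 0.91, "reasoning": 0.79, "audience": 0.90,
                        "novelty": 0.88, "deliverability": 0.93})
# passed is False, failing == ["reasoning"]: the draft goes back through Canvas.
```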

Publish — scored artefact ships with audit trail

The output ships to the channel — landing page, email, ad variant, sales deck, analyst briefing. Every recipient can inspect the audit trail: which model answered, which documents grounded it, which prompt produced it, which scores it cleared, which compliance tier it reached. The output is defensible end to end.

Artefact: Published artefact + Merkle-chain audit trail
Primitive: CrawlQ Studio
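
The Merkle-chain idea can be sketched as a simple hash chain: each audit event's hash folds in the hash of the previous event, so editing any earlier record breaks every hash after it. The record layout below is an assumption for illustration, not CrawlQ's actual format.

```python
# Sketch of a tamper-evident audit trail built as a hash chain.
import hashlib, json

def append_event(trail: list[dict], event: dict) -> list[dict]:
    prev_hash = trail[-1]["hash"] if trail else "genesis"
    payload = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    trail.append({"event": event, "prev_hash": prev_hash, "hash": entry_hash})
    return trail

trail = []
append_event(trail, {"step": "generate", "model": "model-a", "grounding": ["pos-001"]})
append_event(trail, {"step": "score", "brand_score": 0.88, "passed": True})
append_event(trail, {"step": "publish", "channel": "email"})
```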

Learn — feedback updates Brand Memory

Performance signals — engagement, conversion, retention — flow back into Brand Memory as weighted edges. Each approved output strengthens the grounding the next Campaign inherits; each rejected output tightens the scoring threshold. Over a quarter the system compounds — quality rises while cost per scored output falls.

Artefact: Updated knowledge graph + tightened scoring threshold
Primitive: Brand Memory + BRAND Score trend
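
A hedged sketch of the learn loop, assuming approvals reinforce the grounding edges the output cited and rejections nudge the gate threshold upward. The update rules here are illustrative, not the platform's actual weighting.

```python
# Sketch: feedback flows back as edge-weight updates and threshold adjustments.
def learn(edge_weights: dict, threshold: float, outcome: dict) -> float:
    if outcome["approved"]:
        for edge in outcome["cited_edges"]:
            edge_weights[edge] = edge_weights.get(edge, 1.0) * 1.05  # reinforce grounding
    else:
        threshold = min(0.95, threshold + 0.01)                      # tighten the gate
    return threshold

weights, threshold = {}, 0.85
threshold = learn(weights, threshold, {"approved": True, "cited_edges": [("pos-001", "case-017")]})
threshold = learn(weights, threshold, {"approved": False, "cited_edges": []})
```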

Four canonical use cases

The same six-stage pipeline applied to four buyer questions. Each links to the pillar where the use case is covered in detail.

Market research grounded in your knowledge graph

A product team runs the same market-research brief weekly against the knowledge graph. Athena produces the analysis; Canvas orchestrates the weekly cadence; the BRAND Score gates the output for Fidelity and Reasoning depth; the audit trail goes to the VP of Product alongside the insight. Generic AI market research is loud; governed AI market research compounds.

Read the flagship research methodology →

Brand-safe content across every channel

A marketing team ships fifty channel-specific variants of one campaign brief. Athena generates; Canvas ensures each channel inherits the right register; BRAND Score scores every variant; SCORCH visual audit scores the images. Any variant below threshold goes back through. What ships is consistent across channels without hand-editing fifty times.

Brand-safe content generation pillar →

EU AI Act + GDPR compliance by default

A compliance officer running the programme does not need to check each output manually. Canvas enforces the scoring gates, the audit trail records every decision with grounding documents attached, and EU data residency is architectural. A supervisory review request is answered from the audit export, not from a weekend of forensics.

EU AI Act marketing compliance pillar →

Sales enablement with scored truth

RevOps wires Brand Memory into Salesforce via the Agentforce adapter. Einstein responses ground in the customer's own case studies and positioning. Every sales-automation decision carries a TRACE score a rep, a customer, and a compliance officer can inspect. Sales agents stop freelancing claims; they cite their sources.

Salesforce agentic training →

See the pipeline on your own use case

A scoping call picks one use case and walks the six stages on your stack.

No slide demo. We take a real output you care about, walk it through Brand Memory → Canvas → Athena → BRAND Score → Publish → Learn, and hand you the scored artefact and the audit trail on the call.

Frequently asked questions

How long does it take to get to the first scored output?

An afternoon on the free tier; typically a week to get foundation documents ingested end to end. Brand Memory builds from the materials you already have — positioning, personas, case studies — so the onboarding cost is the time it takes to point the ingest at the right folder.

Can I bring my own models?

Yes. The platform routes across frontier and open-weight models. You bring the keys; we bring the routing, grounding, scoring, and audit. Most customers run a mix — frontier models on high-stakes generation, cheaper models on routine drafting, with the BRAND Score deciding which to use per task.
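One way to picture the routing is a small policy that sends high-stakes tasks, or tasks whose last score fell below the bar, to the frontier model and leaves routine drafting on the cheaper one. The model names and the rule itself are assumptions for the sketch, not the platform's routing logic.

```python
# Illustrative routing policy: stakes and the last BRAND Score decide the model.
def route_model(task: dict, last_score: float | None, threshold: float = 0.85) -> str:
    if task["stakes"] == "high" or (last_score is not None and last_score < threshold):
        return "frontier-model"      # your own key, routed by the platform
    return "open-weight-model"       # cheaper model for routine drafting

route_model({"stakes": "low"}, last_score=0.90)   # -> "open-weight-model"
route_model({"stakes": "low"}, last_score=0.72)   # -> "frontier-model"
```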

What happens when an output fails the BRAND Score gate?

It goes back through Canvas for another generation pass. The failure reason is surfaced per dimension — if Fidelity scored low, the next pass re-anchors voice; if Reasoning depth scored low, the next pass pulls more grounding. The loop converges on a scored output rather than asking a human to rewrite from scratch.
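
As a sketch, the loop can be read as: score the draft, collect the failing dimensions, map each to a corrective hint, and regenerate until the gate clears or a retry budget runs out. The hint text and function names are illustrative, not the Canvas implementation.

```python
# Sketch of the regenerate loop driven by per-dimension failures.
FIXES = {
    "fidelity": "Re-anchor the brand voice rules before drafting.",
    "reasoning": "Pull additional grounding documents and show the reasoning steps.",
}

def regenerate_until_pass(draft_fn, score_fn, threshold=0.85, max_passes=3):
    hints = []
    for _ in range(max_passes):
        draft = draft_fn(hints)
        scores = score_fn(draft)
        failing = [d for d, s in scores.items() if s < threshold]
        if not failing:
            return draft, scores
        hints = [FIXES.get(d, f"Improve {d}.") for d in failing]
    return draft, scores   # surfaced to a human only after the retry budget is spent
```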

Is the audit trail exportable?

Yes. Every Campaign's audit log is exportable as JSON (machine-readable for supervisory reviews) or as a formatted PDF brief (for board or legal). The Merkle-chain tamper-evidence travels with the export so the recipient can verify the trail has not been modified.
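Verification can be sketched as recomputing each hash from its predecessor and comparing, in line with the hash-chain sketch in the pipeline section above. The record layout is an assumption, not the export format.

```python
# Sketch: a recipient checks the exported trail has not been modified.
import hashlib, json

def verify_trail(trail: list[dict]) -> bool:
    prev_hash = "genesis"
    for entry in trail:
        expected = hashlib.sha256(
            (prev_hash + json.dumps(entry["event"], sort_keys=True)).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False   # the chain was modified somewhere at or before this entry
        prev_hash = entry["hash"]
    return True
```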

How do I start?

Book a scoping call at https://calendly.com/crawlq-ai-demo/book-a-demo-call. We confirm use case, regulatory scope, and your existing AI stack. From there the typical path is a pilot Campaign grounded in your foundation documents, running alongside your current system for a measurable comparison.