ShurIQ
State of the Framework • Internal Audit
The Graph, the Instrument, and the Three Lines of Work
ShurIQ’s state of the framework, audited 2026-05-15.
Internal
May 2026
Author: Jonny Dubowsky
Shur Creative Partners
91,278
RDF triples (live)
1,672
AI-agent companies modeled
+19.9%
Triples vs. 2026-03-27
The Audit
I’ve been running parallel tracks for eight weeks and the picture has gone fuzzy. The triplestore is live, the dashboard is live, the dimension scoring is live. They run alongside each other, and the wiring between them needs to be made explicit. This audit names what we’ve built, what’s working as designed, and the three lines of work that bring the thesis to ground.
01
ShurIQ is an intelligence instrument grounded in a knowledge graph.
The thesis: every client engagement produces two outputs — the visible deliverable and the durable asset deposit feeding the graph.
Reports are demonstration; the graph is the moat. Each engagement leaves behind a stack ranking, a brief, or a diagnosis on the visible side, and validated nodes, edges, and provenance on the durable side. The visible work earns the engagement. The durable deposit compounds.
The thesis turns on four measurable variables:
D
Node Density
How many statements live in the graph.
A
Ontological Alpha
How distinct the domain schema is from public ontologies (Wikidata, DBpedia, Schema.org).
E
Extraction Efficiency
Cost-per-node decoupled from client cycles, so the graph grows whether revenue flows or not.
λ
Decay Rate
How fast structural advantage erodes in a vertical — sets refresh cadence and competitor catch-up cost.
Year-five target: roughly one billion nodes at $0.05/node, at which point asset value crosses cumulative service revenue and ShurIQ transitions from a service firm to an infrastructure asset.
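For scale, the target arithmetic is simple (a back-of-envelope reading, not a model output):

1,000,000,000 nodes × $0.05/node = $50,000,000 in cumulative extraction cost.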
02
Three pieces of the instrument are live.
The graph, the ontology, and the dashboard each ship today. The wiring between them is the work.
The graph is real. Live at localhost:7200/repositories/sbpi-agents, 91,278 RDF triples, up 19.9% from the 76,147 snapshot in the 2026-03-27 investor deck. 1,672 AI agent companies modeled with five dimensions each, 8,360 dimension scores, full PROV-O attestation chains on every score.
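A quick spot-check against that endpoint (queries run separately; the https scheme on the ontology namespace is an assumption):

PREFIX sbpi: <https://shurai.com/ontology/ai-agent-sbpi#>

# Headline triple count (expect 91,278 as of 2026-05-15).
SELECT (COUNT(*) AS ?triples) WHERE { ?s ?p ?o }

# Companies modeled (expect 1,672).
SELECT (COUNT(DISTINCT ?c) AS ?companies) WHERE { ?c a sbpi:AgentCompany }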
The ontology is real. Thirteen OWL classes, thirteen object properties, twenty-nine datatype properties, six SHACL node shapes, all namespaced under shurai.com/ontology/ai-agent-sbpi#. The schema has no public equivalent — variable A holds.
The dashboard is real. The customer-facing pilot ships to shuriq-brand-dashboards.pages.dev. ReelShort W19 went live last week in both Original and Lyapunov modes. AHA and Reffer W20 deliverables exist and are ready to deploy.
Chronology — how the framework came together
| When | Move |
| 2026-03-21 | Semantic layer foundation. First version of sbpi.ttl. Oxigraph chosen as the initial endpoint. |
| 2026-03-27 | Billion-Node thesis formalized in the investor pitch. |
| 2026-03-28 | Discourse grammar (Claim → Evidence → Source) added; the /evidence-trail skill was born. |
| 2026-04-02 | W13 prediction validation pulled into the triplestore. |
| 2026-05-02 | System architecture iteration. Account, project, and report hierarchy designed. |
| 2026-05-11 | Data model v0.7. Schema migration 0006 added the accounts, projects, kg_layers, business_model_canvas, and kg_deltas tables. |
| 2026-05-13–14 | Parallel build of the brand dashboard gateway. ReelShort live; AHA and Reffer ready. |
03
Two pieces look like gaps and are actually the firewall.
Naming them is part of making the framework explicit.
The customer dashboard does not query the graph. Customers read SBPI dashboards rendered from per-brand markdown and state/current.json. The graph at localhost:7200 is invisible to that surface. This is intentional — customer value is the insight, the score, the narrative cards. The graph is the substrate, and exposing it to customers would change what we sell. The graph stays internal infrastructure.
Reports are demonstration; the graph is the moat. The firewall keeps the moat moat-shaped.
The discourse grammar runs agent-side, not user-side. The /evidence-trail skill and the discourse layer in the ontology give the agent a principled way to deposit provenance as it populates the graph. Customers and engineers don’t need to walk those chains today. Treat the discourse layer as agent scaffolding — it accumulates audit value over time without ever appearing in a customer surface.
These two choices belong in the canonical framework. They are the IP firewall in action.
04
Three concrete lines remain. Each is shippable. Each ties to a thesis variable.
Line 1
Make GraphDB the canonical source for SBPI scoring
Advances variables A (ontological alpha) and D (node density). Unblocks Line 2.
Today, the stack rank for the micro-drama vertical lives in state/current.json and the per-brand markdown profiles. The AI-agent vertical lives in GraphDB. Two truths, one moat.
The principled approach: GraphDB becomes the canonical source for every vertical’s SBPI scoring. Per-vertical ontologies (micro-drama-sbpi#, ai-agent-sbpi#, nonprofit-sbpi#) all share the same SBPI v3 base classes — ScoreRecord, DimensionScore, Attestation, AgentCompany or BrandEntity — and differ only in which Dimension instances they reference. Cross-vertical comparison becomes a SPARQL UNION over named graphs rather than a custom-built service.
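A sketch of that query shape (the graph IRIs and the shared-base prefix are assumptions; the real deployment may namespace these differently):

PREFIX sbpi: <https://shurai.com/ontology/ai-agent-sbpi#>

# Cross-vertical pull of composite scores from two named graphs.
# ScoreRecord, forCompany, and compositeScore are SBPI v3 base terms.
SELECT ?vertical ?entity ?score
WHERE {
  {
    GRAPH <https://shurai.com/graphs/ai-agent> {
      ?rec a sbpi:ScoreRecord ; sbpi:forCompany ?entity ; sbpi:compositeScore ?score .
      BIND ("ai-agent" AS ?vertical)
    }
  }
  UNION
  {
    GRAPH <https://shurai.com/graphs/micro-drama> {
      ?rec a sbpi:ScoreRecord ; sbpi:forCompany ?entity ; sbpi:compositeScore ?score .
      BIND ("micro-drama" AS ?vertical)
    }
  }
}
ORDER BY DESC(?score) LIMIT 20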
The dashboard does not change. It continues to read from state/current.json for static-deploy speed. The JSON regenerates nightly from GraphDB via a derived-snapshot pipeline. Customer-facing latency stays at static-site speed; the graph is the source of record.
Concrete moves:
- Define micro-drama-sbpi# ontology mirroring ai-agent-sbpi# structure, with the five vertical-specific Dimension instances (Distribution Power, Content Strength, Community Strength, Narrative Ownership, Monetization Infrastructure).
- Build micro_drama_to_rdf.py (or equivalent) that reads state/current.json and writes RDF.
- Write a nightly job that exports GraphDB → state/current.json so the dashboard’s read path stays unchanged (sketched after this list).
- Backfill weekly snapshots for the eight weeks of micro-drama scoring already in the vault.
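A minimal sketch of that nightly job, assuming the repository path from section 02; the query shape, snapshot fields, and output path are illustrative, not the renderer's actual contract:

# Nightly GraphDB -> state/current.json export (sketch).
import json
import requests  # GraphDB speaks the standard SPARQL protocol over HTTP

ENDPOINT = "http://localhost:7200/repositories/sbpi-agents"

QUERY = """
PREFIX sbpi: <https://shurai.com/ontology/ai-agent-sbpi#>
SELECT ?company ?composite ?delta WHERE {
  ?rec a sbpi:ScoreRecord ;
       sbpi:forCompany ?company ;
       sbpi:compositeScore ?composite ;
       sbpi:delta ?delta .
  # the real job would also bind sbpi:forWeek to the latest ingested week
}
ORDER BY DESC(?composite)
"""

def export_snapshot(path="state/current.json"):
    # SPARQL-protocol GET with JSON results, which GraphDB supports.
    resp = requests.get(
        ENDPOINT,
        params={"query": QUERY},
        headers={"Accept": "application/sparql-results+json"},
        timeout=60,
    )
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    snapshot = [
        {
            "company": r["company"]["value"],
            "composite": float(r["composite"]["value"]),
            "delta": float(r["delta"]["value"]),
        }
        for r in rows
    ]
    with open(path, "w") as f:
        json.dump(snapshot, f, indent=2)

if __name__ == "__main__":
    export_snapshot()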
Output: a single canonical source for SBPI scoring across every vertical, with dashboard read-path performance preserved.
Line 2
Track λ (decay rate) and surface it in reporting
Advances variable λ directly. Makes the Lyapunov dashboard layer load-bearing.
The thesis variable λ — how fast structural advantage erodes — is currently unmeasurable. The graph holds one week (W12-2026 for the AI-agent vertical), and the micro-drama vertical isn’t in the graph at all. The delta field is populated on every ScoreRecord, but with a single week of data nothing distinguishes it from a guess.
The mechanism for tracking λ is already partly built. forWeek is in the schema. delta is in the schema. What’s missing is multi-week data and the derived metrics that compute decay from it.
The proposed structure:
- A new class DecayMetric that connects two ScoreRecords across weeks for the same company on the same dimension. Properties: forCompany, fromWeek, toWeek, forDimension, dimensionDelta, decayRate (annualized), volatility (rolling std-dev across a window of N weeks).
- A SPARQL window query that computes DecayMetric instances at write time when a new week ingests, rather than at read time (sketched after this list).
- A per-vertical λ aggregate: median decayRate across all companies in the vertical, used as the vertical’s refresh-cadence parameter.
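A minimal sketch of that ingest-time pass, assuming the predicate names from the appendix; the week IRIs and the IRI-minting scheme are illustrative:

PREFIX sbpi: <https://shurai.com/ontology/ai-agent-sbpi#>

# Runs when a new week lands: pairs each fresh DimensionScore with its
# predecessor and materializes the delta. decayRate and volatility are
# left to a second pass once enough weeks exist to window over.
INSERT {
  ?dm a sbpi:DecayMetric ;
      sbpi:forCompany ?company ;
      sbpi:fromWeek ?fromWeek ;
      sbpi:toWeek ?toWeek ;
      sbpi:forDimension ?dim ;
      sbpi:dimensionDelta ?delta .
}
WHERE {
  VALUES (?fromWeek ?toWeek) { (sbpi:W12-2026 sbpi:W13-2026) }
  ?prevRec a sbpi:ScoreRecord ;
           sbpi:forCompany ?company ;
           sbpi:forWeek ?fromWeek ;
           sbpi:hasDimensionScore ?prevDS .
  ?prevDS sbpi:forDimension ?dim ; sbpi:dimensionValue ?v0 .
  ?newRec a sbpi:ScoreRecord ;
          sbpi:forCompany ?company ;
          sbpi:forWeek ?toWeek ;
          sbpi:hasDimensionScore ?newDS .
  ?newDS sbpi:forDimension ?dim ; sbpi:dimensionValue ?v1 .
  BIND (?v1 - ?v0 AS ?delta)
  BIND (IRI(CONCAT(STR(?company), "/decay/",
            ENCODE_FOR_URI(STRAFTER(STR(?dim), "#")),
            "/W13-2026")) AS ?dm)
}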
Surfacing λ in reporting:
- The Lyapunov layer in the dashboard already shows V trajectory and basin_grip_delta. Wire those derived values to the DecayMetric pipeline instead of the current hand-rolled JSON. Same visual; principled source.
- The weekly stack-rank editorial gets a “movers” section grounded in the highest |decayRate| companies for the week.
- The 1B-journey viewport (internal-only) shows λ per vertical as a measured rate.
Output: λ goes from a guess to a measured rate. The Lyapunov layer becomes load-bearing.
Line 3
Name the collective-intelligence framing in the canonical narrative
Advances no specific D/A/E/λ variable. Makes the existing variables legible.
ShurIQ’s name inherits from Engelbart’s Augmenting Human Intellect — the 1962 framing that the purpose of the instrument is to amplify groups’ ability to think together about complex problems. That line currently lives in private notes and pitch drafts. It is missing from the customer-facing surfaces, the investor deck, and the explainer.
The reframing is small and load-bearing:
- ShurIQ is the instrument organizations use to read themselves.
- The graph is shared memory; the dashboards are shared lens; the reports are shared deliberation surface.
- Every client engagement leaves the client more capable of seeing their own structural position, not just temporarily informed of it.
The practical edits:
- The lead paragraph of every customer-facing surface (dashboard, editorial briefs, explainer) names this. One or two sentences, not a section.
- The investor deck adds the Engelbart line as the founding-thesis frame — the moat (the graph) and the surface (the instrument) are two views of one collective-intelligence amplifier.
- The category-of-one positioning sharpens: ShurIQ is the only intelligence platform whose architecture treats compounding organizational memory as a first-class asset.
Output: the existing variables become explainable to anyone outside the room.
05
Use SBPI exclusively.
Brand Power Score (BPS) is the v1 origin and remains valid as historical context in the lineage. Every active artifact (decks, dashboards, reports) references SBPI v3 with vertical-specific weights.
The dimension language follows the vertical:
- AI agents — Model Capability, Market Traction, Platform Ecosystem, Autonomy Depth, Capital & Defensibility.
- Micro-drama — Distribution Power, Content Strength, Community Strength, Narrative Ownership, Monetization Infrastructure.
The score is always SBPI.
06
Three of four variables hinge on the same underlying operation.
Get more weeks of data into the graph across more verticals. Lines 1 and 2 together convert most of the framework’s “we could” into “we are.”
| Variable | Status | Line that advances it |
| D — Node Density | 91,278 / 1,200,000,000 (0.0076% of Y1 target per vertical) | Line 1 |
| A — Ontological Alpha | Established for AI-agent vertical | Line 1 |
| E — Extraction Efficiency | Modeled, not running | Standing auto-research |
| λ — Decay Rate | Unmeasurable (n=1 week) | Line 2 |
Line 3 makes the variables explainable to anyone outside the room.
Appendix
For internal eyes
Methodology, schema state, and open questions.
Ontology summary — AI-agent vertical, live state 2026-05-15
Thirteen OWL classes under shurai.com/ontology/ai-agent-sbpi#:
- AgentCompany (1,672 instances)
- ScoreRecord (1,672 instances)
- DimensionScore (8,360 instances)
- Attestation (1,672 instances)
- Dimension (5 instances — Model Capability, Market Traction, Platform Ecosystem, Autonomy Depth, Capital & Defensibility)
- Tier (5 instances)
- DomainCategory (12 instances)
- AIAgentVertical, plus supporting classes
ScoreRecord predicates: forCompany, forWeek, inTier, hasDimensionScore, hasAttestation, compositeScore, delta.
DimensionScore predicates: forDimension, dimensionValue.
Attestation predicates: confidence, sourceType, plus PROV-O wiring (prov:wasGeneratedBy, prov:Entity, prov:Activity).
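For orientation, here is what one record looks like under these predicates (instance IRIs, the tier, the dimension-instance name, and all literal values are hypothetical, shaped from the predicate lists above):

@prefix sbpi: <https://shurai.com/ontology/ai-agent-sbpi#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# Hypothetical instances; IRIs and values are illustrative only.
sbpi:example-co a sbpi:AgentCompany .

sbpi:example-co-W12-2026 a sbpi:ScoreRecord ;
    sbpi:forCompany sbpi:example-co ;
    sbpi:forWeek sbpi:W12-2026 ;
    sbpi:inTier sbpi:Tier2 ;                       # one of the 5 Tier instances
    sbpi:compositeScore "64.5"^^xsd:decimal ;
    sbpi:delta "0.0"^^xsd:decimal ;
    sbpi:hasDimensionScore [
        a sbpi:DimensionScore ;
        sbpi:forDimension sbpi:ModelCapability ;   # instance name assumed
        sbpi:dimensionValue "70"^^xsd:decimal
    ] ;                                            # one of five shown
    sbpi:hasAttestation [
        a sbpi:Attestation ;
        sbpi:confidence "0.8"^^xsd:decimal ;
        sbpi:sourceType "yc-application" ;         # value assumed
        prov:wasGeneratedBy sbpi:example-scoring-run
    ] .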
Dimension averages — W12-2026, n=1,672
| Dimension | Avg | Min | Max |
| Model Capability | 47.61 | 40 | 90 |
| Market Traction | 46.60 | 25 | 90 |
| Autonomy Depth | 44.49 | 25 | 100 |
| Capital & Defensibility | 43.39 | 40 | 75 |
| Platform Ecosystem | 39.92 | 30 | 95 |
Platform Ecosystem trails the leading dimension by 7.7 points — most YC AI agent companies are individual products, not ecosystem plays. Model Capability leads, consistent with self-reported framing in YC application data.
Top composite scores — W12-2026
| Rank | Company | Composite |
| 1 | Persana AI | 69 |
| 2 | Fiber AI | 68 |
| 3 | Rescale | 66 |
| 3 | Warmly | 66 |
| 5 | Fini | 65 |
| 6 | Tenyks | 64.75 |
| 7 | Athelas | 64.5 |
| 7 | careCycle | 64.5 |
| 7 | Mutiny | 64.5 |
| 10 | MindsDB | 63.5 |
Score range: 25 to 69. No company exceeds 70 — the rubric was designed with headroom.
Domain distribution
Software dominates the cohort at 631 of 1,672 (37.7%). Productivity (296), Health (222), and Finance (172) round out the major segments. Twelve domain categories total.
Proposed DecayMetric class — Line 2
@prefix sbpi: <https://shurai.com/ontology/ai-agent-sbpi#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

sbpi:DecayMetric a owl:Class ;
    rdfs:subClassOf prov:Entity ;
    rdfs:label "Decay Metric" ;
    rdfs:comment "Cross-week delta between two ScoreRecords for the same company on the same dimension. Computed at ingest." .

# forCompany and forDimension are shared with ScoreRecord; re-asserting
# rdfs:domain here would intersect with any existing domain and mistype
# every subject of those properties, so their domains are left alone.
sbpi:fromWeek         a owl:ObjectProperty   ; rdfs:domain sbpi:DecayMetric ; rdfs:range sbpi:Week .
sbpi:toWeek           a owl:ObjectProperty   ; rdfs:domain sbpi:DecayMetric ; rdfs:range sbpi:Week .
sbpi:dimensionDelta   a owl:DatatypeProperty ; rdfs:domain sbpi:DecayMetric ; rdfs:range xsd:decimal .
sbpi:decayRate        a owl:DatatypeProperty ; rdfs:domain sbpi:DecayMetric ; rdfs:range xsd:decimal .
# volatility is the rolling std-dev named in Line 2; its window length lives in volatilityWindow.
sbpi:volatility       a owl:DatatypeProperty ; rdfs:domain sbpi:DecayMetric ; rdfs:range xsd:decimal .
sbpi:volatilityWindow a owl:DatatypeProperty ; rdfs:domain sbpi:DecayMetric ; rdfs:range xsd:integer .
Architectural decisions worth re-examining
- Oxigraph (2026-03-21 initial selection) versus GraphDB (current running instance) — the switch was made without written rationale. Working stack stays unless scale forces a change.
- Discourse grammar (Claim, Evidence, Source plus supports, opposes, informs, triggers, predicts, annotates, grounds, supersedes, createdBy) — agent-side scaffolding. Joinable to AgentCompany scoring but separately tracked. Stays internal.
Open questions for Line 1
- Does the auto-research cycle deposit into the same named graph as client engagements, or into an auto: named graph for hygiene?
- The dashboard renderer reads JSON. Where does the nightly export job run — local cron, a GitHub Action, or a Cloudflare Worker cron trigger?
- Is the historical W11–W19 micro-drama data clean enough to backfill in one pass, or does it need a review gate per week?
Open questions for Line 2
- Window size for volatility — 4 weeks, 8 weeks, 12 weeks?
- Annualization formula — simple (delta × 52) or compounded? Both forms are spelled out after this list.
- Per-dimension λ or aggregate composite λ — both, sequenced?
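For reference, the two annualization candidates, where δ_w is the weekly fractional change in a score (both are standard forms, not a decision):

λ_simple = 52 × δ_w
λ_compounded = (1 + δ_w)^52 − 1

Simple annualization keeps λ linear in the weekly delta and is easier to read off a dashboard; compounding is more faithful when weekly changes are large.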