Coframe Naming Discussion — Vision Edition¶
A condensed record of the naming conversation, summarizing the early naming-and-domain analysis lightly and capturing the architectural vision that emerged in the latter half of the discussion at higher fidelity.
Participants: HW (project lead), CL (Claude).
Part I — Naming and domain (summarized)¶
The starting question¶
HW asked CL to evaluate "Coframe" and "Coframe-jr" as names, given the framework's architectural ambition, the etymology (Column-Operation-Frame), and the possibility of standards-positioning.
What the early analysis surfaced¶
CL evaluated "Coframe" against several criteria — distinctiveness, ease of teaching on first encounter, collision risk, register flexibility, and capacity to work as a modifier in a standards reference like "Coframe Conformance Level AAA." Three concerns came out of this:
- The Column-Operation-Frame etymology is private — not what new readers infer from the name.
- A mathematical "coframe" already exists in differential geometry (a dual basis to a frame field), creating a real collision for mathematically-literate audiences.
- "Coframe-jr" specifically — with lowercase jr and hyphen — has multiple problems: it gets miscapitalized in third-party writing, the "junior" connotation suggests less mature rather than focused, and the convention is unusual enough that it produces inconsistencies.
CL's recommendation collapsed to: at minimum, fix the Jr naming to a standard editions pattern (Coframe Core / Coframe Pro). Whether to rename Coframe itself depended on how seriously the standards-aspiration was being taken — for product-naming, Coframe is fine; for standards-naming, it has measurable friction.
HW chose Coframe Core / Coframe Pro. That settled the editions question.
The domain question¶
HW asked about coframe-core.io versus coframe.io (taken). CL pushed back on coframe-core.io for three reasons: the project deserves the unmodified URL, hyphens weaken domains, and committing the editions split into URL structure constrains future evolution. Better alternatives, in rough order: try acquiring coframe.io (parked, likely available at low-to-mid four figures); fall back to coframe.dev (also parked, also acquisition-eligible); fall back further to .is only after both fail.
The architectural recommendation: one canonical domain anchors the project, with editions as paths underneath (/core, /pro) — matching how GitLab, PostgreSQL, and Elastic structure theirs.
Brainstorming alternatives¶
HW asked for a deeper brainstorm. CL generated candidates from eight angles — algebra-rooted names, anchor/entity-rooted, lattice/grain-structure-rooted, verification-rooted, family-genealogy-rooted, invented compressions like Coframe itself, grammar-rooted, and AI-positioning-rooted.
The candidates that survived stress-testing were:
- Anchor — strongest thesis-alignment, accessible, common-word search-friction risk, mild Anchor-modeling collision.
- Stratum — distinctive, decent fit, slightly esoteric.
- Coframe (kept) — known costs but already invested-in.
- Theorem — high-gravitas wildcard; sets quality expectations the framework actually meets.
The choice between accessibility-first (Anchor) and rigor-first (Operad) depended on which audience the project wanted to target.
This part of the conversation was useful but ultimately not where the breakthrough came. The real shift happened next.
Part II — The architectural vision (in detail)¶
HW's reframe: Coframe as one member of a broader framework¶
HW introduced a substantively different framing:
Coframe is clearly focused on relational structured data — table, dataframe — the most popular (rectangular) data form that humans consume today in industrial settings. However, the full natural scope of data is MUCH broader, and philosophically, data is at the very root of the existence of phenomena. Just within databases today, structured data is bigger than the relational data model (key-value stores, graphs, objects...). My sense of the coframe foundation is that it is extensible to all structured data models; its underlying concepts clearly go beyond relational and tabular data. I think there is a possibility that I will extend coframe further into a broader framework that goes beyond "column" and "frame", though "operator" may stay. At that point, coframe will be a member of a family spec.
CL's response: stress-testing the universality claim¶
CL took the claim seriously and stress-tested it. The relational specifics in Coframe are: schemas as virtual tables, ColumnSpec as the unit of declaration, FD-DAG as dimensional structure, family-name as a column-level concept, Frame-QL as a tabular query language. These are implementations of more general concepts:
- An observation anchored to an entity-set, with a missingness signature.
- Operations that take a predecessor observation to a successor with a well-defined relationship between their anchorings and their identities.
- A genealogy of observations linked by operations.
- Structural relations (identical / sibling / cousin) governing substitutability.
- Partition-invariance as the algebraic property determining when navigation across anchorings is name-preserving.
- Integrity verification as the framework's commitment that declared structure matches attested data.
None of these require rectangularity. CL walked through how each translates across data shapes:
- Graph data: nodes and edges as entities; node/edge properties as observations; graph algorithms as operators with anchoring relationships.
- Key-value/document data: documents as entities; fields as observations; aggregations as operators.
- Tensor/array data: array indices as anchors; elements as observations; reductions along axes as operators (with partition-invariance well-known there — sum-along-axis vs. mean-along-axis).
- Time-series/event-stream data: events as entities; properties as observations; windowing as anchored transformations.
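The partition-invariance point in the tensor/array case can be made concrete with a small sketch (illustrative only; none of these names are Coframe APIs). Summing within partitions and then aggregating the partial sums reproduces the global sum, so the result can keep its family name across anchorings; the analogous composition of means does not:

```python
from statistics import mean

data = [3.0, 5.0, 7.0, 9.0, 11.0]

# Partition the entity space into two anchorings of unequal size.
part_a, part_b = data[:2], data[2:]

# sum is partition-invariant: re-aggregating partials reproduces the
# global result, so navigation across anchorings is name-preserving.
assert sum(data) == sum([sum(part_a), sum(part_b)])  # 35.0 both ways

# mean is not: a mean of partition means weights each partition equally,
# regardless of how many elements it holds.
assert mean(data) == 7.0
assert mean([mean(part_a), mean(part_b)]) == 6.5  # != 7.0
```

This is exactly the sum-along-axis vs. mean-along-axis distinction the array world already knows; the framework's contribution is to make the property declarable and checkable rather than folklore.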
CL's conclusion: the framework's grammar layer genuinely generalizes. Coframe's contribution is the relational instance; the broader framework would be a grammar layer for structured data of any shape.
This implied a two-tier naming structure: a family name above (for the universal claim), and member names below (for specific instances over specific data shapes). Following Apache Arrow's pattern (Arrow Flight, Arrow Compute, Arrow Dataset under the Arrow family) or Kubernetes' pattern (Pods, Services, Deployments under Kubernetes).
The entity-family-operator triple¶
HW then added a critical piece:
I should mention that there is another term that will stay: "entity". Entity manages the "key" space, "family" manages the "value" space, and "operator" manages the operational linkage.
CL recognized this as a complete decomposition of structured data observation. Three orthogonal primitives, each managing one universal aspect:
- Entity — the key space. What an observation is about. The thing being identified.
- Family — the value space. What's observed about entities. The thing being measured or attributed.
- Operator — the operational linkage. The transformations that take observations to observations, with well-defined relationships between entity-anchorings and family-membership.
This isn't a list of three concepts. It's a complete decomposition of what structured data observation requires. Every act of structured observation has a what-is-this-about (entity), a what-am-I-recording (family/value), and — when you compose observations or derive new ones — a how-are-these-related (operator). You can't have structured data observation with fewer than these three primitives; you don't need more.
The claim moved from "these primitives generalize across data shapes" to "these three primitives are the foundation of any structural-verification regime over any data shape". That's a category-defining claim.
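As a rough sketch of the decomposition (the class names and fields below are illustrative assumptions, not Coframe's actual declarations), the triple can be written down as three orthogonal primitives, with the operator carrying the declared relationship between anchorings:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    """The key space: what an observation is about."""
    name: str                      # e.g. "customer"

@dataclass(frozen=True)
class Family:
    """The value space: what is recorded about entities."""
    name: str                      # e.g. "monthly_spend"
    anchored_to: Entity            # every value is anchored to a key space

@dataclass(frozen=True)
class Operator:
    """The operational linkage: takes a predecessor observation to a
    successor, with a declared relationship between their anchorings."""
    name: str                      # e.g. "sum_by_region"
    source: Family
    result: Family                 # may be anchored to a coarser entity
    partition_invariant: bool      # governs name-preserving navigation

customer = Entity("customer")
region = Entity("region")
spend = Family("monthly_spend", anchored_to=customer)
regional_spend = Family("monthly_spend", anchored_to=region)

# A sum across customers within a region: partition-invariant, so the
# family name can survive the move to the coarser anchoring.
roll_up = Operator("sum_by_region", source=spend,
                   result=regional_spend, partition_invariant=True)
assert roll_up.source.anchored_to is customer
assert roll_up.result.anchored_to is region
```

Nothing in this sketch mentions rows or rectangles: the same three declarations apply whether the entities are rows, nodes, documents, or array indices.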
What the triple changed about naming¶
With the triple articulated, CL revisited the family-name candidates:
- Anchor — names entity specifically; pulls one corner; partial.
- Operad — names operator specifically; partial in a different direction.
- Datum — closer to the unit-of-observation idea; doesn't telegraph operator.
None of these names the triple itself. They each pick a corner.
CL's revised lead candidate: Triad. It directly names the three-fold structure. Triad framework, Triad Conformance Level AAA, Triad-conformant data. It pulls all three primitives into view by virtue of naming them as a triad. It scales across data shapes — a relational triad over (rows, columns, aggregations); a graph triad over (nodes, properties, traversals); a document triad over (docs, fields, transformations). Each member instantiates the triad differently. It's accessible without being academic, carries gravitas, harmonizes with the existing entity/family/operator vocabulary.
The architectural picture this opened up:
- Triad — the family. Universal grammar layer for structured-data analytical observation. Foundation: the (entity, family, operator) triple. Provides verification regime, conformance levels, integrity-condition catalog.
- Coframe — the relational/tabular member. Coframe instantiates the Triad over rectangular data.
- Coframe Core / Coframe Pro — the editions of Coframe.
- Future members — graph, document, stream, tensor instantiations.
The standards positioning becomes naturally cleaner: Triad Conformance Level AAA — defined once at the family level, applicable to any member instance. "This data is Triad-conformant at Level AAA" is a substantive, transferable claim across data shapes.
CL noted: what HW had articulated with the entity-family-operator triple was closer to a foundational specification than to a product framework — comparable to how relational algebra grounds relational databases, or type theory grounds programming languages. The naming should match that level of ambition.
The input/output asymmetry — Coframe's durable scope¶
HW added a refinement that turned out to be important:
Even with Triad existing, coframe can still be a member of the family, with its distinctive focus on the most popular data form today. Coframe itself can gradually and naturally extend to all columnar data forms and to multiple diverse backend data engines (fundamentally, it requires only that the engine can host a columnspec — or a data-series spec — that has (name, entity) and responds to operators). Even key-value stores can join the play. Coframe may stay for a long time, because its longevity is connected to the tabular data output form, which will stay longer than tabular data as input.
However, the AI Agent as consumer may open up to a broader set of queries that deviate from tabular data significantly.
CL recognized this as separating two distinct tabular-data questions:
Tabular as input. What shape the source data is in: rectangular tables, dataframes, columnar parquet. This dominates today's analytical infrastructure but isn't logically necessary for Coframe — the column-spec abstraction (name, entity, operator-response) generalizes to any backend that can host a data-series-spec, including key-value stores, document stores, graph stores' property tables, time-series stores, even tensor stores.
Tabular as output. What shape the result of an analytical query takes: a row-set, a frame. This is what humans, BI tools, and dataframes consume. This is the form with staying power — because human cognition prefers rectangular outputs even when underlying data isn't rectangular.
The asymmetry is striking. Coframe doesn't actually require input tabularity; it requires only that backends expose data-series with (name, entity) declarations and respond to operators. But Coframe's Frame-QL output is structurally tabular by design — frames are row-sets, the BY-clause produces grouped tables, results stream as DataFrames.
That's the connection to longevity HW named: Coframe's bet is on output-tabularity, not input-tabularity. And that bet is more durable than it first appears. Tabular outputs are how humans (and most current AI agents, because they were trained on tabular data and tabular conventions) consume analytical results. Even when underlying data is graph-shaped or document-shaped, the analytical answer tends to be rectangular: "here's a table summarizing what you asked." So a framework whose query output is tabular has staying power proportional to the staying power of tabular cognition — which is much longer than the staying power of tabular storage.
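The minimal backend contract implied here — a declared name, a declared entity anchoring, and the ability to answer operators — can be sketched as a small protocol. This is a hypothetical interface for illustration (DataSeriesSpec, KeyValueSeries, and the "sum_all" operator are invented names, not Coframe's API), showing how even a non-tabular key-value store satisfies the input-side requirement:

```python
from typing import Any, Protocol

class DataSeriesSpec(Protocol):
    """The minimal contract a backend must host for Coframe to sit on
    top of it: a name, an entity anchoring, and operator-response."""

    @property
    def name(self) -> str: ...     # the family-name of the series

    @property
    def entity(self) -> str: ...   # the key space the series is anchored to

    def apply(self, operator: str, **params: Any) -> "DataSeriesSpec": ...

class KeyValueSeries:
    """A key-value store exposing one series: the input is not tabular,
    but the (name, entity) declaration and operator-response still hold."""

    def __init__(self, name: str, entity: str, kv: dict):
        self.name, self.entity, self._kv = name, entity, kv

    def apply(self, operator: str, **params: Any) -> "KeyValueSeries":
        if operator == "sum_all":
            # Collapses the anchoring to a single global entity.
            return KeyValueSeries(self.name, "global",
                                  {"*": sum(self._kv.values())})
        raise NotImplementedError(operator)

spend = KeyValueSeries("monthly_spend", "customer", {"c1": 10.0, "c2": 25.0})
total = spend.apply("sum_all")
assert (total.entity, total._kv["*"]) == ("global", 35.0)
```

The output of the apply call is still a series that a frame can be assembled from — which is the asymmetry in miniature: the input was a key-value map, the answer is tabular-ready.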
The AI-agent-as-consumer angle expands the frame¶
CL pushed on HW's second observation harder. AI agents as analytical consumers don't have the tabular bias humans do. A human analyst wants a table because their cognition is built for spatial-grid reasoning over comparisons. An AI agent can consume any structured representation that's well-formed — graphs, trees, JSON, embeddings, free text with structured citations.
This means the AI-agent consumer audience may open up a class of analytical queries that don't have natural tabular outputs:
- "Walk the customer-influence graph from these seed customers and tell me which paths matter."
- "Cluster these documents semantically and tell me which clusters are growing."
- "Trace the causal-attribution path from this anomaly back through the system."
- "Find the subset of features that, together, explain the outcome variance."
These aren't tabular queries. The natural answer to each isn't a frame; it's a structured answer of the right shape for the question — a path, a clustering, a graph, a feature-set with weights.
This sharpened the family-of-frameworks idea: the family generalizes not just across input data shapes (which is interesting but somewhat incremental) but across output structures — which is where the real diversity of analytical questions lives. The AI-agent consumer is what makes this growth direction worth taking seriously, because AI agents can consume non-tabular outputs that humans typically can't.
The architectural picture refined:
- Triad (the family). Universal grammar layer for structured-data analytical observation. Output-shape-agnostic at the foundation.
- Coframe (member). The Triad instance for tabular-output analytical queries. Input-shape-flexible (any backend exposing column-specs); output-shape-fixed (frames). Serves the human-analyst audience and the tabular-trained-AI-agent audience. Long-lived because output-tabularity is durable.
- Future members. Triad instances for other output shapes: graph-output, tree-output, set-output, distribution-output. Each shares the Triad foundation but specializes its output structure and the operators relevant to that shape.
CL's three sharpening questions to HW¶
To make sure the picture matched HW's actual intuitions, CL asked three questions:
- Does "tabular-output member" feel like the right characterization of Coframe's place in the family?
- Are there concrete future members already being thought about, or is the family idea purely architectural?
- How load-bearing is the AI-agent-consumer angle in HW's thinking?
HW's answers — track separation¶
HW's reply was sharp and disciplined:
- yes
- You understand the vision of the bigger architecture correctly; however, staying on the coframe scope is important, and let's build coframe the way that is best for the coframe scope.
- the vision for building a grounded framework for AI-agent as consumer is what I think is my unique contribution — and I should really focus on that. However, remember #2, so I consider this a separate track of work
This was a discipline statement: don't let the bigger vision distort the smaller deliverable.
The two-track architecture that crystallized¶
The conversation arrived at a clean two-track structure:
Track 1 — Coframe. An analytical data platform with rigorous structural verification, useful for AI agents among other consumers. Tabular-output queries over input-flexible backends. The verification regime, the family vocabulary, MTI as theorem, the dubious-query mechanism, the four-rule filter. Ships as Coframe Core (open-source) and Coframe Pro (commercial). v1.0 deliverable on the timeframe specified in the platform design document (~24-30 weeks).
Track 2 — A grounded framework for AI-agent-as-consumer. A separate intellectual project that treats the AI agent as the analytical consumer of first concern and asks: what does an agent need from analytical infrastructure to ground its reasoning structurally? What are the failure modes of current AI-agent-analytics integrations? What architectural commitments would address them? The (entity, family, operator) triple is part of this, but so are: provenance, refusal-as-first-class, verification-of-claims, structured-disambiguation-protocols, calibrated-confidence-on-results.
Track 2 doesn't have a v1.0 deliverable in the same sense; it's foundational thinking. It would presumably encompass Coframe-style analytical querying plus whatever else agents need (graph traversal, semantic clustering, causal attribution) that doesn't fit Coframe's frame.
Track 1's success doesn't depend on Track 2's progress, and vice versa. Coframe Core ships, gets adoption, builds a community, generates revenue. Track 2 evolves through writing, prototypes, and engagement with AI-agent infrastructure projects, on intellectual rather than engineering timelines. If Track 2 eventually crystallizes into a broader framework with multiple members, Coframe is positioned as the relational/tabular member. If Track 2 takes longer or evolves differently, Coframe remains valuable on its own terms.
What this clarifies about HW's unique contribution¶
CL's reflection: "the vision for building a grounded framework for AI-agent-as-consumer is what I think is my unique contribution" — this self-understanding is sharper than most founders manage. It identifies what's specifically HW's in a space crowded with analytical-tooling vendors and AI-agent-framework vendors:
- The analytical-tooling vendors (Cube, Looker, MetricFlow, etc.) think about data products for human analysts. AI-agent support is added on top of human-built infrastructure.
- The AI-agent-framework vendors (LangChain, MCP implementations, agent orchestration tools) think about agents as task-executors. Analytical correctness is a downstream concern, often unaddressed.
- The gap between them — analytical infrastructure designed agent-first, with structural rigor as the foundation — is what HW is positioned to articulate. This is the Track 2 contribution.
Coframe is the proof-point for that vision in one specific scope (tabular-output analytical queries). Without the proof-point, the vision is just a position paper. With the proof-point, the vision has credibility and a concrete instance it can point to.
What this settles about naming¶
The naming question simplifies dramatically given track separation:
- Coframe stays. With the family-of-frameworks idea held privately and Coframe scoped as the focused tabular-output deliverable, the earlier critiques (geometric-coframe collision, private etymology) are significantly weaker. A specific-member name doesn't need to carry universal weight. The Column-Operation-Frame etymology is exactly right for Coframe's actual scope.
- Coframe Core / Coframe Pro. Editions naming settled.
- The lowercase-jr-with-hyphen still goes. Independent of family-name questions.
- Domain pursued for Coframe specifically, not for the family. Acquire coframe.io if available at reasonable cost; otherwise pick a clean alternative and stop optimizing.
- The family name (Triad or otherwise) deferred. It becomes load-bearing only when a second family member is concretely planned. Holding it open costs nothing.
The brainstorming was useful for sharpening why Coframe is the right name given the right scoping, but the conclusion ended up close to the starting point with much more clarity about the reasoning.
Concrete next steps that emerged¶
For Coframe (Track 1):
- Lock the editions naming at Coframe Core / Coframe Pro. Update the manual, platform design, and article accordingly.
- Pursue domain acquisition of coframe.io. Use a broker and an anonymized inquiry. Fall back to coframe.dev or .is if needed.
- Continue executing the platform-design v0.6 phasing; the substantive technical decisions are settled, so ship.
- Define verification levels (Bronze/Silver/Gold or A/AA/AAA) as Coframe-specific for v1.0. Reframe as family-level later if and when the family emerges.
For Track 2 (AI-agent-as-consumer):
- Start a separate writing space — design notes, position papers, sketches. Not for publication; for clarity.
- Write a foundational position paper that articulates what AI agents need from analytical infrastructure to be structurally grounded consumers rather than stochastic guess-makers. The (entity, family, operator) triple is part of this; so are provenance, refusal-as-first-class, calibrated-confidence-on-results.
- Treat Coframe as one case study in the position paper, not as the answer. Coframe demonstrates what tabular-output agent-grounded analytics looks like; the broader framework specifies principles any agent-grounded analytical infrastructure should honor.
- Engage with AI-agent infrastructure projects (Anthropic's MCP, OpenAI's function-calling, agent frameworks) to test principles against real systems.
Part III — The vision distilled¶
Several threads come together in this conversation that are worth naming explicitly because they characterize what HW is building:
1. Three primitives sufficient for any structural-verification regime¶
The (entity, family, operator) triple is HW's proposed foundation for analytical observation over any structured data. Entity manages the key space (what's observed). Family manages the value space (what's recorded about entities). Operator manages the operational linkage (transformations between observations). The claim that these three primitives are necessary and sufficient — neither fewer nor more — is the kind of claim that, if rigorously developed, functions as a reference standard the way relational algebra does for relational databases.
2. Output-tabularity as the durable bet¶
The framework's commitment isn't to rectangular data; it's to rectangular answers. Human cognition (and most current AI cognition) prefers tabular results regardless of underlying data shape. Coframe's design honors this: input-shape-flexible, output-shape-fixed. Betting on tabular output is more durable than betting on tabular storage: storage formats are fashion-bound, while tabular cognition is foundational.
3. AI-agent-as-consumer as the unique contribution¶
The category gap HW identified is real: analytical-tooling vendors think human-first; AI-agent vendors think task-execution-first; nobody in either camp is articulating analytical infrastructure designed agent-first with structural rigor as the foundation. This is HW's unique positioning, distinct from being-yet-another-semantic-layer-product or being-yet-another-agent-framework.
4. The discipline of separating tracks¶
The most important architectural decision in this conversation isn't a naming decision — it's the decision to separate Track 1 (Coframe shipping) from Track 2 (the broader vision). Many projects fail because founders build scaffolding for the bigger thing and the smaller thing never ships. HW explicitly rejected that pattern: stay on the Coframe scope when working on Coframe; develop the broader vision separately, on its own intellectual timeline.
This discipline creates a virtuous structure:
- Coframe ships and proves that the rigor foundation applied to tabular output works.
- The broader vision develops in parallel, informed by Coframe's lessons but not gated on them.
- Coframe's success makes the broader vision more credible without requiring it.
- The broader vision, when it crystallizes, has Coframe as a concrete proof-point.
5. Standards-positioning as second-order outcome¶
The conversation surfaced a recurring tension: how much to position for standards-influence vs. product-success. The conclusion: don't try to position. The deepest technical work, articulated rigorously, becomes the standards-influence over time, regardless of explicit positioning. Make Coframe's verification regime the most precisely articulated, the most reproducibly verifiable, the most cleanly tiered version that exists, and the standards conversation tends to come to the project rather than the other way around.
6. Naming follows from clarity, not the other way¶
The name-brainstorm consumed substantial effort but ended close to the starting point. What changed wasn't the name; it was the clarity about what's being named. Coframe with private etymology and a geometric collision is fine when it's a focused tabular-output member; the same name is friction-laden when it's positioned as universal-framework. The naming is the same; the framing is different. Most "naming problems" are actually framing problems wearing naming clothes.
Closing reflection¶
What emerged from this conversation is a clearer picture of three things, in increasing order of strategic importance:
- Coframe's name and editions — Coframe / Coframe Core / Coframe Pro. Settled.
- Coframe's architectural place — the tabular-output member of a broader framework family, with the family held privately for now. Settled.
- HW's unique contribution — articulating principles for analytical infrastructure designed AI-agent-first, with structural rigor as the foundation. The distinct intellectual project worth pursuing on its own track.
The discipline that makes all three work: ship Coframe excellently within its scope, hold the broader vision privately while keeping the architecture clean enough to support it later, and develop the AI-agent-grounded-framework as a separate intellectual project that will be ready when its time comes.
The naming question, viewed from this distance, was the entry-point into a much larger clarification about what's being built and why.
Captured at user request. This is a vision-oriented condensation of a longer naming-and-architecture discussion; a verbatim version exists separately for reference.