
The right answer has to be woven

2,590 words · Filed in: context engineering, AI, content architecture, information architecture, software engineering

Woodcut-style print of dark ink threads running horizontally in parallel, with a small section woven into a rough grid at the centre and sparse amber accents showing through the gaps. Image made with FLUX.2-dev.
Loose threads running in parallel, partly woven where they cross.

Several traditions solved the shared-understanding problem in parallel. LLMs are dissolving the boundary that kept them apart.

At a previous organisation, we syndicated content from Drupal across multiple downstream platforms through API feeds. Over the years, a recurring problem: a developer misspells a field name in the CMS, say summray instead of summary. A downstream JavaScript platform consumes that API and builds around the typo. Months later, someone fixes the typo in Drupal. Now the downstream platform breaks, because it was built against the misspelled field. So we add a translation layer to preserve the old typo alongside the fix. This happens again. And again. Each time, the downstream system absorbs more of the upstream system's historical mistakes, and the mapping between them gets more fragile.

The team solved this problem for years with what I variously called a "translation protocol" or a "data dictionary" — ad hoc scripts and field-mapping spreadsheets, reinvented slightly differently each time. It wasn't until I started reading Eric Evans' Domain-Driven Design that I discovered the pattern had a name: the anticorruption layer. It's a deliberate translation boundary between two systems that prevents one system's model from leaking into another's. Instead of each downstream consumer adapting to whatever the upstream happens to expose (typos and all), you build an explicit layer that translates between the external model and your clean internal model. The upstream can change; your internal model stays coherent.
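A minimal sketch of what that layer might look like, in Python. The field names (`summray`, `headline`) come from the story above or are illustrative; the point is that the mapping lives in one explicit, maintained place rather than being scattered through downstream consumers.

```python
# Anticorruption layer: translate the upstream CMS payload (typos and all)
# into a clean internal model, so downstream code never sees "summray".
UPSTREAM_TO_INTERNAL = {
    "summray": "summary",   # historical typo, still present in old records
    "summary": "summary",   # corrected spelling, present in new records
    "headline": "title",    # illustrative rename between the two models
}

def to_internal(upstream_item: dict) -> dict:
    """Map an upstream API item onto the internal model.

    Unknown upstream fields are dropped rather than leaked downstream.
    """
    internal = {}
    for upstream_key, value in upstream_item.items():
        internal_key = UPSTREAM_TO_INTERNAL.get(upstream_key)
        if internal_key is not None:
            internal[internal_key] = value
    return internal

# Old and new records produce the same internal shape:
old_record = {"summray": "Old teaser", "headline": "A story"}
new_record = {"summary": "New teaser", "headline": "Another story"}
assert to_internal(old_record) == {"summary": "Old teaser", "title": "A story"}
assert to_internal(new_record) == {"summary": "New teaser", "title": "Another story"}
```

When the upstream typo is eventually fixed, only the mapping table changes; everything built against the internal model keeps working.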

Knowing the pattern wouldn't have just saved time; it would have changed how I structured the integration from the start. An anticorruption layer is a design decision you make at the boundary, not a patch you apply after the damage compounds. I'd been solving a named problem with duct tape because the formalised solution lived in a tradition I'd never read.

The loose threads don't need to stay loose.

tl;dr

  • Several traditions — among them software engineering (DDD), content strategy, and AI context engineering — each solved the problem of shared understanding across system boundaries
  • They developed in parallel partly because they operated on different substrates: human-readable meaning vs. machine-executable contracts
  • LLMs are dissolving that boundary; natural language is now a functional input to software
  • Specific patterns now transfer concretely, and the cross-pollination has barely started

Several traditions, one problem

I wrote recently about how context was always the job. The more I've sat with that idea, the more I've noticed the same boundary-crossing problems showing up across fields, with solutions developed in parallel.

Software engineering. Evans published DDD in 2003, arguing that the hard problem of software is shared understanding of the domain, not the code. His prescription: build a "ubiquitous language" that domain experts and developers both use, draw explicit boundaries ("bounded contexts") around different models, and build translation layers where they meet. The primary artifact is the shared model, not the running system.

Content strategy. Daniel Jacobson at NPR built COPE (Create Once, Publish Everywhere) around content modelled as structured data with standardised interfaces. Karen McGrane's Content Strategy for Mobile sharpened the point: content trapped in a specific visual format can't travel across channels. You have to break it into meaningful chunks named by what they are, not how they look. "News item" survives a redesign; "featured card with sidebar layout" doesn't.

AI context engineering. An LLM has general knowledge of the universe but not your project's constraints, your team's vocabulary, or why the module is shaped the way it is. The bottleneck isn't the model; it's the context you provide. Practitioners are building agent instruction files (CLAUDE.md, AGENTS.md, .cursorrules), project briefs, scoped rules, and structured documentation for AI consumption. The discipline is new. The problem it's solving is not.

These are the traditions I can see from where I sit. My career moved through journalism, content strategy, information architecture, design systems, and software engineering, and I now spend most of my time on AI integration. That list is shaped by my path, not by any natural taxonomy. Knowledge management, library science, and information science all have deep roots here too. And these traditions aren't fully independent: content strategy grew out of information architecture, AI context engineering is practised almost entirely by software engineers, DDD drew on earlier modelling traditions. They share intellectual ancestry even where they developed solutions separately.

At a high enough altitude, "decompose, name, and build explicit translation at boundaries" describes most of good engineering. The interesting question isn't whether the general principle holds. It's whether these traditions have produced specific patterns worth transferring, and whether there's a reason that transfer is newly practical.

Why the boundary is dissolving

Before LLMs, these traditions operated on fundamentally different substrates. Content strategy dealt with human-readable meaning: editorial voice, reader comprehension, how a headline works in a news feed. Software engineering dealt with machine-executable contracts: type systems, APIs, interface boundaries. The abstract patterns rhymed, but the gap between "write so a human editor understands the content model" and "write so a compiler enforces the interface contract" was wide enough that cross-pollination rarely paid off in practice. You could admire the parallels. You couldn't easily import the other tradition's tools.

LLMs are collapsing that gap. An agent instruction file like CLAUDE.md is simultaneously human documentation and machine instruction. An editorial style guide that used to exist purely for human writers now directly shapes how an AI agent generates content. A content schema that used to govern what a CMS would accept now structures what context an agent receives. Natural language has become a functional input to software systems, and that changes which traditions have something to offer each other.

I don't want to overstate the collapse. These instruction files are natural language, but a very specific kind: structured, imperative, rule-based. Closer to a config file written in English than to a conversation. Content strategy's insights about editorial governance transfer well to this new substrate; its insights about reader comprehension transfer less cleanly, because the "reader" (an LLM) has very different failure modes than a human. The boundary is thinning, not gone.

But even a partial collapse changes what's practical. And AI has changed the cost of iteration enough to make context a designable variable. Before AI, context failures were slow and expensive to learn from: a misaligned contractor takes weeks to deliver the wrong thing; a content schema mismatch causes a slow bleed of broken syndication. You're not going to tear down a two-week pull request and rebuild it with a different briefing just to test whether the briefing was the problem.

With an LLM, you do exactly that. You give the agent one framing, watch it fail, adjust the context, and try again — and the cost of that cycle is minutes, not days. Because the iteration is cheap, you start to notice which context changes produce better outputs, which is how you discover that the problem was structural (missing bounded context, ambiguous vocabulary, no schema) rather than a one-off misunderstanding. Content strategists have always wanted to test whether restructuring content improves outcomes, but the feedback loop was too slow. Now it isn't.

Patterns that now transfer

The anticorruption layer is one such transfer, from DDD to content architecture, and an illustration of what institutional silos cost: practitioners solve named problems with duct tape because the formalised solution lives in a tradition they've never read. Many IA practitioners have read Evans; many DDD practitioners use content modelling techniques without calling them that. But the conferences, journals, and canonical texts don't overlap. A small but growing group is connecting DDD to AI agent design, but it hasn't reached the mainstream AI tooling ecosystem, and almost none of these voices are looking further afield to content strategy.

There's a harder version of this objection: maybe content architects already build translation layers and just call them "field mapping" or "API adapters." If the vocabulary difference is the main barrier, that's a weaker claim than if the structural thinking is what's missing. The anticorruption layer example shows it's not just vocabulary: knowing the named pattern would have changed when and how I built the translation, not just what I called it. But not every parallel will cash out that concretely.

Here's a transfer running the other direction: from content strategy to AI context engineering.

Content strategists learned years ago that you define content types with required and optional fields, relationships to other types, and governance rules about who can create and edit them. This is the content schema: the structural backbone of any CMS at scale. A news article has a required headline, required body, optional teaser, required topic taxonomy, and a relationship to an author profile. These aren't suggestions; they're enforced by the CMS.
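As a sketch (the field names are illustrative, not any particular CMS's API), that kind of schema enforcement looks like:

```python
# A toy content schema for a "news article" type: required vs. optional
# fields plus a typed relationship, validated before the CMS accepts content.
NEWS_ARTICLE_SCHEMA = {
    "required": {"headline", "body", "topics"},
    "optional": {"teaser"},
    "relationships": {"author": "author_profile"},
}

def validate(item: dict, schema: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the item is valid."""
    errors = []
    for field in schema["required"]:
        if field not in item:
            errors.append(f"missing required field: {field}")
    allowed = schema["required"] | schema["optional"] | set(schema["relationships"])
    for field in item:
        if field not in allowed:
            errors.append(f"unknown field: {field}")
    return errors

draft = {"headline": "Threads, woven", "body": "...", "teaser": "..."}
print(validate(draft, NEWS_ARTICLE_SCHEMA))  # flags the missing topic taxonomy
```

The schema is the contract: an author can't publish a news article without a topic, and a stray misspelled field gets rejected at the door instead of propagating downstream.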

When I started structuring context for AI agents, I recognised the same problem. An agent working on a blog post in this site's codebase needs specific context: the editorial style guide, the image path conventions (which are non-obvious; the build rewrites them), the frontmatter schema, the CSS scoping rules. Some of that context is required for every task; some is optional enrichment. The relationships between context sources matter: the style guide references the frontmatter spec, which references the image conventions.

A content strategist would immediately think in terms of a content schema: define the required context per task type, make the relationships explicit, assign governance for who maintains each source. Most AI context engineering doesn't think this way yet. Agent instruction files tend to grow organically: someone hits a problem, adds a line, moves on. There's no schema, no required-vs-optional distinction, no governance. It's fair to ask whether this is a cross-pollination gap or just a maturity issue; these conventions are roughly a year old, and this is what every structured documentation system looks like in year one. Both are probably true. But content strategists have fifteen years of lessons about how to mature these structures, and now that these instruction files are natural language doing a machine's job, those lessons land in a way they couldn't when the substrates were different.
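Treating agent context the same way yields something like a context schema. A hypothetical sketch (task names, file paths, and the "owner" governance field are all illustrative, not this site's actual setup):

```python
# A context schema for AI agent tasks, borrowing the required/optional
# distinction and governance metadata from content modelling.
CONTEXT_SCHEMA = {
    "write-blog-post": {
        "required": ["docs/style-guide.md", "docs/frontmatter-spec.md"],
        "optional": ["docs/image-conventions.md"],
        "owner": "content team",   # governance: who keeps these sources current
    },
    "edit-css": {
        "required": ["docs/css-scoping.md"],
        "optional": [],
        "owner": "engineering",
    },
}

def assemble_context(task_type: str, available: set[str]) -> list[str]:
    """Pick the context sources for a task; fail loudly if a required one is missing."""
    spec = CONTEXT_SCHEMA[task_type]
    missing = [src for src in spec["required"] if src not in available]
    if missing:
        raise ValueError(f"missing required context for {task_type}: {missing}")
    return spec["required"] + [s for s in spec["optional"] if s in available]

sources = {"docs/style-guide.md", "docs/frontmatter-spec.md"}
print(assemble_context("write-blog-post", sources))
```

The design choice mirrors the CMS lesson: a missing required source is an error surfaced before the agent runs, not a silent gap the agent papers over mid-task.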

From where I sit

I'm not arguing these traditions should merge. I'm noting that the barrier that kept them apart, the gap between human-readable meaning and machine-executable contracts, is narrower than it's ever been. Specific patterns now transfer concretely.

If you're working on context engineering for AI and you haven't read Evans on ubiquitous language, there are twenty years of lessons on formalising shared understanding that you're rediscovering from scratch. If you're doing DDD and haven't looked at how content strategists model structured content across platforms, you're missing a parallel tradition that solved the "how does this travel across boundaries" problem for editorial organisations.

The stable artifacts matter: a well-governed schema, a maintained instruction file, a shared vocabulary. But they only stay useful if someone is actively maintaining them. That's the common thread: Evans' domain models require ongoing collaboration with domain experts, McGrane's content schemas need governance to stay current, agent instruction files decay the moment the codebase moves on without them. None of these traditions own that insight. All of them arrived at it. The weaving has started, but it hasn't reached mainstream practice in any of these fields yet.

Cross-tradition reading#

  • Duranti & Goodwin, Rethinking Context: Language as an Interactive Phenomenon (1992). The academic foundation: context is produced through interaction, not inherited as background.
  • Eric Evans, Domain-Driven Design (2003). Ubiquitous language and bounded contexts as software engineering's answer to the shared understanding problem.
  • Karen McGrane, Content Strategy for Mobile (2012). Blobs vs. chunks, and why content must be structured by meaning to travel across channels.
  • Rod Johnson, Context Engineering Needs Domain Understanding (2025). The strongest bridge between DDD and AI context engineering.
  • Russ Miles, Domain Driven Agent Design (2025). What happens when agents cross bounded contexts without translation layers.
  • Paul Dourish, What We Talk About When We Talk About Context (2004). The closest existing bridge between linguistic anthropology and computing — argues context is interactional, not representational.
  • Lucy Suchman, Plans and Situated Actions (1987; 2nd ed. 2007). Suchman showed that people don't follow plans the way AI planners assumed — they improvise based on situated context. The modern agent community's discovery that rigid task decomposition fails without runtime context is the same finding.