From Documentation to Executable Context: The Encoding Process
You’ve run a knowledge audit. You have a decision map and a prioritized encoding backlog. Now comes the actual work: turning documentation and tribal knowledge into artifacts that AI agents can read and execute.
Business-as-Code moves from concept to practice here. The encoding process has a clear workflow, and it starts with what you already have: your existing documentation.
The Encoding Workflow
The process follows five stages. Each stage has a clear input, a clear output, and a clear owner.
Stage 1: Gather Source Material
Pull together everything that describes the process you’re encoding:
- Wiki pages and knowledge base articles
- Standard operating procedures (SOPs)
- Training materials and onboarding docs
- Email threads where process questions were answered
- Slack/Teams messages where exceptions were discussed
- Interview notes from your knowledge audit
You’re not looking for perfection. You’re looking for raw material. A rambling wiki page with outdated sections and three conflicting updates is still useful: it tells you what entities exist and how they’ve evolved.
Stage 2: Extract Entities
Read through the source material and identify the nouns: the things your process operates on. These become your schemas.
Take a real example. Here’s a paragraph from a typical customer onboarding wiki page:
“When a new enterprise customer signs, the success manager creates their workspace, configures SSO if they require it, sets up their billing profile based on their contract terms, assigns them to a support tier (Standard, Premium, or Dedicated), and schedules a kickoff call within 5 business days.”
The entities hiding in that paragraph:
- Customer (with attributes: type, SSO requirement, contract terms)
- Workspace (created per customer)
- Billing Profile (tied to contract terms)
- Support Tier (Standard, Premium, Dedicated)
- Kickoff Call (with a timing constraint: 5 business days)
Each of these becomes a schema candidate. You’re not designing a database; you’re defining the vocabulary of your business in a format that’s precise enough for an AI agent to work with.
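Before committing to formal schemas, it can help to sketch the extracted entities as lightweight types. A hypothetical Python sketch (field names are illustrative, not a finalized vocabulary):

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative sketch of the entities extracted from the wiki paragraph.
# Field names are assumptions, not a finalized schema.

class SupportTier(Enum):
    STANDARD = "standard"
    PREMIUM = "premium"
    DEDICATED = "dedicated"

@dataclass
class Customer:
    name: str
    customer_type: str        # e.g. "enterprise"
    requires_sso: bool
    contract_terms: str

@dataclass
class KickoffCall:
    customer: Customer
    deadline_business_days: int = 5  # timing constraint from the wiki page
```

Writing the sketch forces the ambiguities to the surface early: is "enterprise" a customer type or a contract attribute? That question is cheaper to answer now than after the schema ships.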
Stage 3: Build Schemas
Schemas define your business entities using JSON Schema. They specify what fields exist, what values are valid, and how entities relate to each other.
Here’s what the Support Tier entity looks like as a schema:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "SupportTier",
  "description": "Customer support tier assignment based on contract terms and ARR",
  "type": "object",
  "properties": {
    "tier": {
      "type": "string",
      "enum": ["standard", "premium", "dedicated"],
      "description": "Support tier level"
    },
    "responseTimeSla": {
      "type": "string",
      "description": "Maximum initial response time (e.g., '4 hours', '1 hour', '15 minutes')"
    },
    "dedicatedManager": {
      "type": "boolean",
      "description": "Whether the customer gets a named support manager"
    }
  },
  "required": ["tier", "responseTimeSla"]
}
```
The schema doesn’t contain business logic: it defines the structure. An AI agent reading this schema knows exactly what a Support Tier is, what values it can have, and what fields matter. No ambiguity, no interpretation needed.
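To show what "no interpretation needed" means in practice, here's a minimal sketch of checking an object against the schema's required fields and tier enum. This is pure standard-library Python for illustration only; a production system would use a full JSON Schema validator such as the `jsonschema` package:

```python
# Minimal illustration: check an object against the SupportTier schema's
# "required" list and "tier" enum. Not a full JSON Schema implementation.

REQUIRED = ["tier", "responseTimeSla"]
TIER_ENUM = ["standard", "premium", "dedicated"]

def check_support_tier(obj: dict) -> list[str]:
    """Return validation errors; an empty list means the object conforms."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in obj]
    if "tier" in obj and obj["tier"] not in TIER_ENUM:
        errors.append(f"tier must be one of {TIER_ENUM}")
    return errors

print(check_support_tier({"tier": "premium", "responseTimeSla": "1 hour"}))  # []
print(check_support_tier({"tier": "platinum"}))
```

The point isn't the validator; it's that the schema makes validation mechanical. Any agent (or any script) can determine conformance without asking a human what a Support Tier is.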
At NimbleBrain, we host schemas at schemas.nimblebrain.ai and use them across our own agent systems. The same approach works for any organization: define once, reference everywhere.
Stage 4: Write Skills
Skills encode the decision logic: the rules, the judgment calls, the “it depends” answers from your knowledge audit. Skills-as-Documents means these are written as structured markdown, not code.
Here’s the same onboarding process encoded as a skill:
```markdown
# Skill: Customer Onboarding, Support Tier Assignment

## Purpose
Assign the correct support tier to a new customer based on their
contract terms and annual recurring revenue.

## Inputs
- Customer contract (from CRM)
- Annual recurring revenue (ARR)
- Any special terms flagged during sales

## Decision Rules
1. ARR >= $500K → Dedicated tier
2. ARR >= $100K OR contract includes "premium support" addendum → Premium tier
3. All others → Standard tier
4. EXCEPTION: If the customer was referred by an existing Dedicated-tier
   customer, upgrade one tier regardless of ARR

## Outputs
- Assigned support tier
- Response time SLA
- Whether a dedicated manager is assigned

## Validation
Check with the customer success lead if:
- The customer's ARR is within 10% of a tier boundary
- The sales notes mention any verbal commitments about support level
- The customer is in a regulated industry (may need Dedicated regardless of ARR)
```
Notice what this skill captures that the wiki page didn’t: the referral exception (tribal knowledge from the audit), the boundary-case validation step, and the regulated-industry override. These are the rules Sarah carries in her head. Now any agent can follow them.
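Because the skill states its rules in order, with an explicit exception, it translates almost mechanically into code. A hypothetical sketch (function and parameter names are illustrative):

```python
def assign_support_tier(arr: float,
                        has_premium_addendum: bool,
                        referred_by_dedicated: bool) -> str:
    """Apply the skill's decision rules in order and return the tier name."""
    if arr >= 500_000:
        tier = "dedicated"
    elif arr >= 100_000 or has_premium_addendum:
        tier = "premium"
    else:
        tier = "standard"
    # Rule 4 exception: a referral from a Dedicated-tier customer
    # upgrades the result by one tier, regardless of ARR.
    if referred_by_dedicated and tier != "dedicated":
        tier = {"standard": "premium", "premium": "dedicated"}[tier]
    return tier
```

The Validation section deliberately stays out of the function: boundary cases and verbal commitments are escalation triggers, not computable rules, and the skill keeps that distinction explicit.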
Stage 5: Validate with the Domain Expert
This is the step most encoding efforts skip, and it’s the step that determines whether the artifacts actually work.
Sit down with the knowledge holder (the person whose expertise you encoded) and walk through both the schema and the skill. Ask three questions:
1. “Does this schema capture everything an agent would need to know about this entity?” They’ll catch missing fields, wrong enumerations, or relationships you overlooked.
2. “If an agent followed this skill exactly, would it make the right decision in every case you can think of?” They’ll surface edge cases and exceptions that didn’t come up in the interview.
3. “What’s the most common mistake a new hire makes with this process?” Whatever they describe, check that your skill prevents it.
Validation typically takes 30-60 minutes per schema/skill pair. It’s the highest-value time in the entire encoding process.
Before and After: Customer Onboarding
To make this concrete, here’s the full transformation for the customer onboarding example.
Before: The Wiki Page
A 1,200-word document last updated 14 months ago. It describes the “standard” onboarding flow but doesn’t cover enterprise variations. Three paragraphs are crossed out with notes saying “see Sarah for current process.” New hires read it on day one and then spend two months learning how things actually work.
An AI agent reading this page would follow the standard flow for every customer. Enterprise customers would get the self-serve setup. High-value accounts would get Standard support. The referral exception would never trigger.
After: The Encoded Artifacts
- customer.schema.json: Defines the Customer entity with type, contract terms, ARR, referral source, regulatory status
- workspace.schema.json: Defines workspace configuration including SSO requirements
- billing-profile.schema.json: Defines billing structures tied to contract terms
- support-tier.schema.json: Defines the three tiers with SLAs and attributes
- onboarding-tier-assignment.skill.md: Decision logic for tier assignment including exceptions
- onboarding-workspace-setup.skill.md: Step-by-step workspace provisioning based on customer type
- onboarding-kickoff-scheduling.skill.md: Scheduling rules including timezone and stakeholder considerations
An AI agent with these artifacts handles the enterprise customer correctly on the first interaction. It assigns the right tier, configures SSO, schedules the kickoff within the SLA, and flags the referral exception for upgrade consideration. It does what Sarah does, consistently, at 3 AM, on weekends, across every customer.
The Encoding Gets Easier
The first process you encode takes the longest. You’re learning the workflow, building your first schemas, writing your first skills. Budget a full week for one process area.
By the third or fourth process, you’ll notice something: the same entities keep appearing. The Customer schema you built for onboarding shows up in pricing, in support routing, in renewals. The schemas compose. The skills reference each other. The Recursive Loop kicks in. Each encoding effort makes the next one faster because the foundation is already in place.
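Composition works through standard JSON Schema references. A hypothetical fragment of customer.schema.json reusing the Support Tier schema (the `$ref` path is illustrative; it could equally point at a hosted URL):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "title": "Customer",
  "type": "object",
  "properties": {
    "type": { "type": "string", "enum": ["self-serve", "enterprise"] },
    "arr": { "type": "number", "description": "Annual recurring revenue in USD" },
    "supportTier": { "$ref": "support-tier.schema.json" }
  },
  "required": ["type", "arr"]
}
```

Because the Support Tier definition lives in one file, a change to the tier enum propagates to every schema that references it.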
This is Context Engineering in practice. You’re not just documenting for AI; you’re building a structured representation of how your business operates. Each schema, each skill, each validated artifact adds to a context layer that any agent can read. The documentation that used to be a static wiki page becomes a living, executable system.
Start with one process. Extract the entities. Build the schemas. Write the skills. Validate with the expert. Then do it again. By the time you’ve encoded three or four process areas, you’ll have a context layer that makes every AI deployment in your organization fundamentally more capable.
Frequently Asked Questions
Do I need to rewrite all my documentation?
No. You're extracting from existing docs, not replacing them. The wiki page stays as-is for human reference. You're pulling out the structured entities and decision rules and encoding them as separate artifacts (schemas and skills) that AI agents can read. Most organizations find that 70% of what they need is already written down somewhere, just not in a format agents can use.
What if our documentation is outdated or contradictory?
That's actually a feature of this process, not a bug. The encoding workflow forces you to reconcile what the docs say with what people actually do. When you sit down with a domain expert and walk through the schema, every gap and contradiction surfaces. You end up with artifacts that reflect reality, not aspirational process docs.
How technical do the domain experts need to be?
Not at all. The Skills-as-Documents approach uses structured markdown, not code. Domain experts describe what they know in plain language within a defined structure. They don't write JSON Schema directly: an engineer or the encoding facilitator handles that translation. The expert's job is to validate that the schema captures the right entities and the skill captures the right logic.