Thirty minutes. That’s the time from a blank directory to a working AI application that understands a domain, follows defined processes, and connects to external tools. No backend code. No framework boilerplate. Just structured knowledge artifacts that an AI runtime interprets and executes.
This tutorial builds a task tracker app. Simple enough to complete in 30 minutes. Complex enough to demonstrate every core Upjack concept: manifests, entity schemas, skills, context, and MCP server connections. By the end, you’ll have a running app that creates tasks, lists them, updates status, and prioritizes based on rules you define.
Prerequisites
Three things:
- Python 3.11+ or Node.js 18+ (either works)
- mpak CLI, the package manager for MCP servers and Upjack apps
- An LLM API key: Claude (recommended) or GPT-4
No Docker. No databases. No infrastructure setup. If you can run a Python script or a Node command, you’re ready.
Install mpak if you haven’t:
pip install mpak
Step 1: Scaffold the Project (2 minutes)
Create a new Upjack app:
mpak create task-tracker
cd task-tracker
This generates a directory with the standard Upjack structure:
task-tracker/
├── manifest.json
├── schemas/
├── skills/
├── context/
└── seed/
Five directories. One config file. That’s the entire application structure. Every Upjack app follows this layout, and the runtime knows where to find each artifact type.
Step 2: Define the Manifest (3 minutes)
Open manifest.json. This is the table of contents for your app, the first file the runtime reads.
{
"name": "task-tracker",
"version": "0.1.0",
"description": "A task management app with intelligent prioritization",
"schemas": ["task"],
"skills": [
"create-task",
"list-tasks",
"update-status",
"prioritize"
],
"context": ["project-rules"],
"mcp_servers": ["filesystem"]
}
The manifest declares four things: what entities the app knows about (schemas), what processes it can follow (skills), what background knowledge it has (context), and what external tools it can use (mcp_servers). Every skill, schema, and context file referenced here must exist in the corresponding directory.
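To picture what that consistency requirement means in practice, here is a rough stdlib-only sketch of a manifest check, verifying that every referenced artifact exists on disk. This is an illustration, not Upjack's actual loader; the directory-to-extension mapping is an assumption based on the scaffold above:

```python
import json
from pathlib import Path

# Assumed mapping of manifest keys to directories and file extensions,
# based on the scaffold layout shown above (not Upjack internals).
ARTIFACT_DIRS = {
    "schemas": ("schemas", ".json"),
    "skills": ("skills", ".md"),
    "context": ("context", ".md"),
}

def check_manifest(app_dir: str) -> list[str]:
    """Return paths referenced in manifest.json that are missing on disk."""
    root = Path(app_dir)
    manifest = json.loads((root / "manifest.json").read_text())
    missing = []
    for key, (subdir, ext) in ARTIFACT_DIRS.items():
        for name in manifest.get(key, []):
            path = root / subdir / f"{name}{ext}"
            if not path.is_file():
                missing.append(str(path))
    return missing
```

Run against the finished task-tracker directory, a check like this should come back empty; any name listed in the manifest without a matching file is a load-time error.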
Step 3: Create the Entity Schema (5 minutes)
Create schemas/task.json. This defines what a task IS, not in a paragraph, but as a data structure:
{
"$schema": "http://json-schema.org/draft-07/schema#",
"title": "Task",
"type": "object",
"required": ["title", "status", "priority"],
"properties": {
"id": {
"type": "string",
"description": "Unique task identifier"
},
"title": {
"type": "string",
"description": "Short description of the task"
},
"status": {
"type": "string",
"enum": ["backlog", "in-progress", "review", "done"],
"default": "backlog"
},
"priority": {
"type": "string",
"enum": ["critical", "high", "medium", "low"],
"default": "medium"
},
"due_date": {
"type": "string",
"format": "date",
"description": "Target completion date"
},
"assignee": {
"type": "string",
"description": "Person responsible"
},
"tags": {
"type": "array",
"items": { "type": "string" }
}
}
}
This is Business-as-Code in its simplest form. The schema tells the AI agent: a task has a title, a status with four valid states, a priority with four levels, an optional due date, an optional assignee, and optional tags. The agent can’t hallucinate an “urgency” field or invent a “pending” status. The schema constrains behavior to match reality.
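To make that constraint concrete, here is a minimal hand-rolled sketch of the enum and required-field rules from schemas/task.json. The real runtime would presumably run a full JSON Schema validator; this just shows what the schema rules out:

```python
# The required fields and enums from schemas/task.json, hand-rolled
# for illustration (a real validator would enforce the full schema).
REQUIRED = {"title", "status", "priority"}
ENUMS = {
    "status": {"backlog", "in-progress", "review", "done"},
    "priority": {"critical", "high", "medium", "low"},
}

def validate_task(task: dict) -> list[str]:
    """Return a list of schema violations (empty means valid)."""
    errors = [f"missing required field: {f}" for f in REQUIRED - task.keys()]
    for field, allowed in ENUMS.items():
        if field in task and task[field] not in allowed:
            errors.append(f"{field} must be one of {sorted(allowed)}")
    return errors
```

A task with status "pending" fails the enum check, and a task missing a title fails the required check, which is exactly the guardrail the schema gives the agent.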
Step 4: Write the Skills (10 minutes)
Skills are where Upjack gets interesting. Each skill is a markdown file that describes a process in natural language, Skills-as-Documents in practice. The agent reads these and follows them.
skills/create-task.md
# Create Task
## Trigger
User wants to add a new task to the tracker.
## Steps
1. Ask for the task title. This is required.
2. Ask for priority (critical, high, medium, low). Default to medium if not specified.
3. Ask for due date if the user mentions a deadline. Leave blank otherwise.
4. Ask for assignee if the user mentions a person. Leave blank otherwise.
5. Generate a unique ID using the format `TASK-{number}`.
6. Set status to "backlog" for all new tasks.
7. Save the task using the filesystem tool.
## Output
Confirm the task was created. Show the task ID, title, priority, and status.
skills/list-tasks.md
# List Tasks
## Trigger
User wants to see tasks: all tasks, filtered by status, or filtered by priority.
## Steps
1. Read all tasks from storage.
2. If the user specified a filter (status, priority, assignee), apply it.
3. If no filter is specified, show all non-done tasks by default.
4. Sort by priority (critical first), then by due date (soonest first).
## Output
Display tasks in a clean list: ID, title, status, priority, due date.
If no tasks match, say so plainly.
skills/update-status.md
# Update Task Status
## Trigger
User wants to change a task's status.
## Steps
1. Identify the task by ID or title.
2. Validate the new status is one of: backlog, in-progress, review, done.
3. Update the task record.
4. If moving to "done", note the completion date.
## Rules
- Tasks can move forward (backlog -> in-progress -> review -> done) or backward.
- Moving a critical task from in-progress back to backlog should trigger a confirmation.
skills/prioritize.md
# Prioritize Tasks
## Trigger
User asks to reprioritize, or asks what to work on next.
## Steps
1. Load all non-done tasks.
2. Apply prioritization rules from context.
3. Consider: overdue tasks get bumped to critical. Tasks due within 2 days get bumped one level. Tasks with no due date sort last within their priority level.
4. Present the recommended work order with reasoning.
## Output
Numbered list of tasks in recommended order, with a one-line reason for each position.
Notice: every skill is plain English. No code. No special syntax beyond markdown headers. A project manager, a team lead, or an operations director can read these skills, understand exactly what the AI will do, and edit them. The “prioritize” skill applies business logic (overdue tasks escalate, approaching deadlines bump priority) without a single line of Python.
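The date rules in the prioritize skill translate directly into logic. The agent applies them by reading the markdown, not by running code, but the effect is equivalent to a sketch like this (an illustration, not Upjack internals):

```python
from datetime import date, timedelta

LEVELS = ["critical", "high", "medium", "low"]  # highest urgency first

def effective_priority(task: dict, today: date) -> str:
    """Apply the skill's rules: overdue -> critical; due within 2 days -> bump one level."""
    priority = task["priority"]
    due = task.get("due_date")
    if not due:
        return priority
    due = date.fromisoformat(due)
    if due < today:
        return "critical"               # overdue tasks escalate to critical
    if due - today <= timedelta(days=2):
        idx = LEVELS.index(priority)
        return LEVELS[max(idx - 1, 0)]  # bump one level toward critical
    return priority

def work_order(tasks: list[dict], today: date) -> list[dict]:
    """Sort open tasks: effective priority first, then soonest due date, dateless last."""
    def key(t):
        due = t.get("due_date")
        return (LEVELS.index(effective_priority(t, today)),
                due is None,            # tasks with no due date sort last
                due or "")
    return sorted((t for t in tasks if t["status"] != "done"), key=key)
```

An overdue low-priority task lands at the top of the list, and a medium task due tomorrow outranks a dateless high-priority one, exactly the ordering the skill describes in plain English.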
Step 5: Add Context (3 minutes)
Create context/project-rules.md. This gives the agent background knowledge it needs to make good decisions within the skills:
# Project Rules
## Prioritization
- Client-facing tasks always outrank internal tasks at the same priority level.
- If two tasks have the same priority and due date, prefer the one with fewer dependencies.
- Never have more than 3 critical tasks active simultaneously. If a 4th would be marked critical, flag it for human review.
## Status Conventions
- "backlog" means planned but not started.
- "in-progress" means someone is actively working on it today.
- "review" means the work is done and needs verification.
- "done" means verified and complete.
## Team Norms
- Tasks without a due date are considered low urgency unless marked critical.
- Reassigning a task requires a reason.
Context is the judgment layer. Skills define processes. Context defines the rules and norms that govern how those processes apply in this specific environment. A different team with different norms would have different context files, same skills, different judgment.
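One of those judgment rules, the cap of 3 active critical tasks, is easy to picture as a guard. A hypothetical sketch of the check the agent would perform before marking a fourth task critical (the definition of “active” as any non-done task is an assumption; the context file leaves that to the agent’s judgment):

```python
MAX_ACTIVE_CRITICAL = 3  # from context/project-rules.md

def can_mark_critical(tasks: list[dict]) -> bool:
    """True if marking one more task critical stays within the team's cap.

    'Active' is assumed here to mean any non-done task.
    """
    active_critical = sum(
        1 for t in tasks
        if t["priority"] == "critical" and t["status"] != "done"
    )
    return active_critical < MAX_ACTIVE_CRITICAL
```

When the guard fails, the rule says to flag the request for human review rather than silently refuse, and that escalation path also lives in the context file, not in code.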
Step 6: Connect an MCP Server (3 minutes)
The app needs somewhere to store tasks. MCP servers provide tools the agent can use to interact with external systems. For this tutorial, we’ll use a filesystem MCP server, the simplest option.
Install the server:
mpak install @modelcontextprotocol/server-filesystem
The manifest already declares "filesystem" in mcp_servers. The Upjack runtime discovers the installed server, connects to it, and makes its tools available to the agent. The agent can now read and write files, which is how it persists task records.
In a production app, you’d connect MCP servers for a database, a project management tool, a notification service, or any other system the app needs to interact with. The pattern is identical: install the server, add it to the manifest, and the agent gains new tools. Every MCP server at mpak.dev follows this pattern.
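Conceptually, persisting a task through the filesystem server amounts to writing one JSON record per task. The agent does this through MCP tool calls, not Python, and the one-file-per-task layout here is an assumption, but the equivalent operations look like:

```python
import json
from pathlib import Path

def save_task(store: Path, task: dict) -> Path:
    """Write a task record as <store>/<id>.json, e.g. TASK-1.json (assumed layout)."""
    store.mkdir(parents=True, exist_ok=True)
    path = store / f"{task['id']}.json"
    path.write_text(json.dumps(task, indent=2))
    return path

def load_tasks(store: Path) -> list[dict]:
    """Read every task record back from the store."""
    return [json.loads(p.read_text()) for p in sorted(store.glob("*.json"))]
```

Swapping the filesystem server for a database server changes which tools the agent calls, but not the skills or schema: the persistence mechanism is an integration detail, not part of the application.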
Step 7: Run It (4 minutes)
Start the app:
upjack run
The runtime reads the manifest, loads the schemas, skills, and context, connects to the filesystem MCP server, and presents a conversational interface. The agent understands what a task is (schema), what processes to follow (skills), what rules to apply (context), and how to persist data (MCP server).
Try it:
- “Create a task: Fix the login page bug, critical priority, due Friday”
- “List all critical tasks”
- “Move TASK-1 to in-progress”
- “What should I work on next?”
Each interaction follows the skills you wrote. The agent creates tasks using the schema’s field definitions. It prioritizes using the rules from context. It persists using the filesystem MCP server. The entire application (the domain model, the business logic, the operational rules) lives in the files you created.
What You Just Built
In 30 minutes, you built an AI application with:
- A data model (task entity schema) that constrains the AI to your domain
- Behavioral logic (four skills) that the AI follows as defined processes
- Operational context (project rules) that informs the AI’s judgment
- Tool integration (filesystem MCP server) that lets the AI act on external systems
No Python. No TypeScript. No backend. The entire application is JSON schemas and markdown documents that any team member can own and operate.
What Comes Next
This task tracker is a single-entity app with four skills. Production Upjack apps scale from here:
Add more entities. Define schemas for Projects, Sprints, Team Members. Relationships between entities give the agent richer understanding of the domain.
Add more skills. Sprint planning, workload balancing, status reports, daily standups. Each skill is a new process the agent can execute.
Add more context. Team capacity data, sprint velocity history, project priorities. Richer context produces better judgment.
Connect more MCP servers. Jira integration for syncing tasks. Slack integration for notifications. Calendar integration for deadline awareness. Each server adds new capabilities without changing the core app.
Deploy to production. The same manifest that runs locally runs on a server, in a container, or in a NimbleBrain-managed Kubernetes cluster. The artifacts are the application. Move the artifacts, move the app.
The full documentation at upjack.dev covers entity relationships, advanced skill patterns, context structuring, and production deployment. For how NimbleBrain uses Upjack on real engagements, see How Upjack Powers NimbleBrain Engagements.
Frequently Asked Questions
What do I need installed to follow this tutorial?
Python 3.11+ or Node.js 18+, and the mpak CLI (for installing Upjack). An LLM API key (Claude or GPT). That's it. No Docker, no databases, no infrastructure.
Can I build something more complex than a task tracker?
Absolutely. Upjack apps scale from simple single-entity apps to complex multi-entity systems with dozens of skills and MCP server connections. This tutorial covers the fundamentals (entity schemas, skills, context, and server connections) that apply at any scale.
Where do I go after this tutorial?
upjack.dev has full documentation, more examples, and guides for building production applications. The NimbleBrain Discord has a channel for Upjack builders where you can ask questions and share what you've built.