SageOx

The hivemind for agentic engineering

SageOx + OpenClaw

OpenClaw orchestrates automated AI coding sessions — spawning agents that work on issues, PRs, and maintenance tasks without human intervention. SageOx gives those agents the same team context a human coworker would have, then distills what they learn back into your team's knowledge base.

This cookbook covers the full integration: priming factory agents, capturing session knowledge, running distillation pipelines, and publishing daily Slack digests.

What you need

  • SageOx CLI installed (ox)
  • A connected repo (ox init)
  • An OpenClaw account with factory access
  • Slack workspace (for daily digests)

How it works

(Diagram: the factory session lifecycle, from context injection at startup to knowledge capture at the end.)

Every factory-spawned session follows the same lifecycle as a human coding session — it receives team context on startup, and its discoveries flow back into the knowledge base.

Setup

1. Connect your repo

terminal
# Standard SageOx setup
$ ox init

ox init configures your CLAUDE.md with the ox agent prime hook. This is the universal integration point — any tool that starts Claude Code in your repo gets SageOx context automatically, including OpenClaw.
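As a rough illustration, the hook ox init adds to CLAUDE.md looks something like the following. The exact wording is an assumption, not the literal output of ox init; run it in a scratch repo to see what your version writes.

```markdown
<!-- Illustrative sketch of the block `ox init` adds; exact text may differ -->
## Session start

Before doing anything else, run `ox agent prime` and treat its output as
team context for this session.
```

Because the hook lives in CLAUDE.md rather than in any one tool's config, every orchestrator that launches Claude Code in this repo inherits it for free.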

2. Verify factory agents receive context

When OpenClaw spawns a session, it starts Claude Code in your repo. Claude Code reads CLAUDE.md and runs ox agent prime. Verify this is working:

terminal
# Check that agents are getting primed
$

You should see agent instances from factory-spawned sessions. Each gets a unique agent_id and receives the full team context payload:

| Context layer | What the agent receives |
|---|---|
| Team norms | AGENTS.md conventions, coding standards |
| Architecture decisions | Distilled discussions and recordings |
| Domain terminology | Team-specific vocabulary and concepts |
| Recent memory | Daily/weekly summaries from distillation |
| Coworker instructions | Agent behavior guidance from your team |
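Conceptually, the payload an agent receives might look like this. The field names here are illustrative assumptions, not the documented SageOx schema:

```json
{
  "agent_id": "fac-7f3a",
  "team_norms": ["Use conventional commits", "Prefer small, reviewable PRs"],
  "architecture_decisions": ["Payment service sits behind a gateway"],
  "terminology": {"Ledger": "repo-committed session history"},
  "recent_memory": ["memory/daily/ summaries from the last few days"],
  "coworker_instructions": ["Open draft PRs for risky changes"]
}
```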

3. Enable session recording

Factory sessions should capture their work automatically. SageOx detects the OpenClaw orchestrator and tracks sessions:

terminal
# Verify OpenClaw is recognized as orchestrator
$

When ox agent prime runs inside an OpenClaw-managed session, it sets the orchestrator context automatically. Session transcripts — the agent's reasoning, decisions, and code changes — get committed to your repo's Ledger.

Distillation: turning sessions into knowledge

Factory agents produce dozens of sessions per day. Raw session transcripts are valuable but noisy. Distillation extracts the signal.

How distillation works

terminal
# Run distillation across all knowledge sources
$ ox distill

ox distill extracts facts from three sources:

| Source | What it captures |
|---|---|
| Sessions | Architecture discoveries, debugging insights, refactoring rationale |
| GitHub activity | PR decisions, issue resolutions, review feedback, direction changes |
| Team discussions | Recorded meetings, walkthroughs, design reviews |

Facts are organized into temporal layers:

| Layer | Cadence | Purpose |
|---|---|---|
| memory/daily/ | Every day | What happened today — raw signal |
| memory/weekly/ | Every week | Patterns and themes from the week |
| memory/monthly/ | Every month | Strategic direction and compounding insights |

Run distillation on a schedule

For teams running factory agents daily, distillation should run automatically:

terminal
# Distill daily layer (run via cron or CI)
$
terminal
# Distill weekly layer (run on Mondays)
$
terminal
# Distill monthly layer (run on the 1st)
$

Each layer synthesizes the layer below it: weeklies summarize dailies, monthlies summarize weeklies. The result is a living knowledge base that gets sharper over time.
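The cadences above map naturally onto cron. A sketch of a crontab, assuming the flags shown exist (the --layer flag is an assumption for illustration; substitute whatever your installed ox version documents):

```shell
# min hour dom mon dow   command
0 6 * * *   ox distill --layer daily    # every morning at 06:00
0 7 * * 1   ox distill --layer weekly   # Mondays at 07:00
0 8 1 * *   ox distill --layer monthly  # the 1st of each month at 08:00
```

Running daily distillation before your team's workday starts means the morning's factory sessions are primed with yesterday's facts.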

Multi-repo distillation

If your team works across multiple repos, distill them together:

terminal
# Distill across all team repos in one pass
$

This produces a unified team memory that spans your entire codebase — an agent working on the frontend knows about the API decision made yesterday.

Daily Slack digests

Keep your team aware of what factory agents accomplished. Publish daily summaries to Slack using murmurs:

terminal
# Publish a WIP update to your team
$ ox murmur

Murmurs are lightweight team broadcasts. When combined with scheduled distillation, they create a daily pulse:

  1. Factory agents work overnight on assigned issues
  2. Distillation runs in the morning, extracting daily facts
  3. A digest murmur posts to Slack summarizing what changed and what was learned

Slack integration via OpenClaw

OpenClaw's Slack bot can also capture team decisions flowing in the other direction — from Slack threads into your Team Context:

| Direction | Mechanism |
|---|---|
| SageOx → Slack | ox murmur publishes digests and WIP updates |
| Slack → SageOx | OpenClaw bot captures bookmarked threads as recordings |

Configure OpenClaw to watch channels for :bookmark: reactions or keyword triggers. Captured threads become recordings that flow through the standard SageOx pipeline.

Cross-agent awareness for solo developers

Murmurs aren't just for team Slack channels. Even as a solo developer, if you're running multiple agents in parallel — each in its own git worktree working on a different issue — murmurs give every agent a window into what the others are doing.

Each agent periodically murmurs its current focus. Other agents in the same repo pick up those murmurs as context, which means:

  • An agent refactoring the auth module knows another agent is adding a new endpoint that depends on auth
  • An agent updating types knows another agent just changed the schema those types derive from
  • Merge conflicts and duplicate work drop significantly because agents are aware of each other's in-flight changes

This is the difference between N isolated agents and N agents that collaborate. No shared memory server, no orchestration bus — just lightweight murmurs flowing through SageOx.
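The flow is easier to see as a toy model. The sketch below is a conceptual illustration of murmur-based awareness, not the SageOx implementation: each agent posts its current focus to a shared board, and every other agent's latest murmur becomes part of its context.

```python
class MurmurBoard:
    """Toy model: murmurs as a shared bulletin for agents in one repo."""

    def __init__(self):
        self.latest = {}  # agent_id -> that agent's most recent murmur

    def murmur(self, agent_id, focus):
        # An agent broadcasts what it is currently working on.
        self.latest[agent_id] = focus

    def context_for(self, agent_id):
        # Every OTHER agent's in-flight focus becomes this agent's context.
        return {a: f for a, f in self.latest.items() if a != agent_id}


board = MurmurBoard()
board.murmur("agent-auth", "refactoring the auth module")
board.murmur("agent-api", "adding a billing endpoint that depends on auth")

# The auth agent now sees the api agent's in-flight work, and vice versa.
print(board.context_for("agent-auth"))
```

The design point is that the board is passive: no agent blocks on another, they just read the latest broadcasts at prime time, which is why no orchestration bus is needed.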

Context that works across agent platforms

Most coding agents have some form of memory or team context — but it's locked to that vendor. Claude Code's memory doesn't reach Codex. Gemini's context doesn't reach Amp. If your team uses more than one agent platform (or even if a single developer switches between them), each agent operates blind to what the others know.

SageOx sits underneath all of them. Because context flows through the repo itself via ox agent prime, it works with any agent that reads CLAUDE.md, AGENTS.md, or runs shell hooks on session start. A discovery made in one agent is available to every other agent in the next session — no vendor lock-in on your team's accumulated knowledge.

See ox prime — Supported agents for the full compatibility matrix.

Recipes

The overnight factory pattern

Run OpenClaw agents on a nightly schedule against your backlog:

  1. OpenClaw picks issues labeled factory-ready from your tracker
  2. For each issue, it spawns a Claude Code session in the relevant repo
  3. ox agent prime injects team context (including yesterday's distillation)
  4. The agent works on the issue, commits code, opens a PR
  5. Session transcript is captured to the Ledger
  6. Morning distillation summarizes what happened

Your team arrives to PRs ready for review, with full context on why each change was made.

Observations from factory sessions

Factory agents can record observations — specific insights worth preserving beyond the raw session transcript:

terminal
# Record a batch of observations (JSONL format)
$

Observations are the raw material for distillation. They're lighter than full session transcripts — just the key takeaways an agent noticed during its work.
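A batch of observations in JSONL is one JSON object per line. The field names below are illustrative assumptions, not the documented SageOx schema:

```json
{"text": "payment service calls time out intermittently; wrap them in retries", "source": "session"}
{"text": "frontend billing types are derived from the API schema", "source": "session"}
```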

Dry-run distillation

Preview what distillation would produce before committing:

terminal
$

This shows extracted facts and proposed memory files without writing anything. Useful when tuning your distillation schedule or verifying new source types.

The compounding loop

The real power is the feedback loop between factory agents and distillation:

  1. Factory agent works on an issue — discovers that the payment service needs a retry wrapper
  2. Session gets captured to the Ledger
  3. Distillation extracts the insight — "payment service calls need retry wrappers due to intermittent timeouts"
  4. Next factory session receives this via team context
  5. That agent adds retries proactively when touching payment code — no human needed to flag it

After a month of factory runs plus distillation, your agents have absorbed hundreds of codebase-specific insights that no prompt engineering could replicate.

What's next