Dispatch Strategies
CRP provides 9 dispatch strategies, each optimized for different use cases. All strategies benefit from the 6-stage extraction pipeline, quality tier assessment, and HMAC-chained audit trail.
Strategy Overview

| # | Strategy | Pattern | Spec Section |
|---|---|---|---|
| 1 | dispatch() | PUSH — pre-packed envelope | §6 |
| 2 | dispatch_with_tools() | PULL — LLM requests context | §20 |
| 3 | dispatch_reflexive() | Verify-then-Refine | §21.1 |
| 4 | dispatch_progressive() | Index-then-Detail | §21.2 |
| 5 | dispatch_stream_augmented() | Real-time Context Injection | §21.3 |
| 6 | dispatch_agentic() | 8-phase Cognitive Engine | §22 |
| 7 | dispatch_stream() | Streaming tokens + events | §6.10.5 |
| 8 | dispatch_batch() | Sequential multi-task | §6.6 |
| 9 | dispatch_hierarchical() | Map-Reduce | §14 |
1. dispatch() — PUSH-based (Default)
The default strategy. CRP pre-loads the context envelope with the most relevant facts from the warm store, then dispatches the full envelope + task to the LLM.
output, report = client.dispatch(
system_prompt="You are a technical writer.",
task_input="Write a guide to Kubernetes networking.",
)
Best for: General tasks where CRP has domain knowledge ingested.
How it works:
- Query warm store for facts relevant to the task
- Pack facts into context envelope (respecting token budget)
- Send envelope + task to LLM
- If output truncated (finish_reason="length"), extract facts and continue
- Stitch windows together, assess quality
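Step 2 above (envelope packing) can be sketched as a greedy budget fill. This is an illustrative stand-in, not CRP's actual API: `pack_envelope`, the relevance scores, and the whitespace token count are all assumptions for the sketch.

```python
# Minimal sketch of envelope packing: take the most relevant facts first,
# skipping any fact that would exceed the token budget. Token cost is
# approximated by whitespace splitting; real tokenizers differ.
def pack_envelope(facts, budget):
    """facts: list of (relevance, text); highest relevance packed first."""
    packed, used = [], 0
    for _, text in sorted(facts, key=lambda f: f[0], reverse=True):
        cost = len(text.split())
        if used + cost > budget:
            continue  # skip facts that would overflow the budget
        packed.append(text)
        used += cost
    return packed

facts = [
    (0.9, "kube-proxy programs iptables rules for Services"),
    (0.4, "etcd stores cluster state"),
    (0.7, "CNI plugins wire pod network namespaces"),
]
print(pack_envelope(facts, budget=12))
```

With a budget of 12 the two most relevant facts fit; shrinking the budget changes which facts survive, which is why step 2 notes the token budget explicitly.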
2. dispatch_with_tools() — PULL-based
Instead of pre-loading context, the LLM is given CRP context tools
(retrieve_facts, search_by_keyword). The LLM requests context on demand.
output, report = client.dispatch_with_tools(
system_prompt="You are a technical writer.",
task_input="What CNI plugins are available for Kubernetes?",
max_tool_rounds=5,
)
Best for: Tasks where the LLM knows what it needs better than CRP does.
Note
Requires a provider that supports tool/function calling (OpenAI, Anthropic).
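The PULL loop can be sketched as a capped round trip: each round, the model either requests a tool or returns a final answer. `pull_dispatch`, the toy model, and the message shapes below are hypothetical simplifications of provider tool-calling, not CRP internals.

```python
# Sketch of the PULL pattern: the model pulls context via tools until it
# answers or max_tool_rounds is exhausted.
def pull_dispatch(model, tools, max_tool_rounds=5):
    messages = []
    for _ in range(max_tool_rounds):
        reply = model(messages)
        if reply["type"] != "tool_call":
            return reply["content"]  # final answer, no more context needed
        result = tools[reply["name"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
    return "tool budget exhausted"

# Toy model: asks for facts once, then answers with what it received.
def toy_model(messages):
    if not messages:
        return {"type": "tool_call", "name": "retrieve_facts",
                "args": {"topic": "CNI"}}
    return {"type": "final", "content": f"Answer using: {messages[-1]['content']}"}

tools = {"retrieve_facts": lambda topic: f"facts about {topic}"}
print(pull_dispatch(toy_model, tools))
```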
3. dispatch_reflexive() — Verify-then-Refine
Two-pass strategy. Pass 1: generate with NO envelope (pure parametric knowledge). CRP analyzes output against the knowledge base, finds contradictions and unsupported claims. Pass 2+: model refines with precision corrections.
output, report = client.dispatch_reflexive(
system_prompt="You are a technical writer.",
task_input="Describe Kubernetes RBAC best practices.",
max_refinement_passes=2,
)
Best for: Fact-checking, high-accuracy requirements, hallucination reduction.
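The two-pass flow can be sketched as draft, verify against the knowledge base, refine. Exact string membership stands in for CRP's semantic contradiction analysis; `reflexive`, `KB`, and the fix table are illustrative only.

```python
# Sketch of verify-then-refine: pass 1 drafts with no envelope, then
# unsupported claims drive up to max_refinement_passes correction passes.
def reflexive(draft_fn, refine_fn, kb, max_refinement_passes=2):
    claims = draft_fn()                                   # pass 1: no envelope
    for _ in range(max_refinement_passes):
        unsupported = [c for c in claims if c not in kb]  # verify against KB
        if not unsupported:
            break
        claims = refine_fn(claims, unsupported)           # pass 2+: refine
    return claims

KB = {"RBAC is deny-by-default", "Roles are namespaced"}
fixes = {"Roles are cluster-wide": "Roles are namespaced"}
result = reflexive(
    lambda: ["RBAC is deny-by-default", "Roles are cluster-wide"],
    lambda claims, bad: [fixes.get(c, c) for c in claims],
    KB,
)
print(result)
```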
4. dispatch_progressive() — Index-then-Detail
Builds a compact INDEX of available facts (~10% token cost), sends the task plus the index, detects which index entries the model referenced, and expands those entries to full detail.
output, report = client.dispatch_progressive(
system_prompt="You are a technical writer.",
task_input="Explain horizontal pod autoscaling.",
)
Best for: Large knowledge bases where not all context is relevant.
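The index/expand steps can be sketched as below. The fact IDs, the character-preview "index", and substring reference detection are assumptions for illustration; the real pipeline builds a proper token-budgeted index.

```python
# Sketch of index-then-detail: ship short previews, then expand only the
# entries the model's draft actually referenced.
facts = {
    "F1": "The HorizontalPodAutoscaler adjusts replicas from observed CPU use.",
    "F2": "The VerticalPodAutoscaler rewrites pod resource requests.",
}

def build_index(facts, preview=30):
    # ~10% token cost: only an ID plus a short preview per fact
    return {fid: text[:preview] for fid, text in facts.items()}

def expand_referenced(draft, facts):
    # detect which IDs the draft mentions, expand those to full detail
    return {fid: text for fid, text in facts.items() if fid in draft}

draft = "Per F1, the autoscaler watches CPU metrics."
print(expand_referenced(draft, facts))
```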
5. dispatch_stream_augmented() — Real-time Context Injection
Streams generation without an envelope. After each sentence, CRP fact-matches against the warm store; if relevant NEW facts are found, it injects them mid-stream.
output, report = client.dispatch_stream_augmented(
system_prompt="You are a technical writer.",
task_input="How does Kubernetes service mesh work?",
max_injections=3,
)
Best for: Dynamic, exploration-style tasks.
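The injection loop can be sketched as a generator that yields each sentence and, on a match, an injected fact, honoring the max_injections cap. Keyword matching here is a stand-in for CRP's fact matching; all names are illustrative.

```python
# Sketch of mid-stream injection: after each sentence, inject matching
# facts (each at most once) until max_injections is reached.
def stream_augmented(sentences, store, max_injections=3):
    """store: (keyword, fact) pairs."""
    injected = set()
    for sentence in sentences:
        yield sentence
        for keyword, fact in store:
            if (len(injected) < max_injections and fact not in injected
                    and keyword in sentence.lower()):
                injected.add(fact)
                yield f"[context: {fact}]"

store = [("sidecar", "Istio injects an Envoy proxy sidecar into each pod")]
out = list(stream_augmented(
    ["A service mesh adds a sidecar proxy.", "Traffic is then encrypted."],
    store,
))
print(out)
```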
6. dispatch_agentic() — Cognitive Engine
8-phase cognitive loop for complex tasks:
graph LR
A[ANALYZE] --> B[PLAN]
B --> C[SYNTHESIZE]
C --> D[ROUTE]
D --> E[GENERATE]
E --> F[EVALUATE]
F --> G[REVISE]
G --> H[CURATE]
output, report = client.dispatch_agentic(
system_prompt="You are a security architect.",
task_input="Design a Kubernetes security hardening strategy.",
max_revision_rounds=2,
enable_planning=True,
enable_curation=True,
)
Best for: Complex multi-step tasks requiring autonomous reasoning.
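The phase sequencing above can be sketched as a trace: EVALUATE gates revision, with REVISE→GENERATE repeated up to max_revision_rounds before CURATE persists results. The phase names come from the diagram; `agentic_trace` and the toy evaluator are hypothetical.

```python
# Sketch of the 8-phase loop: linear phases, then bounded revision rounds
# when evaluation fails, finishing with curation.
def agentic_trace(evaluate, max_revision_rounds=2):
    trace = ["ANALYZE", "PLAN", "SYNTHESIZE", "ROUTE", "GENERATE", "EVALUATE"]
    rounds = 0
    while not evaluate(rounds) and rounds < max_revision_rounds:
        trace += ["REVISE", "GENERATE", "EVALUATE"]  # one revision round
        rounds += 1
    trace.append("CURATE")                           # persist what was learned
    return trace

# Toy evaluator that passes on the second attempt:
trace = agentic_trace(lambda r: r >= 1)
print(trace)
```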
7. dispatch_stream() — Streaming
Yields StreamEvent objects in real-time for live UIs:
for event in client.dispatch_stream(
system_prompt="You are a technical writer.",
task_input="Explain etcd in Kubernetes.",
):
if event.event_type == "token":
print(event.data, end="", flush=True)
elif event.event_type == "extraction":
print(f"\n[Extracted fact]")
elif event.event_type == "done":
break
Event types: token, extraction, continuation, window_complete, done, error
Best for: Real-time UIs, chatbots, interactive applications.
8. dispatch_batch() — Batch Processing
Dispatches multiple tasks sequentially through the same session. Facts accumulate across tasks.
intents = [
{"system_prompt": "...", "task_input": "Explain ConfigMaps."},
{"system_prompt": "...", "task_input": "Explain Secrets."},
{"system_prompt": "...", "task_input": "Compare ConfigMaps vs Secrets."},
]
results = client.dispatch_batch(intents)
# results: list[tuple[str, QualityReport]]
Best for: Processing multiple related tasks, report generation.
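The fact-accumulation behavior can be sketched as a shared session list that later tasks see. `run_task` and the fact strings are stand-ins; the real session stores extracted facts in the warm store.

```python
# Sketch of sequential batch dispatch: each task runs in the same session,
# so facts produced by earlier tasks are visible to later ones.
def dispatch_batch(intents, run_task):
    session_facts, results = [], []
    for intent in intents:
        output = run_task(intent, session_facts)  # later tasks see prior facts
        session_facts.append(f"fact from: {intent['task_input']}")
        results.append(output)
    return results

outs = dispatch_batch(
    [{"task_input": "Explain ConfigMaps."}, {"task_input": "Explain Secrets."}],
    lambda intent, facts: f"{len(facts)} prior facts",
)
print(outs)
```

The third task in the example above ("Compare ConfigMaps vs Secrets") benefits precisely because facts from the first two tasks have accumulated by the time it runs.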
9. dispatch_hierarchical() — Map-Reduce
Segments the large input into chunks, dispatches each through the LLM, then iteratively reduces the resulting syntheses.
syntheses, report = client.dispatch_hierarchical(
system_prompt="You are an analyst.",
large_input=very_long_document,
task_intent="Summarize key findings",
)
Best for: Analyzing documents that exceed context windows.
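The map-reduce shape can be sketched as chunk, summarize each (map), then pairwise-combine summaries until one remains (reduce). Character-based chunking and the toy summarizer are assumptions; the real strategy segments by tokens and returns syntheses plus a report.

```python
# Sketch of hierarchical dispatch: map over chunks, then iteratively
# reduce pairs of syntheses to a single result.
def hierarchical(text, summarize, chunk_size=100):
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    syntheses = [summarize(c) for c in chunks]              # map phase
    while len(syntheses) > 1:                               # iterative reduce
        syntheses = [summarize(" ".join(syntheses[i:i + 2]))
                     for i in range(0, len(syntheses), 2)]
    return syntheses[0]

# Toy summarizer: keep the first 10 characters of its input.
result = hierarchical("x" * 250, lambda s: s[:10])
print(len(result))
```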
Choosing a Strategy
graph TD
A{What's your use case?} --> B{Do you have domain knowledge?}
B -->|Yes| C{How much?}
B -->|No| D[dispatch]
C -->|Small KB| D
C -->|Large KB| E{Need high accuracy?}
E -->|Yes| F[dispatch_reflexive]
E -->|No| G{LLM should pull context?}
G -->|Yes| H[dispatch_with_tools]
G -->|No| I{Real-time UI?}
I -->|Yes| J[dispatch_stream]
I -->|No| K{Complex reasoning?}
K -->|Yes| L[dispatch_agentic]
K -->|No| D
A --> M{Multiple tasks?}
M -->|Yes| N[dispatch_batch]
A --> O{Large document?}
O -->|Yes| P[dispatch_hierarchical]