Dispatch Methods¶
CRP provides 9 dispatch strategies. All return quality-assessed results with full audit trails.
dispatch()¶
PUSH-based — CRP pre-packs the context envelope.
```python
output, report = client.dispatch(
    system_prompt: str,
    task_input: str,
    **kwargs,
) -> tuple[str, QualityReport]
```
| Parameter | Type | Description |
|---|---|---|
| `system_prompt` | `str` | System instructions for the LLM |
| `task_input` | `str` | The task to perform |
| `temperature` | `float` | Sampling temperature (optional) |
| `max_output_tokens` | `int` | Generation token limit (optional) |
| `max_continuations` | `int` | Max continuation windows (optional) |
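A minimal usage sketch. The `StubClient` and the two `QualityReport` fields below are hypothetical stand-ins so the example runs without a configured provider; the real client and report come from CRP itself:

```python
from dataclasses import dataclass

# Hypothetical stand-ins; the real client and QualityReport come from CRP.
@dataclass
class QualityReport:
    quality_tier: str
    facts_extracted: int

class StubClient:
    def dispatch(self, system_prompt, task_input, **kwargs):
        # A real dispatch pre-packs the context envelope and calls the LLM.
        return "ConfigMaps store non-secret configuration.", QualityReport("A", 3)

client = StubClient()
output, report = client.dispatch(
    system_prompt="You are a Kubernetes tutor.",
    task_input="Explain ConfigMaps.",
    temperature=0.2,  # optional sampling control, forwarded via **kwargs
)
```

Optional parameters such as `temperature` are passed through `**kwargs`.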
dispatch_with_tools()¶
PULL-based — LLM requests context on demand via tool calls.
```python
output, report = client.dispatch_with_tools(
    system_prompt: str,
    task_input: str,
    max_tool_rounds: int = 5,
    **kwargs,
) -> tuple[str, QualityReport]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `system_prompt` | `str` | — | System instructions |
| `task_input` | `str` | — | The task |
| `max_tool_rounds` | `int` | `5` | Max tool call iterations |
> **Note**
> Requires a provider that supports tool/function calling.
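The PULL loop can be sketched as follows. `fake_model`, `FACT_STORE`, and the message shapes are invented for illustration and are not CRP internals:

```python
# Sketch of the PULL loop: the model asks for facts via a tool call, the
# dispatcher answers, and the exchange repeats until the model produces a
# final answer or max_tool_rounds is exhausted.

FACT_STORE = {"configmap": "ConfigMaps hold non-secret key/value configuration."}

def fake_model(messages):
    # First turn: request context. Second turn: answer using it.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_fact", "args": {"key": "configmap"}}}
    return {"content": "Use a ConfigMap for non-secret configuration."}

def dispatch_with_tools_sketch(task_input, max_tool_rounds=5):
    messages = [{"role": "user", "content": task_input}]
    for _ in range(max_tool_rounds):
        reply = fake_model(messages)
        if "tool_call" not in reply:
            return reply["content"]  # final answer, no more context needed
        key = reply["tool_call"]["args"]["key"]
        messages.append({"role": "tool", "content": FACT_STORE.get(key, "unknown")})
    raise RuntimeError("max_tool_rounds exhausted")

answer = dispatch_with_tools_sketch("When should I use a ConfigMap?")
```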
dispatch_reflexive()¶
Verify-then-Refine — Generate without context, then refine with corrections.
```python
output, report = client.dispatch_reflexive(
    system_prompt: str,
    task_input: str,
    max_refinement_passes: int = 2,
    **kwargs,
) -> tuple[str, QualityReport]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `system_prompt` | `str` | — | System instructions |
| `task_input` | `str` | — | The task |
| `max_refinement_passes` | `int` | `2` | Max refinement iterations |
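The verify-then-refine control flow, sketched with faked draft, verifier, and facts (a real verifier compares the draft against extracted context):

```python
# Draft without context, check the draft against known facts, and refine
# only when corrections are found, up to max_refinement_passes times.

KNOWN_FACTS = ["Secrets are base64-encoded, not encrypted."]

def draft(task_input):
    # Stands in for an uncontextualized first generation.
    return "Kubernetes Secrets are encrypted at rest by default."

def find_corrections(text):
    # A real verifier compares the draft against extracted facts.
    return KNOWN_FACTS if "encrypted at rest" in text else []

def refine(text, corrections):
    return text.replace("encrypted at rest by default",
                        "base64-encoded, not encrypted by default")

def dispatch_reflexive_sketch(task_input, max_refinement_passes=2):
    output = draft(task_input)
    for _ in range(max_refinement_passes):
        corrections = find_corrections(output)
        if not corrections:
            break  # draft already consistent with known facts
        output = refine(output, corrections)
    return output

result = dispatch_reflexive_sketch("Explain Secrets.")
```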
dispatch_progressive()¶
Index-then-Detail — Compact fact index first, expand referenced entries.
```python
output, report = client.dispatch_progressive(
    system_prompt: str,
    task_input: str,
    **kwargs,
) -> tuple[str, QualityReport]
```
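The index-then-detail idea, sketched with invented fact IDs and an invented index format: the first pass sends one compact line per fact, and only the entries the model actually references get expanded:

```python
facts = {
    "F1": "ConfigMaps hold non-secret configuration as key/value pairs.",
    "F2": "Secrets are base64-encoded and intended for sensitive values.",
    "F3": "Pods can consume both as mounted files or environment variables.",
}

def build_index(facts):
    # One truncated line per fact keeps the first prompt small.
    return "\n".join(f"{fid}: {text[:30]}..." for fid, text in facts.items())

def expand(referenced_ids):
    # Second pass: expand only the entries the model referenced.
    return {fid: facts[fid] for fid in referenced_ids if fid in facts}

index = build_index(facts)
details = expand(["F2"])  # suppose the model referenced only F2
```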
dispatch_stream_augmented()¶
Real-time Context Injection — Inject facts mid-stream as they become relevant.
```python
output, report = client.dispatch_stream_augmented(
    system_prompt: str,
    task_input: str,
    max_injections: int = 3,
    **kwargs,
) -> tuple[str, QualityReport]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_injections` | `int` | `3` | Max mid-stream context injections |
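Mid-stream injection can be sketched as a generator that watches the token stream and splices in a stored fact when a trigger appears, stopping after `max_injections`. The keyword trigger here is a toy heuristic, not CRP's relevance model:

```python
FACTS = {"Secret": "Secrets are base64-encoded, not encrypted."}

def stream_with_injection(tokens, max_injections=3):
    injected = 0
    for token in tokens:
        yield token
        key = token.strip()
        if injected < max_injections and key in FACTS:
            injected += 1
            yield f" [context: {FACTS[key]}]"

out = "".join(stream_with_injection(["Store", " it", " in", " a", " Secret", "."]))
```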
dispatch_agentic()¶
8-phase Cognitive Engine — Analyze → Plan → Synthesize → Route → Generate → Evaluate → Revise → Curate.
```python
output, report = client.dispatch_agentic(
    system_prompt: str,
    task_input: str,
    max_revision_rounds: int = 2,
    enable_planning: bool = True,
    enable_curation: bool = True,
    **kwargs,
) -> tuple[str, QualityReport]
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `max_revision_rounds` | `int` | `2` | Max revision iterations |
| `enable_planning` | `bool` | `True` | Enable planning phase |
| `enable_curation` | `bool` | `True` | Enable memory curation |
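The phase flow can be sketched as a plain function. Phase bodies are placeholders (the real engine does LLM work at each step), and the "first evaluation finds issues" behavior is simulated so the revise branch is exercised:

```python
def dispatch_agentic_sketch(task_input, max_revision_rounds=2,
                            enable_planning=True, enable_curation=True):
    trace = ["analyze"]
    if enable_planning:
        trace.append("plan")
    trace += ["synthesize", "route", "generate"]
    for round_no in range(max_revision_rounds):
        trace.append("evaluate")
        # Simulate: the first evaluation finds issues, the second passes.
        if round_no > 0:
            break
        trace.append("revise")
    if enable_curation:
        trace.append("curate")
    return trace

phases = dispatch_agentic_sketch("Explain ConfigMaps.")
```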
dispatch_stream()¶
Streaming — Yields StreamEvent objects for real-time UIs.
```python
events = client.dispatch_stream(
    system_prompt: str,
    task_input: str,
    **kwargs,
) -> Generator[StreamEvent]
```
StreamEvent¶
| Field | Type | Description |
|---|---|---|
| `event_type` | `str` | Event type identifier |
| `data` | `Any` | Event payload |
Event Types¶
| Type | Data | Description |
|---|---|---|
| `token` | `str` | Single generated token |
| `extraction` | `dict` | Fact extracted from output |
| `continuation` | `dict` | Continuation window started |
| `window_complete` | `dict` | Window generation finished |
| `done` | `dict` | Generation complete |
| `error` | `str` | Error message |
Example¶
```python
for event in client.dispatch_stream(
    system_prompt="...",
    task_input="...",
):
    match event.event_type:
        case "token":
            print(event.data, end="", flush=True)
        case "extraction":
            print("\n[Fact extracted]")
        case "done":
            break
```
dispatch_batch()¶
Batch Processing — Multiple tasks through the same session.
```python
results = client.dispatch_batch(
    intents: list[dict],
    **kwargs,
) -> list[tuple[str, QualityReport]]
```
Each intent dict contains:
```python
intents = [
    {"system_prompt": "...", "task_input": "Explain ConfigMaps."},
    {"system_prompt": "...", "task_input": "Explain Secrets."},
    {"system_prompt": "...", "task_input": "Compare them."},
]
```
> **Fact accumulation**
> Facts from earlier tasks are available to later tasks in the batch. The third task above benefits from facts extracted from the first two.
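Fact accumulation across a batch can be sketched as follows; fact extraction is faked here (the real pipeline extracts facts from model output):

```python
def run_batch_sketch(intents):
    accumulated, results = [], []
    for intent in intents:
        # A real dispatcher would pack `accumulated` into this task's context.
        output = f"answer to {intent['task_input']} (saw {len(accumulated)} prior facts)"
        accumulated.append(f"fact from: {intent['task_input']}")
        results.append(output)
    return results

results = run_batch_sketch([
    {"system_prompt": "...", "task_input": "Explain ConfigMaps."},
    {"system_prompt": "...", "task_input": "Explain Secrets."},
    {"system_prompt": "...", "task_input": "Compare them."},
])
```

The third result is produced with both earlier tasks' facts available.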
dispatch_hierarchical()¶
Map-Reduce — Segments large input and reduces syntheses.
```python
syntheses, report = client.dispatch_hierarchical(
    system_prompt: str,
    large_input: str,
    task_intent: str,
    **kwargs,
) -> tuple[list[str], QualityReport]
```
| Parameter | Type | Description |
|---|---|---|
| `system_prompt` | `str` | System instructions |
| `large_input` | `str` | Large document to process |
| `task_intent` | `str` | What to do with the document |
Returns: List of synthesis segments + quality report.
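The map-reduce shape, sketched below; fixed-size segmentation and the `synthesize` placeholder stand in for CRP's actual chunking and per-segment LLM passes:

```python
def segment(text, size=100):
    # Split the large input into fixed-size chunks (map inputs).
    return [text[i:i + size] for i in range(0, len(text), size)]

def synthesize(chunk, task_intent):
    # Stands in for one LLM pass over a single segment.
    return f"synthesis of {len(chunk)} chars for: {task_intent}"

def dispatch_hierarchical_sketch(large_input, task_intent):
    return [synthesize(c, task_intent) for c in segment(large_input)]

syntheses = dispatch_hierarchical_sketch("x" * 250, "Summarize the document")
```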
QualityReport¶
All dispatch methods return a `QualityReport`:

| Field | Type | Description |
|---|---|---|
| `session_id` | `str` | Session identifier |
| `window_id` | `str` | Final window identifier |
| `output` | `str` | Complete output text |
| `quality_tier` | `str` | `S`, `A`, `B`, `C`, or `D` |
| `continuation_windows` | `int` | Number of continuation windows |
| `envelope_saturation` | `float` | Envelope fill ratio |
| `facts_extracted` | `int` | Facts extracted from output |
| `security_flags` | `list[str]` | Security warnings (if any) |
| `telemetry` | `dict` | Timing, token counts, overhead |
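A common consumer pattern is to gate downstream use of the output on the report. This sketch uses a plain dict standing in for the report object; the tier ordering (S best, D worst) and the `min_tier` threshold are assumptions, not documented CRP policy:

```python
# Assumed ordering for this sketch: S is best, D is worst.
TIER_ORDER = {"S": 0, "A": 1, "B": 2, "C": 3, "D": 4}

def acceptable(report: dict, min_tier: str = "B") -> bool:
    if report.get("security_flags"):
        return False  # never auto-accept flagged output
    return TIER_ORDER[report["quality_tier"]] <= TIER_ORDER[min_tier]

ok = acceptable({"quality_tier": "A", "security_flags": []})
flagged = acceptable({"quality_tier": "S", "security_flags": ["pii"]})
```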