# Quick Start
This guide walks you through your first CRP dispatch in 5 minutes.
## 1. Create a Client
```python
import crp

# Auto-detect provider from environment variables
client = crp.Client()

# Or specify explicitly
client = crp.Client(model="gpt-4o-mini")
```
CRP checks for `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, or a running Ollama server and selects the appropriate provider automatically.
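Conceptually, that auto-detection is a check of the environment in a fixed priority order. Here is a minimal sketch of the idea; the function name, the ordering, and the Ollama fallback are illustrative assumptions, not CRP's actual code:

```python
import os

def detect_provider(env=None):
    """Pick an LLM provider from available credentials (illustrative order)."""
    env = os.environ if env is None else env
    if "OPENAI_API_KEY" in env:
        return "openai"
    if "ANTHROPIC_API_KEY" in env:
        return "anthropic"
    # Otherwise assume a local Ollama server; a real implementation would
    # probe the server (e.g. http://localhost:11434) before falling back.
    return "ollama"
```

This is why `crp.Client()` works with no arguments: as long as one credential source is present, a provider can be chosen deterministically.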
## 2. Ingest Domain Knowledge
```python
result = client.ingest("""
Kubernetes uses etcd as its distributed key-value store for all cluster
state. The API server is the only component that directly interacts with
etcd. Pod scheduling is handled by kube-scheduler which considers resource
requirements, affinity rules, taints, and tolerations.
""")

print(f"Extracted {result.facts_extracted} facts")
```
CRP's 6-stage extraction pipeline processes the text, identifies entities, extracts structured facts, and stores them in the warm store.
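To make the shape of that pipeline concrete, the toy sketch below compresses the six stages into three passes over the text. The stage groupings, the `IngestResult` shape, and the in-memory "warm store" are assumptions for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class IngestResult:
    facts_extracted: int
    facts: list = field(default_factory=list)

def ingest(text):
    # Stages 1-2: normalize and split the text into candidate statements.
    statements = [s.strip() for s in text.split(".") if s.strip()]
    # Stages 3-4: entity identification and fact structuring; here each
    # statement simply becomes one unstructured "fact".
    facts = [{"statement": s} for s in statements]
    # Stages 5-6: deduplicate and persist to the warm store (in-memory here).
    warm_store = list({f["statement"]: f for f in facts}.values())
    return IngestResult(facts_extracted=len(warm_store), facts=warm_store)
```

The real pipeline does far more per stage, but the flow is the same: raw text in, a counted set of deduplicated facts out.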
## 3. Dispatch a Task
```python
output, report = client.dispatch(
    system_prompt="You are a senior infrastructure architect.",
    task_input="Explain Kubernetes pod networking architecture.",
)

print(output)
print(f"Quality: {report.quality_tier}")  # S, A, B, C, or D
print(f"Facts used: {report.facts_extracted}")
print(f"Windows: {report.continuation_windows}")
```
CRP packs the most relevant facts into a context envelope, dispatches to the LLM, and handles continuation if the output is truncated.
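Continuation handling boils down to a loop: call the model, append the chunk, and request more while the output remains truncated. A minimal sketch of that loop (the `call_llm` callback and the window cap are illustrative, not CRP's API):

```python
def dispatch_with_continuation(call_llm, prompt, max_windows=4):
    """Accumulate model output across continuation windows until complete."""
    output, windows, finished = "", 0, False
    while not finished and windows < max_windows:
        # After the first window, ask the model to pick up where it stopped.
        chunk, finished = call_llm(prompt if windows == 0 else "Continue.")
        output += chunk
        windows += 1
    return output, windows
```

The `continuation_windows` field on the report above is the count this loop would return: 1 means the answer fit in a single response.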
## 4. Check Session Status
```python
status = client.session_status()

print(f"Session: {status.session_id}")
print(f"Facts in warm store: {status.facts_in_warm_state}")
print(f"Total tokens: {status.total_input_tokens + status.total_output_tokens:,}")
```
## 5. Clean Up
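This guide does not show the cleanup call itself, so treat the following as a common pattern rather than CRP's actual API: a client that releases its session either via an explicit `close()` or as a context manager. All names in this stand-in class are hypothetical.

```python
class Client:
    """Stand-in for crp.Client, only to illustrate the cleanup pattern."""

    def __init__(self):
        self.closed = False

    def close(self):
        # A real client would flush the warm store and release connections.
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self.close()

# Preferred usage: the session is cleaned up even if the block raises.
with Client() as client:
    ...  # ingest / dispatch work goes here
```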
## Next Steps
- All 9 Dispatch Strategies — choose the right strategy for your use case
- Providers — configure different LLM backends
- Compliance — EU AI Act and GDPR features
- Demo App — comprehensive interactive demo