Conversation Lifecycle Controller for Go. Orchestrate. Don't Implement.
Manage LLM-powered chat interactions with a clean separation between conversation plumbing and reasoning logic. History, turn-taking, and streaming are handled for you — your processor supplies the brain.
Get Started

```go
import "github.com/zoobz-io/chit"

// Your reasoning logic — any LLM, any strategy
processor := chit.ProcessorFunc(func(
	ctx context.Context,
	input string,
	session *zyn.Session,
) (chit.Result, error) {
	// Internal reasoning stays internal
	response, _ := llm.Complete(ctx, session, input)
	return &chit.Response{Content: response}, nil
})

// Dual-channel emitter — text + structured data
emitter := chit.EmitterFunc(
	func(text string) { stream(text) },        // Emit: conversational text
	func(resource any) { pushToUI(resource) }, // Push: structured resources
)

// Chat handles state, history, streaming
chat := chit.New(processor, emitter)
chat.Handle(ctx, "What's the status of order #42?")

// Multi-turn with Yield — no callback hell
processor = chit.ProcessorFunc(func(ctx context.Context, input string, session *zyn.Session) (chit.Result, error) {
	return &chit.Yield{
		Content: "I need your email to proceed.",
		Continue: func(ctx context.Context, reply string, s *zyn.Session) (chit.Result, error) {
			return &chit.Response{Content: "Confirmed: " + reply}, nil
		},
	}, nil
})
```

Why Chit?
Thin orchestration that separates conversation plumbing from reasoning logic.
Clean History Separation
User-facing conversation stays unpolluted. Internal LLM reasoning, retries, and tool calls are the processor's business.
Yield/Continue Turn-Taking
Multi-step workflows without callback hell. Return a Yield with a Continuation for natural multi-turn flows.
Dual-Channel Emitter
Emit conversational text and push structured resources simultaneously. Rich UIs get typed data alongside the conversation.
Pipeline-Native Reliability
Built on pipz for composable retry, timeout, rate limiting, and circuit breaker. Continuations get the same reliability wrappers.
Observable Lifecycle
Capitan signals for ChatCreated, InputReceived, ProcessingStarted, ProcessingCompleted, ResponseEmitted, TurnYielded, TurnResumed.
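In principle, observing these signals amounts to registering handlers against named events. The Bus type below is an illustrative stand-in, not capitan's API; the signal names mirror two of the lifecycle events listed above.

```go
package main

import "fmt"

// Signal names mirroring two of the lifecycle events; illustrative only.
const (
	ChatCreated   = "ChatCreated"
	InputReceived = "InputReceived"
)

// Bus is a minimal signal dispatcher: named events fan out to handlers.
type Bus struct {
	handlers map[string][]func(payload string)
}

func NewBus() *Bus {
	return &Bus{handlers: map[string][]func(string){}}
}

// On registers a handler for a signal.
func (b *Bus) On(signal string, fn func(payload string)) {
	b.handlers[signal] = append(b.handlers[signal], fn)
}

// Emit delivers a payload to every handler registered for the signal.
func (b *Bus) Emit(signal, payload string) {
	for _, fn := range b.handlers[signal] {
		fn(payload)
	}
}

func main() {
	bus := NewBus()
	bus.On(InputReceived, func(p string) { fmt.Println("observed:", p) })
	bus.Emit(ChatCreated, "chat-1")       // no handler registered; ignored
	bus.Emit(InputReceived, "order #42?") // prints observed: order #42?
}
```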
Bring Your Own LLM
Works with any LLM client via the Processor interface. Cogito, OpenAI SDK, or your own — chit doesn't care.
Capabilities
Conversation orchestration with pluggable reasoning, multi-turn flows, and built-in reliability.
| Feature | Description | Link |
|---|---|---|
| Processor Interface | Pluggable reasoning logic. ProcessorFunc for simple cases, full interface for complex workflows. LLM-agnostic. | Concepts |
| Multi-Turn Workflows | Yield/Continue pattern for natural conversation branching. Continuations run through the same reliability pipeline. | Architecture |
| Reliability Patterns | Retry with backoff, timeout protection, circuit breakers, and rate limiting via pipz composition. | Reliability |
| Session Integration | Built on zyn sessions for typed conversation history. Transactional updates — session only changes on success. | Concepts |
| Testing | Mock processors and emitters for deterministic tests. Processors testable in isolation without chat orchestration. | Testing |
| Troubleshooting | Common issues with processor wiring, emitter setup, and continuation state management. | Troubleshooting |
Articles
Browse the full chit documentation.