badlogic-pi-mono
Indexed commit c213cb50 · Updated Feb 5, 2026
Pi Monorepo
A modular monorepo of TypeScript packages for building AI agents and managing LLM deployments. Core libraries include a unified multi-provider LLM API, agent framework, terminal UI components, web UI components, and tools for running models on GPU infrastructure.
Quick References
| File | Purpose |
|---|---|
| packages/ai/README.md | Core LLM API documentation |
| packages/coding-agent/README.md | CLI agent usage guide |
| packages/agent/README.md | Agent framework reference |
| packages/tui/README.md | Terminal UI components |
| packages/web-ui/README.md | Web UI components |
Packages
| Package | npm name | Description |
|---|---|---|
| packages/ai | @mariozechner/pi-ai | Unified multi-provider LLM API with tool calling and model discovery |
| packages/agent | @mariozechner/pi-agent-core | Stateful agent framework with tool execution and event streaming |
| packages/coding-agent | @mariozechner/pi-coding-agent | Interactive CLI coding agent with TUI, sessions, and extensions |
| packages/tui | @mariozechner/pi-tui | Terminal UI library with differential rendering |
| packages/web-ui | @mariozechner/pi-web-ui | Web components for AI chat interfaces |
| packages/mom | @mariozechner/pi-mom | Slack bot that delegates messages to the coding agent |
| packages/pods | @mariozechner/pi | CLI for managing vLLM deployments on GPU pods |
When to Use
- Integrate LLM capabilities with a consistent API across OpenAI, Anthropic, Google, Mistral, and 15+ other providers
- Build interactive terminal applications with flicker-free rendering
- Create AI chat interfaces for web applications using web components
- Deploy open-source models on GPU infrastructure (DataCrunch, RunPod, AWS)
- Run coding assistants directly in the terminal with extensible tools and custom prompts
- Create Slack bots that execute bash commands and manage workflows
- Implement agent workflows with tool calling, thinking/reasoning, and state management
Installation
```bash
# Core LLM API
npm install @mariozechner/pi-ai

# Agent framework
npm install @mariozechner/pi-ai @mariozechner/pi-agent-core

# Terminal UI
npm install @mariozechner/pi-tui

# Web UI components
npm install @mariozechner/pi-web-ui @mariozechner/pi-agent-core @mariozechner/pi-ai

# Coding agent CLI
npm install -g @mariozechner/pi-coding-agent

# Slack bot
npm install -g @mariozechner/pi-mom

# GPU pods CLI
npm install -g @mariozechner/pi
```
Best Practices
- Use `complete()` for simple requests, `stream()` for real-time UI updates
- Define tools with TypeBox schemas for automatic validation and serialization
- Check model capabilities via `model.input` (vision support) and `model.reasoning` (thinking)
- Use the `convertToLlm()` function to transform custom message types for LLM compatibility
- Handle partial tool call arguments defensively during streaming (check existence before use)
- Use `validateToolCall()` before executing tools to validate arguments against schemas
- Store API keys in environment variables (Node.js) or pass explicitly (browser)
- Use the `crossProviderHandoff()` feature to switch models mid-conversation while preserving context
- For terminal components, ensure each `render()` line does not exceed the width parameter
- Use the `truncateToWidth()` utility to prevent TUI errors from oversized lines
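The "handle partial tool call arguments defensively" advice can be sketched with a small helper. This is a hypothetical utility, not part of pi-ai: during streaming, tool-call argument JSON arrives incrementally, so parsing should treat failures as "not complete yet" rather than errors.

```typescript
// Hypothetical helper: tool-call arguments stream in as incremental JSON
// text and may be syntactically incomplete at any point. Parse defensively
// and return undefined instead of throwing.
function tryParseArgs<T>(partialJson: string): T | undefined {
  try {
    return JSON.parse(partialJson) as T;
  } catch {
    return undefined; // arguments not fully streamed yet
  }
}

// Incomplete JSON (missing closing brace) yields undefined, not an exception.
const args = tryParseArgs<{ timezone?: string }>('{"timezone": "UTC"');
if (args?.timezone) {
  // only act once the field has fully arrived
}
```

The same pattern applies before `validateToolCall()`: only validate and execute once the arguments parse cleanly.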
Common Patterns
Basic LLM Streaming:
```typescript
import { getModel, stream, Type } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

const tools = [{
  name: 'get_time',
  description: 'Get current time',
  parameters: Type.Object({
    timezone: Type.Optional(Type.String())
  })
}];

const s = stream(model, {
  systemPrompt: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'What time is it?' }],
  tools
});

for await (const event of s) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta);
  }
}

const message = await s.result();
console.log(`Tokens: ${message.usage.input} in, ${message.usage.output} out`);
```
Agent with Tool Execution:
```typescript
import * as fs from 'node:fs/promises';
import { Agent } from '@mariozechner/pi-agent-core';
import { getModel, Type } from '@mariozechner/pi-ai';

const agent = new Agent({
  initialState: {
    systemPrompt: 'You help users with file operations.',
    model: getModel('anthropic', 'claude-sonnet-4-20250514'),
    tools: [{
      name: 'read_file',
      description: 'Read a file',
      parameters: Type.Object({
        path: Type.String()
      }),
      execute: async (id, params) => {
        const content = await fs.readFile(params.path, 'utf-8');
        return { content: [{ type: 'text', text: content }] };
      }
    }]
  },
  convertToLlm: (messages) =>
    messages.filter(m => ['user', 'assistant', 'toolResult'].includes(m.role))
});

agent.subscribe((event) => {
  if (event.type === 'message_update' && event.assistantMessageEvent.type === 'text_delta') {
    process.stdout.write(event.assistantMessageEvent.delta);
  }
});

await agent.prompt('Read package.json');
```
Terminal UI Application:
```typescript
import { TUI, Text, Editor, ProcessTerminal } from '@mariozechner/pi-tui';

const terminal = new ProcessTerminal();
const tui = new TUI(terminal);

tui.addChild(new Text('Welcome! Type something:'));

const editor = new Editor(tui);
editor.onSubmit = (text) => {
  tui.addChild(new Text(`You said: ${text}`));
};
tui.addChild(editor);

tui.start();
```
Web UI Chat Interface:
```typescript
import { Agent } from '@mariozechner/pi-agent-core';
import { getModel } from '@mariozechner/pi-ai';
import {
  ChatPanel,
  AppStorage,
  IndexedDBStorageBackend,
  setAppStorage,
  SettingsStore,
  ProviderKeysStore,
  SessionsStore
} from '@mariozechner/pi-web-ui';
import '@mariozechner/pi-web-ui/app.css';

const backend = new IndexedDBStorageBackend({
  dbName: 'my-app',
  version: 1,
  stores: [SettingsStore.getConfig(), ProviderKeysStore.getConfig(), SessionsStore.getConfig()]
});

const agent = new Agent({
  initialState: {
    systemPrompt: 'Helpful assistant',
    model: getModel('anthropic', 'claude-sonnet-4-5-20250929')
  }
});

const chatPanel = new ChatPanel();
await chatPanel.setAgent(agent);
document.body.appendChild(chatPanel);
```
Model with Vision Support:
```typescript
import * as fs from 'node:fs';
import { getModel, complete } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

if (model.input.includes('image')) {
  const imageBuffer = fs.readFileSync('image.png');
  const response = await complete(model, {
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image', data: imageBuffer.toString('base64'), mimeType: 'image/png' }
      ]
    }]
  });
}
```
Context Serialization:
```typescript
import { Context, getModel, complete } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

const context: Context = {
  systemPrompt: 'You are helpful.',
  messages: [{ role: 'user', content: 'Hello' }]
};

// Serialize
const serialized = JSON.stringify(context);
localStorage.setItem('conversation', serialized);

// Resume later
const restored: Context = JSON.parse(localStorage.getItem('conversation')!);
restored.messages.push({ role: 'user', content: 'Tell me more' });
const continuation = await complete(model, restored);
```
API Quick Reference
| Export | Type | Description |
|---|---|---|
| getModel | Function | Get a Model instance by provider and model ID |
| complete | Function | Get a complete LLM response without streaming |
| stream | Function | Stream an LLM response with all event types |
| completeSimple | Function | Simplified complete with reasoning option |
| streamSimple | Function | Simplified stream with reasoning option |
| validateToolCall | Function | Validate tool arguments against a TypeBox schema |
| getProviders | Function | List all available provider names |
| getModels | Function | Get all models for a provider |
| getEnvApiKey | Function | Check whether an API key is set in the environment |
| Agent | Class | Agent class with tool execution and events |
| agentLoop | Function | Low-level agent loop without the Agent class |
| TUI | Class | Terminal UI container |
| ChatPanel | Class | Web chat interface component |
| AppStorage | Class | IndexedDB storage for sessions and settings |

| Type | Description |
|---|---|
| Model<TApi> | Model with provider, API, capabilities, cost |
| Context | System prompt + messages + tools |
| Tool | Tool definition with TypeBox schema |
| Message | User, assistant, or tool result message |
| AssistantMessageEvent | Stream events (text_delta, tool_call, etc.) |
| Usage | Token and cost information |
| AgentMessage | Flexible message type with custom extensions |
| ThinkingLevel | Thinking level ("off", ...) |

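The `Usage` type carries per-message token counts. As a sketch, session totals can be accumulated with a plain reducer; the `input`/`output` field names follow the streaming example earlier, and any additional cost fields on the real pi-ai type are omitted here.

```typescript
// Sketch only: Usage shape inferred from `message.usage.input/.output`
// in the streaming example; the actual pi-ai type may carry more fields.
interface Usage {
  input: number;  // prompt tokens
  output: number; // completion tokens
}

// Sum token usage across all messages in a session.
function totalUsage(messages: { usage: Usage }[]): Usage {
  return messages.reduce(
    (acc, m) => ({
      input: acc.input + m.usage.input,
      output: acc.output + m.usage.output,
    }),
    { input: 0, output: 0 }
  );
}

const total = totalUsage([
  { usage: { input: 120, output: 40 } },
  { usage: { input: 300, output: 85 } },
]);
// total.input === 420, total.output === 125
```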
| CLI/Tool | Description |
|---|---|
pi | Main coding agent CLI with interactive mode |
pi-agent | Standalone OpenAI-compatible agent |
mom | Slack bot for channel-based agent |
pi-pods | vLLM GPU pod management CLI |