badlogic-pi-mono

Commit: c213cb50 · Updated Feb 5, 2026

Pi Monorepo

A modular monorepo of TypeScript packages for building AI agents and managing LLM deployments. Core libraries include a unified multi-provider LLM API, agent framework, terminal UI components, web UI components, and tools for running models on GPU infrastructure.

Quick References

| File | Purpose |
| --- | --- |
| packages/ai/README.md | Core LLM API documentation |
| packages/coding-agent/README.md | CLI agent usage guide |
| packages/agent/README.md | Agent framework reference |
| packages/tui/README.md | Terminal UI components |
| packages/web-ui/README.md | Web UI components |

Packages

| Package | npm name | Description |
| --- | --- | --- |
| packages/ai | @mariozechner/pi-ai | Unified multi-provider LLM API with tool calling and model discovery |
| packages/agent | @mariozechner/pi-agent-core | Stateful agent framework with tool execution and event streaming |
| packages/coding-agent | @mariozechner/pi-coding-agent | Interactive CLI coding agent with TUI, sessions, and extensions |
| packages/tui | @mariozechner/pi-tui | Terminal UI library with differential rendering |
| packages/web-ui | @mariozechner/pi-web-ui | Web components for AI chat interfaces |
| packages/mom | @mariozechner/pi-mom | Slack bot that delegates messages to the coding agent |
| packages/pods | @mariozechner/pi | CLI for managing vLLM deployments on GPU pods |

When to Use

  • Integrate LLM capabilities with consistent API across OpenAI, Anthropic, Google, Mistral, and 15+ other providers
  • Build interactive terminal applications with flicker-free rendering
  • Create AI chat interfaces for web applications with web components
  • Deploy open-source models on GPU infrastructure (DataCrunch, RunPod, AWS)
  • Run coding assistants directly in terminal with extensible tools and custom prompts
  • Create Slack bots that execute bash commands and manage workflows
  • Implement agent workflows with tool calling, thinking/reasoning, and state management

Installation

# Core LLM API
npm install @mariozechner/pi-ai

# Agent framework
npm install @mariozechner/pi-ai @mariozechner/pi-agent-core

# Terminal UI
npm install @mariozechner/pi-tui

# Web UI components
npm install @mariozechner/pi-web-ui @mariozechner/pi-agent-core @mariozechner/pi-ai

# Coding agent CLI
npm install -g @mariozechner/pi-coding-agent

# Slack bot
npm install -g @mariozechner/pi-mom

# GPU pods CLI
npm install -g @mariozechner/pi

Best Practices

  1. Use complete() for simple requests, stream() for real-time UI updates
  2. Define tools with TypeBox schemas for automatic validation and serialization
  3. Check model capabilities via model.input (vision support) and model.reasoning (thinking)
  4. Use convertToLlm() function to transform custom message types for LLM compatibility
  5. Handle partial tool call arguments defensively during streaming (check existence before use)
  6. Use validateToolCall() before executing tools to validate arguments against schemas
  7. Store API keys in environment variables (Node.js) or pass explicitly (browser)
  8. Use crossProviderHandoff() feature to switch models mid-conversation while preserving context
  9. For terminal components, ensure each render() line does not exceed the width parameter
  10. Use truncateToWidth() utility to prevent TUI errors from oversized lines
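
Practice 5 (defensive handling of partial tool-call arguments) can be sketched without the library: accumulate the streamed argument deltas and attempt a parse on each chunk, treating a parse failure as "not complete yet". `tryParseArgs` below is a hypothetical helper, not part of pi-ai.

```typescript
// Hypothetical helper: returns the parsed arguments only once the
// accumulated JSON is complete, and undefined while it is still partial.
function tryParseArgs<T>(buffer: string): T | undefined {
  try {
    return JSON.parse(buffer) as T;
  } catch {
    return undefined; // arguments are still incomplete mid-stream
  }
}

// Simulated argument deltas as they might arrive from a stream:
const deltas = ['{"path": "pa', 'ckage.json"', '}'];
let buffer = '';
for (const delta of deltas) {
  buffer += delta;
  const args = tryParseArgs<{ path: string }>(buffer);
  if (args !== undefined) {
    console.log(args.path); // only reached once the JSON is complete
  }
}
```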

Common Patterns

Basic LLM Streaming:

import { getModel, stream, Type } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

const tools = [{
  name: 'get_time',
  description: 'Get current time',
  parameters: Type.Object({
    timezone: Type.Optional(Type.String())
  })
}];

const s = stream(model, {
  systemPrompt: 'You are a helpful assistant.',
  messages: [{ role: 'user', content: 'What time is it?' }],
  tools
});

for await (const event of s) {
  if (event.type === 'text_delta') {
    process.stdout.write(event.delta);
  }
}

const message = await s.result();
console.log(`Tokens: ${message.usage.input} in, ${message.usage.output} out`);

Agent with Tool Execution:

import * as fs from 'node:fs/promises';
import { Agent } from '@mariozechner/pi-agent-core';
import { getModel, Type } from '@mariozechner/pi-ai';

const agent = new Agent({
  initialState: {
    systemPrompt: 'You help users with file operations.',
    model: getModel('anthropic', 'claude-sonnet-4-20250514'),
    tools: [{
      name: 'read_file',
      description: 'Read a file',
      parameters: Type.Object({
        path: Type.String()
      }),
      execute: async (id, params) => {
        const content = await fs.readFile(params.path, 'utf-8');
        return { content: [{ type: 'text', text: content }] };
      }
    }]
  },
  convertToLlm: (messages) => 
    messages.filter(m => ['user', 'assistant', 'toolResult'].includes(m.role))
});

agent.subscribe((event) => {
  if (event.type === 'message_update' && event.assistantMessageEvent.type === 'text_delta') {
    process.stdout.write(event.assistantMessageEvent.delta);
  }
});

await agent.prompt('Read package.json');

Terminal UI Application:

import { TUI, Text, Editor, ProcessTerminal } from '@mariozechner/pi-tui';

const terminal = new ProcessTerminal();
const tui = new TUI(terminal);

tui.addChild(new Text('Welcome! Type something:'));

const editor = new Editor(tui);
editor.onSubmit = (text) => {
  tui.addChild(new Text(`You said: ${text}`));
};
tui.addChild(editor);

tui.start();
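
Best practice 9 requires each rendered line to stay within the width parameter, and pi-tui exports truncateToWidth for that. The local `truncate` below is only an illustrative stand-in: it counts code points and ignores ANSI escape sequences and double-width glyphs, which the real utility would also have to consider.

```typescript
// Illustrative stand-in for pi-tui's truncateToWidth: keep a rendered
// line within the terminal width. Counts code points, not UTF-16 units.
function truncate(line: string, width: number): string {
  const chars = Array.from(line); // code points, so surrogate pairs stay intact
  return chars.length <= width ? line : chars.slice(0, width).join('');
}

console.log(truncate('hello world', 5)); // "hello"
console.log(truncate('short', 80));      // "short"
```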

Web UI Chat Interface:

import { Agent } from '@mariozechner/pi-agent-core';
import { getModel } from '@mariozechner/pi-ai';
import {
  ChatPanel, AppStorage, IndexedDBStorageBackend, setAppStorage,
  SettingsStore, ProviderKeysStore, SessionsStore
} from '@mariozechner/pi-web-ui';
import '@mariozechner/pi-web-ui/app.css';

const backend = new IndexedDBStorageBackend({
  dbName: 'my-app',
  version: 1,
  stores: [SettingsStore.getConfig(), ProviderKeysStore.getConfig(), SessionsStore.getConfig()]
});

// Register the storage backend so ChatPanel can persist sessions.
// (Wiring assumed; check the pi-web-ui README for the exact call.)
setAppStorage(new AppStorage(backend));

const agent = new Agent({
  initialState: {
    systemPrompt: 'Helpful assistant',
    model: getModel('anthropic', 'claude-sonnet-4-5-20250929')
  }
});

const chatPanel = new ChatPanel();
await chatPanel.setAgent(agent);

document.body.appendChild(chatPanel);

Model with Vision Support:

import * as fs from 'node:fs';
import { getModel, complete } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

if (model.input.includes('image')) {
  const imageBuffer = fs.readFileSync('image.png');
  const response = await complete(model, {
    messages: [{
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image', data: imageBuffer.toString('base64'), mimeType: 'image/png' }
      ]
    }]
  });
}

Context Serialization:

import { Context, getModel, complete } from '@mariozechner/pi-ai';

const model = getModel('openai', 'gpt-4o-mini');

const context: Context = {
  systemPrompt: 'You are helpful.',
  messages: [{ role: 'user', content: 'Hello' }]
};

// Serialize
const serialized = JSON.stringify(context);
localStorage.setItem('conversation', serialized);

// Resume later
const restored: Context = JSON.parse(localStorage.getItem('conversation')!);
restored.messages.push({ role: 'user', content: 'Tell me more' });

const continuation = await complete(model, restored);
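
The same round-trip works outside the browser, where localStorage is unavailable: any string store will do, because Context serializes to plain JSON. The sketch below uses local stand-in types and an in-memory Map in place of the real Context type and localStorage.

```typescript
// Minimal local stand-ins for the pi-ai types, for illustration only.
type Message = { role: 'user' | 'assistant'; content: string };
type Context = { systemPrompt: string; messages: Message[] };

// In Node there is no localStorage; any string store works the same way.
const store = new Map<string, string>();

const context: Context = {
  systemPrompt: 'You are helpful.',
  messages: [{ role: 'user', content: 'Hello' }],
};

store.set('conversation', JSON.stringify(context));

// Resume later: deserialize, append, and continue the conversation.
const restored: Context = JSON.parse(store.get('conversation')!);
restored.messages.push({ role: 'user', content: 'Tell me more' });

console.log(restored.messages.length); // 2
```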

API Quick Reference

| Export | Type | Description |
| --- | --- | --- |
| getModel | Function | Get Model instance by provider and model ID |
| complete | Function | Get complete LLM response without streaming |
| stream | Function | Stream LLM response with all event types |
| completeSimple | Function | Simplified complete with reasoning option |
| streamSimple | Function | Simplified stream with reasoning option |
| validateToolCall | Function | Validate tool arguments against TypeBox schema |
| getProviders | Function | List all available provider names |
| getModels | Function | Get all models for a provider |
| getEnvApiKey | Function | Check if API key is set in environment |
| Agent | Class | Agent class with tool execution and events |
| agentLoop | Function | Low-level agent loop without Agent class |
| TUI | Class | Terminal UI container |
| ChatPanel | Class | Web chat interface component |
| AppStorage | Class | IndexedDB storage for sessions and settings |
| Type | Description |
| --- | --- |
| Model<TApi> | Model with provider, API, capabilities, cost |
| Context | System prompt + messages + tools |
| Tool | Tool definition with TypeBox schema |
| Message | User, assistant, or tool result message |
| AssistantMessageEvent | Stream events (text_delta, tool_call, etc.) |
| Usage | Token and cost information |
| AgentMessage | Flexible message type with custom ext. |
| ThinkingLevel | "off" |
| CLI/Tool | Description |
| --- | --- |
| pi | Main coding agent CLI with interactive mode |
| pi-agent | Standalone OpenAI-compatible agent |
| mom | Slack bot for channel-based agent |
| pi-pods | vLLM GPU pod management CLI |
