- ❯ flush left for user input, continuation lines indented 2 spaces
- ◆ purple icon for AI responses, continuation indented
- User multiline messages: ❯ first line, indented rest
- Tool output: indented under parent
- System messages: • prefix with multiline indent
- Input area: no extra padding, ❯ at column 0
Switch from textinput to textarea bubble:
- Enter submits message
- Shift+Enter / Ctrl+J inserts newline
- Input area auto-expands from 1 to 10 lines based on content
- Line numbers hidden, prompt preserved
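The submit/newline dispatch reduces to a small pure function. A sketch (the key-chord strings follow Bubble Tea's convention; the action names and `clampHeight` helper are illustrative):

```go
package main

import "fmt"

// inputAction is what the update loop does with a key press in the
// input area.
type inputAction int

const (
	actionSubmit      inputAction = iota // Enter: send the message
	actionNewline                        // Shift+Enter / Ctrl+J: insert newline
	actionPassthrough                    // everything else: let the textarea handle it
)

// decideInput maps a key chord to an action per the rules above.
func decideInput(chord string) inputAction {
	switch chord {
	case "enter":
		return actionSubmit
	case "shift+enter", "ctrl+j":
		return actionNewline
	default:
		return actionPassthrough
	}
}

// clampHeight grows the textarea with its content, between 1 and 10 rows.
func clampHeight(lines int) int {
	if lines < 1 {
		return 1
	}
	if lines > 10 {
		return 10
	}
	return lines
}

func main() {
	fmt.Println(decideInput("enter") == actionSubmit) // true
	fmt.Println(clampHeight(14))                      // 10
}
```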
At startup, polls Ollama (/api/tags) and llama.cpp (/v1/models) for
available models. Registers each as an arm in the router alongside
the CLI-specified provider.
Discovered: 7 Ollama models + 1 llama.cpp model, plus the CLI-specified
provider = 9 total arms.
Router can now select from multiple local models based on task type.
Discovery is non-blocking — failures logged and skipped.
Engine.InjectMessage() appends messages to history without triggering
a turn. When permission mode or incognito changes, the notification
is injected as a user+assistant pair so the model sees it as context.
Fixes: model now knows permissions changed and will retry tool calls
instead of remembering old denials from previous mode.
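The injection pattern can be sketched as follows; only `InjectMessage` is named above, the surrounding types and the `NotifyModeChange` helper are illustrative:

```go
package main

import "fmt"

// Message is a minimal stand-in for the engine's chat message type.
type Message struct {
	Role    string // "user" | "assistant"
	Content string
}

// Engine holds conversation history (illustrative shape).
type Engine struct {
	History []Message
}

// InjectMessage appends to history without starting a turn: no provider
// call, no streaming, just context the model sees on its next turn.
func (e *Engine) InjectMessage(m Message) {
	e.History = append(e.History, m)
}

// NotifyModeChange records a mode switch as a user+assistant pair, so the
// model treats the change as already-acknowledged context rather than a
// pending request it should answer.
func (e *Engine) NotifyModeChange(note string) {
	e.InjectMessage(Message{Role: "user", Content: note})
	e.InjectMessage(Message{Role: "assistant", Content: "Understood."})
}

func main() {
	eng := &Engine{}
	eng.NotifyModeChange("Permission mode is now accept_edits; previous tool denials no longer apply.")
	for _, m := range eng.History {
		fmt.Printf("%s: %s\n", m.Role, m.Content)
	}
}
```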
When permission mode changes (Shift+Tab or /permission), the system
message now says "previous tool denials no longer apply, retry if
asked" — helps the model understand it should re-attempt tools
instead of remembering old denials from conversation history.
- Default permission mode changed from bypass to default
- Removed mode info from status bar (shown on separator line instead)
- Ctrl+I toggles incognito mode
- Incognito mode: amber/yellow separator lines with a 🔒 label;
  overrides the permission mode color when active
- Shift+Tab cycles permission modes
Each permission mode has a distinct color:
- bypass: green, default: blue, plan: teal
- accept_edits: purple, auto: peach, deny: red
Top separator line shows mode label on right side in mode color.
Both separator lines (above/below input) colored to match.
Shift+Tab cycling visually changes the line colors.
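The mode-to-color mapping plus the incognito override might look like this; the hex values are illustrative, only the palette names (green, blue, teal, purple, peach, red, amber) come from the entry:

```go
package main

import "fmt"

// modeColor maps each permission mode to its accent color.
var modeColor = map[string]string{
	"bypass":       "#a6e3a1", // green
	"default":      "#89b4fa", // blue
	"plan":         "#94e2d5", // teal
	"accept_edits": "#cba6f7", // purple
	"auto":         "#fab387", // peach
	"deny":         "#f38ba8", // red
}

// separatorColor picks the separator line color: incognito's amber
// overrides the permission mode color whenever it is active.
func separatorColor(mode string, incognito bool) string {
	if incognito {
		return "#f9e2af" // amber
	}
	if c, ok := modeColor[mode]; ok {
		return c
	}
	return modeColor["default"]
}

func main() {
	fmt.Println(separatorColor("plan", false) == modeColor["plan"]) // true
	fmt.Println(separatorColor("plan", true) == "#f9e2af")          // true: incognito wins
}
```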
- Shift+Tab cycles permission modes: bypass → default → plan →
accept_edits → auto → bypass
- /permission <mode> slash command to set specific mode
- Current mode shown in status bar (🛡 bypass)
- Permission checker wired into TUI config
Tools now go through permission.Checker before executing:
- plan mode: denies all writes (fs.write, bash), allows reads
- bypass mode: allows all (deny rules still enforced)
- default mode: prompts user (pipe: stdin prompt, TUI: auto-approve for now)
- accept_edits: auto-allows file ops, prompts for bash
- deny mode: denies all without allow rules
CLI flags: --permission <mode>, --incognito
Pipe mode: console Y/N prompt on stderr
TUI mode: auto-approve (proper overlay TODO)
Verified: plan mode correctly blocks fs.write, model sees error.
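The per-mode policy above can be sketched as a single switch. Type and function names are illustrative; `auto` is not specified in this entry, so it falls through to prompting here, and deny rules are assumed to be evaluated before this check:

```go
package main

import (
	"fmt"
	"strings"
)

type Decision int

const (
	Allow Decision = iota
	Deny
	Ask // prompt the user: console Y/N in pipe mode, TUI overlay later
)

// isWrite marks mutating tools; reads pass in plan mode, writes do not.
func isWrite(tool string) bool {
	return tool == "bash" || strings.HasPrefix(tool, "fs.write")
}

// Check applies the per-mode policy described above.
func Check(mode, tool string) Decision {
	switch mode {
	case "bypass":
		return Allow // deny rules still enforced upstream
	case "plan":
		if isWrite(tool) {
			return Deny
		}
		return Allow
	case "accept_edits":
		if tool == "bash" {
			return Ask
		}
		return Allow // file ops auto-allowed
	case "deny":
		return Deny
	default: // "default" mode and anything unknown: ask the user
		return Ask
	}
}

func main() {
	fmt.Println(Check("plan", "fs.write") == Deny)     // true
	fmt.Println(Check("plan", "fs.read") == Allow)     // true
	fmt.Println(Check("accept_edits", "bash") == Ask)  // true
}
```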
- Fixed: chat content no longer overflows past allocated height.
Lines are measured for physical width and hard-truncated to
exactly the chat area height. Input + status bar always visible.
- Header scrolls with chat (not pinned), only input/status fixed
- Git branch in status bar (green, via git rev-parse)
- Alt screen mode — terminal scrollback disabled
- Mouse wheel + PgUp/PgDown scroll within TUI
- New EventToolResult: tool output as dimmed indented block
- Separator lines above/below input, no status bar backgrounds
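The overflow fix amounts to hard-truncating the rendered chat lines to the allocated height. A sketch; keeping the tail (newest messages, as in a terminal) is an assumption:

```go
package main

import (
	"fmt"
	"strings"
)

// fitChat hard-truncates rendered chat lines so they never overflow the
// height allocated to the chat area; the input box and status bar below
// therefore always stay on screen.
func fitChat(rendered string, height int) []string {
	lines := strings.Split(rendered, "\n")
	if len(lines) <= height {
		return lines
	}
	return lines[len(lines)-height:] // keep the tail: newest messages
}

func main() {
	out := fitChat("a\nb\nc\nd\ne", 3)
	fmt.Println(out) // [c d e]
}
```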
Switch to bubbles textinput for proper keyboard handling (space,
cursor, backspace, clipboard all work correctly).
Improved design:
- ❯ user prompt, ◆ assistant prefix, ✗ error prefix
- Word wrapping for long responses
- Separator line between chat and input
- Streaming indicator (● streaming) in status bar
- Better color scheme (lighter purples/blues)
- Welcome message with usage hints
TUI launches when no piped input detected. Features:
- Chat panel with scrollable message history
- Streaming response with animated cursor
- User/assistant/tool/error message styling (purple theme)
- Status bar: provider, model, token count, turn count
- Input with basic editing
- Slash commands: /quit, /clear, /incognito (stub)
- Ctrl+C cancels current turn or exits
Built on charm.land/bubbletea/v2, charm.land/lipgloss/v2.
Session interface decouples TUI from engine via channels.
Pipe mode still works for non-interactive use.
System prompt gets a one-line summary (~200 chars): OS, CPU, RAM,
GPU, top runtimes, package count, PATH command count.
Full details available on demand via system_info tool with sections:
runtimes, packages, tools, hardware, all. LLM calls the tool when
it needs specifics — saves thousands of tokens per request.
Hardware detection: CPU model, core count, total RAM, GPU via lspci.
Package manager: pacman/apt/dnf/brew with dev package filtering.
PATH scan: 5541 executables. Runtime probing: 22 detected.
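The one-line summary could be assembled like this; field names and the exact format string are illustrative, only the listed ingredients (OS, CPU, RAM, GPU, top runtimes, package count, PATH command count) are from the entry:

```go
package main

import "fmt"

// SysInfo is a cut-down view of what detection collects.
type SysInfo struct {
	OS, CPU, GPU string
	RAMGB        int
	TopRuntimes  []string
	Packages     int
	PathCmds     int
}

// Summary renders the ~200-char one-liner that goes into the system
// prompt; full detail stays behind the system_info tool.
func (s SysInfo) Summary() string {
	return fmt.Sprintf("%s | %s | %dGB RAM | %s | runtimes: %s | %d pkgs | %d cmds in PATH",
		s.OS, s.CPU, s.RAMGB, s.GPU,
		joinMax(s.TopRuntimes, 4), s.Packages, s.PathCmds)
}

// joinMax joins at most n entries, keeping the summary short.
func joinMax(xs []string, n int) string {
	if len(xs) > n {
		xs = xs[:n]
	}
	out := ""
	for i, x := range xs {
		if i > 0 {
			out += ", "
		}
		out += x
	}
	return out
}

func main() {
	info := SysInfo{OS: "Arch Linux", CPU: "Ryzen 9", RAMGB: 64, GPU: "RX 7900",
		TopRuntimes: []string{"go", "python3", "node", "rustc", "ruby"},
		Packages:    1842, PathCmds: 5541}
	fmt.Println(info.Summary())
}
```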
No hardcoded tool lists. Scans all $PATH directories for executables
(5541 on this system), then probes known runtime patterns for version
info (23 detected: Go, Python, Node, Rust, Ruby, Perl, Java, Dart,
Deno, Bun, Lua, LuaJIT, Guile, GCC, Clang, NASM + package managers).
System prompt includes: OS, shell, runtime versions, and notable
tools (git, docker, kubectl, fzf, rg, etc.) from the full PATH scan.
Total executable count reported so the LLM knows the full scope.
Milestones updated: M6 fixed context prefix, M12 multimodality.
Thin wrapper over OpenAI adapter with custom base URLs.
Ollama: localhost:11434/v1, llama.cpp: localhost:8080/v1.
No API key required for local providers.
Fixed: initial tool call args captured on first chunk
(Ollama sends complete args in one chunk, not as deltas).
Live verified: text + tool calling with qwen3:14b on Ollama.
Five providers now live: Mistral, Anthropic, OpenAI, Google, Ollama.
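The wrapper reduces to swapping the adapter's base URL. A sketch with an illustrative config struct (local servers ignore the key, but SDKs often require it to be non-empty):

```go
package main

import "fmt"

// openAIConfig is a stand-in for the OpenAI adapter's options; the local
// providers reuse the adapter with only the base URL (and a dummy key)
// changed.
type openAIConfig struct {
	BaseURL string
	APIKey  string
}

func newOllama() openAIConfig {
	return openAIConfig{BaseURL: "http://localhost:11434/v1", APIKey: "ollama"}
}

func newLlamaCPP() openAIConfig {
	return openAIConfig{BaseURL: "http://localhost:8080/v1", APIKey: "llama.cpp"}
}

func main() {
	fmt.Println(newOllama().BaseURL)
	fmt.Println(newLlamaCPP().BaseURL)
}
```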
Streaming via goroutine+channel bridge (range-based iter.Seq2 → pull
iterator). Tool use with FunctionCall/FunctionResponse, tool name
sanitization, tool name map for FunctionResponse correlation.
Stop reason override (Google uses STOP for function calls).
Hardcoded model list (gemini-2.5-pro/flash, gemini-2.0-flash).
Wired into CLI with GOOGLE_API_KEY + GEMINI_API_KEY env support.
Live verified: text streaming + tool calling with gemini-2.5-flash.
Four providers now live: Mistral, Anthropic, OpenAI, Google.
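The Seq2-to-pull bridge can also be expressed with stdlib `iter.Pull2` (Go 1.23+), which does the same job as a hand-rolled goroutine+channel bridge; the chunk type here is a stand-in for the SDK's streaming response:

```go
package main

import (
	"fmt"
	"iter"
)

// chunk is a stand-in for the SDK's streaming response type.
type chunk struct{ Text string }

// pullStream converts a range-based iter.Seq2 stream (what the SDK
// exposes) into a pull-style next/stop pair, which is easier to drive
// from the engine's event loop.
func pullStream(seq iter.Seq2[chunk, error]) (func() (chunk, error, bool), func()) {
	return iter.Pull2(seq)
}

func main() {
	// A fake three-chunk stream in place of the live API.
	seq := func(yield func(chunk, error) bool) {
		for _, t := range []string{"hel", "lo", "!"} {
			if !yield(chunk{Text: t}, nil) {
				return
			}
		}
	}
	next, stop := pullStream(seq)
	defer stop()
	out := ""
	for {
		c, err, ok := next()
		if !ok || err != nil {
			break
		}
		out += c.Text
	}
	fmt.Println(out) // hello!
}
```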
Streaming, tool use (index-based delta accumulation), tool name
sanitization (fs.read → fs_read), StreamOptions.IncludeUsage for
token tracking. Hardcoded model list (gpt-4o, gpt-4o-mini, o3, o3-mini).
Wired into CLI with OPENAI_API_KEY env support.
Live verified: text streaming + tool calling with gpt-4o.
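The index-based delta accumulation might look like this; the field names mirror the shape of OpenAI streaming tool_call deltas, but the types are stand-ins:

```go
package main

import "fmt"

// toolDelta mirrors an OpenAI streaming tool_call delta: the first chunk
// for an index carries id and name, later chunks append argument
// fragments under the same index.
type toolDelta struct {
	Index int
	ID    string
	Name  string
	Args  string
}

type toolCall struct {
	ID   string
	Name string
	Args string
}

// accumulate folds deltas into complete calls keyed by index, preserving
// index order in the result.
func accumulate(deltas []toolDelta) []toolCall {
	calls := map[int]*toolCall{}
	maxIdx := -1
	for _, d := range deltas {
		c, ok := calls[d.Index]
		if !ok {
			c = &toolCall{}
			calls[d.Index] = c
			if d.Index > maxIdx {
				maxIdx = d.Index
			}
		}
		if d.ID != "" {
			c.ID = d.ID
		}
		if d.Name != "" {
			c.Name = d.Name
		}
		c.Args += d.Args
	}
	out := make([]toolCall, 0, len(calls))
	for i := 0; i <= maxIdx; i++ {
		if c, ok := calls[i]; ok {
			out = append(out, *c)
		}
	}
	return out
}

func main() {
	calls := accumulate([]toolDelta{
		{Index: 0, ID: "call_1", Name: "fs_read", Args: `{"path":`},
		{Index: 0, Args: `"main.go"}`},
	})
	fmt.Println(calls[0].Name, calls[0].Args)
}
```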
Streaming, tool use (with InputJSONDelta assembly), thinking blocks,
cache token tracking, system prompt separation. Tool name sanitization
(fs.read → fs_read) for Anthropic's naming constraints with reverse
translation on tool call responses.
Hardcoded model list with capabilities (Opus 4, Sonnet 4, Haiku 4.5).
Wired into CLI with ANTHROPIC_API_KEY + ANTHROPICS_API_KEY env support.
Also: migrated Mistral SDK to github.com/VikingOwl91/mistral-go-sdk.
Live verified: text streaming + tool calling with claude-sonnet-4.
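The sanitize/restore round trip can be sketched with a per-request reverse map; the mapper type is illustrative, and replacing dots is assumed to be the only rewrite needed for these tool names:

```go
package main

import (
	"fmt"
	"strings"
)

// nameMapper rewrites dotted tool names like fs.read on the way out
// (Anthropic restricts tool names to letters, digits, _ and -) and
// restores them on the way back via a reverse map kept per request.
type nameMapper struct {
	reverse map[string]string
}

func newNameMapper() *nameMapper {
	return &nameMapper{reverse: map[string]string{}}
}

func (m *nameMapper) sanitize(name string) string {
	s := strings.ReplaceAll(name, ".", "_")
	m.reverse[s] = name
	return s
}

func (m *nameMapper) restore(sanitized string) string {
	if orig, ok := m.reverse[sanitized]; ok {
		return orig
	}
	return sanitized
}

func main() {
	nm := newNameMapper()
	fmt.Println(nm.sanitize("fs.read")) // fs_read
	fmt.Println(nm.restore("fs_read")) // fs.read
}
```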
126 tests across 9 packages.
Go 1.26 module (somegit.dev/Owlibou/gnoma), Makefile with
build/test/lint targets, CLAUDE.md with project conventions,
placeholder main.go, and .gitignore.