provider/openai:
- Fix doubled tool call args (argsComplete flag): Ollama sends the complete
  args in the first streaming chunk, then repeats them as a delta, causing
  doubled JSON and 400 errors in elf requests
- Handle fs: prefix (gemma4 uses fs:grep instead of fs.grep)
- Add Reasoning field support for Ollama thinking output
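The doubled-args fix can be sketched as a small per-call state machine; toolCallState, applyChunk, and the complete-JSON heuristic below are illustrative names, not the actual adapter internals:

```go
package main

import "fmt"

// toolCallState accumulates streamed tool-call arguments for one call.
// argsComplete marks that the first chunk already carried the full JSON,
// so later deltas repeating the same bytes must be dropped.
type toolCallState struct {
	args         string
	argsComplete bool
}

// applyChunk merges one streaming chunk's argument fragment into the state.
// firstChunk indicates the opening chunk of the tool call.
func (s *toolCallState) applyChunk(fragment string, firstChunk bool) {
	switch {
	case firstChunk && looksLikeCompleteJSON(fragment):
		// Ollama: complete args arrive up front; ignore later repeats.
		s.args = fragment
		s.argsComplete = true
	case s.argsComplete:
		// Drop deltas that would double the JSON ("{..}{..}" -> 400).
	default:
		s.args += fragment
	}
}

// looksLikeCompleteJSON is a cheap heuristic: a balanced-looking object literal.
func looksLikeCompleteJSON(s string) bool {
	return len(s) >= 2 && s[0] == '{' && s[len(s)-1] == '}'
}

func main() {
	st := &toolCallState{}
	st.applyChunk(`{"path":"main.go"}`, true)  // complete args in chunk 1
	st.applyChunk(`{"path":"main.go"}`, false) // repeated as delta: ignored
	fmt.Println(st.args)
}
```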
cmd/gnoma:
- Early TTY detection so the logger is created with the correct destination
  before any component gets a reference to it (fixes slog WARN output
  bleeding into the TUI textarea)
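A minimal sketch of the ordering fix, using a stdlib character-device check in place of whatever TTY probe the code actually uses; the .gnoma/debug.log path is an assumption:

```go
package main

import (
	"io"
	"log/slog"
	"os"
)

// isTTY reports whether f is attached to a character device (a terminal).
// Stdlib-only stand-in for a proper terminal probe.
func isTTY(f *os.File) bool {
	st, err := f.Stat()
	if err != nil {
		return false
	}
	return st.Mode()&os.ModeCharDevice != 0
}

// newLogger picks the log destination once, up front: when stderr is a TTY
// the TUI owns the screen, so logs go to a file instead of bleeding into
// the textarea. Every component must receive this logger, never a default one.
func newLogger(stderrIsTTY bool) *slog.Logger {
	var w io.Writer = os.Stderr
	if stderrIsTTY {
		f, err := os.OpenFile(".gnoma/debug.log", os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0o644)
		if err == nil {
			w = f
		} else {
			w = io.Discard // never write to the terminal the TUI controls
		}
	}
	return slog.New(slog.NewTextHandler(w, nil))
}

func main() {
	logger := newLogger(isTTY(os.Stderr)) // decide before components exist
	logger.Warn("this line must not land in the TUI textarea")
}
```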
permission:
- Exempt spawn_elfs and agent tools from the safety scanner: elf prompt
  text may legitimately mention .env/.ssh/credentials patterns and
  should not be blocked
tui/app:
- /init retry chain: no-tool-calls → spawn_elfs nudge → write nudge
(ask for plain text output) → TUI fallback write from streamBuf
- looksLikeAgentsMD + extractMarkdownDoc: validate and clean fallback
content before writing (reject refusals, strip narrative preambles)
- Collapse thinking output to 3 lines; ctrl+o to expand (live stream
and committed messages)
- Stream-level filter for model pseudo-tool-call blocks: suppresses
  <tool_code>...</tool_code> and <function_call>...<|tool_call|>
  markers from entering streamBuf across chunk boundaries
- sanitizeAssistantText regex covers both block formats
- Reset streamFilterClose at every turn start
Gap 11 (M6): Fixed context prefix
- Window.PrefixMessages stores immutable docs (CLAUDE.md, .gnoma/GNOMA.md)
- Prefix stripped before compaction, prepended after — survives all compaction
- AllMessages() returns prefix + history for provider requests
- main.go loads CLAUDE.md and .gnoma/GNOMA.md at startup as prefix
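The prefix/history split can be sketched roughly like this; the Message shape and Compact signature are assumptions, and only PrefixMessages and AllMessages come from the notes above:

```go
package main

import "fmt"

// Message is a minimal stand-in for the engine's message type.
type Message struct {
	Role, Content string
}

// Window keeps an immutable prefix (CLAUDE.md, .gnoma/GNOMA.md) apart from
// the mutable history, so compaction can never touch the project docs.
type Window struct {
	PrefixMessages []Message // never compacted
	History        []Message
}

// AllMessages is what goes to the provider: prefix first, then the
// (possibly compacted) history.
func (w *Window) AllMessages() []Message {
	out := make([]Message, 0, len(w.PrefixMessages)+len(w.History))
	out = append(out, w.PrefixMessages...)
	return append(out, w.History...)
}

// Compact drops older history; the prefix survives every compaction
// because it lives in its own slice and is never part of the input.
func (w *Window) Compact(keep int) {
	if len(w.History) > keep {
		w.History = w.History[len(w.History)-keep:]
	}
}

func main() {
	w := &Window{
		PrefixMessages: []Message{{"user", "CLAUDE.md contents"}},
		History:        []Message{{"user", "a"}, {"assistant", "b"}, {"user", "c"}},
	}
	w.Compact(1)
	fmt.Println(len(w.AllMessages())) // prefix + 1 kept message
}
```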
Gap 12 (M6): Deferred tool loading
- DeferrableTool optional interface: ShouldDefer() bool
- buildRequest() skips deferred tools until activated
- Tools auto-activate on first model request (activatedTools map)
- agent + spawn_elfs marked as deferrable (large schemas, rarely needed early)
- Saves ~800 tokens per deferred tool per request
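A sketch of the optional-interface pattern with illustrative tool types (the real Tool interface is larger than a Name method):

```go
package main

import "fmt"

// Tool is a minimal stand-in for the project's tool interface.
type Tool interface {
	Name() string
}

// DeferrableTool is the optional interface: tools reporting
// ShouldDefer() == true are left out of requests until activated.
type DeferrableTool interface {
	ShouldDefer() bool
}

type agentTool struct{}

func (agentTool) Name() string      { return "agent" }
func (agentTool) ShouldDefer() bool { return true } // large schema, rarely needed early

type readTool struct{}

func (readTool) Name() string { return "fs.read" }

// selectTools mimics buildRequest's filtering: deferred tools are skipped
// unless they appear in the activatedTools map (set on first model request).
func selectTools(all []Tool, activated map[string]bool) []Tool {
	var out []Tool
	for _, t := range all {
		if d, ok := t.(DeferrableTool); ok && d.ShouldDefer() && !activated[t.Name()] {
			continue // schema omitted, saving tokens on every request
		}
		out = append(out, t)
	}
	return out
}

func main() {
	tools := []Tool{readTool{}, agentTool{}}
	activated := map[string]bool{}
	fmt.Println(len(selectTools(tools, activated))) // agent deferred
	activated["agent"] = true                       // model asked for it
	fmt.Println(len(selectTools(tools, activated))) // now included
}
```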
Gap 13 (M6): Pre/post compact hooks
- OnPreCompact/OnPostCompact callbacks in WindowConfig
- Called in doCompact() (shared by CompactIfNeeded + ForceCompact)
- M8 hooks system will extend these to full protocol
Engine retries transient errors (429, 5xx) up to 4 times with
1s/2s/4s/8s backoff. Respects Retry-After header from provider.
Batch tool staggers elf spawns by 300ms to avoid rate limit bursts
when all elfs hit the API simultaneously (Mistral's 1 req/s limit).
- Elf tool calls show as 🦉 [elf] <prompt> (not ⚙ [agent])
- Live 2-line progress beneath the elf label showing what the
elf is currently outputting (grey, auto-updated)
- Agent tool forwards elf streaming events via progress channel
- Progress cleared on turn completion
- elfProgressCh wired from agent tool → TUI
internal/elf/:
- BackgroundElf: runs on own goroutine with independent engine,
history, and provider. No shared mutable state.
- Manager: spawns elfs via router.Select() (picks best arm per
task type), tracks lifecycle, WaitAll(), CancelAll(), Cleanup().
internal/tool/agent/:
- Agent tool: LLM can call 'agent' to spawn sub-agents.
Supports task_type hint for routing, wait/background mode.
5-minute timeout, context cancellation propagated.
Concurrent tool execution:
- Read-only tools (fs.read, fs.grep, fs.glob, etc.) execute in
parallel via goroutines.
- Write tools (bash, fs.write, fs.edit) execute sequentially.
- Partition by tool.IsReadOnly().
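A minimal sketch of the read/write partition, assuming a hardcoded stand-in for tool.IsReadOnly:

```go
package main

import (
	"fmt"
	"sync"
)

// Call is a minimal stand-in for a pending tool call.
type Call struct {
	Name string
	Run  func() string
}

// readOnly mirrors tool.IsReadOnly: fs.read/fs.grep/fs.glob are safe to
// run in parallel; bash/fs.write/fs.edit must stay sequential.
func readOnly(name string) bool {
	switch name {
	case "fs.read", "fs.grep", "fs.glob":
		return true
	}
	return false
}

// execute partitions calls by readOnly, fans the read-only ones out on
// goroutines, then runs the writers one at a time.
func execute(calls []Call) []string {
	var reads, writes []Call
	for _, c := range calls {
		if readOnly(c.Name) {
			reads = append(reads, c)
		} else {
			writes = append(writes, c)
		}
	}
	results := make([]string, len(reads))
	var wg sync.WaitGroup
	for i, c := range reads {
		wg.Add(1)
		go func(i int, c Call) {
			defer wg.Done()
			results[i] = c.Run()
		}(i, c)
	}
	wg.Wait() // all reads finish before any write mutates state
	for _, c := range writes {
		results = append(results, c.Run())
	}
	return results
}

func main() {
	out := execute([]Call{
		{"fs.read", func() string { return "read ok" }},
		{"fs.write", func() string { return "write ok" }},
	})
	fmt.Println(out)
}
```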
TUI: /elf command explains how to use sub-agents.
5 elf tests. Exit criteria: parent spawns 3 background elfs on
different providers, collects and synthesizes results.
- ctrl+o toggles between 10-line truncated and full tool output
- Label shows "ctrl+o to expand" (lowercase)
- Fixed: auto permission mode now sticks: the config default was
  overriding the flag default ("default" → "auto" in config defaults)
SummarizeStrategy: calls LLM to condense older messages into a
summary, preserving key decisions, file changes, tool outputs.
Falls back to truncation on failure. Keeps 6 recent messages.
Tool result persistence: outputs >50K chars saved to disk at
.gnoma/sessions/tool-results/{id}.txt with 2K preview inline.
TUI: /compact command for manual compaction, /clear now resets
engine history. Summarize strategy used by default (with
truncation fallback).
- ❯ flush left for user input, continuation lines indented 2 spaces
- ◆ purple icon for AI responses, continuation indented
- User multiline messages: ❯ first line, indented rest
- Tool output: indented under parent
- System messages: • prefix with multiline indent
- Input area: no extra padding, ❯ at column 0
Switch from textinput to textarea bubble:
- Enter submits message
- Shift+Enter / Ctrl+J inserts newline
- Input area auto-expands from 1 to 10 lines based on content
- Line numbers hidden, prompt preserved
At startup, polls ollama (/api/tags) and llama.cpp (/v1/models) for
available models. Registers each as an arm in the router alongside
the CLI-specified provider.
Discovered: 7 ollama models + 1 llama.cpp model = 9 total arms.
Router can now select from multiple local models based on task type.
Discovery is non-blocking — failures logged and skipped.
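Discovery for the Ollama side might look like this; /api/tags and its models/name fields are the real Ollama endpoint shape, while the function names are illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Subset of Ollama's /api/tags response.
type ollamaTags struct {
	Models []struct {
		Name string `json:"name"`
	} `json:"models"`
}

// parseOllamaTags extracts the model names to register as router arms.
func parseOllamaTags(body []byte) ([]string, error) {
	var tags ollamaTags
	if err := json.Unmarshal(body, &tags); err != nil {
		return nil, err
	}
	names := make([]string, 0, len(tags.Models))
	for _, m := range tags.Models {
		names = append(names, m.Name)
	}
	return names, nil
}

// discoverOllama polls the local daemon with a short timeout. Any failure
// is returned for the caller to log and skip, so startup never blocks.
func discoverOllama() ([]string, error) {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://localhost:11434/api/tags")
	if err != nil {
		return nil, err // daemon not running: log and move on
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	return parseOllamaTags(body)
}

func main() {
	models, err := discoverOllama()
	if err != nil {
		fmt.Println("ollama discovery skipped:", err)
		return
	}
	fmt.Println("registered arms:", models)
}
```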
Engine.InjectMessage() appends messages to history without triggering
a turn. When permission mode or incognito changes, the notification
is injected as a user+assistant pair so the model sees it as context.
Fixes: model now knows permissions changed and will retry tool calls
instead of remembering old denials from previous mode.
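A sketch of the injected pair; the exact notification wording is an assumption:

```go
package main

import "fmt"

type Message struct{ Role, Content string }

// Engine is a minimal stand-in with just the history we need here.
type Engine struct {
	History []Message
}

// InjectMessage appends to history without triggering a turn: nothing is
// sent to the provider until the next real user message.
func (e *Engine) InjectMessage(m Message) {
	e.History = append(e.History, m)
}

// notifyModeChange records a permission-mode change as a user+assistant
// pair, so on the next turn the model sees the change as prior context
// instead of remembering stale denials.
func (e *Engine) notifyModeChange(mode string) {
	e.InjectMessage(Message{"user",
		"[system] Permission mode changed to " + mode +
			". Previous tool denials no longer apply; retry tools if asked."})
	e.InjectMessage(Message{"assistant",
		"Understood. I will re-attempt tools under the new permission mode."})
}

func main() {
	e := &Engine{}
	e.notifyModeChange("auto")
	for _, m := range e.History {
		fmt.Println(m.Role+":", m.Content)
	}
}
```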
When permission mode changes (Shift+Tab or /permission), the system
message now says "previous tool denials no longer apply, retry if
asked" — helps the model understand it should re-attempt tools
instead of remembering old denials from conversation history.
- Default permission mode changed from bypass to default
- Removed mode info from status bar (shown on separator line instead)
- Ctrl+I toggles incognito mode
- Incognito mode: amber/yellow separator lines with 🔒 label
overrides permission mode color when active
- Shift+Tab cycles permission modes
Each permission mode has a distinct color:
- bypass: green, default: blue, plan: teal
- accept_edits: purple, auto: peach, deny: red
Top separator line shows mode label on right side in mode color.
Both separator lines (above/below input) colored to match.
Shift+Tab cycling visually changes the line colors.
- Shift+Tab cycles permission modes: bypass → default → plan →
accept_edits → auto → bypass
- /permission <mode> slash command to set specific mode
- Current mode shown in status bar (🛡 bypass)
- Permission checker wired into TUI config
Tools now go through permission.Checker before executing:
- plan mode: denies all writes (fs.write, bash), allows reads
- bypass mode: allows all (deny rules still enforced)
- default mode: prompts user (pipe: stdin prompt, TUI: auto-approve for now)
- accept_edits: auto-allows file ops, prompts for bash
- deny mode: denies all without allow rules
CLI flags: --permission <mode>, --incognito
Pipe mode: console Y/N prompt on stderr
TUI mode: auto-approve (proper overlay TODO)
Verified: plan mode correctly blocks fs.write, model sees error.
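The mode table above can be condensed into a decision function; the auto-mode behavior shown is an assumption, and real deny rules are evaluated separately:

```go
package main

import "fmt"

type Decision int

const (
	Allow Decision = iota
	Deny
	Prompt
)

// isWrite lists the mutating tools from the notes above.
func isWrite(tool string) bool {
	switch tool {
	case "fs.write", "fs.edit", "bash":
		return true
	}
	return false
}

// check mirrors the mode table. Deny rules (not shown) run before this in
// every mode, including bypass.
func check(mode, tool string) Decision {
	switch mode {
	case "bypass":
		return Allow
	case "plan":
		if isWrite(tool) {
			return Deny // plan mode is read-only
		}
		return Allow
	case "accept_edits":
		if tool == "bash" {
			return Prompt // file ops auto-allowed, bash still asks
		}
		return Allow
	case "auto":
		return Allow // assumption: auto approves everything deny rules permit
	case "deny":
		return Deny
	default: // "default" mode
		return Prompt
	}
}

func main() {
	fmt.Println(check("plan", "fs.write") == Deny)       // plan blocks writes
	fmt.Println(check("plan", "fs.read") == Allow)       // reads pass
	fmt.Println(check("accept_edits", "bash") == Prompt) // bash still prompts
}
```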
- Fixed: chat content no longer overflows past allocated height.
Lines are measured for physical width and hard-truncated to
exactly the chat area height. Input + status bar always visible.
- Header scrolls with chat (not pinned), only input/status fixed
- Git branch in status bar (green, via git rev-parse)
- Alt screen mode — terminal scrollback disabled
- Mouse wheel + PgUp/PgDown scroll within TUI
- New EventToolResult: tool output as dimmed indented block
- Separator lines above/below input, no status bar backgrounds
Switch to bubbles textinput for proper keyboard handling (space,
cursor, backspace, clipboard all work correctly).
Improved design:
- ❯ user prompt, ◆ assistant prefix, ✗ error prefix
- Word wrapping for long responses
- Separator line between chat and input
- Streaming indicator (● streaming) in status bar
- Better color scheme (lighter purples/blues)
- Welcome message with usage hints
TUI launches when no piped input detected. Features:
- Chat panel with scrollable message history
- Streaming response with animated cursor
- User/assistant/tool/error message styling (purple theme)
- Status bar: provider, model, token count, turn count
- Input with basic editing
- Slash commands: /quit, /clear, /incognito (stub)
- Ctrl+C cancels current turn or exits
Built on charm.land/bubbletea/v2, charm.land/lipgloss/v2.
Session interface decouples TUI from engine via channels.
Pipe mode still works for non-interactive use.
System prompt gets a one-line summary (~200 chars): OS, CPU, RAM,
GPU, top runtimes, package count, PATH command count.
Full details available on demand via system_info tool with sections:
runtimes, packages, tools, hardware, all. LLM calls the tool when
it needs specifics — saves thousands of tokens per request.
Hardware detection: CPU model, core count, total RAM, GPU via lspci.
Package manager: pacman/apt/dnf/brew with dev package filtering.
PATH scan: 5541 executables. Runtime probing: 22 detected.
No hardcoded tool lists. Scans all $PATH directories for executables
(5541 on this system), then probes known runtime patterns for version
info (23 detected: Go, Python, Node, Rust, Ruby, Perl, Java, Dart,
Deno, Bun, Lua, LuaJIT, Guile, GCC, Clang, NASM + package managers).
System prompt includes: OS, shell, runtime versions, and notable
tools (git, docker, kubectl, fzf, rg, etc.) from the full PATH scan.
Total executable count reported so the LLM knows the full scope.
Milestones updated: M6 fixed context prefix, M12 multimodality.
Thin wrapper over OpenAI adapter with custom base URLs.
Ollama: localhost:11434/v1, llama.cpp: localhost:8080/v1.
No API key required for local providers.
Fixed: initial tool call args captured on first chunk
(Ollama sends complete args in one chunk, not as deltas).
Live verified: text + tool calling with qwen3:14b on Ollama.
Five providers now live: Mistral, Anthropic, OpenAI, Google, Ollama.