
gnoma

Provider-agnostic agentic coding assistant in Go. Named after the northern pygmy-owl (Glaucidium gnoma). Agents are called elfs, after the elf owl.

Build

make build          # ./bin/gnoma
make install        # $GOPATH/bin/gnoma

Providers

Anthropic

export ANTHROPIC_API_KEY=sk-ant-...
./bin/gnoma --provider anthropic
./bin/gnoma --provider anthropic --model claude-opus-4-5-20251001

Integration tests hit the real API; keep a key in the environment:

go test -tags integration ./internal/provider/...

OpenAI

export OPENAI_API_KEY=sk-proj-...
./bin/gnoma --provider openai
./bin/gnoma --provider openai --model gpt-4o

Mistral

export MISTRAL_API_KEY=...
./bin/gnoma --provider mistral

Google (Gemini)

export GEMINI_API_KEY=AIza...
./bin/gnoma --provider google
./bin/gnoma --provider google --model gemini-2.0-flash

Ollama (local)

Start Ollama and pull a model, then:

./bin/gnoma --provider ollama --model gemma4:latest
./bin/gnoma --provider ollama --model qwen3:8b     # default if --model omitted

Default endpoint: http://localhost:11434/v1. Override via config or env:

# .gnoma/config.toml
[provider]
default = "ollama"
model   = "gemma4:latest"

[provider.endpoints]
ollama = "http://myhost:11434/v1"

llama.cpp (local)

Start the llama.cpp server:

llama-server --model /path/to/model.gguf --port 8080 --ctx-size 8192

Then:

./bin/gnoma --provider llamacpp
# model name is taken from the server's /v1/models response

Default endpoint: http://localhost:8080/v1. Override:

[provider.endpoints]
llamacpp = "http://localhost:9090/v1"

Session Persistence

Conversations are auto-saved to .gnoma/sessions/ after each completed turn. On a crash you lose at most the current in-flight turn; all previously completed turns are safe.

Resume a session

gnoma --resume              # interactive session picker (↑↓ navigate, Enter load, Esc cancel)
gnoma --resume <id>         # restore directly by ID
gnoma -r                    # shorthand

Inside the TUI:

/resume                     # open picker
/resume <id>                # restore by ID

Incognito mode

gnoma --incognito           # no session saved, no quality scores updated

Toggle at runtime with Ctrl+X.

Session config

[session]
max_keep = 20   # how many sessions to retain per project (default: 20)

Sessions are stored per-project under .gnoma/sessions/<id>/. Quality scores (EMA routing data) are stored globally at ~/.config/gnoma/quality.json.
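An exponential moving average folds each new quality sample into the running score so recent results dominate. A minimal sketch, assuming a 0..1 score scale and an alpha of 0.2 (both illustrative; gnoma's actual weighting may differ):

```go
package main

import "fmt"

// updateEMA blends a new sample into the running score:
// higher alpha weights recent samples more heavily.
func updateEMA(old, sample, alpha float64) float64 {
	return alpha*sample + (1-alpha)*old
}

func main() {
	score := 0.5 // neutral starting score
	for _, sample := range []float64{1.0, 1.0, 0.0} {
		score = updateEMA(score, sample, 0.2)
	}
	fmt.Printf("%.3f\n", score)
}
```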


Config

Config is read from three sources; later sources override earlier ones:

  1. ~/.config/gnoma/config.toml — global
  2. .gnoma/config.toml — project-local (next to go.mod / .git)
  3. Environment variables
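Resolution of a single key can be pictured as a first-non-empty scan from highest to lowest precedence. The helper below is an illustrative sketch, not gnoma's actual resolution code:

```go
package main

import (
	"fmt"
	"os"
)

// firstNonEmpty returns the first non-empty value; callers list
// sources from highest to lowest precedence.
func firstNonEmpty(values ...string) string {
	for _, v := range values {
		if v != "" {
			return v
		}
	}
	return ""
}

func main() {
	provider := firstNonEmpty(
		os.Getenv("GNOMA_PROVIDER"), // environment (highest)
		"ollama",                    // .gnoma/config.toml (project-local)
		"anthropic",                 // ~/.config/gnoma/config.toml (global)
	)
	fmt.Println(provider)
}
```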

Example .gnoma/config.toml:

[provider]
default = "anthropic"
model   = "claude-sonnet-4-6"

[provider.api_keys]
anthropic = "${ANTHROPIC_API_KEY}"

[provider.endpoints]
ollama   = "http://localhost:11434/v1"
llamacpp = "http://localhost:8080/v1"

[permission]
mode = "auto"   # auto | accept_edits | bypass | deny | plan

Environment variable overrides: GNOMA_PROVIDER, GNOMA_MODEL.


Testing

make test               # unit tests
make test-integration   # integration tests (require real API keys)
make cover              # coverage report → coverage.html
make lint               # golangci-lint
make check              # fmt + vet + lint + test

Integration tests are gated behind //go:build integration and skipped by default.
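A gated test file follows the standard build-constraint pattern; the package and test names below are illustrative, not taken from the repo:

```go
//go:build integration

package provider_test

import (
	"os"
	"testing"
)

// Compiled and run only with: go test -tags integration ./internal/provider/...
func TestProviderLive(t *testing.T) {
	if os.Getenv("ANTHROPIC_API_KEY") == "" {
		t.Skip("ANTHROPIC_API_KEY not set")
	}
	// ... exercise the real API here ...
}
```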
