gnoma/internal/provider
vikingowl 54ae24d11c feat: add OpenAI-compat adapter for Ollama and llama.cpp
Thin wrapper over the OpenAI adapter with custom base URLs.
Ollama: localhost:11434/v1, llama.cpp: localhost:8080/v1.
No API key is required for local providers.

Fixed: initial tool-call args are now captured on the first chunk
(Ollama sends complete args in a single chunk rather than as deltas).

Live verified: text + tool calling with qwen3:14b on Ollama.
Five providers now live: Mistral, Anthropic, OpenAI, Google, Ollama.
2026-04-03 13:47:30 +02:00