Thin wrapper over the OpenAI adapter with custom base URLs: Ollama exposes an OpenAI-compatible API at localhost:11434/v1, llama.cpp at localhost:8080/v1. Local providers require no API key. Fixed: initial tool-call arguments are now captured from the first chunk, since Ollama sends the complete arguments in a single chunk rather than as deltas. Live-verified: text generation and tool calling with qwen3:14b on Ollama. Five providers are now live: Mistral, Anthropic, OpenAI, Google, and Ollama.