Providers
OpenSyntax supports hosted model APIs and local OpenAI-compatible servers. Use `opensyntax auth` to connect and validate a provider.
| Provider | Auth | Environment variable | Example models | Notes |
|---|---|---|---|---|
| OpenAI | API key | OPENAI_API_KEY | gpt-4.1-mini, gpt-4o-mini | OpenAI-compatible chat completions. |
| Anthropic Claude | API key | ANTHROPIC_API_KEY | claude-3-5-sonnet-latest | Uses Anthropic Messages API. |
| Google Gemini | API key | GEMINI_API_KEY, GOOGLE_API_KEY | gemini-1.5-pro | Uses the Gemini generateContent endpoint. |
| OpenRouter | API key | OPENROUTER_API_KEY | openai/gpt-4o-mini | OpenAI-compatible endpoint. |
| NVIDIA NIM | API key | NVIDIA_API_KEY | meta/llama-3.1-70b-instruct | Base URL: https://integrate.api.nvidia.com/v1. |
| Groq | API key | GROQ_API_KEY | llama-3.1-70b-versatile | OpenAI-compatible low-latency API. |
| DeepSeek | API key | DEEPSEEK_API_KEY | deepseek-chat | OpenAI-compatible endpoint. |
| Mistral | API key | MISTRAL_API_KEY | mistral-large-latest | OpenAI-compatible endpoint. |
| Ollama | None | Not required | llama3.1, mistral | Requires local Ollama server at localhost:11434. |
| LM Studio | None | Not required | local-model | Requires LM Studio server at localhost:1234. |
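The key-resolution rules in the table can be sketched as a small lookup. This is a minimal illustration, not OpenSyntax's internal implementation; the `PROVIDER_ENV_VARS` mapping and `resolve_api_key` helper are hypothetical names, and the variable lists simply mirror the table above (Gemini checks GEMINI_API_KEY before GOOGLE_API_KEY; local servers need no key).

```python
import os

# Provider -> environment variable(s), mirroring the table above.
# Ollama and LM Studio run locally and require no key.
PROVIDER_ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GEMINI_API_KEY", "GOOGLE_API_KEY"],
    "openrouter": ["OPENROUTER_API_KEY"],
    "nvidia": ["NVIDIA_API_KEY"],
    "groq": ["GROQ_API_KEY"],
    "deepseek": ["DEEPSEEK_API_KEY"],
    "mistral": ["MISTRAL_API_KEY"],
    "ollama": [],
    "lmstudio": [],
}

def resolve_api_key(provider, env=None):
    """Return the first non-empty key found for a provider, or None.

    Local providers (empty variable list) always resolve to None.
    """
    env = os.environ if env is None else env
    for var in PROVIDER_ENV_VARS[provider]:
        if env.get(var):
            return env[var]
    return None
```

A provider with several candidate variables takes the first one that is set, so an explicit GEMINI_API_KEY wins over a shared GOOGLE_API_KEY.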
Provider commands
opensyntax auth --provider nvidia
opensyntax providers
opensyntax models nvidia
opensyntax doctor

Authentication note: OpenSyntax only shows the authentication methods that are actually supported for each provider. Hosted providers currently authenticate with API keys or detected environment variables. Browser OAuth and device-code helpers are implemented only for providers that expose real, public CLI OAuth/device endpoints; providers without them are never offered a browser login.
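For the local servers above, a request follows the standard OpenAI-compatible chat-completions shape. The sketch below only assembles the URL, headers, and JSON body (no network I/O); `LOCAL_BASE_URLS` and `build_chat_request` are illustrative names, assuming the default Ollama and LM Studio ports from the table and that both expose an OpenAI-compatible `/v1` API.

```python
import json

# Default base URLs for the local servers in the provider table.
LOCAL_BASE_URLS = {
    "ollama": "http://localhost:11434/v1",
    "lmstudio": "http://localhost:1234/v1",
}

def build_chat_request(base_url, model, prompt, api_key=None):
    """Assemble (url, headers, body) for a POST to the OpenAI-compatible
    /chat/completions endpoint. Local servers typically need no key."""
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return base_url.rstrip("/") + "/chat/completions", headers, json.dumps(body)
```

The same helper works for a hosted OpenAI-compatible endpoint (e.g. the NVIDIA NIM base URL from the table) by passing the API key, which adds a Bearer Authorization header.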