# OpenSyntax

## Providers

OpenSyntax supports hosted model APIs and local OpenAI-compatible servers. Use `opensyntax auth` to connect and validate a provider.

| Provider | Auth | Environment variable | Example models | Notes |
|---|---|---|---|---|
| OpenAI | API key | `OPENAI_API_KEY` | gpt-4.1-mini, gpt-4o-mini | OpenAI-compatible chat completions. |
| Anthropic Claude | API key | `ANTHROPIC_API_KEY` | claude-3-5-sonnet-latest | Uses the Anthropic Messages API. |
| Google Gemini | API key | `GEMINI_API_KEY`, `GOOGLE_API_KEY` | gemini-1.5-pro | Uses the Gemini generate-content endpoint. |
| OpenRouter | API key | `OPENROUTER_API_KEY` | openai/gpt-4o-mini | OpenAI-compatible endpoint. |
| NVIDIA NIM | API key | `NVIDIA_API_KEY` | meta/llama-3.1-70b-instruct | Base URL: https://integrate.api.nvidia.com/v1. |
| Groq | API key | `GROQ_API_KEY` | llama-3.1-70b-versatile | OpenAI-compatible low-latency API. |
| DeepSeek | API key | `DEEPSEEK_API_KEY` | deepseek-chat | OpenAI-compatible endpoint. |
| Mistral | API key | `MISTRAL_API_KEY` | mistral-large-latest | OpenAI-compatible endpoint. |
| Ollama | None | Not required | llama3.1, mistral | Requires a local Ollama server at localhost:11434. |
| LM Studio | None | Not required | local-model | Requires the LM Studio server at localhost:1234. |
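The table above amounts to a lookup from provider to expected environment variables and base URL. A minimal sketch of that resolution logic in Python — the `PROVIDERS` map and `resolve_credentials` helper are illustrative assumptions, not OpenSyntax internals:

```python
import os

# Hypothetical lookup mirroring the provider table above (subset shown).
# Each entry: (env vars checked in order, default base URL or None).
PROVIDERS = {
    "openai": (["OPENAI_API_KEY"], "https://api.openai.com/v1"),
    "nvidia": (["NVIDIA_API_KEY"], "https://integrate.api.nvidia.com/v1"),
    "gemini": (["GEMINI_API_KEY", "GOOGLE_API_KEY"], None),
    "ollama": ([], "http://localhost:11434"),
    "lmstudio": ([], "http://localhost:1234"),
}

def resolve_credentials(provider: str) -> tuple:
    """Return (api_key, base_url); api_key is None for local servers."""
    env_vars, base_url = PROVIDERS[provider]
    for var in env_vars:
        key = os.environ.get(var)
        if key:
            return key, base_url
    if not env_vars:
        # Local servers (Ollama, LM Studio) need no key.
        return None, base_url
    raise RuntimeError(f"No API key found for {provider}; set one of {env_vars}")
```

Note that Gemini checks `GEMINI_API_KEY` first and falls back to `GOOGLE_API_KEY`, matching the order in the table.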

## Provider commands

```shell
# Authenticate a provider, then list providers and models, then run diagnostics
opensyntax auth --provider nvidia
opensyntax providers
opensyntax models nvidia
opensyntax doctor
```
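Most hosted providers in the table speak the OpenAI-compatible chat-completions protocol. As a rough sketch of the request body such an endpoint expects (the model name is illustrative; the actual HTTP call to `<base_url>/chat/completions` is omitted):

```python
import json

def chat_completion_request(model: str, prompt: str) -> str:
    """Build the JSON body for a POST to an OpenAI-compatible
    /chat/completions endpoint."""
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return json.dumps(body)
```

Because the wire format is shared, the same body works against OpenAI, OpenRouter, NVIDIA NIM, Groq, DeepSeek, Mistral, and local Ollama or LM Studio servers; only the base URL and API key change.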
**Authentication note:** OpenSyntax only shows the authentication methods that are actually supported for each provider. Current hosted providers use API keys or detected environment variables. Browser OAuth and device-code flows are implemented only for providers that expose real public CLI OAuth/device endpoints; providers without such endpoints do not advertise browser login.
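The "detected environment variables" behavior can be sketched as a simple scan over the expected key names — `ENV_KEYS` and `detected_providers` are hypothetical names for illustration, not part of OpenSyntax:

```python
import os

# Illustrative subset of the env vars from the provider table.
ENV_KEYS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "gemini": ["GEMINI_API_KEY", "GOOGLE_API_KEY"],
    "groq": ["GROQ_API_KEY"],
}

def detected_providers(environ=os.environ) -> list:
    """Providers for which at least one expected env var is set."""
    return [provider for provider, keys in ENV_KEYS.items()
            if any(environ.get(k) for k in keys)]
```

A tool like `opensyntax doctor` would surface this kind of detection so users can see which providers are ready to use before authenticating.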