OpenAI-compatible. Every model.
Point your existing OpenAI SDK at https://llm.smoo.ai/v1 and ship. Your virtual key is the only thing that changes.
Quickstart
Sign up, create a virtual key from your dashboard, drop it in. Same SDKs, same shapes — just a different base URL.
cURL

```shell
curl https://llm.smoo.ai/v1/chat/completions \
  -H "Authorization: Bearer $SMOOAI_LLM_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-haiku-4-5",
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

Python

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["SMOOAI_LLM_KEY"],
    base_url="https://llm.smoo.ai/v1",
)
resp = client.chat.completions.create(
    model="claude-haiku-4-5",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```

TypeScript

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.SMOOAI_LLM_KEY,
  baseURL: 'https://llm.smoo.ai/v1',
});
const resp = await client.chat.completions.create({
  model: 'claude-haiku-4-5',
  messages: [{ role: 'user', content: 'ping' }],
});
console.log(resp.choices[0].message.content);
```

Why it's smoother than DIY
Every frontier model
OpenAI, Anthropic, Groq, Gemini, and ElevenLabs out of the gate. Provider expansion (xAI, DeepSeek, Mistral, Bedrock) ships as soon as keys are added.
Zero proxy overhead
Direct calls to the gateway. No Lambda cold start, no smooai middleware in the path. The same LiteLLM auth and rate limiting you would get self-hosting.
Built-in playground
Test prompts in /chat_ui without writing code. SSO from your smooai login — same workspace, no separate auth.
Drop-in compatibility
OpenAI SDK, LangChain, LlamaIndex, Vercel AI SDK — anything that takes a base URL works unchanged.
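As a minimal sketch of zero-code repointing: the official OpenAI SDKs (and many frameworks built on them) read the `OPENAI_BASE_URL` and `OPENAI_API_KEY` environment variables, so a tool can often be redirected to the gateway without touching its code. This assumes the framework defers to the SDK's default configuration; `SMOOAI_LLM_KEY` is the virtual key from your dashboard.

```python
import os

# Repoint any OpenAI-SDK-based tool at the gateway via environment
# variables instead of code changes. The SDK picks these up on client
# construction when no explicit api_key/base_url is passed.
os.environ["OPENAI_BASE_URL"] = "https://llm.smoo.ai/v1"
os.environ["OPENAI_API_KEY"] = os.environ.get("SMOOAI_LLM_KEY", "")
```

Setting these in your shell profile or deployment config achieves the same thing for every process that inherits the environment.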
Live API docs
Full Swagger UI + OpenAPI spec at /docs. Auto-generated client types for every supported model.
Streaming, tool use, JSON mode
Everything the upstream model supports works through the gateway. Provider-specific kwargs the OpenAI shape does not cover are passed through untouched.
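As a concrete sketch, a streaming request with a tool definition uses the standard Chat Completions fields; the `get_weather` tool below is hypothetical, included only to show the shape the gateway forwards upstream.

```python
import json

# Streaming + tool-use request body in the OpenAI Chat Completions shape.
# The gateway forwards these fields to the upstream model as-is.
payload = {
    "model": "claude-haiku-4-5",
    "stream": True,  # server-sent events instead of one final response
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool for illustration
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}
body = json.dumps(payload)
```

POST `body` to /v1/chat/completions with your virtual key, exactly as in the quickstart; with `"stream": true` the response arrives as server-sent events.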