Translation Providers
@lingo.dev/compiler supports multiple translation providers—use Lingo.dev Engine for the best experience, or connect directly to LLM providers.
Lingo.dev Engine (Recommended)
Lingo.dev Engine is the easiest and most powerful way to translate your app. It provides:
- Dynamic model selection - Automatically routes to the best model for each language pair
- Automated fallbacks - Switches to backup models if the primary fails
- Translation memory - Considers past translations for consistency
- Glossary support - Maintains domain-specific terminology
- Cost optimization - Uses efficient models where appropriate
Setup
- Sign up at lingo.dev
- Authenticate: npx lingo.dev@latest login
- Configure: { models: "lingo.dev" }
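In practice, the models option sits inside the compiler configuration for your framework. A minimal sketch for a Next.js project (assuming the lingo.dev/compiler entry point and its next wrapper; the locale values are placeholders, so adapt them and the wrapper to your setup):

```typescript
// next.config.ts — wrap the existing Next.js config with the compiler.
// sourceLocale/targetLocales below are example values.
import lingoCompiler from "lingo.dev/compiler";

const nextConfig = {
  /* your existing Next.js config */
};

export default lingoCompiler.next({
  sourceLocale: "en",
  targetLocales: ["es", "fr", "de"],
  models: "lingo.dev",
})(nextConfig);
```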
Pricing: a free Hobby tier is available and is sufficient for most projects.
Manual API Key
If browser authentication fails (e.g., Brave blocking the login flow), add your API key to .env:
LINGODOTDEV_API_KEY=your_key_here
Find your API key in project settings at lingo.dev.
Direct LLM Providers
Connect directly to LLM providers for full control over model selection and costs.
Supported Providers
| Provider | Model String Format | Environment Variable | Get API Key |
|---|---|---|---|
| OpenAI | openai:gpt-4o | OPENAI_API_KEY | platform.openai.com |
| Anthropic | anthropic:claude-3-5-sonnet | ANTHROPIC_API_KEY | console.anthropic.com |
| Google | google:gemini-2.0-flash | GOOGLE_API_KEY | ai.google.dev |
| Groq | groq:llama-3.3-70b-versatile | GROQ_API_KEY | console.groq.com |
| Mistral | mistral:mistral-large | MISTRAL_API_KEY | console.mistral.ai |
| OpenRouter | openrouter:anthropic/claude-3.5-sonnet | OPENROUTER_API_KEY | openrouter.ai |
| Ollama | ollama:llama3.2 | (none) | ollama.com (local) |
Simple Configuration
Use a single provider for all translations:
{
models: {
"*:*": "groq:llama-3.3-70b-versatile"
}
}
Locale-Pair Mapping
Use different providers for different language pairs:
{
models: {
// Specific pairs
"en:es": "groq:llama-3.3-70b-versatile", // Fast & cheap for Spanish
"en:de": "google:gemini-2.0-flash", // Good quality for German
"en:ja": "openai:gpt-4o", // High quality for Japanese
// Wildcards
"*:fr": "anthropic:claude-3-5-sonnet", // All sources → French
"en:*": "google:gemini-2.0-flash", // English → all targets
// Fallback
"*:*": "lingo.dev", // Everything else
}
}
Pattern matching priority (highest first):
- Exact match ("en:es")
- Source wildcard ("*:es")
- Target wildcard ("en:*")
- Global wildcard ("*:*")
API Key Setup
Add provider API keys to .env:
# Lingo.dev Engine
LINGODOTDEV_API_KEY=your_key
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Google
GOOGLE_API_KEY=...
# Groq
GROQ_API_KEY=gsk_...
# Mistral
MISTRAL_API_KEY=...
# OpenRouter
OPENROUTER_API_KEY=sk-or-...
Never commit .env files—add to .gitignore.
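As a preflight sanity check for a multi-provider config, a small helper (hypothetical, not part of the compiler) can report which of the variables above are still missing from the environment:

```typescript
// Map each provider prefix to the environment variable it requires.
// Ollama is omitted because it needs no API key.
const KEY_FOR_PROVIDER: Record<string, string> = {
  "lingo.dev": "LINGODOTDEV_API_KEY",
  openai: "OPENAI_API_KEY",
  anthropic: "ANTHROPIC_API_KEY",
  google: "GOOGLE_API_KEY",
  groq: "GROQ_API_KEY",
  mistral: "MISTRAL_API_KEY",
  openrouter: "OPENROUTER_API_KEY",
};

// Return the sorted list of env vars required by `models` but absent in `env`
// (pass process.env in real usage).
function missingKeys(
  models: Record<string, string>,
  env: Record<string, string | undefined>,
): string[] {
  const missing = new Set<string>();
  for (const model of Object.values(models)) {
    const provider = model.split(":")[0];
    const key = KEY_FOR_PROVIDER[provider];
    if (key && !env[key]) missing.add(key);
  }
  return [...missing].sort();
}
```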
Model Selection Guide
For Development
Use the pseudotranslator, which is instant, free, and needs no API keys:
{
dev: {
usePseudotranslator: true,
}
}
For Budget-Conscious Projects
Groq - Fast inference, generous free tier:
{
models: {
"*:*": "groq:llama-3.3-70b-versatile",
}
}
Google Gemini - Competitive pricing, good quality:
{
models: {
"*:*": "google:gemini-2.0-flash",
}
}
For High Quality
OpenAI GPT-4o - Best overall quality:
{
models: {
"*:*": "openai:gpt-4o",
}
}
Anthropic Claude - Excellent for nuanced translations:
{
models: {
"*:*": "anthropic:claude-3-5-sonnet",
}
}
For Local/Offline
Ollama - Run models locally:
{
models: {
"*:*": "ollama:llama3.2",
}
}
Install Ollama and pull a model:
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2
Mixed Strategy (Recommended)
Optimize costs by using different models per language:
{
models: {
// Fast & cheap for Romance languages
"en:es": "groq:llama-3.3-70b-versatile",
"en:fr": "groq:llama-3.3-70b-versatile",
"en:pt": "groq:llama-3.3-70b-versatile",
// Higher quality for complex languages
"en:ja": "openai:gpt-4o",
"en:zh": "openai:gpt-4o",
"en:ar": "openai:gpt-4o",
// Good balance for European languages
"en:de": "google:gemini-2.0-flash",
"en:nl": "google:gemini-2.0-flash",
// Fallback
"*:*": "lingo.dev",
}
}
Custom Translation Prompts
Customize the translation instruction sent to LLMs:
{
models: "lingo.dev",
prompt: `Translate from {SOURCE_LOCALE} to {TARGET_LOCALE}.
Guidelines:
- Use a professional tone
- Preserve all technical terms
- Do not translate brand names
- Maintain formatting (bold, italic, etc.)
- Use gender-neutral language where possible`
}
Available placeholders:
- {SOURCE_LOCALE}: Source locale code (e.g., "en")
- {TARGET_LOCALE}: Target locale code (e.g., "es")
The compiler automatically appends context about the text being translated (file, component, surrounding elements).
Provider-Specific Models
OpenAI
"openai:gpt-4o" // Best quality
"openai:gpt-4o-mini" // Faster, cheaper
"openai:gpt-4-turbo" // Previous generation
Anthropic
"anthropic:claude-3-5-sonnet" // Best quality
"anthropic:claude-3-haiku" // Faster, cheaper
"anthropic:claude-3-opus" // Highest quality (expensive)
"google:gemini-2.0-flash" // Fast, efficient
"google:gemini-1.5-pro" // Higher quality
Groq
"groq:llama-3.3-70b-versatile" // Fast inference
"groq:mixtral-8x7b-32768" // Good quality
Mistral
"mistral:mistral-large" // Best quality
"mistral:mistral-small" // Faster, cheaper
OpenRouter
OpenRouter provides access to 100+ models. Use model IDs from openrouter.ai/models:
"openrouter:anthropic/claude-3.5-sonnet"
"openrouter:google/gemini-2.0-flash"
"openrouter:meta-llama/llama-3.3-70b"
Ollama
Use any Ollama model:
"ollama:llama3.2"
"ollama:mistral"
"ollama:qwen2.5"
List available models: ollama list
OpenAI-Compatible Providers
You can use any OpenAI-compatible API by setting OPENAI_BASE_URL to point to the provider's endpoint. This works with providers like Nebius, Together AI, Anyscale, and Fireworks.
Note: Nebius is the only OpenAI-compatible provider officially tested with the Lingo.dev compiler. The other providers listed below expose OpenAI-compatible APIs but are not officially tested.
Setup
- Set the environment variables:
OPENAI_API_KEY=<your-provider-api-key>
OPENAI_BASE_URL=<provider-api-endpoint>
- Use the openai: prefix with the provider's model ID:
{
models: {
"*:*": "openai:provider-model-id"
}
}
Supported Providers
| Provider | Base URL | Example Model |
|---|---|---|
| Nebius | https://api.tokenfactory.nebius.com/v1 | google/gemma-2-9b-it-fast |
| Together AI | https://api.together.xyz/v1 | meta-llama/Llama-3-70b-chat-hf |
| Anyscale | https://api.endpoints.anyscale.com/v1 | meta-llama/Llama-2-70b-chat-hf |
| Fireworks | https://api.fireworks.ai/inference/v1 | accounts/fireworks/models/llama-v3-70b-instruct |
Common Questions
Which provider should I use? Start with Lingo.dev Engine for simplicity. For full control and cost optimization, use locale-pair mapping with multiple providers.
Do I need API keys in production?
No. Use buildMode: "cache-only" in production—translations are pre-generated. See Build Modes.
Can I mix providers? Yes. Use locale-pair mapping to route different language pairs to different providers.
What if my API key is invalid?
The compiler will fail with a clear error message. Check your .env file and ensure the API key is correct for the configured provider.
Can I use custom models? OpenRouter supports 100+ models. Ollama supports any locally-installed model. Other providers are limited to their model catalog.
How do I test without API calls? Enable pseudotranslator in development:
{
dev: { usePseudotranslator: true }
}
What's the cost difference between providers? It varies significantly: Groq offers a generous free tier, OpenAI GPT-4o is premium-priced, and Google Gemini is competitively priced. Check each provider's pricing page.
Next Steps
- Build Modes — Optimize for development vs production
- Configuration Reference — All configuration options
- Best Practices — Recommended provider strategies