Translation Providers

@lingo.dev/compiler supports multiple translation providers—use Lingo.dev Engine for the best experience, or connect directly to LLM providers.

Lingo.dev Engine is the easiest and most powerful way to translate your app. It provides:

  • Dynamic model selection - Automatically routes to the best model for each language pair
  • Automated fallbacks - Switches to backup models if the primary model fails
  • Translation memory - Considers past translations for consistency
  • Glossary support - Maintains domain-specific terminology
  • Cost optimization - Uses efficient models where appropriate

Setup

  1. Sign up at lingo.dev
  2. Authenticate:
    npx lingo.dev@latest login
    
  3. Configure:
    {
      models: "lingo.dev"
    }
    

Pricing: A free Hobby tier is available and is sufficient for most projects.
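Putting the steps together, a next.config.ts using Lingo.dev Engine might look like the sketch below. The `lingo.dev/compiler` import and `lingoCompiler.next` wrapper are assumed from the compiler's typical Next.js integration, and `sourceLocale`/`targetLocales` are illustrative option names; verify both against your framework's setup guide.

```typescript
// next.config.ts: hedged sketch, not a definitive integration
import type { NextConfig } from "next";
import lingoCompiler from "lingo.dev/compiler";

const nextConfig: NextConfig = {};

export default lingoCompiler.next({
  sourceLocale: "en",          // assumed option name
  targetLocales: ["es", "de"], // assumed option name
  models: "lingo.dev",         // route all translations through Lingo.dev Engine
})(nextConfig);
```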

Manual API Key

If browser authentication fails (e.g., Brave blocking the login flow), add your API key to .env manually:

LINGODOTDEV_API_KEY=your_key_here

Find your API key in project settings at lingo.dev.

Direct LLM Providers

Connect directly to LLM providers for full control over model selection and costs.

Supported Providers

| Provider   | Model String Format                     | Environment Variable | Get API Key           |
|------------|-----------------------------------------|----------------------|-----------------------|
| OpenAI     | openai:gpt-4o                           | OPENAI_API_KEY       | platform.openai.com   |
| Anthropic  | anthropic:claude-3-5-sonnet             | ANTHROPIC_API_KEY    | console.anthropic.com |
| Google     | google:gemini-2.0-flash                 | GOOGLE_API_KEY       | ai.google.dev         |
| Groq       | groq:llama-3.3-70b-versatile            | GROQ_API_KEY         | console.groq.com      |
| Mistral    | mistral:mistral-large                   | MISTRAL_API_KEY      | console.mistral.ai    |
| OpenRouter | openrouter:anthropic/claude-3.5-sonnet  | OPENROUTER_API_KEY   | openrouter.ai         |
| Ollama     | ollama:llama3.2                         | (none)               | ollama.com (local)    |

Simple Configuration

Use a single provider for all translations:

{
  models: {
    "*:*": "groq:llama-3.3-70b-versatile"
  }
}

Locale-Pair Mapping

Use different providers for different language pairs:

{
  models: {
    // Specific pairs
    "en:es": "groq:llama-3.3-70b-versatile",    // Fast & cheap for Spanish
    "en:de": "google:gemini-2.0-flash",         // Good quality for German
    "en:ja": "openai:gpt-4o",                   // High quality for Japanese

    // Wildcards
    "*:fr": "anthropic:claude-3-5-sonnet",      // All sources → French
    "en:*": "google:gemini-2.0-flash",          // English → all targets

    // Fallback
    "*:*": "lingo.dev",                         // Everything else
  }
}

Pattern matching priority:

  1. Exact match ("en:es")
  2. Source wildcard ("*:es")
  3. Target wildcard ("en:*")
  4. Global wildcard ("*:*")
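The resolution order above can be sketched as a small lookup function. `resolveModel` is hypothetical, not part of the compiler's public API; it only illustrates the documented priority.

```typescript
// Hypothetical sketch of the documented locale-pair resolution order.
type ModelMap = Record<string, string>;

function resolveModel(models: ModelMap, source: string, target: string): string | undefined {
  const candidates = [
    `${source}:${target}`, // 1. exact match
    `*:${target}`,         // 2. source wildcard
    `${source}:*`,         // 3. target wildcard
    "*:*",                 // 4. global wildcard
  ];
  for (const key of candidates) {
    if (models[key]) return models[key];
  }
  return undefined;
}

const models: ModelMap = {
  "en:es": "groq:llama-3.3-70b-versatile",
  "*:fr": "anthropic:claude-3-5-sonnet",
  "en:*": "google:gemini-2.0-flash",
  "*:*": "lingo.dev",
};

console.log(resolveModel(models, "en", "es")); // exact pair wins
console.log(resolveModel(models, "de", "fr")); // "*:fr" source wildcard
console.log(resolveModel(models, "en", "ja")); // "en:*" target wildcard
console.log(resolveModel(models, "de", "ja")); // "*:*" global fallback
```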

API Key Setup

Add provider API keys to .env:

# Lingo.dev Engine
LINGODOTDEV_API_KEY=your_key

# OpenAI
OPENAI_API_KEY=sk-...

# Anthropic
ANTHROPIC_API_KEY=sk-ant-...

# Google
GOOGLE_API_KEY=...

# Groq
GROQ_API_KEY=gsk_...

# Mistral
MISTRAL_API_KEY=...

# OpenRouter
OPENROUTER_API_KEY=sk-or-...

Never commit .env files—add to .gitignore.

Model Selection Guide

For Development

Use pseudotranslator—instant, free, no API keys:

{
  dev: {
    usePseudotranslator: true,
  }
}

For Budget-Conscious Projects

Groq - Fast inference, generous free tier:

{
  models: {
    "*:*": "groq:llama-3.3-70b-versatile",
  }
}

Google Gemini - Competitive pricing, good quality:

{
  models: {
    "*:*": "google:gemini-2.0-flash",
  }
}

For High Quality

OpenAI GPT-4o - Best overall quality:

{
  models: {
    "*:*": "openai:gpt-4o",
  }
}

Anthropic Claude - Excellent for nuanced translations:

{
  models: {
    "*:*": "anthropic:claude-3-5-sonnet",
  }
}

For Local/Offline

Ollama - Run models locally:

{
  models: {
    "*:*": "ollama:llama3.2",
  }
}

Install Ollama and pull a model:

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2

Cost Optimization

Optimize costs by using different models per language:

{
  models: {
    // Fast & cheap for Romance languages
    "en:es": "groq:llama-3.3-70b-versatile",
    "en:fr": "groq:llama-3.3-70b-versatile",
    "en:pt": "groq:llama-3.3-70b-versatile",

    // Higher quality for complex languages
    "en:ja": "openai:gpt-4o",
    "en:zh": "openai:gpt-4o",
    "en:ar": "openai:gpt-4o",

    // Good balance for European languages
    "en:de": "google:gemini-2.0-flash",
    "en:nl": "google:gemini-2.0-flash",

    // Fallback
    "*:*": "lingo.dev",
  }
}

Custom Translation Prompts

Customize the translation instruction sent to LLMs:

{
  models: "lingo.dev",
  prompt: `Translate from {SOURCE_LOCALE} to {TARGET_LOCALE}.

Guidelines:
- Use a professional tone
- Preserve all technical terms
- Do not translate brand names
- Maintain formatting (bold, italic, etc.)
- Use gender-neutral language where possible`
}

Available placeholders:

  • {SOURCE_LOCALE}: Source locale code (e.g., "en")
  • {TARGET_LOCALE}: Target locale code (e.g., "es")

The compiler automatically appends context about the text being translated (file, component, surrounding elements).

Provider-Specific Models

OpenAI

"openai:gpt-4o"              // Best quality
"openai:gpt-4o-mini"         // Faster, cheaper
"openai:gpt-4-turbo"         // Previous generation

Anthropic

"anthropic:claude-3-5-sonnet"  // Best quality
"anthropic:claude-3-haiku"     // Faster, cheaper
"anthropic:claude-3-opus"      // Highest quality (expensive)

Google

"google:gemini-2.0-flash"      // Fast, efficient
"google:gemini-1.5-pro"        // Higher quality

Groq

"groq:llama-3.3-70b-versatile"  // Fast inference
"groq:mixtral-8x7b-32768"       // Good quality

Mistral

"mistral:mistral-large"         // Best quality
"mistral:mistral-small"         // Faster, cheaper

OpenRouter

OpenRouter provides access to 100+ models. Use model IDs from openrouter.ai/models:

"openrouter:anthropic/claude-3.5-sonnet"
"openrouter:google/gemini-2.0-flash"
"openrouter:meta-llama/llama-3.3-70b"

Ollama

Use any Ollama model:

"ollama:llama3.2"
"ollama:mistral"
"ollama:qwen2.5"

List available models: ollama list

OpenAI-Compatible Providers

You can use any OpenAI-compatible API by setting OPENAI_BASE_URL to point to the provider's endpoint. This works with providers like Nebius, Together AI, Anyscale, and Fireworks.

Note: Nebius is the only OpenAI-compatible provider officially tested with the Lingo.dev compiler. The other providers listed below expose OpenAI-compatible APIs but have not been officially tested.

Setup

  1. Set the environment variables:
    OPENAI_API_KEY=<your-provider-api-key>
    OPENAI_BASE_URL=<provider-api-endpoint>
    
  2. Use the openai: prefix with the provider's model ID:
    {
      models: {
        "*:*": "openai:provider-model-id"
      }
    }
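In effect, the `openai:` prefix is stripped and the remainder is sent as the `model` field of a standard OpenAI-compatible Chat Completions request against your base URL. The sketch below only builds the request shape; it is an assumption about how any OpenAI-compatible endpoint is addressed, not compiler internals, and it makes no network call.

```typescript
// Sketch: what "openai:provider-model-id" plus OPENAI_BASE_URL implies.
// baseUrl would normally come from the OPENAI_BASE_URL environment variable.
const baseUrl = "https://api.tokenfactory.nebius.com/v1";
const modelString = "openai:google/gemma-2-9b-it-fast"; // example Nebius model
const model = modelString.slice("openai:".length);      // strip the "openai:" prefix

const request = {
  url: `${baseUrl}/chat/completions`, // standard OpenAI-compatible endpoint
  body: {
    model,
    messages: [{ role: "user", content: "Translate 'Hello' from en to es." }],
  },
};

console.log(request.url);
console.log(request.body.model); // "google/gemma-2-9b-it-fast"
```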

Supported Providers

| Provider    | Base URL                                | Example Model                                   |
|-------------|-----------------------------------------|-------------------------------------------------|
| Nebius      | https://api.tokenfactory.nebius.com/v1  | google/gemma-2-9b-it-fast                       |
| Together AI | https://api.together.xyz/v1             | meta-llama/Llama-3-70b-chat-hf                  |
| Anyscale    | https://api.endpoints.anyscale.com/v1   | meta-llama/Llama-2-70b-chat-hf                  |
| Fireworks   | https://api.fireworks.ai/inference/v1   | accounts/fireworks/models/llama-v3-70b-instruct |

Example models are illustrative. Availability and model IDs may change over time. Always verify the current model list with the provider's API.

Common Questions

Which provider should I use? Start with Lingo.dev Engine for simplicity. For full control and cost optimization, use locale-pair mapping with multiple providers.

Do I need API keys in production? No. Use buildMode: "cache-only" in production—translations are pre-generated. See Build Modes.

Can I mix providers? Yes. Use locale-pair mapping to route different language pairs to different providers.

What if my API key is invalid? The compiler will fail with a clear error message. Check your .env file and ensure the API key is correct for the configured provider.

Can I use custom models? OpenRouter supports 100+ models. Ollama supports any locally-installed model. Other providers are limited to their model catalog.

How do I test without API calls? Enable pseudotranslator in development:

{
  dev: { usePseudotranslator: true }
}

What's the cost difference between providers? It varies significantly: Groq offers a generous free tier, OpenAI GPT-4o is premium-priced, and Google Gemini is competitively priced. Check each provider's pricing page.

Next Steps