Automatic Language Detection
Supported languages
en, en_au, en_uk, en_us, es, fr, de, it, pt, nl, hi, ja, zh, fi, ko, pl, ru, tr, uk, vi, af, sq, am, ar, hy, as, az, ba, eu, be, bn, bs, br, bg, my, ca, hr, cs, da, et, fo, gl, ka, el, gu, ht, ha, haw, he, hu, is, id, jw, kn, kk, km, lo, la, lv, ln, lt, lb, mk, mg, ms, ml, mt, mi, mr, mn, ne, no, nn, oc, pa, ps, fa, ro, sa, sr, sn, sd, si, sk, sl, so, su, sw, sv, tl, tg, ta, tt, te, th, bo, tk, ur, uz, cy, yi, yo
Supported models
universal-3-pro, universal-2
Supported regions
US & EU
Identify the dominant language spoken in an audio file and use it during transcription. Enable it to detect any of the supported languages.
When language detection is enabled, the system automatically routes your request to the best available model based on the detected language and the models you provide in the speech_models parameter. For example, with speech_models: ["universal-3-pro", "universal-2"], the system will use Universal-3 Pro for languages it supports and automatically fall back to Universal-2 for all other languages. You can check which model processed your request using the speech_model_used field in the response. See the Model selection page for more details.
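As a sketch of how these parameters fit together, the request body might look like the following Python dict (the field names `language_detection`, `speech_models`, and `speech_model_used` are those documented on this page; the audio URL is a placeholder):

```python
# Request payload enabling Automatic Language Detection with a model
# preference list. The audio URL is a placeholder, not a real file.
payload = {
    "audio_url": "https://example.com/audio.mp3",  # placeholder URL
    "language_detection": True,
    # The service uses universal-3-pro for languages it supports and
    # falls back to universal-2 for all other languages.
    "speech_models": ["universal-3-pro", "universal-2"],
}

# The completed transcript reports which model actually ran, e.g.:
# transcript["speech_model_used"]  ->  "universal-3-pro" or "universal-2"
```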
To reliably identify the dominant language, a file must contain at least 15 seconds of spoken audio. Accuracy improves when the file contains between 15 and 90 seconds of spoken audio.
Set a list of expected languages
If you’re confident the audio is in one of a few languages, provide that list via language_detection_options.expected_languages. Detection is restricted to these candidates and the model will choose the language with the highest confidence from this list. This can eliminate scenarios where Automatic Language Detection selects an unexpected language for transcription.
- Use our language codes (e.g., "en", "es", "fr").
- If expected_languages is not specified, it is set to ["all"] by default.
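For instance, if you know the audio is either English or German, the request might look like this sketch (the audio URL is a placeholder):

```python
# Restrict language detection to a known set of candidate languages.
payload = {
    "audio_url": "https://example.com/podcast.mp3",  # placeholder URL
    "language_detection": True,
    "language_detection_options": {
        # Detection is restricted to these candidates; the language with
        # the highest confidence among them is chosen.
        "expected_languages": ["en", "de"],
    },
}
```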
Choose a fallback language
Control what language transcription should fall back to when detection cannot confidently select a language from the expected_languages list.
- Set language_detection_options.fallback_language to a specific language code (e.g., "en"). fallback_language must be one of the language codes in expected_languages, or "auto".
- When fallback_language is unspecified, it is set to "auto" by default. This tells our model to choose the fallback language from expected_languages with the highest confidence score.
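A request that pins the fallback language might look like this sketch (the audio URL is a placeholder):

```python
# Specify both the expected languages and an explicit fallback.
payload = {
    "audio_url": "https://example.com/call.mp3",  # placeholder URL
    "language_detection": True,
    "language_detection_options": {
        "expected_languages": ["en", "es"],
        # If neither candidate can be chosen confidently, transcribe as
        # English instead of letting the model pick ("auto", the default).
        "fallback_language": "en",
    },
}
```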
Confidence score
If language detection is enabled, the API returns a confidence score for the detected language. The score ranges from 0.0 (low confidence) to 1.0 (high confidence).
Set a language confidence threshold
You can set the confidence threshold that must be reached if language detection is enabled. An error will be returned if the language confidence is below this threshold. Valid values are in the range [0,1] inclusive.
If the language_confidence_threshold you specify is not met, you will receive an error message like: detected language 'bg', confidence 0.2949, is below the requested confidence threshold value of '0.4'.
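The threshold check can be mirrored client-side. This sketch reproduces the error condition described above; the helper function is illustrative, not part of the API:

```python
def check_language_confidence(language_code, confidence, threshold):
    """Raise an error when detection confidence falls below the threshold,
    mirroring the server-side language_confidence_threshold check."""
    if not 0.0 <= threshold <= 1.0:
        # Valid thresholds are in the range [0, 1] inclusive.
        raise ValueError("threshold must be in the range [0, 1]")
    if confidence < threshold:
        raise ValueError(
            f"detected language '{language_code}', confidence {confidence:.4f}, "
            f"is below the requested confidence threshold value of '{threshold}'"
        )
    return language_code
```

For example, `check_language_confidence("bg", 0.2949, 0.4)` raises an error with the message shown above, while a detection at or above the threshold passes through unchanged.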
Troubleshooting
Accented speech detected as the wrong language
Automatic Language Detection uses Whisper-based language identification, which can sometimes misidentify heavily accented speech as a different language. For example, English spoken with a strong accent may be detected as Finnish, Latvian, Latin, or Arabic.
When this happens, the model might not only return a wrong language label; it might also transcribe the audio in the incorrectly detected language. This effectively translates the speech rather than transcribing it, producing output in a language the speaker wasn't using.
The exact transcription behavior can vary depending on the detected language and speech model used.
Recommended mitigations
Use expected_languages to constrain detection (most effective). If you know which languages your audio may contain, set expected_languages to only those languages. This prevents the model from selecting an unexpected language entirely.
For example, if your application processes interviews in English, Spanish, and French:
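you might restrict detection like this sketch (the audio URL is a placeholder):

```python
# Constrain detection to the three languages the application handles.
payload = {
    "audio_url": "https://example.com/interview.mp3",  # placeholder URL
    "language_detection": True,
    "language_detection_options": {
        # Detection can only choose English, Spanish, or French, so an
        # accent can no longer be misread as, say, Finnish or Latvian.
        "expected_languages": ["en", "es", "fr"],
    },
}
```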
Setting fallback_language to your most common language (e.g., "en") ensures that if the model can’t confidently choose between the expected languages, it defaults to the language most likely to produce a useful transcript.
Use language_confidence_threshold to reject low-confidence detections. Setting a threshold (e.g., 0.7) causes the API to return an error instead of a transcript when confidence is low. This helps catch some misdetections, but not cases where the model is confidently wrong.
Monitor language_confidence in responses. Log the language_code and language_confidence fields from your transcript responses. Unexpected language codes or unusual confidence patterns can help you identify misdetection issues early and decide whether to retry with expected_languages or flag the transcript for review.
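A simple monitoring check might look like this sketch (the `language_code` and `language_confidence` fields are the response fields named above; the expected-language set, threshold, and helper function are assumptions for illustration):

```python
# Illustrative transcript fragment with made-up values.
transcript = {"language_code": "fi", "language_confidence": 0.41}

EXPECTED_LANGUAGES = {"en", "es", "fr"}  # assumption: the app's usual languages

def needs_review(resp, expected=EXPECTED_LANGUAGES, min_confidence=0.7):
    """Flag a transcript whose detected language is unexpected or whose
    confidence is low, so it can be retried or sent for review."""
    return (resp["language_code"] not in expected
            or resp["language_confidence"] < min_confidence)

flagged = needs_review(transcript)  # True here: 'fi' is not an expected language
```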