Automatic Language Detection

Supported languages and their language codes:

  • Global English (en)
  • Australian English (en_au)
  • British English (en_uk)
  • US English (en_us)
  • Spanish (es)
  • French (fr)
  • German (de)
  • Italian (it)
  • Portuguese (pt)
  • Dutch (nl)
  • Hindi (hi)
  • Japanese (ja)
  • Chinese (zh)
  • Finnish (fi)
  • Korean (ko)
  • Polish (pl)
  • Russian (ru)
  • Turkish (tr)
  • Ukrainian (uk)
  • Vietnamese (vi)
  • Afrikaans (af)
  • Albanian (sq)
  • Amharic (am)
  • Arabic (ar)
  • Armenian (hy)
  • Assamese (as)
  • Azerbaijani (az)
  • Bashkir (ba)
  • Basque (eu)
  • Belarusian (be)
  • Bengali (bn)
  • Bosnian (bs)
  • Breton (br)
  • Bulgarian (bg)
  • Burmese (my)
  • Catalan (ca)
  • Croatian (hr)
  • Czech (cs)
  • Danish (da)
  • Estonian (et)
  • Faroese (fo)
  • Galician (gl)
  • Georgian (ka)
  • Greek (el)
  • Gujarati (gu)
  • Haitian (ht)
  • Hausa (ha)
  • Hawaiian (haw)
  • Hebrew (he)
  • Hungarian (hu)
  • Icelandic (is)
  • Indonesian (id)
  • Javanese (jw)
  • Kannada (kn)
  • Kazakh (kk)
  • Khmer (km)
  • Lao (lo)
  • Latin (la)
  • Latvian (lv)
  • Lingala (ln)
  • Lithuanian (lt)
  • Luxembourgish (lb)
  • Macedonian (mk)
  • Malagasy (mg)
  • Malay (ms)
  • Malayalam (ml)
  • Maltese (mt)
  • Maori (mi)
  • Marathi (mr)
  • Mongolian (mn)
  • Nepali (ne)
  • Norwegian (no)
  • Norwegian Nynorsk (nn)
  • Occitan (oc)
  • Panjabi (pa)
  • Pashto (ps)
  • Persian (fa)
  • Romanian (ro)
  • Sanskrit (sa)
  • Serbian (sr)
  • Shona (sn)
  • Sindhi (sd)
  • Sinhala (si)
  • Slovak (sk)
  • Slovenian (sl)
  • Somali (so)
  • Sundanese (su)
  • Swahili (sw)
  • Swedish (sv)
  • Tagalog (tl)
  • Tajik (tg)
  • Tamil (ta)
  • Tatar (tt)
  • Telugu (te)
  • Thai (th)
  • Tibetan (bo)
  • Turkmen (tk)
  • Urdu (ur)
  • Uzbek (uz)
  • Welsh (cy)
  • Yiddish (yi)
  • Yoruba (yo)

Supported speech models and their identifiers:

  • Universal-3 Pro (universal-3-pro)
  • Universal-2 (universal-2)

Available in US & EU regions.

Automatic Language Detection identifies the dominant language spoken in an audio file and uses it during transcription. When enabled, it can detect any of the supported languages.

When language detection is enabled, the system automatically routes your request to the best available model based on the detected language and the models you provide in the speech_models parameter. For example, with speech_models: ["universal-3-pro", "universal-2"], the system will use Universal-3 Pro for languages it supports and automatically fall back to Universal-2 for all other languages. You can check which model processed your request using the speech_model_used field in the response. See the Model selection page for more details.

To reliably identify the dominant language, a file must contain at least 15 seconds of spoken audio. Accuracy improves with more speech; 15 to 90 seconds of spoken audio gives the best results.

import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# audio_file = "./local_file.mp3"
audio_file = "https://assembly.ai/wildfires.mp3"

config = aai.TranscriptionConfig(
    speech_models=["universal-3-pro", "universal-2"],
    language_detection=True
)

transcript = aai.Transcriber(config=config).transcribe(audio_file)

print(transcript.text)
print(transcript.json_response["language_code"])
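Because the response reports both the detected language and the model that handled the request, you can inspect routing after the fact. A minimal sketch of such a check; the sample dict below is illustrative, not a real API response (a real transcript.json_response contains many more fields):

```python
def summarize_detection(json_response):
    """Summarize which language was detected and which model handled it."""
    lang = json_response.get("language_code")
    conf = json_response.get("language_confidence")
    model = json_response.get("speech_model_used")
    return f"Detected {lang} (confidence {conf:.2f}) via {model}"

# Illustrative subset of response fields, for demonstration only.
sample = {
    "language_code": "en",
    "language_confidence": 0.97,
    "speech_model_used": "universal-3-pro",
}
print(summarize_detection(sample))
# Detected en (confidence 0.97) via universal-3-pro
```

In a real integration you would pass transcript.json_response instead of the sample dict.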

Set a list of expected languages

If you’re confident the audio is in one of a few languages, provide that list via language_detection_options.expected_languages. Detection is restricted to these candidates and the model will choose the language with the highest confidence from this list. This can eliminate scenarios where Automatic Language Detection selects an unexpected language for transcription.

  • Use our language codes (e.g., "en", "es", "fr").
  • If expected_languages is not specified, it is set to ["all"] by default.
import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# audio_file = "./local_file.mp3"
audio_file = "https://assembly.ai/wildfires.mp3"

options = aai.LanguageDetectionOptions(
    expected_languages=["en", "es", "fr", "de"],
    fallback_language="auto"
)

config = aai.TranscriptionConfig(
    speech_models=["universal-3-pro", "universal-2"],
    language_detection=True,
    language_detection_options=options
)

transcript = aai.Transcriber(config=config).transcribe(audio_file)

print(transcript.text)
print(transcript.json_response["language_code"])

Choose a fallback language

Control what language transcription should fall back to when detection cannot confidently select a language from the expected_languages list.

  • Set language_detection_options.fallback_language to a specific language code (e.g., "en").
  • fallback_language must be one of the language codes in expected_languages or "auto".
  • When fallback_language is unspecified, it is set to "auto" by default. This tells our model to choose the fallback language from expected_languages with the highest confidence score.
import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# audio_file = "./local_file.mp3"
audio_file = "https://assembly.ai/wildfires.mp3"

options = aai.LanguageDetectionOptions(
    expected_languages=["en", "es", "fr", "de"],
    fallback_language="auto"
)

config = aai.TranscriptionConfig(
    speech_models=["universal-3-pro", "universal-2"],
    language_detection=True,
    language_detection_options=options
)

transcript = aai.Transcriber(config=config).transcribe(audio_file)

print(transcript.text)
print(transcript.json_response["language_code"])

Confidence score

If language detection is enabled, the API returns a confidence score for the detected language. The score ranges from 0.0 (low confidence) to 1.0 (high confidence).

import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# audio_file = "./local_file.mp3"
audio_file = "https://assembly.ai/wildfires.mp3"

config = aai.TranscriptionConfig(
    speech_models=["universal-3-pro", "universal-2"],
    language_detection=True
)

transcript = aai.Transcriber(config=config).transcribe(audio_file)

print(transcript.text)
print(transcript.json_response["language_confidence"])

Set a language confidence threshold

When language detection is enabled, you can set a confidence threshold that the detected language must meet. If the language confidence falls below this threshold, the API returns an error instead of a transcript. Valid values are in the range [0, 1] inclusive.

import assemblyai as aai

aai.settings.api_key = "<YOUR_API_KEY>"

# audio_file = "./local_file.mp3"
audio_file = "https://assembly.ai/wildfires.mp3"

config = aai.TranscriptionConfig(
    speech_models=["universal-3-pro", "universal-2"],
    language_detection=True,
    language_confidence_threshold=0.8
)

transcript = aai.Transcriber(config=config).transcribe(audio_file)

if transcript.status == "error":
    raise RuntimeError(f"Transcription failed: {transcript.error}")
else:
    print(transcript.json_response["language_confidence"])
    print(transcript.text)

If the language_confidence_threshold you specify is not met, you will receive an error message like: detected language 'bg', confidence 0.2949, is below the requested confidence threshold value of '0.4'.
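One way to handle this error programmatically is to check the error text before deciding whether to retry (for example, with a lower threshold or an explicit language). The parsing below is a sketch based on the message format shown above and may need adjusting if the wording changes:

```python
import re

def is_confidence_error(error_message):
    """Return True if the error says the confidence threshold was not met."""
    return "below the requested confidence threshold" in error_message

def parse_confidence_error(error_message):
    """Extract (detected_language, confidence) from the error text, if present."""
    m = re.search(r"detected language '(\w+)', confidence ([\d.]+)", error_message)
    return (m.group(1), float(m.group(2))) if m else None

# Example message taken from the documentation above.
msg = ("detected language 'bg', confidence 0.2949, is below the "
       "requested confidence threshold value of '0.4'")
print(is_confidence_error(msg))     # True
print(parse_confidence_error(msg))  # ('bg', 0.2949)
```

You could feed transcript.error through these helpers when transcript.status is "error" and route the file to a retry path or a review queue.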

Troubleshooting

Accented speech detected as the wrong language

Automatic Language Detection uses Whisper-based language identification, which can sometimes misidentify heavily accented speech as a different language. For example, English spoken with a strong accent may be detected as Finnish, Latvian, Latin, or Arabic.

When this happens, the model might not just return a wrong language label; it might also transcribe the audio in the incorrectly detected language. This effectively translates the speech rather than transcribing it, producing output in a language the speaker wasn't using.

The exact transcription behavior can vary depending on the detected language and speech model used.

Use expected_languages to constrain detection (most effective). If you know which languages your audio may contain, set expected_languages to only those languages. This prevents the model from selecting an unexpected language entirely.

For example, if your application processes interviews in English, Spanish, and French:

{
  "language_detection": true,
  "language_detection_options": {
    "expected_languages": ["en", "es", "fr"],
    "fallback_language": "en"
  }
}

Setting fallback_language to your most common language (e.g., "en") ensures that if the model can’t confidently choose between the expected languages, it defaults to the language most likely to produce a useful transcript.

Use language_confidence_threshold to reject low-confidence detections. Setting a threshold (e.g., 0.7) causes the API to return an error instead of a transcript when confidence is low. This helps catch some misdetections, but not cases where the model is confidently wrong.

Monitor language_confidence in responses. Log the language_code and language_confidence fields from your transcript responses. Unexpected language codes or unusual confidence patterns can help you identify misdetection issues early and decide whether to retry with expected_languages or flag the transcript for review.
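A lightweight check along these lines can run after each transcription. The expected-language set and threshold below are example values chosen for illustration, not recommendations:

```python
def needs_review(language_code, language_confidence,
                 expected=("en", "es", "fr"), min_confidence=0.7):
    """Flag a transcript for human review when detection looks suspicious."""
    if language_code not in expected:
        return True  # unexpected language: possible misdetection
    if language_confidence is not None and language_confidence < min_confidence:
        return True  # low confidence: detection may be unreliable
    return False

print(needs_review("en", 0.95))  # False
print(needs_review("fi", 0.88))  # True, unexpected language
print(needs_review("en", 0.31))  # True, low confidence
```

In practice you would call this with the language_code and language_confidence fields from transcript.json_response and log or queue the flagged files.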