Whisper

[Blog] [Paper] [Model card] [Colab example]

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multitasking model that can perform multilingual speech recognition, speech translation, and language identification.

Approach

A Transformer sequence-to-sequence model is trained on various speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection. These tasks are jointly represented as a sequence of tokens to be predicted by the decoder, allowing a single model to replace many stages of a traditional speech-processing pipeline. The multitask training format uses a set of special tokens that serve as task specifiers or classification targets.
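
The corresponding special tokens are defined in whisper/tokenizer.py. Below is a minimal sketch of inspecting the task-specifier prefix the decoder is primed with, using helpers from that file:

import whisper.tokenizer

# build a tokenizer configured for Japanese speech translated into English
tokenizer = whisper.tokenizer.get_tokenizer(multilingual=True, language="ja", task="translate")

# the decoder prompt begins with <|startoftranscript|><|ja|><|translate|>
print(tokenizer.sot_sequence)                          # token ids of the task-specifier prefix
print(tokenizer.decode(list(tokenizer.sot_sequence)))  # their string form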

Setup

We used Python 3.9.9 and PyTorch 1.10.1 to train and test our models, but the codebase is expected to be compatible with Python 3.8-3.11 and recent PyTorch versions. The codebase also depends on a few Python packages, most notably OpenAI's tiktoken for their fast tokenizer implementation. You can download and install (or update to) the latest release of Whisper with the following command:

pip install -U openai-whisper

Alternatively, the following command will pull and install the latest commit from this repository, along with its Python dependencies:

pip install git+https://github.com/openai/whisper.git 

To update the package to the latest version of this repository, please run:

pip install --upgrade --no-deps --force-reinstall git+https://github.com/openai/whisper.git

It also requires the command-line tool ffmpeg to be installed on your system, which is available from most package managers:

# on Ubuntu or Debian
sudo apt update && sudo apt install ffmpeg

# on Arch Linux
sudo pacman -S ffmpeg

# on macOS using Homebrew (https://brew.sh/)
brew install ffmpeg

# on Windows using Chocolatey (https://chocolatey.org/)
choco install ffmpeg

# on Windows using Scoop (https://scoop.sh/)
scoop install ffmpeg

You may need Rust installed as well, in case tiktoken does not provide a pre-built wheel for your platform. If you see installation errors during the pip install command above, please follow the Getting started page to install the Rust development environment. Additionally, you may need to configure the PATH environment variable, e.g. export PATH="$HOME/.cargo/bin:$PATH". If the installation fails with No module named 'setuptools_rust', you need to install setuptools_rust, e.g. by running:

pip install setuptools-rust

Available models and languages

There are five model sizes, four with English-only versions, offering speed and accuracy tradeoffs. Below are the names of the available models and their approximate memory requirements and relative speed.

Size     Parameters  English-only model  Multilingual model  Required VRAM  Relative speed
tiny        39 M     tiny.en             tiny                ~1 GB          ~32x
base        74 M     base.en             base                ~1 GB          ~16x
small      244 M     small.en            small               ~2 GB          ~6x
medium     769 M     medium.en           medium              ~5 GB          ~2x
large     1550 M     N/A                 large               ~10 GB         1x

The .en models for English-only applications tend to perform better, especially for the tiny.en and base.en models. We observed that the difference becomes less significant for the small.en and medium.en models.
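
The exact set of names accepted by whisper.load_model() can also be listed programmatically:

import whisper

# model names that load_model() knows how to download, e.g. "tiny.en", ..., "large"
print(whisper.available_models())

# English-only variants tend to be the better pick for English-only workloads
model = whisper.load_model("base.en")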

Whisper's performance varies widely depending on the language. The figure below shows a WER (Word Error Rate) breakdown by language on the Fleurs dataset using the large-v2 model (smaller numbers indicate better performance). Additional WER scores for the other models and datasets can be found in Appendix D.1, D.2, and D.4 of the paper, and more BLEU (Bilingual Evaluation Understudy) scores in Appendix D.3.

WER breakdown by language

Command-line usage

The following command will transcribe speech in audio files, using the medium model:

whisper audio.flac audio.mp3 audio.wav --model medium

The default setting (which selects the small model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:

whisper japanese.wav --language Japanese

Adding --task translate will translate the speech into English:

whisper japanese.wav --language Japanese --task translate

Run the following to view all available options:

whisper --help
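
Among the options listed there, --output_format and --output_dir control the format and destination of the transcript files; for example, the following writes an SRT subtitle file into a transcripts/ directory:

whisper audio.wav --model medium --output_format srt --output_dir transcripts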

See tokenizer.py for the list of all available languages.
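
The codes and names are kept in the LANGUAGES dictionary in that file, so they can be inspected programmatically as well:

from whisper.tokenizer import LANGUAGES

# mapping of language code to language name, e.g. "ja" -> "japanese"
print(len(LANGUAGES))   # 99 languages in this release
print(LANGUAGES["ja"])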

Python usage

Transcription can also be performed within Python:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Internally, the transcribe() method reads the entire file and processes the audio with a sliding 30-second window, performing autoregressive sequence-to-sequence predictions on each window.
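
The returned dictionary also includes per-segment timestamps from those windows; a minimal sketch of printing them:

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")

# each segment carries start/end times (in seconds) within the audio
for segment in result["segments"]:
    print(f"[{segment['start']:.2f} --> {segment['end']:.2f}] {segment['text']}")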

Below is an example usage of whisper.detect_language() and whisper.decode(), which provide lower-level access to the model.

import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make log-Mel spectrogram and move to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode the audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)

# print the recognized text
print(result.text)
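
whisper.DecodingOptions is a dataclass (see whisper/decoding.py) with fields such as task, language, temperature, and fp16; on a CPU-only machine, for instance, you would typically disable FP16:

# run decoding in FP32 (FP16 is unsupported on CPU) and fix the language to skip detection
options = whisper.DecodingOptions(language="en", fp16=False)
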
More examples

Please use the 🙌 Show and tell category in Discussions for sharing more example usages of Whisper and third-party extensions such as web demos, integrations with other tools, ports for different platforms, etc.

License

Whisper's code and model weights are released under the MIT License. See LICENSE for further details.