OpenAI Whisper: How Does OpenAI Whisper Work?


OpenAI recently launched the Whisper API, a hosted version of its open-source Whisper speech-to-text model, to coincide with the release of the ChatGPT API.

Priced at $0.006 per minute, Whisper is an automatic speech recognition system that OpenAI claims enables “robust” transcription in multiple languages as well as translation from those languages into English. It takes files in a variety of formats, including M4A, MP3, MP4, MPEG, MPGA, WAV, and WEBM.

Countless organizations have developed highly capable speech recognition systems, which sit at the core of software and services from tech giants like Google, Amazon, and Meta. However, what sets Whisper apart is that it was trained on 680,000 hours of multilingual and “multitask” data collected from the web.

This training leads to improved robustness to unique accents and background noise, and better recognition of technical jargon.

Overview of OpenAI Whisper

Whisper is an automatic speech recognition model trained on 680,000 hours of multilingual data collected from the web. As per OpenAI, the model is robust to accents, background noise, and technical language. In addition, it supports transcription in 99 different languages and translation from those languages into English.

Whisper comes in five model sizes (see the table on OpenAI’s GitHub page). According to OpenAI, four of the sizes also have English-only versions, denoted with an .en suffix. The English-only models tend to perform better, especially tiny.en and base.en, although the difference becomes less significant for the small.en and medium.en models.

Ref: OpenAI’s GitHub page

The Whisper models are trained for speech recognition and translation tasks, and can transcribe speech audio into text in the language in which it is spoken (ASR) as well as translate it into English (speech translation). Whisper is an encoder-decoder model, trained on 680,000 hours of multilingual and multitask supervised data collected from the web.

Transcription is the process of converting spoken language into text. In the past, it was done manually; now, AI-powered tools like Whisper can understand spoken language accurately. With a basic knowledge of Python, you can integrate the OpenAI Whisper API into your application.

The Whisper API is accessible through openai/openai-python, the library that exposes various OpenAI services and models.
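
As a rough sketch of what that integration looks like (assuming the pre-1.0 interface of the openai Python package and a hypothetical audio file name), a transcription call can be as short as this:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; in practice, read the key from an environment variable

# send an audio file to the hosted whisper-1 model and print the transcript
with open("meeting_notes.m4a", "rb") as audio_file:  # hypothetical file
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])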

What are good use cases for transcription?

  1. Transcribing interviews, meetings, lectures, and podcasts for analysis, easy access, and keeping records. 
  2. Real-time speech transcription for subtitles (YouTube), captioning (Zoom meetings), and translation of spoken language.
  3. Speech transcription for personal and professional use. Transcribing voice notes, messages, reminders, memos, and feedback.
  4. Transcription for people with hearing impairments.
  5. Transcription for voice-based applications that require text input, for example chatbots, voice assistants, and language translation tools.

How does Whisper work?

Input audio is split into 30-second chunks, converted into a log-Mel spectrogram, and then passed into an encoder. A decoder is trained to predict the corresponding text caption, intermixed with special tokens that direct the single model to perform tasks such as language identification, phrase-level timestamps, multilingual speech transcription, and to-English speech translation.
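
The same pipeline is visible in the open-source whisper Python package. The snippet below, adapted from the usage example in OpenAI’s repository (the audio file name is hypothetical), pads a clip to 30 seconds, builds the log-Mel spectrogram, detects the language, and lets the decoder predict the text:

import whisper

model = whisper.load_model("base")

# load audio and pad/trim it to fit exactly 30 seconds
audio = whisper.load_audio("audio.mp3")
audio = whisper.pad_or_trim(audio)

# make a log-Mel spectrogram and move it to the same device as the model
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect the spoken language
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")

# decode: the decoder predicts text tokens conditioned on the encoded audio
options = whisper.DecodingOptions()
result = whisper.decode(model, mel, options)
print(result.text)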

In simpler words, OpenAI Whisper is built on the transformer architecture, stacking encoder blocks and decoder blocks with an attention mechanism propagating information between them.

It takes the audio recording, splits it into 30-second chunks, and processes them one by one. For each 30-second chunk, it encodes the audio with the encoder, preserving the position of everything that was said. It then leverages this encoded information to work out what was said using the decoder.

The decoder predicts what we call tokens from all this information, which is basically each word being said. It then repeats this process for the next word, using the same information plus the previously predicted word, which helps it guess the next word that makes the most sense.
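
To make the token-by-token idea concrete, here is a deliberately simplified greedy-decoding sketch built on the open-source package’s internals (the file name is hypothetical, the audio is assumed to be English, and whisper.decode above does this properly, with beam search and other refinements):

import torch
import whisper
from whisper.tokenizer import get_tokenizer

model = whisper.load_model("base", device="cpu")  # CPU keeps the sketch simple (fp32)
audio = whisper.pad_or_trim(whisper.load_audio("audio.mp3"))  # hypothetical file
mel = whisper.log_mel_spectrogram(audio).to(model.device)

tokenizer = get_tokenizer(model.is_multilingual, language="en", task="transcribe")
tokens = torch.tensor([list(tokenizer.sot_sequence)], device=model.device)

with torch.no_grad():
    # the encoder runs once per 30-second chunk
    audio_features = model.encoder(mel.unsqueeze(0))
    for _ in range(100):
        # the decoder sees the audio features plus every token predicted so far
        logits = model.decoder(tokens, audio_features)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_token], dim=-1)
        if next_token.item() == tokenizer.eot:
            break

# keep only plain text tokens (special and timestamp tokens have IDs >= eot)
text_tokens = [t for t in tokens[0].tolist() if t < tokenizer.eot]
print(tokenizer.decode(text_tokens))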

OpenAI trained Whisper’s audio model in a similar way to GPT-3, with data available on the internet. This makes it a large, general audio model, and it also makes the model far more robust than many others. In fact, according to OpenAI, Whisper approaches human-level robustness because it was trained on such a diverse set of data, ranging from video clips and TED talks to podcasts and interviews.

All of these represent real-world-like data, some of it transcribed by machine-learning models rather than humans.

How to use OpenAI Whisper

The speech-to-text API provides two endpoints – transcriptions and translations – based on OpenAI’s state-of-the-art open-source large-v2 Whisper model. They can be used to:

  • Transcribe audio into whatever language the audio is in.
  • Translate and transcribe the audio into English.

File uploads are currently limited to 25 MB, and the following input file types are supported: mp3, mp4, mpeg, mpga, m4a, wav, and webm.

Quickstart

Transcriptions

The transcriptions API takes as input the audio file you want to transcribe and the desired output format for the transcription. It currently supports multiple input and output file formats.

By default, the response type will be json with the raw text included.

{
  "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. ..."
}

To set additional parameters in a request, you can add more --form lines with the relevant options. For example, if you want to set the output format as text, you would add the following line:

...
--form file=@audio.mp3 \
--form model=whisper-1 \
--form response_format=text

Translations

The translations API takes as input an audio file in any of the supported languages and transcribes, if necessary, the audio into English. This differs from OpenAI’s /transcriptions endpoint in that the output is not in the original input language but is instead translated into English text.

In this case, the input audio was German and the output text looks like this:

Hello, my name is Wolfgang and I come from Germany. Where are you heading today?
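
In Python, the same endpoint is reachable via openai.Audio.translate (again assuming the pre-1.0 openai package and a hypothetical file name):

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# translate German (or any supported language) speech directly into English text
with open("german_intro.m4a", "rb") as audio_file:  # hypothetical file
    translation = openai.Audio.translate("whisper-1", audio_file)

print(translation["text"])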

Features of OpenAI Whisper

Languages

OpenAI Whisper API supports the following languages for transcriptions and translations:

Afrikaans, Arabic, Armenian, Azerbaijani, Belarusian, Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Kazakh, Korean, Latvian, Lithuanian, Macedonian, Malay, Marathi, Maori, Nepali, Norwegian, Persian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swahili, Swedish, Tagalog, Tamil, Thai, Turkish, Ukrainian, Urdu, Vietnamese, and Welsh.

The breakdown of the Word Error Rate (WER) on the FLEURS dataset using the large-v2 model is presented in the figure below, broken down by language. The smaller the WER, the better the transcription accuracy.

Language ranking from OpenAI

File formats

The Whisper API supports the following file formats: mp3, mp4, mpeg, mpga, m4a, wav, and webm. Currently, the upload file size is limited to 25 MB. If you have larger files, you can break them down into smaller chunks using PyDub.

Command-line usage

The following command will transcribe speech in audio files, using the medium model:

whisper audio.flac audio.mp3 audio.wav --model medium

The default setting (which selects the small model) works well for transcribing English. To transcribe an audio file containing non-English speech, you can specify the language using the --language option:

whisper japanese.wav --language Japanese

Adding --task translate will translate the speech into English:

whisper japanese.wav --language Japanese --task translate

Run the following to view all available options:

whisper --help

See tokenizer.py for the list of all available languages.
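
The same options are available from Python through the open-source package. A brief sketch (with hypothetical file names) mirroring the CLI calls above:

import whisper

model = whisper.load_model("medium")

# transcribe non-English speech in its original language
result = model.transcribe("japanese.wav", language="Japanese")
print(result["text"])

# translate the same speech into English instead
translated = model.transcribe("japanese.wav", language="Japanese", task="translate")
print(translated["text"])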

Longer inputs

By default, the Whisper API only supports files that are less than 25 MB. If you have an audio file that is longer than that, you will need to break it up into chunks of 25 MB or less or use a compressed audio format. To get the best performance, we suggest that you avoid breaking the audio up mid-sentence as this may cause some context to be lost.

One way to handle this is to use the PyDub open-source Python package to split the audio:

from pydub import AudioSegment

song = AudioSegment.from_mp3("good_morning.mp3")

# PyDub handles time in milliseconds
ten_minutes = 10 * 60 * 1000

first_10_minutes = song[:ten_minutes]

first_10_minutes.export("good_morning_10.mp3", format="mp3")
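
Putting the two pieces together, a minimal sketch (assuming the pre-1.0 openai package and hypothetical file names) could export each chunk and send it to the API in turn:

import math

import openai
from pydub import AudioSegment

openai.api_key = "YOUR_API_KEY"  # placeholder

song = AudioSegment.from_mp3("good_morning.mp3")
chunk_length = 10 * 60 * 1000  # 10 minutes in milliseconds

texts = []
for i in range(math.ceil(len(song) / chunk_length)):
    chunk = song[i * chunk_length:(i + 1) * chunk_length]
    chunk_path = f"good_morning_part_{i}.mp3"
    chunk.export(chunk_path, format="mp3")
    with open(chunk_path, "rb") as f:
        texts.append(openai.Audio.transcribe("whisper-1", f)["text"])

full_transcript = " ".join(texts)
print(full_transcript)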

Prompting

You can use a prompt to improve the quality of the transcripts generated by the Whisper API. The model will try to match the style of the prompt, so it will be more likely to use capitalization and punctuation if the prompt does too.

However, the current prompting system is much more limited than that of other language models and only provides limited control over the generated transcript.
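
As a rough illustration (again assuming the pre-1.0 openai package and a hypothetical file name), a prompt is passed alongside the audio file like this:

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

with open("openai_podcast.mp3", "rb") as audio_file:  # hypothetical file
    transcript = openai.Audio.transcribe(
        "whisper-1",
        audio_file,
        # spell out tricky product names so the model copies their style and spelling
        prompt="The transcript is about OpenAI which makes technology like DALL·E, GPT-3, and ChatGPT.",
    )

print(transcript["text"])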

Here are some examples of how prompting can help in different scenarios:
  1. Prompts can be very helpful for correcting specific words or acronyms that the model often misrecognizes in the audio. For example, the following prompt improves the transcription of the words DALL·E and GPT-3, which were previously written as “GDP 3” and “DALI”: “The transcript is about OpenAI which makes technology like DALL·E, GPT-3, and ChatGPT with the hope of one day building an AGI system that benefits all of humanity”
  2. To preserve the context of a file that was split into segments, you can prompt the model with the transcript of the preceding segment. This makes the transcript more accurate, as the model will use the relevant information from the previous audio. The model only considers the final 224 tokens of the prompt and ignores anything earlier. Whisper uses a custom tokenizer for multilingual inputs and the standard GPT-2 tokenizer for English-only inputs; both are accessible through the open-source Whisper Python package.
  3. Sometimes, the model might skip punctuation in the transcript. You can avoid this by using a simple prompt that includes punctuation: “Hello, welcome to my lecture.”
  4. The model may also leave out common filler words in the audio. If you want to keep the filler words in your transcript, you can use a prompt that contains them: “Umm, let me think like, hmm… Okay, here’s what I’m, like, thinking.”
  5. Some languages can be written in different ways, such as simplified or traditional Chinese. The model might not always use the writing style that you want for your transcript by default. You can improve this by using a prompt in your preferred writing style.

In conclusion

Having such a general model isn’t very powerful in itself, as it will be beaten at most tasks by smaller, more specialized models adapted to the task at hand. But it has other benefits. You can take this kind of pre-trained model and fine-tune it on your own task, meaning you retrain a part of it, or the entire thing, with your own data.

This technique has been shown to produce much better models than training from scratch on your own data.

Another benefit is that OpenAI open-sourced Whisper’s code and model weights rather than offering only an API. This means you can use Whisper as a pre-trained foundation architecture to build upon and create more powerful models of your own.
