Audio

Learn how to turn audio into text.

Related guide: Speech to text

Create transcription (Beta)

POST https://api.openai.com/v1/audio/transcriptions

Transcribes audio into the input language.

Request body

file (string, Required)

The audio file to transcribe, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.


model (string, Required)

ID of the model to use. Only whisper-1 is currently available.


prompt (string, Optional)

An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.


response_format (string, Optional, defaults to json)

The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.


temperature (number, Optional, defaults to 0)

The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.


language (string, Optional)

The language of the input audio. Supplying the input language in ISO-639-1 format will improve accuracy and latency.
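
The optional parameters above can be combined in a single request. A minimal Python sketch, assuming the openai Python package shown in the examples below (the file name and prompt text are illustrative):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

with open("audio.mp3", "rb") as audio_file:  # illustrative file name
    transcript = openai.Audio.transcribe(
        model="whisper-1",
        file=audio_file,
        # Illustrative prompt; it should be in the same language as the audio.
        prompt="The speaker discusses OpenAI and Whisper.",
        response_format="json",  # json is the default; text, srt, verbose_json, and vtt are also accepted
        temperature=0.2,
        language="en",           # ISO-639-1 code for the input language
    )

print(transcript["text"])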


Example request:

curl:

curl https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/audio.mp3" \
  -F model="whisper-1"

python:

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
audio_file = open("audio.mp3", "rb")
transcript = openai.Audio.transcribe("whisper-1", audio_file)

node.js:

const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const resp = await openai.createTranscription(
  fs.createReadStream("audio.mp3"),
  "whisper-1"
);

Parameters:

{
  "file": "audio.mp3",
  "model": "whisper-1"
}

Response:

{
  "text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}
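
With the default json format, the Python library returns an object that exposes this text field directly. A minimal sketch of reading it, assuming the same setup as the Python example above (file name illustrative):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

with open("audio.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

# The default json response_format behaves like a dict with a "text" field.
print(transcript["text"])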

Create translation (Beta)

POST https://api.openai.com/v1/audio/translations

Translates audio into English.

Request body

file (string, Required)

The audio file to translate, in one of these formats: mp3, mp4, mpeg, mpga, m4a, wav, or webm.


model (string, Required)

ID of the model to use. Only whisper-1 is currently available.


prompt (string, Optional)

An optional text to guide the model's style or continue a previous audio segment. The prompt should be in English.


response_format (string, Optional, defaults to json)

The format of the transcript output, in one of these options: json, text, srt, verbose_json, or vtt.


temperature (number, Optional, defaults to 0)

The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. If set to 0, the model will use log probability to automatically increase the temperature until certain thresholds are hit.
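
As with transcriptions, these optional parameters can be passed as keyword arguments in the Python library. A minimal sketch, assuming the openai package used in the examples below (file name and prompt text are illustrative):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

with open("german.m4a", "rb") as audio_file:
    result = openai.Audio.translate(
        model="whisper-1",
        file=audio_file,
        # Illustrative prompt; prompts for translations should be in English.
        prompt="A short greeting and a question about travel plans.",
        temperature=0,
    )

print(result["text"])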


Example request:

curl:

curl https://api.openai.com/v1/audio/translations \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: multipart/form-data" \
  -F file="@/path/to/file/german.m4a" \
  -F model="whisper-1"

python:

import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
audio_file = open("german.m4a", "rb")
transcript = openai.Audio.translate("whisper-1", audio_file)

node.js:

const fs = require("fs");
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);
const resp = await openai.createTranslation(
  fs.createReadStream("german.m4a"),
  "whisper-1"
);

Parameters:

{
  "file": "german.m4a",
  "model": "whisper-1"
}

Response:

{
  "text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}