Embeddings

Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.

Related guide: Embeddings

Create embeddings

POST https://api.openai.com/v1/embeddings

Creates an embedding vector representing the input text.

Request body

model (string, Required)

ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.


input (string or array, Required)

Input text to get embeddings for, encoded as a string or an array of tokens. To get embeddings for multiple inputs in a single request, pass an array of strings or an array of token arrays. Each input must not exceed 8192 tokens in length.
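
As a sketch of batching, the example below (which uses the pre-1.0 openai Python package shown later in this section plus the tiktoken tokenizer; the sample sentences are illustrative) checks each input against the token limit quoted above and then passes a list of strings as input:

import os

import openai
import tiktoken

openai.api_key = os.getenv("OPENAI_API_KEY")

texts = [
    "The food was delicious and the waiter...",
    "The service was slow but friendly.",
]

# text-embedding-ada-002 uses the cl100k_base tokenizer.
encoding = tiktoken.encoding_for_model("text-embedding-ada-002")
for text in texts:
    n_tokens = len(encoding.encode(text))
    assert n_tokens <= 8192, f"input is {n_tokens} tokens, over the limit"

# One request; the response contains one embedding per input, in order.
response = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
vectors = [item["embedding"] for item in response["data"]]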


user (string, Optional)

A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
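
For example (a minimal sketch with the pre-1.0 openai Python package; "user-1234" is a placeholder for your own stable end-user identifier):

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

openai.Embedding.create(
    model="text-embedding-ada-002",
    input="The food was delicious and the waiter...",
    user="user-1234",  # placeholder end-user identifier
)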


Example request:

curl:

curl https://api.openai.com/v1/embeddings \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "The food was delicious and the waiter...",
    "model": "text-embedding-ada-002"
  }'

python:

import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# openai-python pre-1.0 interface
response = openai.Embedding.create(
  model="text-embedding-ada-002",
  input="The food was delicious and the waiter..."
)

node.js:

const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

// openai-node v3 interface; await must run inside an async function in CommonJS
async function main() {
  const response = await openai.createEmbedding({
    model: "text-embedding-ada-002",
    input: "The food was delicious and the waiter...",
  });
  console.log(response.data.data[0].embedding.length); // 1536 for ada-002
}

main();

Parameters:

{
  "model": "text-embedding-ada-002",
  "input": "The food was delicious and the waiter..."
}

Response:

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "embedding": [
        0.0023064255,
        -0.009327292,
        .... (1536 floats total for ada-002)
        -0.0028842222
      ],
      "index": 0
    }
  ],
  "model": "text-embedding-ada-002",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
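
Each item in data carries the embedding for the input at the same position (index). A minimal sketch of consuming the response, assuming the pre-1.0 openai Python package and NumPy (the sentences and the cosine-similarity helper are illustrative, not part of the API):

import os

import numpy as np
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector lengths;
    # values closer to 1.0 mean the texts are more similar.
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Embed two sentences in a single batched request.
response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=[
        "The food was delicious and the waiter...",
        "The meal was tasty and the service was good.",
    ],
)

vec_a = response["data"][0]["embedding"]  # 1536 floats for ada-002
vec_b = response["data"][1]["embedding"]
print(cosine_similarity(vec_a, vec_b))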