Note: The Assistants API is a stateful alternative to the Chat Completions API for building persistent, multi-step AI applications. The OpenAI API format has also become a de facto standard: many other providers expose OpenAI-compatible endpoints for serving their own models.
# OpenAI API Endpoints
| Endpoint Category | Endpoint | Description |
|---|---|---|
| Chat | /v1/chat/completions | The primary endpoint for most modern applications. It takes a list of messages and returns a model’s predicted chat completion. Powers ChatGPT and is used with models like gpt-4 and gpt-3.5-turbo. |
| Completions | /v1/completions | The legacy endpoint for text completion. Given a prompt, the model will return one or more predicted text completions. Used with older models like text-davinci-003. |
| Assistants | /v1/assistants | (Beta) Part of the Assistants API. Used to create, manage, and retrieve persistent “Assistants” that can call tools and use knowledge from uploaded files. |
| Assistants | /v1/threads | (Beta) Part of the Assistants API. Creates a “Thread” (a conversation session between a user and an Assistant). Threads store messages and manage state. |
| Assistants | /v1/threads/{thread_id}/runs | (Beta) Part of the Assistants API. Triggers an Assistant to process the messages in a Thread, using its tools and knowledge to generate a response. |
| Embeddings | /v1/embeddings | Gets a vector representation (a list of numbers) for a given input text. These embeddings are used for search, clustering, recommendations, and similarity comparisons. |
| Image Generation | /v1/images/generations | Creates images from a text prompt. Edits and variations of existing images use the related /v1/images/edits and /v1/images/variations endpoints. The DALL·E models are accessed through these endpoints. |
| Audio (Speech) | /v1/audio/speech | Generates realistic audio from text input using a variety of pre-selected voices. This is the Text-to-Speech (TTS) endpoint. |
| Audio (Transcription) | /v1/audio/transcriptions | Transcribes audio into text in the same language as the audio. Uses the Whisper model. (For translating audio into English, see the translations endpoint below.) |
| Audio (Translation) | /v1/audio/translations | Translates audio from one language into English text. This is specifically for translation, not transcription of non-English audio. |
| Fine-Tuning | /v1/fine_tuning/jobs | Manages the lifecycle of fine-tuning jobs. Allows you to create a custom model by training on top of a base model like gpt-3.5-turbo with your own data. |
| File Management | /v1/files | Used to upload, list, retrieve, or delete files that can be used with other endpoints (e.g., for fine-tuning or with the Assistants API). |
| Moderation | /v1/moderations | A free tool that checks if text is potentially harmful or violates OpenAI’s usage policies. Classifies content into categories like hate, self-harm, and sexual content. |
| Models | /v1/models | Lists and describes the various models available through the API, and allows you to retrieve information about a specific model. |
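As a concrete illustration of the primary Chat endpoint, the sketch below builds a /v1/chat/completions request using only the Python standard library. It assumes an API key is available in the `OPENAI_API_KEY` environment variable (an assumption of this example); the network call is only attempted when a key is actually set.

```python
# Minimal sketch of a /v1/chat/completions request.
# Assumption: OPENAI_API_KEY holds a valid key; without one, the
# request is built but never sent.
import json
import os
import urllib.request

payload = {
    "model": "gpt-3.5-turbo",  # any chat-capable model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one word."},
    ],
}

request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    },
    method="POST",
)

if os.environ.get("OPENAI_API_KEY"):
    with urllib.request.urlopen(request) as response:
        reply = json.load(response)
        # The assistant's text lives under choices[0].message.content
        print(reply["choices"][0]["message"]["content"])
```

The same message-list shape is reused by the Assistants API, which simply stores those messages server-side in a Thread instead of requiring the client to resend them on every call.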
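The embeddings row mentions similarity comparisons; the usual way to compare two embedding vectors is cosine similarity. The short sketch below uses small illustrative vectors as stand-ins, not real API output (real embeddings from /v1/embeddings have on the order of 1,536 dimensions).

```python
# Sketch of how vectors from /v1/embeddings are typically compared.
# The three vectors are made-up stand-ins for embeddings of the
# texts "cat", "kitten", and "invoice".
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

cat = [0.8, 0.6, 0.1]
kitten = [0.7, 0.7, 0.2]
invoice = [0.1, 0.2, 0.9]

# Semantically related texts score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))
```

Search, clustering, and recommendation systems all reduce to this idea: embed every item once, then rank candidates by similarity to a query vector.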
For the most up-to-date and detailed information, always refer to the official OpenAI API Documentation.