
API Reference

Voidon provides a fully OpenAI-compatible API with powerful extensions. All endpoints follow REST conventions and return JSON responses.

Base URL

Text Only
https://api.voidon.astramind.ai/v1

Authentication

All API requests require authentication using an API key in the Authorization header:

HTTP
Authorization: Bearer your-api-key-here

Chat Completions

POST /v1/chat/completions

Create a chat completion with automatic model selection, anonymization, and document processing.

Request Body

By default, all parameters of the OpenAI /chat/completions endpoint are accepted. Some of the most common are:

Parameter Type Required Description
model string Yes Model identifier. Use "auto" for intelligent selection
messages array Yes Array of message objects
max_tokens integer No Maximum tokens to generate (default: 1000)
temperature number No Sampling temperature (0-2, default: 1)
stream boolean No Enable streaming responses (default: false)
extra_body (Voidon extensions) object No Voidon-specific features

Voidon Extensions

Parameter Type Description
enable_anonymization boolean Enable automatic PII removal
anonymization_types list[int] The numerical values for the types of entities to anonymize
ignore_mismatched_parameters boolean Ignore the specified parameters during the model selection phase; this can help when no models are found for a given combination of parameters and providers

Example Request

Bash
curl -X POST https://api.voidon.astramind.ai/v1/chat/completions \
  -H "Authorization: Bearer your-voidon-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is machine learning?"
      }
    ],
    "max_tokens": 500,
    "temperature": 0.7
  }'
Bash
curl -X POST https://api.voidon.astramind.ai/v1/chat/completions \
  -H "Authorization: Bearer your-voidon-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [
      {
        "role": "user",
        "content": "My name is John Smith, email john@example.com. Help me write a cover letter."
      }
    ],
    "extra_body": {
      "enable_anonymization": true,
      "anonymization_types": [1,2,3]
    }
  }'
Bash
curl -X POST https://api.voidon.astramind.ai/v1/chat/completions \
  -H "Authorization: Bearer your-voidon-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "auto",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Summarize this document:"
          },
          {
            "type": "file",
            "file": {
              "url": "data:application/pdf;base64,JVBERi0xLjQK..."
            }
          }
        ]
      }
    ]
  }'
Python
import openai

client = openai.OpenAI(
    api_key="your-voidon-api-key",
    base_url="https://api.voidon.astramind.ai/v1"
)

response = client.chat.completions.create(
    model="auto",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    max_tokens=500,
    temperature=0.7
)

print(response.choices[0].message.content)
Python
import openai

client = openai.OpenAI(
    api_key="your-voidon-api-key",
    base_url="https://api.voidon.astramind.ai/v1"
)

response = client.chat.completions.create(
    model="auto",
    messages=[
        {"role": "user", "content": "My name is John Smith, email john@example.com. Help me write a cover letter."}
    ],
    extra_body={
        "extra_body": {
            "enable_anonymization": True,
            "anonymization_types": [1,2,3]
        }
    }
)

print(response.choices[0].message.content)
Python
import openai

client = openai.OpenAI(
    api_key="your-voidon-api-key",
    base_url="https://api.voidon.astramind.ai/v1"
)

response = client.chat.completions.create(
    model="auto",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Summarize this document:"
                },
                {
                    "type": "file",
                    "file": {
                        "url": "data:application/pdf;base64,JVBERi0xLjQK..."
                    }
                }
            ]
        }
    ]
)

print(response.choices[0].message.content)
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: 'What is machine learning?' }
        ],
        max_tokens: 500,
        temperature: 0.7
    });
    console.log(response.choices[0].message.content);
}

main();
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: [{ role: 'user', content: 'My name is John Smith, email john@example.com. Help me write a cover letter.' }],
        extra_body: {
            enable_anonymization: true
        }
    });
    console.log(response.choices[0].message.content);
}

main();
JavaScript
import OpenAI from 'openai';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: [
            {
                role: 'user',
                content: [
                    {
                        type: 'text',
                        text: 'Summarize this document:'
                    },
                    {
                        type: 'file',
                        file: {
                            url: 'data:application/pdf;base64,JVBERi0xLjQK...'
                        }
                    }
                ]
            }
        ],
    });
    console.log(response.choices[0].message.content);
}

main();
TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: 'What is machine learning?' }
        ],
        max_tokens: 500,
        temperature: 0.7
    });
    console.log(response.choices[0]?.message?.content);
}

main();
TypeScript
import OpenAI from 'openai';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: [{ role: 'user', content: 'My name is John Smith, email john@example.com. Help me write a cover letter.' }],
        // @ts-ignore - for custom extensions
        extra_body: {
            enable_anonymization: true
        }
    });
    console.log(response.choices[0]?.message?.content);
}

main();
TypeScript
import OpenAI from 'openai';
import { ChatCompletionMessageParam } from 'openai/resources/chat';

const openai = new OpenAI({
    apiKey: 'your-voidon-api-key',
    baseURL: 'https://api.voidon.astramind.ai/v1'
});

async function main() {
    const messages: ChatCompletionMessageParam[] = [
        {
            role: 'user',
            content: [
                {
                    type: 'text',
                    text: 'Summarize this document:'
                },
                // @ts-ignore - for custom content types
                {
                    type: 'file',
                    file: {
                        url: 'data:application/pdf;base64,JVBERi0xLjQK...'
                    }
                }
            ]
        }
    ];

    const response = await openai.chat.completions.create({
        model: 'auto',
        messages: messages
    });
    console.log(response.choices[0]?.message?.content);
}

main();
PHP
<?php
require_once __DIR__ . '/vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey('your-voidon-api-key')
    ->withBaseUri('api.voidon.astramind.ai/v1') 
    ->make();

$response = $client->chat()->create([
    'model' => 'auto',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'What is machine learning?'],
    ],
    'max_tokens' => 500,
    'temperature' => 0.7,
]);

echo $response->choices[0]->message->content;
PHP
<?php
require_once __DIR__ . '/vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey('your-voidon-api-key')
    ->withBaseUri('api.voidon.astramind.ai/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'auto',
    'messages' => [
        ['role' => 'user', 'content' => 'My name is John Smith, email john@example.com. Help me write a cover letter.'],
    ],
    'extra_body' => [
        'enable_anonymization' => true,
    ],
]);

echo $response->choices[0]->message->content;
PHP
<?php
require_once __DIR__ . '/vendor/autoload.php';

$client = OpenAI::factory()
    ->withApiKey('your-voidon-api-key')
    ->withBaseUri('api.voidon.astramind.ai/v1')
    ->make();

$response = $client->chat()->create([
    'model' => 'auto',
    'messages' => [
        [
            'role' => 'user',
            'content' => [
                ['type' => 'text', 'text' => 'Summarize this document:'],
                ['type' => 'file', 'file' => ['url' => 'data:application/pdf;base64,JVBERi0xLjQK...']],
            ],
        ],
    ],
]);

echo $response->choices[0]->message->content;
Go
package main

import (
    "context"
    "fmt"
    "github.com/sashabaranov/go-openai"
)

func main() {
    config := openai.DefaultConfig("your-voidon-api-key")
    config.BaseURL = "https://api.voidon.astramind.ai/v1"
    client := openai.NewClientWithConfig(config)

    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: "auto",
            Messages: []openai.ChatCompletionMessage{
                {Role: openai.ChatMessageRoleSystem, Content: "You are a helpful assistant."},
                {Role: openai.ChatMessageRoleUser, Content: "What is machine learning?"},
            },
            MaxTokens: 500,
            Temperature: 0.7,
        },
    )
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Println(resp.Choices[0].Message.Content)
}
Go
// Note: the standard go-openai library does not natively support custom
// fields in the request body. To send custom extensions such as
// "extra_body", use a raw HTTP client or a more flexible library.
package main

import "fmt"

func main() {
    fmt.Println("Sending custom extensions requires a raw HTTP client.")
}
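As the note above says, sending the `extra_body` extension without native library support means issuing the HTTP request directly. A minimal sketch using Python's standard library for illustration (the endpoint and field names follow the curl examples above; adapt the same idea to your language of choice):

```python
import json
import urllib.request

def build_request(api_key, body):
    """Build a raw POST to /v1/chat/completions carrying arbitrary fields,
    including the Voidon "extra_body" extension."""
    return urllib.request.Request(
        "https://api.voidon.astramind.ai/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

if __name__ == "__main__":
    body = {
        "model": "auto",
        "messages": [{"role": "user", "content": "Help me write a cover letter."}],
        "extra_body": {"enable_anonymization": True, "anonymization_types": [1, 2, 3]},
    }
    with urllib.request.urlopen(build_request("your-voidon-api-key", body)) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```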
Go
package main

import (
    "context"
    "fmt"
    "github.com/sashabaranov/go-openai"
)

func main() {
    config := openai.DefaultConfig("your-voidon-api-key")
    config.BaseURL = "https://api.voidon.astramind.ai/v1"
    client := openai.NewClientWithConfig(config)

    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: "auto",
            Messages: []openai.ChatCompletionMessage{
                {
                    Role: openai.ChatMessageRoleUser,
                    MultiContent: []openai.ChatMessagePart{
                        {
                            Type: openai.ChatMessagePartTypeText,
                            Text: "Summarize this document:",
                        },
                        // Note: the go-openai library does not support the "file" type.
                        // This example shows how it would be used for an image,
                        // but the custom endpoint may require adapting it.
                        {
                            Type: openai.ChatMessagePartTypeImageURL,
                            ImageURL: &openai.ChatMessageImageURL{
                                URL: "data:application/pdf;base64,JVBERi0xLjQK...",
                            },
                        },
                    },
                },
            },
        },
    )
    if err != nil {
        fmt.Printf("Error: %v\n", err)
        return
    }
    fmt.Println(resp.Choices[0].Message.Content)
}
Java
// Dep: com.theokanning.openai-service
import com.theokanning.openai.completion.chat.ChatCompletionRequest;
import com.theokanning.openai.completion.chat.ChatMessage;
import com.theokanning.openai.service.OpenAiService;
import java.util.List;

public class VoidonExample {
    public static void main(String[] args) {
        String apiKey = "your-voidon-api-key";
        // Configuring the base URL requires an advanced setup (omitted for brevity)
        OpenAiService service = new OpenAiService(apiKey); 

        ChatCompletionRequest request = ChatCompletionRequest.builder()
            .model("auto")
            .messages(List.of(
                new ChatMessage("system", "You are a helpful assistant."),
                new ChatMessage("user", "What is machine learning?")
            ))
            .maxTokens(500)
            .temperature(0.7)
            .build();

        System.out.println(service.createChatCompletion(request).getChoices().get(0).getMessage().getContent());
    }
}

Java
// Note: the standard com.theokanning.openai-service library is strongly
// typed and does not support adding custom fields such as "extra_body"
// to the request body without writing custom classes or using a raw
// HTTP client.

public class VoidonExample {
    public static void main(String[] args) {
        System.out.println("Sending custom extensions requires an HTTP client or a customized library.");
    }
}
Java
// Note: the com.theokanning.openai-service library does not support
// custom content types such as "file". The correct approach with this
// library would be to use a raw HTTP client and build the JSON request
// body manually.

public class VoidonExample {
    public static void main(String[] args) {
        System.out.println("Sending custom content types requires an HTTP client.");
    }
}

Ruby
require 'openai'

client = OpenAI::Client.new(
  access_token: 'your-voidon-api-key',
  uri_base: 'https://api.voidon.astramind.ai/'
)

response = client.chat(
  parameters: {
    model: 'auto',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is machine learning?' }
    ],
    max_tokens: 500,
    temperature: 0.7
  }
)

puts response.dig('choices', 0, 'message', 'content')
Ruby
require 'openai'

client = OpenAI::Client.new(
  access_token: 'your-voidon-api-key',
  uri_base: 'https://api.voidon.astramind.ai/'
)

response = client.chat(
  parameters: {
    model: 'auto',
    messages: [{ role: 'user', content: 'My name is John Smith, email john@example.com. Help me write a cover letter.' }],
    extra_body: {
      enable_anonymization: true
    }
  }
)

puts response.dig('choices', 0, 'message', 'content')
Ruby
require 'openai'

client = OpenAI::Client.new(
  access_token: 'your-voidon-api-key',
  uri_base: 'https://api.voidon.astramind.ai/'
)

response = client.chat(
  parameters: {
    model: 'auto',
    messages: [
      {
        role: 'user',
        content: [
          { type: 'text', text: 'Summarize this document:' },
          { type: 'file', file: { url: 'data:application/pdf;base64,JVBERi0xLjQK...' } }
        ]
      }
    ]
  }
)

puts response.dig('choices', 0, 'message', 'content')

Response

JSON
{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Machine learning is a subset of artificial intelligence..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 56,
    "completion_tokens": 31,
    "total_tokens": 87,
    "request_consumed_per_type": 1
  }
}

Streaming Response

When stream: true, responses are sent as Server-Sent Events:

Text Only
data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652288,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"Machine"},"finish_reason":null}]}

data: {"id":"chatcmpl-123","object":"chat.completion.chunk","created":1677652289,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":" learning"},"finish_reason":null}]}

data: [DONE]
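Each `data:` line carries one JSON chunk until the `[DONE]` sentinel; reassembling the message is a matter of concatenating the `delta.content` fields. A minimal parsing sketch in Python (standard library only; the event shape matches the chunks above):

```python
import json

def parse_sse_lines(lines):
    """Extract and join delta content from 'data: ...' Server-Sent Event
    lines, stopping at the [DONE] sentinel."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        event = json.loads(payload)
        delta = event["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```

In practice the OpenAI SDKs do this for you when `stream=True`; the helper is only for clients consuming the raw event stream.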

Model Selection

Automatic Model Selection

Use model: "auto" to let Voidon choose the optimal model based on:

  • Cost efficiency: Balance quality vs. price
  • Context length: Match your prompt length requirements
  • Availability: Route around downtime
  • Performance: Optimize for speed or quality

Specific Model Selection

You can also specify exact models using the format:

Text Only
provider/model_name

Examples:

  • openai/gpt-4o
  • anthropic/claude-3-5-sonnet
  • google/gemini-1.5-pro

Message Formats

Text Messages

JSON
{
  "role": "user",
  "content": "Hello, how are you?"
}

Multi-Modal Messages

JSON
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "What's in this image?"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "https://example.com/image.jpg"
      }
    }
  ]
}

Document Messages

JSON
{
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "Analyze this document:"
    },
    {
      "type": "file",
      "file": {
        "url": "data:application/pdf;base64,JVBERi0xLjQK..."
      }
    }
  ]
}

For the list of supported file formats, please see the Document page.
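Files are passed as base64 data URLs, as in the `file` content part above. A small Python helper for building one from a local PDF (`pdf_data_url` is an illustrative name, not part of the API; the MIME type must match the file):

```python
import base64

def pdf_data_url(path):
    """Encode a local PDF as the data: URL expected by the "file" content part."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode()
    return "data:application/pdf;base64," + encoded
```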

Best Practices

Optimize Costs

Use model: "auto" to automatically choose cost-effective models.

Handle Rate Limits

Implement exponential backoff for 429 responses.
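A sketch of that pattern in Python; `do_request` stands in for any zero-argument callable that issues the API call, and the broad `except` should be narrowed to your client's rate-limit exception (e.g. `openai.RateLimitError`):

```python
import random
import time

def backoff_delays(max_retries=5, base=1.0, cap=30.0):
    """Exponential backoff schedule: base * 2**attempt, capped at `cap` seconds."""
    return [min(cap, base * (2 ** attempt)) for attempt in range(max_retries)]

def call_with_backoff(do_request, max_retries=5, base=1.0):
    """Retry do_request() after each failure, sleeping with jitter in between."""
    for delay in backoff_delays(max_retries, base=base):
        try:
            return do_request()
        except Exception:  # narrow this to the 429 / rate-limit exception
            time.sleep(delay * (0.5 + random.random()))  # jittered delay
    return do_request()  # final attempt; let the error propagate
```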

Stream Long Responses

Use stream: true for better user experience with long responses.

Secure PII

Enable anonymization when processing user-generated content.

Monitor Usage

Check the dashboard regularly to track usage and costs.


OpenAPI Schema Reference

Voidon provides a complete OpenAPI 3.1.0 specification. The full schema is available at:

Text Only
https://api.voidon.astramind.ai/openapi.json
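The schema can be fetched and inspected like any OpenAPI document. A sketch with Python's standard library (assumes the URL above is publicly reachable):

```python
import json
import urllib.request

def list_paths(schema):
    """Return the endpoint paths declared in an OpenAPI schema dict."""
    return sorted(schema.get("paths", {}))

if __name__ == "__main__":
    url = "https://api.voidon.astramind.ai/openapi.json"
    with urllib.request.urlopen(url) as resp:
        schema = json.load(resp)
    print(schema["openapi"], list_paths(schema))
```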

Request Schema

The /v1/chat/completions endpoint accepts the following request structure:

Field Type Description
model string Model identifier (e.g., "auto", "openai/gpt-4o")
messages array Array of message objects (default: [])
functions array | null Legacy function calling support
function_call string | null Legacy function call control
timeout number | integer | null Request timeout in seconds
temperature number | null Sampling temperature (0-2)
top_p number | null Nucleus sampling parameter
n integer | null Number of completions to generate
stream boolean | null Enable streaming responses
stream_options object | null Streaming configuration options
stop string | array | null Stop sequences
max_tokens integer | null Maximum tokens in completion
max_completion_tokens integer | null Alternative max tokens parameter
modalities array[enum] | null Response modalities: ["text", "audio"]
prediction ChatCompletionPredictionContentParam | null Predictive completion hints
audio ChatCompletionAudioParam | null Audio output configuration
presence_penalty number | null Presence penalty (-2.0 to 2.0)
frequency_penalty number | null Frequency penalty (-2.0 to 2.0)
logit_bias object | null Token bias adjustments
user string | null End-user identifier
response_format object | null Response format specification (e.g., JSON mode)
seed integer | null Deterministic sampling seed
tools array | null Available tools for function calling
tool_choice string | object | null Tool selection strategy
parallel_tool_calls boolean | null Enable parallel tool execution
logprobs boolean | null Return log probabilities
top_logprobs integer | null Number of top logprobs to return
deployment_id string | null Azure deployment identifier
reasoning_effort enum | null Reasoning effort: minimal, low, medium, high
base_url string | null Custom base URL override
api_version string | null API version override
api_key string | null API key override
model_list array | null Custom model list
extra_headers object | null Additional HTTP headers
thinking AnthropicThinkingParam | null Anthropic extended thinking mode
web_search_options OpenAIWebSearchOptions | null Web search configuration
enable_anonymization boolean Voidon Extension: Enable automatic PII removal (default: account setting)
anonymization_types list[int] Voidon Extension: The numerical values for the types of entities to anonymize
ignore_mismatched_parameters boolean Voidon Extension: Ignore the specified parameters during the model selection phase; this can help when no models are found for a given combination of parameters and providers

Supporting Schemas

Prediction Schema

ChatCompletionPredictionContentParam

Field Type Description
type string Must be "content"
content string | array[ChatCompletionContentPartTextParam] Predicted content
Audio Schema

ChatCompletionAudioParam

Field Type Description
format enum Audio format: wav, aac, mp3, flac, opus, pcm16
voice string | enum Voice ID or preset: alloy, ash, ballad, coral, echo, sage, shimmer, verse
Thinking Schema

AnthropicThinkingParam

Field Type Description
type string Must be "enabled"
budget_tokens integer Token budget for thinking
Web Search Schema

OpenAIWebSearchOptions

Field Type Description
search_context_size enum | null Context size: low, medium, high
user_location OpenAIWebSearchUserLocation | null User location for localized results
User Location

OpenAIWebSearchUserLocation

Field Type Description
type string Must be "approximate"
approximate object Approximate location
approximate.city string City name
approximate.country string Country code
approximate.region string Region/state name
approximate.timezone string IANA timezone
Text Content

ChatCompletionContentPartTextParam

Field Type Description
type string Must be "text"
text string Text content

Response Schema

The endpoint returns a ModelResponse object:

Field Type Description
id string Unique completion identifier
created integer Unix timestamp of creation
object string Object type: "chat.completion" or "chat.completion.chunk"
model string | null Model that generated the response
system_fingerprint string | null System configuration fingerprint
choices array[Choices | StreamingChoices] Array of completion choices

Supporting Response Schemas

Choices Schema

Choices

Field Type Description
finish_reason string Reason for completion: stop, length, function_call, tool_calls, content_filter
index integer Choice index in array
message Message Generated message
logprobs ChoiceLogprobs | null Log probabilities (if requested)
provider_specific_fields object | null Provider-specific metadata
Message Schema

Message

Field Type Description
role enum Message role: assistant, user, system, tool, function
content string | null Message text content
tool_calls array[ChatCompletionMessageToolCall] | null Tool/function calls made
function_call FunctionCall | null Legacy function call (deprecated)
audio ChatCompletionAudioResponse | null Audio response (if modality enabled)
reasoning_content string | null Reasoning trace (o1 models)
thinking_blocks array[ThinkingBlock] | null Extended thinking blocks (Anthropic)
annotations array[ChatCompletionAnnotation] | null Content annotations (citations, etc.)
Tool Call

ChatCompletionMessageToolCall

Dynamic object containing tool call information. Structure depends on the tool.

Function Call

FunctionCall (deprecated, use tool_calls instead)

Field Type Description
arguments string JSON string of function arguments
name string | null Function name
Audio Response

ChatCompletionAudioResponse

Field Type Description
id string Audio data identifier
data string Base64-encoded audio data
expires_at integer Unix timestamp of expiration
transcript string Text transcript of audio
Thinking Block

ChatCompletionThinkingBlock or ChatCompletionRedactedThinkingBlock

Thinking block (full):

Field Type Description
type string Must be "thinking"
thinking string Thinking content
signature string Signature/hash of thinking
cache_control object | null Caching metadata

Redacted thinking block:

Field Type Description
type string Must be "redacted_thinking"
data string Redacted placeholder
cache_control object | null Caching metadata
Annotation

ChatCompletionAnnotation

Field Type Description
type string Must be "url_citation"
url_citation object Citation metadata
url_citation.start_index integer Start position in content
url_citation.end_index integer End position in content
url_citation.url string Citation URL
url_citation.title string Citation title
Logprobs Schema

ChoiceLogprobs

Field Type Description
content array[ChatCompletionTokenLogprob] | null Token-level log probabilities
Token Logprob

ChatCompletionTokenLogprob

Field Type Description
token string Token string
bytes array[integer] | null UTF-8 byte representation
logprob number Log probability of token
top_logprobs array[TopLogprob] Alternative tokens with probabilities
Top Logprob

TopLogprob

Field Type Description
token string Alternative token
bytes array[integer] | null UTF-8 byte representation
logprob number Log probability
Streaming Choices

StreamingChoices

Dynamic object for streaming responses. Contains partial delta updates instead of complete messages.


Example Request/Response

JSON
{
  "model": "auto",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 500,
  "enable_anonymization": false
}
JSON
{
  "id": "chatcmpl-9k3Xj2L1M0pQ8rS5tU6vW7x",
  "object": "chat.completion",
  "created": 1704151200,
  "model": "anthropic/claude-3-5-sonnet",
  "system_fingerprint": "fp_voidon_v1.2.3",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Quantum computing is a revolutionary approach to computation that leverages quantum mechanics...",
        "tool_calls": null,
        "function_call": null
      },
      "finish_reason": "stop",
      "logprobs": null,
      "provider_specific_fields": {
        "stop_reason": "end_turn",
        "stop_sequence": null
      }
    }
  ],
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 247,
    "total_tokens": 265
  }
}

Error Responses

When validation fails, the API returns a 422 Validation Error:

JSON
{
  "detail": [
    {
      "loc": ["body", "model"],
      "msg": "field required",
      "type": "value_error.missing"
    }
  ]
}

HTTPValidationError

Field Type Description
detail array[ValidationError] Array of validation errors
Validation Error
Field Type Description
loc array[string | integer] Error location path
msg string Human-readable error message
type string Error type identifier
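When handling a 422 programmatically, the `detail` array can be flattened into readable messages. A small Python sketch (`format_validation_errors` is an illustrative helper, not part of the API):

```python
def format_validation_errors(payload):
    """Render a 422 HTTPValidationError body into human-readable lines,
    joining each error's location path with dots."""
    return [
        "{}: {} ({})".format(
            ".".join(str(part) for part in err["loc"]),
            err["msg"],
            err["type"],
        )
        for err in payload.get("detail", [])
    ]
```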