# Sample API calls

Ready-to-run cURL examples for every endpoint on the Phoeniqs Model Service.

Before you start, make sure you have your Base URL, Model Name, and API Key. See How to inference an AI model for a primer.


# List available models

Fetch the full list of models your API Key can reach.

```shell
curl --location 'https://maas.phoeniqs.com/v1/models' \
  --header 'Authorization: Bearer <API_Key>'
```
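The endpoint returns an OpenAI-compatible model list, so the model IDs can be pulled out in a few lines of Python. The payload below is an illustrative sample, not actual service output:

```python
import json

# Hypothetical sample of an OpenAI-compatible /v1/models response;
# the real payload from the service may carry additional fields.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "inference-llama4-maverick", "object": "model"},
    {"id": "inference-bge-m3", "object": "model"}
  ]
}
""")

# Each entry in "data" describes one model; "id" is the name you pass
# as the "model" field in the requests below.
model_ids = [m["id"] for m in sample["data"]]
print(model_ids)
```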

# Chat completion

Send a prompt to any chat or reasoning model.

```shell
curl --location 'https://maas.phoeniqs.com/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-llama4-maverick",
    "messages": [
      { "role": "user", "content": "How do I make sourdough bread?" }
    ],
    "temperature": 0.7
  }'
```
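Assuming the response follows the OpenAI-compatible chat completion shape, the assistant's reply sits at `choices[0].message.content`. A minimal parsing sketch, using an illustrative sample payload rather than real service output:

```python
import json

# Hypothetical sample of an OpenAI-compatible chat completion response.
sample = json.loads("""
{
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Start with an active starter..."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 180, "total_tokens": 192}
}
""")

# The generated text lives on the first choice's message.
reply = sample["choices"][0]["message"]["content"]
print(reply)
```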

# Embeddings

Use /v1/embeddings in most cases. If that path is not reachable, fall back to the unversioned /embeddings path.

```shell
curl --location 'https://maas.phoeniqs.com/v1/embeddings' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-bge-m3",
    "input": "OpenAI develops AI models that understand and generate text."
  }'
```

Fallback without the version prefix:

```shell
curl --location 'https://maas.phoeniqs.com/embeddings' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-bge-m3",
    "input": "OpenAI develops AI models that understand and generate text."
  }'
```
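Assuming the OpenAI-compatible embeddings response shape, each vector comes back under `data[i].embedding`. A sketch of comparing two embeddings by cosine similarity, using a tiny illustrative payload (real vectors from the service are much longer):

```python
import json
import math

# Hypothetical sample response for a two-item batch; vector values are
# made up for illustration and far shorter than real embeddings.
sample = json.loads("""
{
  "data": [
    {"index": 0, "embedding": [0.1, 0.2, 0.3]},
    {"index": 1, "embedding": [0.1, 0.25, 0.28]}
  ]
}
""")

vecs = [item["embedding"] for item in sample["data"]]

def cosine(a, b):
    """Cosine similarity: dot product over the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine(vecs[0], vecs[1]))
```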

# Multimodal (text and image)

Send an image URL alongside a text prompt to a vision-capable model.

```shell
curl --location 'https://maas.phoeniqs.com/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-granite-vision-2b",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "text": "What is shown in this image?" },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Example.jpg/800px-Example.jpg"
            }
          }
        ]
      }
    ],
    "temperature": 0.7
  }'
```

# OCR

Extract text from an image using a dedicated OCR model.

```shell
curl --location 'https://maas.phoeniqs.com/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-deepseek-ocr",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "image_url",
            "image_url": {
              "url": "https://ofasys-multimodal-wlcb-3-toshanghai.oss-accelerate.aliyuncs.com/wpf272043/keepme/image/receipt.png"
            }
          },
          { "type": "text", "text": "Free OCR." }
        ]
      }
    ],
    "max_tokens": 2048,
    "temperature": 0.0
  }'
```

# Kimi-K2

Kimi-K2 requires the additional parameter stop_token_ids, set to the array [163586].

```shell
curl --location 'https://maas.phoeniqs.com/v1/chat/completions' \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer <API_Key>' \
  --data '{
    "model": "inference-kimi-k2",
    "messages": [
      { "role": "user", "content": "How are you?" }
    ],
    "temperature": 0.7,
    "stop_token_ids": [163586]
  }'
```

# See also

- [How to inference an AI model](../how-to-inference-a-model/)
- [Active Models](../../active-models/)