POST /chat/completions
curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "content": "your are a helpful assistant",
      "role": "system"
    },
    {
      "content": "hi",
      "role": "user"
    }
  ],
  "thinking": {
    "type": "enabled",
    "budget_tokens": 16000,
    "reasoning_effort": "medium"
  },
  "stream": true,
  "enable_search": true,
  "retries": 0,
  "temperature": 1.3,
  "max_completion_tokens": 1024,
  "json_mode": true,
  "tools": [
    "<any>"
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": true,
  "stop": [
    "<string>"
  ],
  "logprobs": false,
  "top_logprobs": 2,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "top_p": 1,
  "seed": 123,
  "n": 1,
  "metadata": {},
  "sess_id": "123e4567-e89b-12d3-a456-426614174000"
}'
{
  "id": "<string>",
  "created": 123,
  "model": "<string>",
  "object": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>",
        "reasoning_content": "<string>",
        "audio": {
          "id": "<string>",
          "data": "<string>",
          "expires_at": 123,
          "transcript": "<string>"
        },
        "tool_calls": [
          {
            "id": "<string>",
            "type": "function",
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            }
          }
        ]
      },
      "finish_reason": "stop",
      "logprobs": {
        "content": [
          {
            "token": "<string>",
            "logprob": 123,
            "bytes": [
              123
            ],
            "top_logprobs": [
              {
                "token": "<string>",
                "logprob": 123,
                "bytes": [
                  123
                ]
              }
            ]
          }
        ]
      }
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123,
    "billed_units": 123,
    "prompt_tokens_details": {
      "text_tokens": 123,
      "audio_tokens": 123,
      "cached_tokens": 123,
      "image_tokens": 123,
      "video_tokens": 123,
      "citation_tokens": 123
    },
    "completion_tokens_details": {
      "text_tokens": 123,
      "audio_tokens": 123,
      "reasoning_tokens": 123,
      "accepted_prediction_tokens": 123,
      "rejected_prediction_tokens": 123
    }
  },
  "citations": [
    "<string>"
  ],
  "system_fingerprint": "<string>"
}

Note: For the chat model name, refer to the System Supported Chat Model List. The request/response parameter structure is fully compatible with OpenAI, so switching models only requires changing the model name; if a model's request/response parameters differ from OpenAI's, GeekAI automatically converts and aligns them. Except for Baidu ERNIE Bot and iFlytek Spark, all other platforms support function calling (the specific supported models are subject to platform restrictions). When calling a GPTs model, replace the * in gpt-4-gizmo-* with the GPTs' gizmo_id, which can be extracted from the GPTs URL. For example, for https://chatgpt.com/g/g-bo0FiWLY7-researchgpt, the gizmo_id is g-bo0FiWLY7.
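
For example (a sketch, with an illustrative user prompt), calling the ResearchGPT GPTs from the URL above uses the model name gpt-4-gizmo-g-bo0FiWLY7:

curl --location --request POST 'https://geekai.dev/api/v1/chat/completions' \
--header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4-gizmo-g-bo0FiWLY7",
    "messages": [
        {
            "role": "user",
            "content": "Summarize the strengths of this GPTs for research tasks."
        }
    ],
    "stream": false
}'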

The response data structure is fully compatible with OpenAI and extends it with features for adapting to other models: cited links (citations), search billing units (billed_units), video in message content, image/video input tokens, and inference mode settings (thinking). The response structure differs between streaming and non-streaming output; see the request examples below:

cURL Request Examples

curl --location --request POST 'https://geekai.dev/api/v1/chat/completions' \
--header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "hi"
        }
    ],
    "stream": false
}'
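
To see the streaming response structure, set "stream": true. A minimal sketch (curl's --no-buffer/-N flag prints chunks as they arrive; assuming OpenAI-compatible server-sent events, the body arrives as a sequence of data: chunks ending with data: [DONE] rather than a single JSON object):

curl --location --request POST 'https://geekai.dev/api/v1/chat/completions' \
--header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
--header 'Content-Type: application/json' \
--no-buffer \
--data-raw '{
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "hi"
        }
    ],
    "stream": true
}'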

Authorizations

Authorization (string, header, required): Bearer token.

Body

application/json

Response

200: application/json. Successful response; the response is of type object.