POST /chat/completions

Chat Completion API
Request Example

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
  "model": "gpt-4o-mini",
  "messages": [
    {
      "content": "your are a helpful assistant",
      "role": "system"
    },
    {
      "content": "hi",
      "role": "user"
    }
  ],
  "thinking": {
    "type": "enabled",
    "budget_tokens": 16000,
    "reasoning_effort": "medium"
  },
  "stream": true,
  "enable_search": true,
  "retries": 0,
  "temperature": 1.3,
  "max_completion_tokens": 1024,
  "json_mode": true,
  "tools": [
    "<any>"
  ],
  "tool_choice": "auto",
  "parallel_tool_calls": true,
  "stop": [
    "<string>"
  ],
  "logprobs": false,
  "top_logprobs": 2,
  "frequency_penalty": 0,
  "presence_penalty": 0,
  "top_p": 1,
  "seed": 123,
  "n": 1,
  "metadata": {},
  "sess_id": "123e4567-e89b-12d3-a456-426614174000"
}'

Response Example

{
  "id": "<string>",
  "created": 123,
  "model": "<string>",
  "object": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "assistant",
        "content": "<string>",
        "reasoning_content": "<string>",
        "audio": {
          "id": "<string>",
          "data": "<string>",
          "expires_at": 123,
          "transcript": "<string>"
        },
        "tool_calls": [
          {
            "id": "<string>",
            "type": "function",
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            }
          }
        ]
      },
      "finish_reason": "stop",
      "logprobs": {
        "content": [
          {
            "token": "<string>",
            "logprob": 123,
            "bytes": [
              123
            ],
            "top_logprobs": [
              {
                "token": "<string>",
                "logprob": 123,
                "bytes": [
                  123
                ]
              }
            ]
          }
        ]
      }
    }
  ],
  "usage": {
    "prompt_tokens": 123,
    "completion_tokens": 123,
    "total_tokens": 123,
    "billed_units": 123,
    "prompt_tokens_details": {
      "text_tokens": 123,
      "audio_tokens": 123,
      "cached_tokens": 123,
      "image_tokens": 123,
      "video_tokens": 123,
      "citation_tokens": 123
    },
    "completion_tokens_details": {
      "text_tokens": 123,
      "audio_tokens": 123,
      "reasoning_tokens": 123,
      "accepted_prediction_tokens": 123,
      "rejected_prediction_tokens": 123
    }
  },
  "citations": [
    "<string>"
  ],
  "system_fingerprint": "<string>"
}
Note: For the chat model name, refer to the System Supported Chat Model List. The request/response parameter structure is fully compatible with OpenAI: when switching models you only need to change the model name, and if a model's request/response parameters differ from OpenAI's, GeekAI automatically converts and aligns them. Except for Baidu ERNIE Bot and iFlytek Spark, all other platforms support function calling (the specific supported models are subject to each platform's restrictions).

When calling a GPTs model, replace the * in gpt-4-gizmo-* with the corresponding GPTs gizmo_id, which can be extracted from the GPTs URL. For example, for https://chatgpt.com/g/g-bo0FiWLY7-researchgpt, the gizmo_id is g-bo0FiWLY7.
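
For example, to call the ResearchGPT GPTs mentioned above, the model name becomes gpt-4-gizmo-g-bo0FiWLY7. A minimal sketch (the user prompt is illustrative):

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4-gizmo-g-bo0FiWLY7",
    "messages": [
      { "role": "user", "content": "Summarize the key ideas of retrieval-augmented generation." }
    ],
    "stream": false
  }'
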
The response data structure is also fully compatible with OpenAI and extends it with features for adapting to other models: cited links (citations), search billing units (billed_units), video in message content, image/video input tokens, and reasoning mode settings (thinking). The response structure differs between streaming and non-streaming output; refer to the request examples below.

cURL Request Example

curl --location --request POST 'https://geekai.dev/api/v1/chat/completions' \
--header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "hi"
        }
    ],
    "stream": false
}'
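
cURL Streaming Request Example

Setting stream to true switches the response from a single JSON object to streamed output. Since the API is OpenAI-compatible, the stream is expected to arrive as server-sent events: a sequence of data: lines carrying chat.completion.chunk objects with incremental content in choices[].delta, terminated by a data: [DONE] line. A minimal sketch:

curl --location --request POST 'https://geekai.dev/api/v1/chat/completions' \
--header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
--header 'Content-Type: application/json' \
--data-raw '{
    "model": "gpt-4o-mini",
    "messages": [
        {
            "role": "user",
            "content": "hi"
        }
    ],
    "stream": true
}'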

Authorizations

Authorization
string
header
required

token

Body

application/json
model
string
default:gpt-4o-mini
required

chat model

Example:

"gpt-4o-mini"

messages
object[]
required

input messages

Example:
[
{
"content": "your are a helpful assistant",
"role": "system"
},
{ "content": "hi", "role": "user" }
]
thinking
object

reasoning parameters; only supported by o1, o3-mini, and claude-3.7-sonnet
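
As a sketch, combining the thinking fields from the request example at the top of this page (claude-3.7-sonnet is one of the listed reasoning-capable models; budget_tokens caps the reasoning tokens and reasoning_effort selects the effort level):

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "claude-3.7-sonnet",
    "messages": [
      { "role": "user", "content": "Prove that the square root of 2 is irrational." }
    ],
    "thinking": {
      "type": "enabled",
      "budget_tokens": 16000,
      "reasoning_effort": "medium"
    },
    "stream": false
  }'

For models that return their reasoning, the reasoning text appears in choices[].message.reasoning_content of the response (see the response schema above).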

stream
boolean
default:false

enable stream output

Example:

true

enable_search
boolean

enable web search

Example:

true
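
As a sketch, enabling web search for a query; when search is used, the response may additionally include the citations list and usage.billed_units documented below (availability depends on the selected model and platform):

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "What are the latest releases of the Linux kernel?" }
    ],
    "enable_search": true,
    "stream": false
  }'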

retries
integer
default:0

number of automatic retries; the default 0 means no retry

Example:

0

temperature
number
default:1.3

sampling temperature, controls the randomness of the output

Example:

1.3

max_completion_tokens
integer

maximum completion token length

Example:

1024

json_mode
boolean
default:false

enable json output mode

Example:

true
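
As a sketch, requesting JSON output; with json_mode enabled, describing the expected JSON shape in the messages generally improves results (the keys below are illustrative):

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "system", "content": "Reply with a JSON object containing the keys city and population." },
      { "role": "user", "content": "Tell me about Tokyo." }
    ],
    "json_mode": true,
    "stream": false
  }'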

tools
any[]

tool function definitions (see the request sketch after parallel_tool_calls below)

tool_choice
string
default:auto

tool choice

Example:

"auto"

parallel_tool_calls
boolean
default:true

enable parallel tool calls

Example:

true
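
A sketch of a tool-calling request, assuming the OpenAI function-calling schema that this API mirrors (the get_weather function and its parameters are illustrative):

curl --request POST \
  --url https://geekai.dev/api/v1/chat/completions \
  --header 'Authorization: Bearer {YOUR_GEEKAI_API_KEY}' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "What is the weather in Paris right now?" }
    ],
    "tools": [
      {
        "type": "function",
        "function": {
          "name": "get_weather",
          "description": "Get the current weather for a city",
          "parameters": {
            "type": "object",
            "properties": {
              "city": { "type": "string" }
            },
            "required": ["city"]
          }
        }
      }
    ],
    "tool_choice": "auto",
    "parallel_tool_calls": true,
    "stream": false
  }'

If the model decides to call the tool, the call is returned in choices[].message.tool_calls (see the response schema above) with the function name and a JSON string of arguments to execute on your side.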

stop
string[]

stop words

logprobs
boolean
default:false

enable logprobs output

Example:

false

top_logprobs
integer

number of most likely tokens to return log probabilities for at each position

Example:

2

frequency_penalty
number
default:0

frequency penalty coefficient

Example:

0

presence_penalty
number
default:0

presence penalty coefficient

Example:

0

top_p
number
default:1

top p sampling

Example:

1

seed
integer

random seed

n
integer
default:1

number of completions

Example:

1

metadata
object

additional metadata

sess_id
string<uuid>

session ID

Example:

"123e4567-e89b-12d3-a456-426614174000"

Response

Successful response

id
string

Request ID

created
integer

Unix timestamp of the request creation

model
string

Model used for this conversation turn

object
string

Response object type

choices
object[]

The list of generated completion choices.

usage
object

Token usage statistics

citations
string[]

List of cited documents/links

system_fingerprint
string

System fingerprint
