Data Models Reference
This section provides detailed documentation for all data structures used in the API.
OpenGithubModelsApi.Function_Params
— Type
Function_Params(;
name=nothing,
description=nothing,
parameters=nothing,
)
- name::String : The name of the function to be called.
- description::String : A description of what the function does. The model will use this description when selecting the function and interpreting its parameters.
- parameters::Any : A JSON Schema object that describes the function's parameters.
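For example, a weather-lookup function could be described as in the sketch below. The function name is illustrative, and the Dict literal standing in for the JSON Schema object is an assumption about what the constructor accepts.

using OpenGithubModelsApi

# A hypothetical function description; the JSON Schema is passed as a Dict.
get_weather = OpenGithubModelsApi.Function_Params(;
    name = "get_weather",
    description = "Return the current weather for a given city.",
    parameters = Dict(
        "type" => "object",
        "properties" => Dict(
            "city" => Dict("type" => "string", "description" => "City name"),
        ),
        "required" => ["city"],
    ),
)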
OpenGithubModelsApi.InferenceRequest
— Type
InferenceRequest(;
model=nothing,
messages=nothing,
frequency_penalty=nothing,
max_tokens=nothing,
modalities=nothing,
presence_penalty=nothing,
response_format=nothing,
seed=nothing,
stream=false,
stream_options=nothing,
stop=nothing,
temperature=nothing,
tool_choice=nothing,
tools=nothing,
top_p=nothing,
)
- model::String : ID of the specific model to use for the request.
- messages::Vector{Message} : The collection of context messages associated with this chat completion request. Typical usage begins with a chat message for the System role that provides instructions for the behavior of the assistant, followed by alternating messages between the User and Assistant roles.
- frequency_penalty::Float64 : A value that influences the probability of generated tokens appearing based on their cumulative frequency in generated text. Positive values will make tokens less likely to appear as their frequency increases and decrease the likelihood of the model repeating the same statements verbatim. Supported range is [-2, 2].
- max_tokens::Int64 : The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. For example, if your prompt is 100 tokens and you set max_tokens to 50, the API will return a completion with a maximum of 50 tokens.
- modalities::Vector{String} : The modalities that the model is allowed to use for the chat completions response. The default modality is text. Indicating an unsupported modality combination results in a 422 error. Supported values are: text, audio
- presence_penalty::Float64 : A value that influences the probability of generated tokens appearing based on their existing presence in generated text. Positive values will make tokens less likely to appear when they already exist and increase the model's likelihood to output new tokens. Supported range is [-2, 2].
- response_format::InferenceRequestResponseFormat
- seed::Int64 : If specified, the system will make a best effort to sample deterministically such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed.
- stream::Bool : A value indicating whether chat completions should be streamed for this request.
- stream_options::InferenceRequestStreamOptions
- stop::Vector{String} : A collection of textual sequences that will end completion generation.
- temperature::Float64 : The sampling temperature to use that controls the apparent creativity of generated completions. Higher values will make output more random while lower values will make results more focused and deterministic. It is not recommended to modify temperature and top_p for the same completion request as the interaction of these two settings is difficult to predict. Supported range is [0, 1]. Decimal values are supported.
- tool_choice::String : If specified, the model will configure which of the provided tools it can use for the chat completions response.
- tools::Vector{InferenceRequestToolsInner} : A list of tools the model may request to call. Currently, only functions are supported as a tool. The model may respond with a function call request and provide the input arguments in JSON format for that function.
- top_p::Float64 : An alternative to sampling with temperature called nucleus sampling. This value causes the model to consider the results of tokens with the provided probability mass. As an example, a value of 0.15 will cause only the tokens comprising the top 15% of probability mass to be considered. It is not recommended to modify temperature and top_p for the same request as the interaction of these two settings is difficult to predict. Supported range is [0, 1]. Decimal values are supported.
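A minimal request might look like the sketch below; the model ID is illustrative only (use list_models to discover real IDs).

using OpenGithubModelsApi

# A system instruction followed by one user turn, per the messages field above.
request = OpenGithubModelsApi.InferenceRequest(;
    model = "openai/gpt-4o-mini",  # illustrative model ID
    messages = [
        OpenGithubModelsApi.Message(; role = "system",
            content = "You are a concise assistant."),
        OpenGithubModelsApi.Message(; role = "user",
            content = "Summarize nucleus sampling in one sentence."),
    ],
    max_tokens = 128,
    temperature = 0.2,
)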
OpenGithubModelsApi.InferenceRequestResponseFormat
— Type
InferenceRequestResponseFormat(;
type=nothing,
json_schema=nothing,
)
- type::String : The type of the response.
- json_schema::Any : The JSON schema for the response.
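A sketch of requesting structured output. The "json_schema" type value and the wrapper keys follow the common OpenAI-style convention; they are assumptions, not values confirmed by this reference.

fmt = OpenGithubModelsApi.InferenceRequestResponseFormat(;
    type = "json_schema",  # assumed enum value
    json_schema = Dict(
        "name" => "city_info",
        "schema" => Dict(
            "type" => "object",
            "properties" => Dict("city" => Dict("type" => "string")),
        ),
    ),
)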
OpenGithubModelsApi.InferenceRequestStreamOptions
— Type
InferenceRequestStreamOptions(;
include_usage=false,
)
- include_usage::Bool : Whether to include usage information in the response.
OpenGithubModelsApi.InferenceRequestToolsInner
— Type
InferenceRequestToolsInner(;
call_function=nothing,
type=nothing,
)
- call_function::Function_Params
- type::String
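Building on the Function_Params sketch above, a tool entry could be assembled as follows; the "function" type value is an assumption consistent with the note that only functions are currently supported.

tool = OpenGithubModelsApi.InferenceRequestToolsInner(;
    call_function = get_weather,  # the Function_Params built earlier
    type = "function",            # assumed; functions are the only tool type
)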
OpenGithubModelsApi.InferenceResponse
— Type
InferenceResponse(;
choices=nothing,
data=nothing,
)
- choices::Vector{NonStreamingResponseChoices}
- data::StreamingResponseData
OpenGithubModelsApi.Message
— Type
Message(;
role=nothing,
content=nothing,
)
- role::String : The chat role associated with this message.
- content::String : The content of the message.
OpenGithubModelsApi.ModelData
— Type
ModelData(;
id=nothing,
name=nothing,
publisher=nothing,
summary=nothing,
rate_limit_tier=nothing,
tags=nothing,
supported_input_modalities=nothing,
supported_output_modalities=nothing,
)
- id::String : The unique identifier for the model.
- name::String : The name of the model.
- publisher::String : The publisher of the model.
- summary::String : A brief summary of the model's capabilities.
- rate_limit_tier::String : The rate limit tier for the model.
- tags::Vector{String} : A list of tags associated with the model.
- supported_input_modalities::Vector{String} : A list of input modalities supported by the model.
- supported_output_modalities::Vector{String} : A list of output modalities supported by the model.
OpenGithubModelsApi.NonStreamingResponseChoices
— Type
NonStreamingResponseChoices(;
message=nothing,
)
- message::NonStreamingResponseMessage
OpenGithubModelsApi.NonStreamingResponseMessage
— Type
The message associated with the completion.
NonStreamingResponseMessage(;
content=nothing,
role=nothing,
)
- content::String : The content of the message.
- role::String : The role of the message.
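Given the shapes above, the assistant's reply can be extracted from a non-streaming response with a one-line helper (illustrative, not part of the package).

# choices -> message -> content, per the structures documented above
reply(resp::OpenGithubModelsApi.InferenceResponse) =
    resp.choices[1].message.content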
OpenGithubModelsApi.StreamingResponseData
— Type
Some details about the response.
StreamingResponseData(;
choices=nothing,
)
- choices::Vector{StreamingResponseDataChoices}
OpenGithubModelsApi.StreamingResponseDataChoices
— Type
StreamingResponseDataChoices(;
delta=nothing,
)
- delta::StreamingResponseDataDelta
OpenGithubModelsApi.StreamingResponseDataDelta
— Type
Container for the content of the streamed response.
StreamingResponseDataDelta(;
content=nothing,
)
- content::String : The content of the streamed response.
OpenGithubModelsApi.create_chat_completion
— Method
Creates a chat completion.
Params:
- auth_token::String (required)
- api_version::String (required)
- inference_request::InferenceRequest (required)
Return: InferenceResponse, OpenAPI.Clients.ApiResponse
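A sketch of a full round trip. The client construction, the DefaultApi struct name, the base URL, the Bearer prefix, and the api_version value are all assumptions about the generated OpenAPI.jl client; adjust them to your setup.

using OpenGithubModelsApi
using OpenAPI

client = OpenAPI.Clients.Client("https://models.github.ai")  # assumed base URL
api = OpenGithubModelsApi.DefaultApi(client)                 # assumed struct name

result, http_response = OpenGithubModelsApi.create_chat_completion(
    api,
    "Bearer $(ENV["GITHUB_TOKEN"])",  # auth_token; Bearer prefix assumed
    "2022-11-28",                     # api_version; value assumed
    request,                          # the InferenceRequest built earlier
)
println(result.choices[1].message.content)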
OpenGithubModelsApi.create_org_chat_completion
— Method
Creates a chat completion for a given organization.
Params:
- org::String (required)
- auth_token::String (required)
- api_version::String (required)
- inference_request::InferenceRequest (required)
Return: InferenceResponse, OpenAPI.Clients.ApiResponse
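The organization-scoped variant has the same shape, with the organization slug as the first parameter after the API object (the org name here is illustrative).

result, _ = OpenGithubModelsApi.create_org_chat_completion(
    api, "my-org", "Bearer $(ENV["GITHUB_TOKEN"])", "2022-11-28", request)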
OpenGithubModelsApi.list_models
— Method
Lists available models.
Params:
- auth_token::String (required)
- api_version::String (required)
Return: Vector{ModelData}, OpenAPI.Clients.ApiResponse
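Listing models with the same hypothetical client as above:

models, _ = OpenGithubModelsApi.list_models(
    api, "Bearer $(ENV["GITHUB_TOKEN"])", "2022-11-28")
for m in models
    println(m.id, " (", m.publisher, ")")
end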
Parameter Validation Rules
Temperature and Top P
- Range: 0.0 to 1.0
- Decimal values are supported
- Not recommended to modify both simultaneously
Message Roles
Valid values: "assistant"
, "developer"
, "system"
, "user"
Tool Choice
Valid values: "auto"
, "required"
, "none"
Modalities
Supported values: "text"
, "audio"
Warning
Setting stream=true is not supported and will result in an error.
Best Practices for Data Models
- Always validate required fields before making API calls
- Use the provided type constraints to ensure valid parameters
- Handle optional fields appropriately in your application
- Be aware of enum constraints for string parameters
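Putting these practices together, a minimal pre-flight check could look like the sketch below; the helper name and error style are illustrative, not part of the package.

function check_request(req::OpenGithubModelsApi.InferenceRequest)
    # temperature and top_p must lie in [0, 1] when set
    for (v, name) in ((req.temperature, "temperature"), (req.top_p, "top_p"))
        v === nothing || 0.0 <= v <= 1.0 || error("$name must be in [0, 1]")
    end
    # streaming is rejected, per the warning above
    req.stream === true && error("stream=true is not supported")
    # messages is required and must be non-empty
    isempty(something(req.messages, [])) && error("messages must be non-empty")
    return req
end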