Register the service ai.engine.register
Note: we are still updating this page. Some data may be missing; we will fill it in shortly.
Scope: ai_admin
Who can execute the method: administrator
REST method for adding a custom service. The method registers an engine and updates it on subsequent calls. This is not a typical embedding point, since the partner's endpoint must follow strict formats.
| Parameter | Description | Version |
|---|---|---|
| name | A meaningful and concise name that will appear in the user interface. | |
| code | Unique engine code. | |
| category | Can be either `text` (text generation), `image` (image generation), or `audio` (text recognition). | |
| completions_url | Endpoint for processing the user request. | |
| settings | Type of AI (see description below). Optional. | 23.800 |
The method will return the ID of the added engine upon success.
Type of AI
Array of parameters:
| Parameter | Description | Version |
|---|---|---|
| code_alias | Type of AI. Available values: `ChatGPT` (Open AI). | |
| model_context_type | Type of context counting. Available values: `token` (tokens), `symbol` (symbols). Default is `token`. | |
| model_context_limit | Volume of context (default is 16K). Before your user request is sent, the context limit is checked according to the counting type. | |
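The context-limit check described for `model_context_type` and `model_context_limit` can be sketched as follows. This is only an illustration: the function names and the four-characters-per-token estimate are assumptions, not part of the API (a real provider would use its model's tokenizer).

```javascript
// Illustrative sketch of context counting and trimming.
// Function names and the token approximation are hypothetical.
function countUnits(text, contextType) {
    if (contextType === 'symbol') {
        return text.length; // symbol counting: plain character count
    }
    // 'token' counting: rough approximation (~4 characters per token);
    // a real provider would use the model's actual tokenizer.
    return Math.ceil(text.length / 4);
}

function trimContext(messages, contextType, contextLimit) {
    // Keep the most recent messages (they come last, in chronological
    // order) until the configured limit is reached.
    const kept = [];
    let used = 0;
    for (let i = messages.length - 1; i >= 0; i--) {
        const cost = countUnits(messages[i].content, contextType);
        if (used + cost > contextLimit) {
            break;
        }
        used += cost;
        kept.unshift(messages[i]);
    }
    return kept;
}
```

For example, with `model_context_type: 'symbol'` and a limit of 15, only the most recent 10-character message of two would be kept.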
Examples
```javascript
try
{
    const response = await $b24.callMethod(
        'ai.engine.register',
        {
            name: 'Smith GPT',
            code: 'smith_gpt',
            category: 'text',
            completions_url: 'https://antonds.com/ai/aul/completions/',
            settings: {
                code_alias: 'ChatGPT',
                model_context_type: 'token',
                model_context_limit: 16 * 1024,
            },
        }
    );
    const result = response.getData().result;
    console.info(result);
}
catch (error)
{
    console.error(error);
}
```
```php
try {
    $response = $b24Service
        ->core
        ->call(
            'ai.engine.register',
            [
                'name' => 'Smith GPT',
                'code' => 'smith_gpt',
                'category' => 'text',
                'completions_url' => 'https://antonds.com/ai/aul/completions/',
                'settings' => [
                    'code_alias' => 'ChatGPT',
                    'model_context_type' => 'token',
                    'model_context_limit' => 16 * 1024,
                ],
            ]
        );
    $result = $response
        ->getResponseData()
        ->getResult();
    echo 'Success: ' . print_r($result, true);
    // Your required data processing logic
    processData($result);
} catch (Throwable $e) {
    error_log($e->getMessage());
    echo 'Error registering AI engine: ' . $e->getMessage();
}
```
```javascript
BX24.callMethod(
    'ai.engine.register',
    {
        name: 'Smith GPT',
        code: 'smith_gpt',
        category: 'text',
        completions_url: 'https://antonds.com/ai/aul/completions/',
        settings: {
            code_alias: 'ChatGPT',
            model_context_type: 'token',
            model_context_limit: 16 * 1024,
        },
    },
    function (result)
    {
        if (result.error())
        {
            console.error(result.error());
        }
        else
        {
            console.info(result.data());
        }
    }
);
```
Endpoint
Attention!
The script is shown as a single code flow for example purposes only. In production, separate the request handling and the queue processing into distinct parts of your code.
The template for creating a custom endpoint can be used as a starting point for your own service.
Important points:
- The script must accept the request, process it quickly, and add it to its internal queue.
- A service with the type "image" must process requests asynchronously.
- It must be able to return the appropriate response statuses (as shown in the example):
  - 200 — normal link transition;
  - 202 — if you accepted the request and added it to the queue;
  - 503 — if the service is unavailable.

A response is expected within a certain time, after which the callback becomes invalid.
Attention!
In addition to the response code, in case of successful generation the handler must return `json_encode(['result' => 'OK'])`.
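The response logic described above can be sketched as a small handler. This is a minimal illustration, not the required implementation: the function name and the in-memory queue are hypothetical, and a real service would wrap this in an HTTP server with a persistent queue.

```javascript
// Sketch of the endpoint's response logic. The handler name and the
// queue structure are hypothetical; only the status codes and the
// { result: 'OK' } body come from the documentation above.
function handleCompletionRequest(request, queue, serviceAvailable) {
    if (!serviceAvailable) {
        // 503: the service is unavailable.
        return { status: 503, body: JSON.stringify({ error: 'Service unavailable' }) };
    }
    // Accept the request quickly and put it into the internal queue;
    // actual generation happens asynchronously (mandatory for the
    // "image" category). A status of 200 would correspond to a normal
    // link transition, as listed above.
    queue.push(request);
    // 202: request accepted and queued; body signals success.
    return { status: 202, body: JSON.stringify({ result: 'OK' }) };
}
```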
When working with a provider of the category audio, the prompt key contains an array with the following elements:
- file: link to the file. It is important to note that the file may have no extension.
- fields: an auxiliary internal array that contains:
  - type: Content-Type of the file, which is especially important if the file has no extension (e.g., "audio/ogg").
  - prompt: an auxiliary prompt for the audio file, which may contain key information to assist in recognizing the file, such as your company name.
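Unpacking that audio payload might look like the sketch below. The helper name is hypothetical; the field names follow the list above.

```javascript
// Sketch: unpacking the "prompt" payload of an audio-category provider.
// The function name is hypothetical; field names follow the docs above.
function unpackAudioPrompt(prompt) {
    return {
        fileUrl: prompt.file,            // link to the file (may lack an extension)
        contentType: prompt.fields.type, // e.g. "audio/ogg"; use this when the extension is missing
        hint: prompt.fields.prompt || '' // auxiliary hint, e.g. your company name
    };
}
```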
The provider also receives additional fields:
| Field | Description | Version |
|---|---|---|
| auth | Authorization data. | 23.600.0 |
| payload_raw | Raw value of the prompt (when using Copilot, this is the character code of the used prompt). | 23.600.0 |
| payload_provider | Character code of the provider pre-prompt (when using Copilot, this will be `prompt`). | 23.600.0 |
| payload_prompt_text | If | 23.800.0 |
| payload_markers | Array of additional markers from the user ( | 23.800.0 |
| payload_role | Role (instruction) used when forming the prompt. In GPT-like systems, you should send this role as `system` in the message array. | 23.800.0 |
| context | Array of preceding messages in chronological order, for example a list of comments on a post. The first item in such a context list is the author's message (the post itself). Important: the volume of context sent to your provider depends on the volume you specified and on the counting type (more details in the provider documentation). By default, the counting method is `token` with a volume of 16K. Send context to the neural network only if the parameter `collect_context` is set to `true` (`1`); otherwise it is supplied as additional information to use at your discretion. | 23.800.0 |
| max_tokens | Maximum number of lexemes. This parameter controls the length of the output. Optional. | |
| temperature* | Temperature. This parameter controls the randomness of the output (low values make the output more focused and deterministic). Required. | |

\* Required parameters
Example
Suppose you receive (in addition to other information) three pieces of data:
- prompt: the current request; this is just text;
- payload_role: some text containing instructions;
- context: an array (presumably also non-empty).
In this case, the resulting message array looks like this:

```
[
    {
        "role": "system",
        "content": "$payload_role"
    },
    // ...the entire context array, or part of it if you want to economize on the request;
    // remember that it goes in chronological order (the most recent messages are at the bottom)...
    {
        "role": "user",
        "content": "$prompt" // this is the current request, and it is NOT included in the context
    }
]
```
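This assembly can be sketched as a small helper. The function name is an assumption for illustration; the field names mirror the provider fields described above.

```javascript
// Sketch: assembling the final message array for a GPT-like chat API
// from the provider fields described above. The function name is
// hypothetical; the role/content shape follows the example array.
function buildMessages(payloadRole, context, prompt) {
    const messages = [];
    if (payloadRole) {
        // The role (instruction) goes first, as the "system" message.
        messages.push({ role: 'system', content: payloadRole });
    }
    // Context is already in chronological order (most recent last);
    // include all of it, or trim it to fit the model's context window.
    for (const message of context) {
        messages.push(message);
    }
    // The current request is NOT part of the context; it goes last.
    messages.push({ role: 'user', content: prompt });
    return messages;
}
```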