The LLM Processing API provides a local interface on network nodes for submitting inference requests.
General Usage
The base URI for all calls is: http://localhost:11434/api/
Each endpoint accepts a JSON object of parameters within a POST request, unless otherwise specified.
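Any HTTP client that can POST a JSON body will work. Below is a minimal Python sketch of this calling convention, using the third-party requests library; the operation path segment is passed in by the caller, since the individual paths are specified in the OpenAPI definition linked under API Definition.

import requests

BASE_URI = "http://localhost:11434/api/"

def post_json(operation, params):
    """POST a JSON object of parameters to an operation path.

    `operation` is the path segment for the call being made, as
    specified in the OpenAPI definition linked below.
    """
    resp = requests.post(BASE_URI + operation, json=params, timeout=60)
    resp.raise_for_status()  # raise on 400/404 error responses
    return resp.json()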
API Definition
Charity Engine Internal LLM Processing API 1.0.0 (OAS 3.0)
Definition: https://api.bitbucket.org/2.0/repositories/gridrepublic/ce-apis/src/main/llm-server-internal-api-v1.yaml

inference
Operations related to text inference requests.
Parameters
No parameters
Request body (example)
{
  "model": "llama3.2:3b",
  "prompt": "Respond with a single, random word.",
  "stream": false,
  "images": [
    "R0lGODdhAQABAPAAAP8AAAAAACwAAAAAAQABAAACAkQBADs="
  ],
  "context": "Based",
  "system": "You are a cool high school teacher who uses a lot of Gen Z slang.",
  "template": "string",
  "options": {
    "temperature": 0.9,
    "seed": 42,
    "stop": "vibe"
  }
}
Responses
Code | Description | Links
200 | Successfully processed request. | No links
400 | Request is invalid; JSON could not be parsed or the model does not provide this function. | No links
404 | Model with the given name was not found. | No links
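As a sketch of how this operation might be called end to end, the snippet below sends an abbreviated version of the example body above and branches on the response codes from the table. The path segment "generate" is an assumption for illustration (the service listens on the Ollama-default port), not a route confirmed by this document; consult the OpenAPI definition for the actual path.

import requests

# NOTE: the path segment "generate" is an assumption for illustration;
# the actual route is specified in the OpenAPI definition linked above.
payload = {
    "model": "llama3.2:3b",
    "prompt": "Respond with a single, random word.",
    "stream": False,
    "system": "You are a cool high school teacher who uses a lot of Gen Z slang.",
    "options": {"temperature": 0.9, "seed": 42, "stop": "vibe"},
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
if resp.status_code == 200:
    # with "stream": false the reply arrives as a single JSON object
    # (assumed Ollama-style behavior)
    print(resp.json())
elif resp.status_code == 400:
    print("Invalid request:", resp.text)  # unparseable JSON or unsupported function
elif resp.status_code == 404:
    print("Unknown model:", resp.text)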
Parameters
No parameters
Request body (example)
{
  "model": "nomic-embed-text",
  "prompt": "Respond with a single, random word."
}
Responses
Code | Description | Links
200 | Successfully processed request. | No links
400 | Request is invalid; JSON could not be parsed or the model does not provide this function. | No links
404 | Model with the given name was not found. | No links
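A corresponding sketch for this operation, again assuming an Ollama-style path segment ("embeddings") that is not confirmed by this document:

import requests

# NOTE: the path segment "embeddings" is an assumption for illustration;
# consult the OpenAPI definition for the actual route.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "Respond with a single, random word."},
    timeout=60,
)
resp.raise_for_status()  # 400: invalid request; 404: model not found
print(resp.json())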