The LLM Processing API provides a local interface on network nodes for performing inference requests.
General Usage
The base URI for all calls is: http://localhost:11434/api/
Endpoints accept a JSON object of parameters in the body of a POST request, unless otherwise specified.
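As a minimal sketch of this calling convention, the snippet below builds a JSON POST against the local base URI using only the Python standard library. The endpoint name "generate" and the parameter names are illustrative assumptions, not part of this specification.

```python
import json
from urllib import request

BASE_URI = "http://localhost:11434/api/"

def build_request(endpoint: str, params: dict) -> request.Request:
    """Build a POST request whose body is a JSON object of parameters."""
    return request.Request(
        BASE_URI + endpoint,
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical endpoint and parameters, for illustration only.
req = build_request("generate", {"model": "example-model", "prompt": "Hello"})
# response = request.urlopen(req)  # uncomment on a node running the service
```

Sending the request is left commented out, since it only succeeds on a node where the local service is running.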
API Definition
Charity Engine Internal LLM Processing API, version 1.0.0 (OpenAPI Specification 3.0)
The full API specification is available at:
https://api.bitbucket.org/2.0/repositories/gridrepublic/ce-apis/src/main/llm-server-internal-api-v1.yaml
inference: Operations related to text inference requests.