
The Distributed LLM Service is accessible via WebUI, a notebook application, and an API. All interfaces support a range of models.

Use of each of these interfaces is documented below.

WebUI

The WebUI allows the user to select up to three supported models, enter a prompt, and run inference.

  1. Choose your models: select up to three supported models at a time via the checkboxes
  2. Enter your prompt (plain text only)
  3. Click the "Process" button to submit a job to the network

Notebook application

Inference requests can also be submitted to the LLM service from a notebook by running the Python-based API client in the notebook environment and providing the request and model details. An available LLM server will process the request and return the result.

API

The WebUI and notebook application both use a job submission API to interact with the LLM service. Other such interfaces can also be developed to integrate the LLM service with other applications and platforms.

Submit a job

Submitting inference jobs to the LLM service is accomplished through the jobs API endpoint:

https://api.gridrepublic.services/remotejobs/v2/jobs

This endpoint accepts a JSON payload via POST with the following parameters:

app // Name of the application; in this case, "gridrepublic:text-inference"
commandLine // JSON string of the prompt in an array named "input"
hours // Runtime limit for the job; by default, use: 1
tag // Name of the model to use for inference

The format of the JSON data is as follows, with the "input" string populated with a prompt and the "tag" string populated with a specific model name. Note that "commandLine" is itself a JSON-encoded string, so the quotes inside it must be escaped:

{
  "app": "gridrepublic:text-inference",
  "commandLine": "{\"input\": [\"When does 1+1=10?\"]}",
  "hours": 1,
  "tag": "gemma:2b"
}

When the job is submitted, the API returns a success indicator and either an array of job "ids" (when "success" is true) or an "error" string (when "success" is false). For example:

{
  "success":true,
  "ids":["a9ab011455bb6aeb0161b5fc08766b42"]
}
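The submission described above can be sketched in Python using only the standard library. The helper names here (build_payload, submit_job) are illustrative, not part of any published client; only the endpoint URL, parameter names, and app name come from the documentation above:

```python
import json
import urllib.request

API_URL = "https://api.gridrepublic.services/remotejobs/v2/jobs"

def build_payload(prompt, model, hours=1):
    # "commandLine" is itself a JSON string: the prompt goes in an "input" array
    command_line = json.dumps({"input": [prompt]})
    return {
        "app": "gridrepublic:text-inference",
        "commandLine": command_line,
        "hours": hours,
        "tag": model,
    }

def submit_job(prompt, model, hours=1):
    # POST the JSON payload to the jobs endpoint and return the parsed response
    data = json.dumps(build_payload(prompt, model, hours)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

A successful call to submit_job would return the {"success": true, "ids": [...]} structure shown above; note that any authentication the service may require is not shown here.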

Get job status

To retrieve the current status of a job that has been submitted to the LLM service, the jobs API endpoint accepts GET requests with a comma-separated list of one or more job IDs as a path parameter:

https://api.gridrepublic.services/remotejobs/v2/jobs/{ids}

In the following example, {ids} was replaced with 9f22472031ef57c3fd517061d116ad68; the output of the inference process is contained in the "log" property and is updated as the process runs:

{
  "success":true,
  "jobs":{
    "9f22472031ef57c3fd517061d116ad68":{
      "vmStatus":"running",
      "states":{
        "default":{
          "status":"running",
          "outputFiles":[],
          "log":"1+1=2. When you add numbers, the result",
          "commandLine":"{\"input\":[\"How can 1+1=10?\"]}",
          "app":"gridrepublic:text-inference"
        }
      },
      "created":"2024-05-06T21:26:00+00:00",
      "copy":0,
      "tag":"llama2-uncensored",
      "runtime":23
    }
  }
}
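The status lookup can be sketched the same way. Again, the helper names (status_url, get_status, extract_log) are illustrative; the URL shape and the jobs/states/default/log structure follow the example response above:

```python
import json
import urllib.request

API_URL = "https://api.gridrepublic.services/remotejobs/v2/jobs"

def status_url(job_ids):
    # one or more job IDs are passed as a comma-separated path parameter
    return f"{API_URL}/{','.join(job_ids)}"

def get_status(job_ids):
    # GET the current status for the given job IDs and return the parsed JSON
    with urllib.request.urlopen(status_url(job_ids)) as resp:
        return json.load(resp)

def extract_log(status_response, job_id):
    # the inference output streams into the "log" property while the job runs
    return status_response["jobs"][job_id]["states"]["default"]["log"]
```

Since "log" is updated as the process runs, a caller would typically poll get_status until the job's "status" is no longer "running" and then read the final log text with extract_log.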
