The Charity Engine Smart Proxy Post Processing (SP3) interface allows web data that is collected via smart proxy scripts to be processed by any application or execution environment available on the Charity Engine network. This processing takes place on the same node that collects the data.
General Usage
To submit data for local processing on the proxy node, include the charityengine module in a smart proxy script and call the desired processing function. See the Charity Engine Application Library for details on the specific functionality of applications or execution environments.
Processing functions return a Promise object, which resolves with the response from the model or application.
For example, to use the Llama3.2 model to generate embeddings for collected text:
charityengine.embeddings('llama3.2:3b', 'What is the weather')
    .then(response => console.log('Embedding response:', response))
    .catch(error => console.error('Error:', error));
To generate a chi-squared distribution with v degrees of freedom using Wolfram Engine:

charityengine.wolframengine('ChiSquareDistribution[v]')
    .then(response => console.log('Response:', response))
    .catch(error => console.error('Error:', error));
Processing Functions
A variety of post-processing functions are available based on applications that are compatible with the Charity Engine network.
docker
Any image that can be pulled from Docker Hub can be used for post-processing by calling the docker() function:
charityengine.docker(image, commandline, inputfile)

PARAMETERS
image        // Name of the Docker image to run [string] [required]
commandline  // Command to execute within the container [string] [required]
inputfile    // Names of local files to use as input [array of string]
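For example, a minimal sketch that runs a one-off command in a public Docker Hub image; the image name and command are illustrative, not part of the SP3 interface:

charityengine.docker('alpine:latest', 'echo "Hello from the proxy node"')
    .then(response => console.log('Docker response:', response))
    .catch(error => console.error('Error:', error));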
vina
charityengine.vina(commandline, inputfile)

PARAMETERS
commandline  // Command line for the application [string] [required]
inputfile    // Names of local files to use as input [array of string]
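For example, a docking sketch that assumes the script has already saved receptor.pdbqt and ligand.pdbqt locally; the flags follow standard AutoDock Vina usage, and the search-box values are placeholders:

charityengine.vina(
    '--receptor receptor.pdbqt --ligand ligand.pdbqt ' +
    '--center_x 0 --center_y 0 --center_z 0 --size_x 20 --size_y 20 --size_z 20',
    ['receptor.pdbqt', 'ligand.pdbqt'])
    .then(response => console.log('Vina response:', response))
    .catch(error => console.error('Error:', error));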
blastp
charityengine.blastp(commandline, inputfile)

PARAMETERS
commandline  // Command line for the application [string] [required]
inputfile    // Names of local files to use as input [array of string]
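For example, a sketch that assumes a protein query sequence has been saved locally as query.fasta; the flags follow standard blastp usage, and the database name is illustrative (it assumes that database is available to the application):

charityengine.blastp('-query query.fasta -db swissprot -outfmt 6', ['query.fasta'])
    .then(response => console.log('blastp response:', response))
    .catch(error => console.error('Error:', error));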
wolframengine
charityengine.wolframengine(commandline, inputfile)

PARAMETERS
commandline  // Command to execute within the application [string] [required]
inputfile    // Names of local files to use as input [array of string]
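To pass a local input file to Wolfram Engine, a sketch assuming the script has already saved collected numeric data to numbers.txt (a hypothetical file name) and that input files are readable by name from the working directory:

charityengine.wolframengine('Total[ReadList["numbers.txt", Number]]', ['numbers.txt'])
    .then(response => console.log('Response:', response))
    .catch(error => console.error('Error:', error));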
inference
charityengine.inference(model, prompt, assets, context, system, template, options)

PARAMETERS
model     // Name of the model to use [string] [required]
prompt    // Text to pass to the model as input [string] [required]
assets    // Images or files [array of string]
context   // Additional context for the model [string]
system    // System parameters for the model [string]
template  // Template to guide response format [string]
options   // Additional options for the request [object]
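For example, a minimal sketch that uses only the required parameters to classify text collected by the script; the model name mirrors the embeddings example above, and the optional parameters can be supplied as needed:

charityengine.inference(
    'llama3.2:3b',
    'Classify the sentiment of this review as positive or negative: ' +
    '"Great product, fast delivery."')
    .then(response => console.log('Inference response:', response))
    .catch(error => console.error('Error:', error));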
embeddings
charityengine.embeddings(model, prompt)

PARAMETERS
model   // Name of the model to use [string] [required]
prompt  // Text to pass to the model as input [string] [required]