The Charity Engine Smart Proxy allows running JavaScript applications within fully-featured web browsers on a vast network of volunteer computing devices. The Smart Proxy service currently uses PhantomJS to run a headless browser, which can be controlled via the PhantomJS API. Additional browsers and APIs may be supported in the future.
Warning |
---|
The features may get |
Info |
---|
Unless otherwise noted, the proxy authentication and configuration mechanisms established by the Charity Engine Distributed Proxy still apply, as Smart Proxy is an extension of the generic Distributed Proxy. Refer to the Charity Engine Distributed Proxy documentation for details. |
Initiating Smart Proxy crawls
Smart Proxy crawls are initiated by connecting to the Distributed Proxy service and supplying additional HTTP headers.
x-proxy-phantomjs-script-url
This header indicates the URL of the script that Smart Proxy should run. Nodes on the Charity Engine network will download and cache the script from this URL when processing a Smart Proxy request that requires it. The URL of the target page to crawl is then passed as an argument to this script.
Note |
---|
While any URL is currently accepted, the service will only allow whitelisted URLs or hostnames in the future. |
x-proxy-phantomjs-script-md5
To ensure the integrity of Smart Proxy results, an MD5 hash of the script file defined by the x-proxy-phantomjs-script-url HTTP header is required with each request. This hash is used to verify that the script was downloaded correctly. A script that does not match the supplied MD5 hash will not be run.
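As a minimal sketch, a crawl could be initiated from Node.js as shown below. The proxy hostname, port, credentials, script URL and MD5 value are placeholder assumptions; the actual connection and authentication details are those described in the Charity Engine Distributed Proxy documentation.

Code Block |
---|
var http = require('http');

// Placeholder values: substitute the real proxy endpoint and credentials from
// the Distributed Proxy documentation, plus your own script URL and its MD5
var options = {
    host: 'distributed-proxy.example.com', // proxy hostname (placeholder)
    port: 8080,                            // proxy port (placeholder)
    path: 'http://www.example.com/',       // target URL to crawl
    headers: {
        'Proxy-Authorization': 'Basic ' + Buffer.from('username:password').toString('base64'),
        'x-proxy-phantomjs-script-url': 'http://www.example.com/scripts/crawl.js',
        'x-proxy-phantomjs-script-md5': '0123456789abcdef0123456789abcdef'
    }
};

// Requesting an absolute URL through the proxy host returns the script output
http.get(options, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        console.log(res.statusCode);
        console.log(body);
    });
});
|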
Response structure
Data can be retrieved from PhantomJS in two different ways: either as plaintext or with JSON encoding. Plaintext data is passed as-is, and an HTTP 200 OK status code is generated automatically when returning a result (see example 2). If it is useful to return a different status code or custom HTTP headers instead, a specifically formatted JSON output object can be used (see example 1):
Code Block |
---|
{
    body: null,
    headers: null,
    statusCode: null,
    statusMessage: null,
    httpVersion: null
}
|
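For instance, a script that wants to report a non-default status code could print an object of this shape as its only output. The status code, header name and message below are purely illustrative:

Code Block |
---|
var page = require('webpage').create();
var system = require('system');
var address = system.args[1];

page.open(address, function (status) {
    // Illustrative values: return a custom status code and header instead of
    // the automatic 200 OK that plaintext output would receive
    console.log(JSON.stringify({
        body: status === 'success' ? page.content : 'FAILED loading the address',
        headers: { 'X-Crawl-Status': status },
        statusCode: status === 'success' ? 200 : 502,
        statusMessage: status === 'success' ? 'OK' : 'Bad Gateway',
        httpVersion: '1.1'
    }));
    phantom.exit();
});
|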
...
Known issues
The following limitations currently apply to the Smart Proxy service:
- It may be difficult to use the built-in PhantomJS functionality to render the page as an image and return the result through Smart Proxy. PhantomJS generates an output file, but the proxy requires results to be returned via stdout. One possible solution, sketched after this list, would be to convert the image to base64 format and print that to stdout.
- It may be difficult to retrieve structured data from multiple pages; all of the data would have to be transferred through stdout, most often JSON encoded, which may be suboptimal. A possibility would be to write to a global variable with sufficient hierarchy to contain data for multiple pages and then return all of it as the body of the response.
- Currently, scripts time out after 20 seconds. For larger scale crawls, this may be insufficient.
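As a rough sketch of the base64 workaround mentioned in the first item above, the following script uses PhantomJS's renderBase64 function to capture the rendered page and print it to stdout as a plaintext response. The viewport size and image format are arbitrary choices:

Code Block |
---|
var page = require('webpage').create();
var system = require('system');

// The URL that is submitted to the proxy service
var address = system.args[1];

// Arbitrary viewport size; adjust to the layout being captured
page.viewportSize = { width: 1280, height: 800 };

page.open(address, function (status) {
    if (status !== 'success') {
        console.log('FAILED loading the address');
    } else {
        // renderBase64 returns the rendered page as a base64-encoded string
        // instead of writing an image file, so the screenshot can be returned
        // through stdout as the plaintext body of the response
        console.log(page.renderBase64('PNG'));
    }
    phantom.exit();
});
|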
Example scripts
Sample scripts demonstrating the power of the Smart Proxy service are included below. For an extended function reference to use in customization of these scripts or development of new scripts, see both the API documentation and the examples for PhantomJS.
Retrieving the content of JavaScript pages
It is impossible for search engines to extract content directly from websites that are essentially JavaScript applications or from web pages that rely on JavaScript to render content. Therefore, for both search engines and SEO applications, it is necessary to execute the source code of the page in order to receive the full and proper output. The following code retrieves the page content after JavaScript manipulation, including HTTP headers and status code, and returns it to the requester:
Code Block |
---|
var page = require('webpage').create();
var system = require('system');

// The URL that is submitted to the proxy service
var address = system.args[1];

// Standard response structure, see Response structure section in the
// documentation
var result = {
    body: null,
    headers: null,
    statusCode: null,
    statusMessage: null,
    httpVersion: null
};

// Obtain response headers and status code from the loaded page
page.onResourceReceived = function(response) {
    // Verify that it is the actual page that has finished loading (and not
    // internal resources that have finished loading)
    if (decodeURIComponent(response.url) == address) {
        result.headers = {};
        for (var i in response.headers) {
            // Clone headers into the final response
            result.headers[response.headers[i].name] = response.headers[i].value;
        }

        // Clone HTTP status code and text into the final response
        result.statusCode = response.status;
        result.statusMessage = response.statusText;
    }
};

page.onLoadFinished = function(status) {
    // Page load, including all internal assets, has completed

    // Copy page HTML source code (as manipulated by any internal JS scripts)
    // into the final response
    result.body = page.content;

    // Write out the final response and exit
    console.log(JSON.stringify(result));
    phantom.exit();
};

page.open(address, function (status) {
    if (status !== 'success') {
        // Handle failures
        console.log('FAILED loading the address');
        phantom.exit();
    }
});
|
In many cases, the vast majority of data transferred in response to a page crawl is unnecessary and a waste of network resources. If the format of the results is known or pertinent data can be recognized, Smart Proxy can be used to pre-process results before returning them. The following code navigates to a submitted Google result page (e.g. http://www.google.com/search?q=example) and returns a plain-text list of page addresses found on that page:
Code Block |
---|
var page = require('webpage').create();
var system = require('system');

// The URL that is submitted to the proxy service
var address = system.args[1];

// Set up a fake user agent
page.settings.userAgent = 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36';

page.open(address, function (status) {
    if (status !== 'success') {
        console.log('FAILED loading the address');
    } else {
        // Execute code in the scope of the page
        var urls = page.evaluate(function() {
            var list = document.querySelectorAll('h3.r a');
            var urls = [];
            for (var i in list) {
                if (list[i].href !== undefined) {
                    urls.push(list[i].href);
                }
            }
            return urls;
        });

        // Return URLs, one per line
        for (var i in urls) {
            console.log(urls[i]);
        }
    }

    phantom.exit();
});
|
Info |
---|
Note: The Google search result page structure could change at any time, causing this example script to not work as intended. However, it should be easy to adapt the script to work with an updated format or with other search engines. |