Background

Scrapers are scripts that read data or URLs from websites or servers, complete dataset metadata and create datasets in HDX. They either record URLs as metadata in HDX resources, or download files, do some processing and upload the results into the HDX filestore.
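
As a rough sketch of the download-and-process half of that workflow, using the requests library (the source URL and the "processing" step are hypothetical, chosen purely for illustration):

    import csv
    import requests

    # Hypothetical source URL; a real scraper would take this from configuration.
    SOURCE_URL = 'https://example.org/data/latest.csv'

    response = requests.get(SOURCE_URL)
    response.raise_for_status()

    # Minimal "processing": parse the CSV and keep only complete rows.
    reader = csv.reader(response.text.splitlines())
    header = next(reader)
    rows = [row for row in reader if all(row)]

    # A real scraper would now write the cleaned rows to a file and upload it
    # to the HDX filestore, or simply record SOURCE_URL as resource metadata.
    print('Kept %d of the downloaded rows' % len(rows))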

HDX has a RESTful API, largely unchanged from the underlying CKAN API, which can be used from any programming language that supports HTTP GET and POST requests. The HDX Python API provides a simple interface that communicates with HDX through the CKAN Python API, itself a thin wrapper around the CKAN REST API. It is a mature library that supports Python 2.7 and 3, with tests providing a high level of code coverage. The library's major goal is to make pushing and pulling data from HDX as simple as possible for the end user: HDX objects, such as datasets and resources, are represented by Python classes. Scrapers use this library to communicate with HDX.
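
As a minimal sketch of the pull direction (the import paths match older releases of the library; the dataset name and user agent string are hypothetical):

    from hdx.hdx_configuration import Configuration
    from hdx.data.dataset import Dataset

    # Register with HDX in read-only mode; the user agent is illustrative.
    Configuration.create(hdx_site='prod', user_agent='MyOrgScraper',
                         hdx_read_only=True)

    # Pull a dataset by name and list its resources.
    dataset = Dataset.read_from_hdx('example-dataset')
    if dataset is not None:
        print(dataset['title'])
        for resource in dataset.get_resources():
            print(resource['name'], resource['url'])

Pushing is symmetrical: a Dataset object is populated with metadata and written back with its create_in_hdx() or update_in_hdx() methods.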

Current Platform

The current platform is an external service called ScraperWiki. It has a web-based user interface through which the status of scrapers can be viewed. Through the UI, it is possible to create a new environment on which to run new scrapers: an Amazon Web Services virtual server with various packages, such as Python, already installed. The UI supplies the URL for sshing into that server, where a scraper can be set up by, for example, git cloning its code and then creating a virtualenv with the required Python packages. Cron is used to execute the scrapers according to the desired schedule.
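
An illustrative sketch of that setup, with a hypothetical server address, repository, entry-point script and schedule (none of these values come from a real configuration):

    $ ssh user@ec2-xx-xx-xx-xx.compute.amazonaws.com
    $ git clone https://github.com/example-org/example-scraper.git
    $ cd example-scraper
    $ virtualenv venv
    $ venv/bin/pip install -r requirements.txt

    # Crontab entry running the scraper every day at 02:00:
    0 2 * * * cd /home/user/example-scraper && venv/bin/python run.py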


