Introduction
The HDX Python Library is designed to enable you to easily develop code that interacts with the Humanitarian Data Exchange (HDX) platform, which is built on top of the CKAN open-source data management system. The major goal of the library is to make pushing and pulling data from HDX as simple as possible for the end user. This is achieved in several ways. The library provides a simple interface that communicates with HDX through the CKAN Python API, a thin wrapper around the CKAN JSON API, and it represents HDX objects, such as datasets and resources, as Python classes. This should make the learning curve gentle and enable users to quickly get started with using HDX programmatically.
You can jump to the Getting Started page or continue reading below about the purpose and design philosophy of the library.
Keeping it Simple
- The library hides CKAN's idiosyncrasies and tries to match the HDX user interface experience. The user does not need to learn about CKAN, and it is easier to understand what the result in HDX will be when calling a Python method.
- The class structure of the library should be as logical as possible (within the restrictions of the CKAN API it relies on). In HDX, a dataset can contain zero or more resources and it can be in one or more showcases (which themselves can contain more than one dataset), so the library reflects this even though the showcase API comes from a plugin and is not part of the core CKAN API.
- Datasets, resources and showcases can use dictionary methods such as square brackets to handle metadata, which feels natural (the HDXObject class extends UserDict), e.g.
dataset['name'] = 'My Dataset'
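The pattern behind this can be sketched with the standard library's UserDict. The class and method names below are illustrative stand-ins, not the real HDXObject class:

```python
from collections import UserDict

# Illustrative sketch only: a stand-in for how a base class extending
# UserDict lets metadata be manipulated with plain dictionary syntax.
# HDXObjectSketch and check_required_fields are made-up names.
class HDXObjectSketch(UserDict):
    def check_required_fields(self, required):
        # Example of domain logic layered on top of the dict behaviour
        missing = [field for field in required if field not in self.data]
        if missing:
            raise ValueError('Missing fields: %s' % ', '.join(missing))

dataset = HDXObjectSketch({'name': 'my-dataset'})
dataset['title'] = 'My Dataset'          # square-bracket assignment
dataset.check_required_fields(['name', 'title'])
```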
- Static metadata can be imported from a YAML file (recommended as it is very human readable) or from a JSON file, e.g.
dataset.update_yaml([path])
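What such an update does can be sketched with the standard library's json module. The helper below is hypothetical, not the library's code (the real library also reads YAML):

```python
import json
import os
import tempfile

# Hypothetical sketch of updating an object's metadata from a static file,
# here using JSON from the standard library.
def update_from_json(metadata, path):
    with open(path) as json_file:
        metadata.update(json.load(json_file))

# Write a small static metadata file and apply it
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'license_id': 'cc-by', 'private': False}, f)
    path = f.name

metadata = {'name': 'my-dataset'}
update_from_json(metadata, path)
os.remove(path)
```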
- Static metadata can be passed in as a dictionary on initialisation of a dataset, resource or showcase, e.g.
dataset = Dataset({
    'name': slugified_name,
    'title': title,
})
- There are functions to help with adding more complicated types such as dates and date ranges, locations etc., e.g.
dataset.set_date_of_dataset('START DATE', 'END DATE')
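Internally, such a helper might parse and validate the dates before storing them in the metadata dictionary. The function name, date format and storage key below are assumptions for illustration, not the library's actual implementation:

```python
from datetime import datetime

# Hypothetical sketch of a date-range helper: parse start/end strings,
# validate their order, then store them in the metadata dictionary.
def set_dataset_date_range(metadata, start, end, fmt='%m/%d/%Y'):
    start_dt = datetime.strptime(start, fmt)
    end_dt = datetime.strptime(end, fmt)
    if end_dt < start_dt:
        raise ValueError('End date is before start date!')
    metadata['dataset_date'] = '%s-%s' % (start, end)

metadata = {}
set_dataset_date_range(metadata, '06/01/2016', '06/30/2016')
```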
- There are separate country-code and utility libraries that provide functions to convert between country codes, merge dictionaries, load multiple YAML or JSON files and perform a few other helpful tasks, e.g.
Country.get_iso3_country_code_fuzzy('Czech Rep.')
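Fuzzy matching of this kind can be sketched with the standard library's difflib. The real Country class ships its own country data and matching logic; the tiny lookup table and cutoff value here are assumptions:

```python
import difflib

# Illustrative sketch of fuzzy country lookup using stdlib difflib;
# not the library's implementation. The table below is a made-up sample.
COUNTRIES = {
    'Czech Republic': 'CZE',
    'Chad': 'TCD',
    'Chile': 'CHL',
}

def get_iso3_fuzzy(name, cutoff=0.5):
    # Return the ISO3 code of the closest-matching country name, or None
    matches = difflib.get_close_matches(name, list(COUNTRIES), n=1, cutoff=cutoff)
    return COUNTRIES[matches[0]] if matches else None
```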
Easy Configuration and Logging
- Logging is often neglected, so the library aims to make it a breeze to get going with logging and thereby avoid the spread of print statements. A few handlers are created in the default configuration:
console:
  class: logging.StreamHandler
  level: DEBUG
  formatter: color
  stream: ext://sys.stdout
error_file_handler:
  class: logging.FileHandler
  level: ERROR
  formatter: simple
  filename: errors.log
  encoding: utf8
  mode: w
If using the default logging configuration, then it is possible to also add the default email (SMTP) handler:
error_mail_handler:
  class: logging.handlers.SMTPHandler
  level: CRITICAL
  formatter: simple
  mailhost: localhost
  fromaddr: noreply@localhost
- Configuration is made as simple as possible with a Configuration class that handles the HDX API key and the merging of configurations from multiple YAML or JSON files or dictionaries:
class Configuration(UserDict):
    """Configuration for HDX

    Args:
        **kwargs: See below
        hdx_key_file (Optional[str]): Path to HDX key file. Defaults to ~/.hdxkey.
        hdx_config_dict (dict): HDX configuration dictionary OR
        hdx_config_json (str): Path to JSON HDX configuration OR
        hdx_config_yaml (str): Path to YAML HDX configuration. Defaults to library's internal hdx_configuration.yml.
        project_config_dict (dict): Project configuration dictionary OR
        project_config_json (str): Path to JSON Project configuration OR
        project_config_yaml (str): Path to YAML Project configuration. Defaults to config/project_configuration.yml.
    """
The library itself uses logging at appropriate levels to ensure that it is clear what operations are being performed, e.g.
WARNING - 2016-06-07 11:08:04 - hdx.data.dataset - Dataset exists. Updating acled-conflict-data-for-africa-realtime-2016
The library makes errors plain by throwing exceptions rather than returning False or None (except where that would be more appropriate), e.g.
hdx.configuration.ConfigurationError: More than one project configuration file given!
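The pattern is straightforward to follow in your own code. A sketch using a stand-in exception class (ConfigurationError below is not imported from hdx):

```python
# Sketch of the exceptions-over-None pattern: fail loudly when the
# caller supplies ambiguous or missing configuration.
class ConfigurationError(Exception):
    pass

def pick_config_source(config_dict=None, config_yaml=None):
    # Exactly one configuration source must be supplied
    sources = [s for s in (config_dict, config_yaml) if s is not None]
    if len(sources) > 1:
        raise ConfigurationError('More than one project configuration file given!')
    if not sources:
        raise ConfigurationError('No project configuration given!')
    return sources[0]
```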
- There are facades, to which the project's main function is passed, that simplify setup. They neatly cloak the setup of logging, and one of them hides the calls required for pushing status into ScraperWiki (used internally in HDX), e.g.
from hdx.facades.scraperwiki import facade

def main():
    dataset = generate_dataset(datetime.now())
    ...

if __name__ == '__main__':
    facade(main)
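The facade idea can be sketched in a few lines: a wrapper that performs setup (here just logging) before calling the project's main function. simple_facade is a made-up name, not the library's API:

```python
import logging

# Sketch of the facade pattern: hide setup behind a single call that
# takes the project's main function. Not the library's implementation.
def simple_facade(projectmainfn, **kwargs):
    logging.basicConfig(level=logging.INFO, **kwargs)
    logging.getLogger(__name__).info('Starting %s', projectmainfn.__name__)
    projectmainfn()

results = []

def main():
    # Stand-in for real work such as generating a dataset
    results.append('dataset generated')

simple_facade(main)
```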
Documentation of the API
The code is very well documented. Detailed API documentation, generated from Google-style docstrings using Sphinx, is available and is mentioned in the Getting Started guide.
def load_from_hdx(self, id_or_name: str) -> bool:
    """Loads the dataset given by either id or name from HDX

    Args:
        id_or_name (str): Either id or name of dataset

    Returns:
        bool: True if loaded, False if not
    """
IDEs can take advantage of this documentation, e.g. by displaying it inline.
- The method arguments and return parameter have type hints. (Although these are a feature of Python 3.5, they have been backported.) Type hints enable sophisticated IDEs like PyCharm to warn of any inconsistencies in the use of types, bringing one of the major benefits of statically typed languages to Python, e.g.
def merge_dictionaries(dicts: List[dict]) -> dict:
and the IDE will then warn when an argument of the wrong type is passed.
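A possible body for such a typed utility, as an illustrative sketch. The merge semantics here (later dictionaries win, shallow merge) are an assumption, not necessarily what the library's merge_dictionaries does:

```python
from typing import List

# Illustrative sketch of a dictionary-merging utility with type hints.
# Assumed semantics: shallow merge, later dictionaries override earlier ones.
def merge_dictionaries(dicts: List[dict]) -> dict:
    merged: dict = {}
    for dictionary in dicts:
        merged.update(dictionary)
    return merged
```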
- Default parameters mean that there is a very easy way to get set up and going, e.g.
def update_yaml(self, path: Optional[str] = join('config', 'hdx_dataset_static.yml')) -> None: