
The HDX Python Library is designed to simplify use of the HDX JSON API, which is built on top of CKAN. The underlying GET and POST requests are wrapped in Python methods, and HDX objects, such as datasets and resources, are represented by Python classes. The API documentation can be found here: http://mcarans.github.io/hdx-python-api/
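
As a minimal sketch of what this looks like in practice, the snippet below loads an existing dataset from HDX and reads one of its metadata fields, using the load_from_hdx method and the dictionary-style access described below. The import paths, the Configuration arguments and the dataset name are placeholders and assumptions that may differ between versions of the library.

    # Minimal sketch: load a dataset from HDX and read its metadata.
    # Import paths and Configuration arguments are assumptions.
    from hdx.configuration import Configuration  # assumed module path
    from hdx.data.dataset import Dataset  # assumed module path

    configuration = Configuration(hdx_site='test')  # assumed constructor arguments
    dataset = Dataset(configuration)
    if dataset.load_from_hdx('my-dataset-name'):  # accepts either id or name
        print(dataset['title'])  # metadata is accessed like a dictionary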

Keeping it Simple

The major goal of the library is to make interacting with HDX as simple as possible for the user. There are several ways this is achieved.

  1. The library avoids CKAN syntax, using HDX terminology instead. Hence, there is no reference to CKAN's related items, only to gallery items. The user does not need to learn about CKAN, and it is easier to understand what the result in HDX will be when calling a Python method.

  2. The class structure of the library should be as logical as possible (within the restrictions of the CKAN API it relies on). In HDX, a dataset can contain zero or more resources and a gallery (consisting of gallery items), so the library reflects this even though the CKAN API presents a different interface for gallery items than for resources (a sketch of a dataset with an attached resource follows this list).

    The UML diagram below shows the relationships between the major classes in the library.  

     

    [Draw.io diagram: Classes]

  3. Datasets, resources and gallery items support dictionary methods, such as square bracket access, for handling metadata, which feels natural. (The HDXObject class extends UserDict.) e.g.

    dataset['name'] = 'My Dataset'

     

  4. Static metadata can be imported from a YAML file (recommended, as it is very human readable) or a JSON file, e.g.

    dataset.update_yaml([path])

    Static metadata can be passed in as a dictionary on initialisation of a dataset, resource or gallery item, e.g.

    dataset = Dataset(configuration, {
        'name': slugified_name,
        'title': title,
        'dataset_date': dataset_date,  # has to be MM/DD/YYYY
        'groups': iso
    })

     

  5. The code is very well documented. Detailed API documentation (generated from Google style docstrings using Sphinx) can be found at the link mentioned above.
    def load_from_hdx(self, id_or_name: str) -> bool:
        """Loads the dataset given by either id or name from HDX

        Args:
            id_or_name (str): Either id or name of dataset

        Returns:
            bool: True if loaded, False if not
        """

     

  6. Method arguments and return values have type hints. (Although this is a feature of Python 3.5, it has been backported.) Type hints enable sophisticated IDEs like PyCharm to warn of any inconsistencies in the use of types, bringing one of the major benefits of statically typed languages to Python. For example, given:

    def merge_dictionaries(dicts: List[dict]) -> dict:

    PyCharm will warn if the function is called with an argument that is not a list of dictionaries.

  7. Default parameters mean that there is a very easy, default way to get set up and going.
  8. Configuration is made as simple as possible with configuration files containing sensible defaults that can be overridden if necessary.
  9. Logging is often neglected, so the library aims to make it a breeze to set up and so avoid the spread of print statements.
  10. There are utility functions to handle dictionary merging, loading multiple YAML or JSON files and a few other helpful tasks (a short sketch of dictionary merging follows this list).
  11. There are setup wrappers to which the scraper's main function is passed. They neatly cloak the setup of logging and one of them hides the required calls for pushing status into ScraperWiki (a sketch of such a wrapper follows this list).
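
The sketch below illustrates the class structure from point 2 by creating a dataset and attaching a resource to it, assuming a configuration object has already been created as in the first sketch above. The Resource constructor, its metadata keys and the add_update_resource method are assumptions modelled on the Dataset examples above; the API documentation linked at the top is authoritative.

    # Sketch of a dataset containing a resource. The Resource constructor and
    # add_update_resource are assumptions, not confirmed API.
    from hdx.data.dataset import Dataset  # assumed module path
    from hdx.data.resource import Resource  # assumed module path

    dataset = Dataset(configuration, {
        'name': 'my-dataset',
        'title': 'My Dataset'
    })
    resource = Resource(configuration, {  # assumed to mirror the Dataset constructor
        'name': 'my-resource',
        'format': 'csv',
        'url': 'http://example.com/data.csv'
    })
    dataset.add_update_resource(resource)  # the dataset now holds the resource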

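As a sketch of the utility functions from point 10, the call below merges two dictionaries using the merge_dictionaries signature shown in point 6. The module path and the override order are assumptions.

    # Merging dictionaries with the utility function shown in point 6.
    from hdx.utilities.dictionary import merge_dictionaries  # assumed module path

    defaults = {'hdx_site': 'test', 'log_level': 'INFO'}
    overrides = {'log_level': 'DEBUG'}
    merged = merge_dictionaries([defaults, overrides])
    # Assuming later dictionaries override earlier ones:
    # merged == {'hdx_site': 'test', 'log_level': 'DEBUG'}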
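
Finally, a sketch of the setup wrappers from point 11: the scraper's main function is passed to a wrapper that sets up logging (point 9) and the HDX configuration before calling it. The facade module path, its keyword argument and the signature expected for main are assumptions and may not match the actual wrappers.

    # Sketch of passing a scraper's main function to a setup wrapper.
    # The import path, keyword argument and main signature are assumptions.
    import logging

    from hdx.facades.simple import facade  # assumed module path

    logger = logging.getLogger(__name__)


    def main(configuration):
        # By the time main runs, the wrapper is assumed to have set up logging
        # and built the configuration, so the scraper body can focus on
        # creating datasets and resources.
        logger.info('Scraper started')


    if __name__ == '__main__':
        facade(main, hdx_site='test')  # assumed keyword argument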