
cldfbench

Tooling to create CLDF datasets from existing data.


Overview

This package provides tools to curate cross-linguistic data, with the goal of packaging it as CLDF datasets.

In particular, it supports a workflow where:

This workflow is supported via:

With this workflow and the separation of the data into three directories, we aim to provide a workbench for transparently deriving CLDF data from previously published data. In particular, we want to clearly delineate:

Further reading

This paper introduces cldfbench and uses an extended, real-world example:

Forkel, R., & List, J.-M. (2020). CLDFBench: Give your cross-linguistic data a lift. In N. Calzolari, F. Béchet, P. Blache, K. Choukri, C. Cieri, T. Declerck, et al. (Eds.), Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020) (pp. 6995-7002). Paris: European Language Resources Association (ELRA). [PDF]

Installation

cldfbench can be installed via pip - preferably in a virtual environment - by running:

pip install cldfbench

Some cldfbench functionality relies on Python packages that are not required for its core operation. These are specified as extras and can be installed using syntax like:

pip install cldfbench[<extras>]

where <extras> is a comma-separated list of extra names:
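For example, assuming an extra named `excel` exists (the name here is only illustrative), a single extra would be installed like this; note the quoting, which prevents the shell from interpreting the brackets:

```
$ pip install 'cldfbench[excel]'
```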

The command line interface cldfbench

Installing the Python package also installs a command cldfbench, available on the command line:

$ cldfbench -h
usage: cldfbench [-h] [--log-level LOG_LEVEL] COMMAND ...

optional arguments:
  -h, --help            show this help message and exit
  --log-level LOG_LEVEL
                        log level [ERROR|WARN|INFO|DEBUG] (default: 20)

available commands:
  Run "COMMAND -h" to get help for a specific command.

  COMMAND
    check               Run generic CLDF checks
    ...

As shown above, run cldfbench -h to get help, and cldfbench COMMAND -h to get help on individual subcommands, e.g. cldfbench new -h to read about the usage of the new subcommand.

Dataset discovery

Most cldfbench commands operate on an existing dataset (unlike new, which creates a new one). Datasets can be discovered in two ways:

  1. Via the Python module (i.e. the *.py file containing the Dataset subclass). To use this mode of discovery, pass the path to the Python module as the DATASET argument when required by a command.

  2. Via entry point and dataset ID. To use this mode, specify the name of the entry point as the value of the --entry-point option (or use the default name cldfbench.dataset) and the Dataset.id as the DATASET argument.

Discovery via entry point is particularly useful for commands that can operate on multiple datasets. To select all datasets advertising a given entry point, pass "_" (i.e. an underscore) as DATASET argument.
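For example, assuming a hypothetical dataset with ID mydataset, defined in a module cldfbench_mydataset.py, the two modes of discovery look like this for some subcommand COMMAND:

```
# 1. Discovery via the Python module:
$ cldfbench COMMAND path/to/cldfbench_mydataset.py

# 2. Discovery via entry point and dataset ID:
$ cldfbench COMMAND mydataset

# Operate on all datasets advertising the default entry point:
$ cldfbench COMMAND _
```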

Workflow

For a full example of the cldfbench curation workflow, see the tutorial.

Creating a skeleton for a new dataset directory

A directory containing stub entries for a dataset can be created by running

cldfbench new

This will create the following layout (where <ID> stands for the chosen dataset ID):

<ID>/
├── cldf               # A stub directory for the CLDF data
│   └── README.md
├── cldfbench_<ID>.py  # The python module, providing the Dataset subclass
├── etc                # A stub directory for the configuration data
│   └── README.md
├── metadata.json      # The metadata provided to the subcommand serialized as JSON
├── raw                # A stub directory for the raw data
│   └── README.md
├── setup.cfg          # Python setup config, providing defaults for test integration
├── setup.py           # Python setup file, making the dataset "installable" 
├── test.py            # The python code to run for dataset validation
└── .github            # Integrate the validation with GitHub actions

Implementing CLDF creation

cldfbench provides tools to make CLDF creation simple. Still, each dataset is different, and so each dataset will have to provide its own custom code to do so. This custom code goes into the cmd_makecldf method of the Dataset subclass in the dataset's python module. (See also the API documentation of cldfbench.Dataset.)

Typically, this code will make use of one or more cldfbench.CLDFSpec instances, which describe what kind of CLDF to create. A CLDFSpec also gives access to a cldfbench.CLDFWriter instance, which wraps a pycldf.Dataset.

The main interfaces to these objects are:

cldfbench supports several scenarios of CLDF creation:

When creating CLDF, it is also often useful to have standard reference catalogs accessible, in particular Glottolog. See the section on Catalogs for a description of how this is supported by cldfbench.

Catalogs

Linking data to reference catalogs is a major goal of CLDF, thus cldfbench provides tools to make catalog access and maintenance easier. Catalog data must be accessible in local clones of the data repository. cldfbench provides commands:

See:

for a list of reference catalogs which are currently supported in cldfbench.

Note: Cloning glottolog/glottolog - due to the deeply nested directories of the language classification - results in long path names. On Windows this may require disabling the maximum path length limitation.

Curating a dataset on GitHub

One of the design goals of CLDF was to specify a data format that plays well with version control. Thus, it's natural - and actually recommended - to curate a CLDF dataset in a version controlled repository. The most popular way to do this in a collaborative fashion is by using a git repository hosted on GitHub.

The directory layout supported by cldfbench caters to this use case in several ways:

Archiving a dataset with Zenodo

Curating a dataset on GitHub also provides a simple way to archive and publish released versions of the data. You can hook up your repository with Zenodo (following this guide). Then, Zenodo will pick up any released package, assign a DOI to it, archive it, and make it accessible in the long term.

Some notes:

Thus, with a setup as described here, you can make sure you create FAIR data.

Extending cldfbench

cldfbench can be extended or built upon in various ways - typically by customizing core functionality in new Python packages. To support particular types of raw data, you might want a custom Dataset class; to support a particular type of CLDF data, you would customize CLDFWriter.

In addition to extending cldfbench using the standard methods of object-oriented programming, there are two more ways of extending cldfbench: commands and dataset templates. Both are implemented using entry points. So packages which provide custom commands or dataset templates must declare these in metadata that is made known to other Python packages (in particular the cldfbench package) upon installation.

Commands

A Python package (or a dataset) can provide additional subcommands to be run from cldfbench. For more info, see the commands README.

Custom dataset templates

A Python package can provide alternative dataset templates to be used with cldfbench new. Such templates are implemented by:

    entry_points={
        'cldfbench.scaffold': [
            'template_name=mypackage.scaffold:DerivedTemplate',
        ],
    },