goodtables-py

Goodtables is a framework to validate tabular data. It can check the structure of your data (e.g. all rows have the same number of columns), and its contents (e.g. all dates are valid).

[Important Notice] goodtables was renamed to frictionless as of version 3. The framework got various improvements and was extended to be a complete data solution. The change is not breaking for existing software, so no action is required. Please read the Migration Guide to start working with Frictionless for Python.
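
For new projects, the Frictionless equivalent of the quick start below is just as short. A minimal sketch, assuming the frictionless package is installed:

from frictionless import validate

report = validate('data.csv')
print(report.valid)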

Contents

<!--TOC-->

Getting Started

For faster validation of goodtables-compatible Pandas dataframes, take a look at https://github.com/ezwelty/goodtables-pandas-py

Installing

pip install goodtables
pip install goodtables[ods]  # If you need LibreOffice's ODS file support

Running on CLI

goodtables data.csv

Use goodtables --help to see the different options.

Running on Python

from goodtables import validate

report = validate('invalid.csv')
report['valid'] # False
report['table-count'] # 1
report['error-count'] # 3
report['tables'][0]['valid'] # False
report['tables'][0]['source'] # 'invalid.csv'
report['tables'][0]['errors'][0]['code'] # 'blank-header'

You can read a more in-depth explanation of using goodtables with Python in the developer documentation section. Also check the examples folder for other examples.

Documentation

Goodtables validates your tabular dataset to find structural and content errors. Suppose you have a file named invalid.csv. Let's validate it:

report = validate('invalid.csv')

We could also pass a remote URI instead of a local path. It supports CSV, XLS, XLSX, ODS, JSON, and all other formats supported by the tabulator library.
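
For example, a remote CSV file can be validated directly (the URL here is a placeholder):

report = validate('https://example.com/data.csv')  # hypothetical URL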

Report

The validation report follows the JSON Schema defined on goodtables/schemas/report.json.

The output of the validate() method is a report dictionary. It includes whether the data was valid, the error count, a list of table reports, which individual checks failed, and so on. A report looks like this:

{
    "time": 0.009,
    "error-count": 1,
    "warnings": [
        "Table \"data/invalid.csv\" inspection has reached 1 error(s) limit"
    ],
    "preset": "table",
    "valid": false,
    "tables": [
        {
            "errors": [
                {
                    "row-number": null,
                    "message": "Header in column 3 is blank",
                    "row": null,
                    "column-number": 3,
                    "code": "blank-header"
                }
            ],
            "error-count": 1,
            "headers": [
                "id",
                "name",
                "",
                "name"
            ],
            "scheme": "file",
            "row-count": 2,
            "valid": false,
            "encoding": "utf-8",
            "time": 0.007,
            "schema": null,
            "format": "csv",
            "source": "data/invalid"
        }
    ],
    "table-count": 1
}
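
A minimal sketch of consuming such a report programmatically, using only the keys shown above:

from goodtables import validate

report = validate('data/invalid.csv')
if not report['valid']:
    for table in report['tables']:
        for error in table['errors']:
            # every error carries at least a 'code' and a 'message'
            print(table['source'], error['code'], error['message'])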

The errors fall into one of the categories described below: source, structure, schema, or custom errors.

Checks

A check is the main validation actor in goodtables. The list of enabled checks can be changed using the checks and skip_checks arguments. Let's explore the options with an example:

report = validate('data.csv') # by default structure and schema (if available) checks
report = validate('data.csv', checks=['structure']) # only structure checks
report = validate('data.csv', checks=['schema']) # only schema (if available) checks
report = validate('data.csv', checks=['bad-headers']) # check only 'bad-headers'
report = validate('data.csv', skip_checks=['bad-headers']) # exclude 'bad-headers'

By default, a dataset is validated against all available Data Quality Spec errors. Some checks can be unavailable for validation; for example, if the schema isn't provided, only the structure checks are done.

Presets

Goodtables supports different formats of tabular datasets. They're called presets. A tabular dataset is some data that can be split into a list of data tables, as illustrated below:

[Figure: a tabular dataset split into a list of data tables]

We can change the preset using the preset argument for validate(). By default, it'll be inferred from the source, falling back to table. To validate a data package, we can do:

report = validate('datapackage.json') # implicit preset
report = validate('datapackage.json', preset='datapackage') # explicit preset

This will validate all tabular resources in the datapackage.

It's also possible to validate a list of files using the "nested" preset. To do so, the first argument to validate() should be a list of dictionaries, where each key in a dictionary is named after a parameter of validate(). For example:

report = validate([{'source': 'data1.csv'}, {'source': 'data2.csv'}]) # implicit preset
report = validate([{'source': 'data1.csv'}, {'source': 'data2.csv'}], preset='nested') # explicit preset

This is similar to:

report_data1 = validate('data1.csv')
report_data2 = validate('data2.csv')

The difference is that goodtables validates multiple tables in parallel, so using the "nested" preset should run faster.
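
Since each dictionary accepts any validate() parameter, per-source options can be mixed in. A minimal sketch, assuming hypothetical file and schema paths:

report = validate([
    {'source': 'data1.csv', 'schema': 'schema1.json'},  # hypothetical paths
    {'source': 'data2.csv'},
], preset='nested')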

Data Quality Errors

Base report errors are standardized and described in the Data Quality Spec.

Source errors

The basic checks can't be disabled, as they deal with goodtables being able to read the files.

| check | description |
| --- | --- |
| io-error | Data reading error because of an IO error. |
| http-error | Data reading error because of an HTTP error. |
| source-error | Data reading error because of unsupported or inconsistent contents. |
| scheme-error | Data reading error because of an incorrect scheme. |
| format-error | Data reading error because of an incorrect format. |
| encoding-error | Data reading error because of an encoding problem. |

Structure errors

These checks validate that the structure of the file is valid.

| check | description |
| --- | --- |
| blank-header | There is a blank header name. All cells in the header row must have a value. |
| duplicate-header | There are multiple columns with the same name. All column names must be unique. |
| blank-row | Rows must have at least one non-blank cell. |
| duplicate-row | Rows can't be duplicated. |
| extra-value | A row has more columns than the header. |
| missing-value | A row has fewer columns than the header. |

Schema errors

These checks validate the contents of the file. To use them, you need to pass a Table Schema. If you don't have a schema, goodtables can infer it if you use the infer_schema option.

If your schema only covers part of the data, you can use the infer_fields option to infer the remaining fields.

Lastly, if the order of the fields in the data is different than in your schema, enable the order_fields option.
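
A sketch combining these options; the file names are placeholders:

report = validate(
    'data.csv',               # data to validate (hypothetical path)
    schema='schema.json',     # hypothetical Table Schema descriptor
    infer_fields=True,        # infer fields missing from the schema
    order_fields=True,        # match schema fields to data by name, not position
)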

| check | description |
| --- | --- |
| schema-error | Schema is not valid. |
| non-matching-header | The header's name in the schema is different from what's in the data. |
| extra-header | The data contains a header not defined in the schema. |
| missing-header | The data doesn't contain a header defined in the schema. |
| type-or-format-error | The value can't be cast based on the schema type and format for this field. |
| required-constraint | This field is required, but it contains no value. |
| pattern-constraint | This field's value should conform to the defined pattern. |
| unique-constraint | This field is unique, but it contains a value that has been used in another row. |
| enumerable-constraint | This field's value should be equal to one of the values in the enumeration constraint. |
| minimum-constraint | This field's value should be greater than or equal to the constraint value. |
| maximum-constraint | This field's value should be less than or equal to the constraint value. |
| minimum-length-constraint | The length of this field's value should be greater than or equal to the schema constraint value. |
| maximum-length-constraint | The length of this field's value should be less than or equal to the schema constraint value. |

Custom errors

| check | description |
| --- | --- |
| blacklisted-value | Ensure there are no cells with the blacklisted values. |
| deviated-value | Ensure numbers are within a number of standard deviations from the average. |
| foreign-key | Ensure foreign keys are valid within a data package. |
| sequential-value | Ensure numbers are sequential. |
| truncated-value | Detect values that were potentially truncated. |
| custom-constraint | Defines a constraint based on the values of other columns (e.g. value * quantity == total). |

blacklisted-value

Sometimes we have to check for values we don't want to have in our dataset. This check accepts the following options:

| option | type | description |
| --- | --- | --- |
| column | int/str | Column number or name |
| blacklist | list of str | List of blacklisted values |

Consider the following CSV file:

id,name
1,John
2,bug
3,bad
5,Alex

Let's check that the name column doesn't contain rows with bug or bad:

from goodtables import validate

report = validate('data.csv', checks=[
    {'blacklisted-value': {'column': 'name', 'blacklist': ['bug', 'bad']}},
])
# error on row 3 with code "blacklisted-value"
# error on row 4 with code "blacklisted-value"
deviated-value

This check helps to find outliers in a column containing positive numbers. It accepts the following options:

| option | type | description |
| --- | --- | --- |
| column | int/str | Column number or name |
| average | str | Average type, either "mean", "median" or "mode" |
| interval | int | Values must be inside the range average ± standard deviation * interval |

Consider the following CSV file:

temperature
1
-2
7
0
1
2
5
-4
100
8
3

We use the median to get an average of the column values and allow an interval of 3 standard deviations. In our case the median is 2.0 and the standard deviation is 29.73, so all valid values must be inside the [-87.19, 91.19] interval.

report = validate('data.csv', checks=[
    {'deviated-value': {'column': 'temperature', 'average': 'median', 'interval': 3}},
])
# error on row 10 with code "deviated-value"
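
A quick sketch reproducing that arithmetic with the standard library (goodtables computes this internally; the snippet is only illustrative):

import statistics

values = [1, -2, 7, 0, 1, 2, 5, -4, 100, 8, 3]
average = statistics.median(values)    # 2.0
deviation = statistics.stdev(values)   # ~29.73
low = average - deviation * 3          # ~-87.2
high = average + deviation * 3         # ~91.2
print([v for v in values if not low <= v <= high])  # [100]
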
foreign-key

Relative paths are supported here, so this check MUST be used only with trusted data sources.

This check validates foreign keys within a data package. Consider the data package defined below:

DESCRIPTOR = {
  'resources': [
    {
      'name': 'cities',
      'data': [
        ['id', 'name', 'next_id'],
        [1, 'london', 2],
        [2, 'paris', 3],
        [3, 'rome', 4],
        # [4, 'rio', None],
      ],
      'schema': {
        'fields': [
          {'name': 'id', 'type': 'integer'},
          {'name': 'name', 'type': 'string'},
          {'name': 'next_id', 'type': 'integer'},
        ],
        'foreignKeys': [
          {
            'fields': 'next_id',
            'reference': {'resource': '', 'fields': 'id'},
          },
          {
            'fields': 'id',
            'reference': {'resource': 'people', 'fields': 'label'},
          },
        ],
      },
    }, {
      'name': 'people',
      'data': [
        ['label', 'population'],
        [1, 8],
        [2, 2],
        # [3, 3],
        # [4, 6],
      ],
    },
  ],
}

Running goodtables on it will raise a few foreign-key errors, because we have commented out some rows in the data package's data:

report = validate(DESCRIPTOR, checks=['structure', 'schema', 'foreign-key'])
print(report)
{'error-count': 2,
 'preset': 'datapackage',
 'table-count': 2,
 'tables': [{'datapackage': '...',
             'error-count': 2,
             'errors': [{'code': 'foreign-key',
                         'message': 'Foreign key "[\'next_id\']" violation in '
                                    'row 4',
                         'message-data': {'fields': ['next_id']},
                         'row-number': 4},
                        {'code': 'foreign-key',
                         'message': 'Foreign key "[\'id\']" violation in row 4',
                         'message-data': {'fields': ['id']},
                         'row-number': 4}],
             'format': 'inline',
             'headers': ['id', 'name', 'next_id'],
             'resource-name': 'cities',
             'row-count': 4,
             'schema': 'table-schema',
             'source': 'inline',
             'time': 0.031,
             'valid': False},
            {'datapackage': '...',
             'error-count': 0,
             'errors': [],
             'format': 'inline',
             'headers': ['label', 'population'],
             'resource-name': 'people',
             'row-count': 3,
             'source': 'inline',
             'time': 0.038,
             'valid': True}],
 'time': 0.117,
 'valid': False,
 'warnings': []}

It experimentally supports external resource checks, for example, for foreignKey definitions like these:

{"package": "../people/datapackage.json", "resource": "people", "fields": "label"}
{"package": "http:/example.com/datapackage.json", "resource": "people", "fields": "label"}
sequential-value

This check is for the pretty common case where a column should contain integers that increment sequentially. It accepts the following option:

| option | type | description |
| --- | --- | --- |
| column | int/str | Column number or name |

Consider the following CSV file:

id,name
1,one
2,two
3,three
5,five

Let's check if the id column contains sequential integers:

from goodtables import validate

report = validate('data.csv', checks=[
    {'sequential-value': {'column': 'id'}},
])
# error on row 5 with code "sequential-value"
truncated-value

Some database or spreadsheet software (like MySQL or Excel) can cut off values on save. There are some well-known heuristics for finding these bad values. See https://github.com/propublica/guides/blob/master/data-bulletproofing.md for more detailed information.

Consider the following CSV file:

id,amount,comment
1,14000000,good
2,2147483647,bad
3,32767,bad
4,234234234,bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbad

To detect all probably truncated values, we can use the truncated-value check:

report = validate('data.csv', checks=[
    'truncated-value',
])
# error on row 3 with code "truncated-value"
# error on row 4 with code "truncated-value"
# error on row 5 with code "truncated-value"
custom-constraint

With Table Schema we can create constraints for an individual field, but sometimes that's not enough. With the custom-constraint check, every row can be checked against a limited Python expression in which variable names resolve to column values (see the list of available operators). It accepts the following option:

| option | type | description |
| --- | --- | --- |
| constraint | str | Constraint definition (e.g. col1 + col2 == col3) |

Consider a CSV file like this:

id,name,salary,bonus
1,Alex,1000,200
2,Sam,2500,500
3,Ray,1350,500
4,John,5000,1000

Let's say our business rule is to keep bonuses modest:

report = validate('data.csv', checks=[
    {'custom-constraint': {'constraint': 'salary > bonus * 4'}},
])
# error on row 4 with code "custom-constraint"

Frequently Asked Questions

How can I add a new custom check?

To create a custom check, use the check decorator. This way a built-in check can be overridden (use a spec error code like duplicate-row), or a check for a custom error can be added (use the type, context, and position arguments):

from goodtables import validate, check, Error

@check('custom-check', type='custom', context='body')
def custom_check(cells):
    errors = []
    for cell in cells:
        message = 'Custom error on column {column_number} and row {row_number}'
        # Error(code, cell=None, row_number=None, message=None, ...):
        # pass the message by keyword so it isn't taken as row_number
        error = Error(
            'custom-error',
            cell,
            message=message,
        )
        errors.append(error)
    return errors

report = validate('data.csv', checks=['custom-check'])

Recommended steps:

How can I add support for a new tabular file type?

To create a custom preset, use the preset decorator. This way a built-in preset can be overridden, or a custom preset can be added.

from tabulator import Stream
from tableschema import Schema
from goodtables import validate, preset

@preset('custom-preset')
def custom_preset(source, **options):
    warnings = []
    tables = []
    for table in source:
        try:
            tables.append({
                'source':  str(source),
                'stream':  Stream(...),
                'schema': Schema(...),
                'extra': {...},
            })
        except Exception:
            warnings.append('Warning message')
    return warnings, tables

report = validate(source, preset='custom-preset')

For now, this documentation section is incomplete. Please see the built-in presets to learn more about the dataset extraction protocol.

API Reference

cli

cli()

Command-line interface

Usage: cli.py [OPTIONS] COMMAND [ARGS]...

Options:
  --version  Show the version and exit.
  --help     Show this message and exit.

Commands:
  validate*  Validate tabular files (default).
  init       Init data package from list of files.

validate

validate(source, **options)

Validates a source file and returns a report.

Arguments

Raises

Returns

dict: The validation report.

preset

preset(name)

Register a custom preset (decorator)

Example

@preset('custom-preset')
def custom_preset(source, **options):
    # ...

Arguments

check

check(name, type=None, context=None, position=None)

Register a custom check (decorator)

Example

@check('custom-check', type='custom', context='body')
def custom_check(cells):
    # ...

Arguments

Error

Error(self, code, cell=None, row_number=None, message=None, message_substitutions=None)

Describes a validation check error

Arguments

Raises

spec

The Data Quality Spec, loaded as a dict.
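
A sketch of looking up one error definition, assuming the spec exposes a top-level "errors" mapping keyed by error code:

from goodtables import spec

blank_header = spec['errors']['blank-header']  # assumed spec layout
print(blank_header['message'])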

GoodtablesException

GoodtablesException(self, /, *args, **kwargs)

Base goodtables exception

Contributing

The project follows the Open Knowledge International coding standards.

The recommended way to get started is to create and activate a project virtual environment. To install the package and development dependencies into the active environment:

$ make install

To run tests with linting and coverage:

$ make test

Changelog

Only breaking and the most important changes are described here. The full changelog and documentation for all released versions can be found in the nicely formatted commit history.

v2.5
v2.4
v2.3
v2.2
v2.1
v2.0

Breaking changes:

Bug fixes:

v1.5

New API added:

v1.4

Improved behaviour:

v1.3

New advanced checks added:

v1.2

New API added:

v1.1

New API added:

v1.0

This version includes various big changes. A migration guide is under development and will be published here.

v0.6

First version of goodtables.