_model: page

title: Patterns

updated: 22 May 2019

created: 11 December 2017

author:

  • Rufus Pollock (Open Knowledge International)
  • Paul Walsh (Open Knowledge International)
  • Michael Amadi (Nimble Learn)
  • Christophe Benz (Jailbreak)
  • Johan Richer (Jailbreak)
  • Michael Rosenthal

summary: A collection of patterns for frictionless handling of data.

body:

Meta

This document describes various patterns for solving common problems, in ways that are not (yet) specified in any Frictionless Data specification. If we see increased adoption, or wide support, for any pattern, it is a prime candidate for formalising as part of a specification.

Private properties

Overview

Some software that implements the Frictionless Data specifications may need to store additional information on the various Frictionless Data descriptors.

For example, a data registry that provides metadata via datapackage.json may wish to set an internal version or identifier that is system-specific, and should not be considered as part of the user-generated metadata.

Properties to store such information should be considered "private", and by convention, the names should be prefixed by an underscore _.

Implementations

There are no known implementations at present.

Specification

On any Frictionless Data descriptor, data that is not generated by the author/contributors, but is generated by software/a system handling the data, SHOULD be considered as "private", and be prefixed by an underscore _.

To demonstrate, let's take the example of a data registry that implements datapackage.json for storing dataset metadata.

A user might upload a datapackage.json as follows:

{
  "name": "my-package",
  "resources": [
    {
      "name": "my-resource",
      "data": [ "my-resource.csv" ]
    }
  ]
}

The registry itself may have a platform-specific version system, and increment versions on each update of the data. To store this information on the datapackage itself, the platform could save this information in a "private" _platformVersion property as follows:

{
  "name": "my-package",
  "_platformVersion": 7
  "resources": [
    {
      "name": "my-resource",
      "data": [ "my-resource.csv" ]
    }
  ]
}

Usage of "private" properties ensures a clear distinction between data stored on the descriptor that is defined by the user (author/contributor), and any additional data that may be stored by a third party.
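
As an illustration of the convention, here is a minimal sketch in Python (not part of any Frictionless library) that separates the user-generated metadata from the underscore-prefixed "private" properties; split_private is a hypothetical helper name.

def split_private(descriptor):
    """Separate user-defined properties from "private" (underscore-prefixed) ones."""
    public = {k: v for k, v in descriptor.items() if not k.startswith("_")}
    private = {k: v for k, v in descriptor.items() if k.startswith("_")}
    return public, private

descriptor = {
    "name": "my-package",
    "_platformVersion": 7,
    "resources": [{"name": "my-resource", "data": ["my-resource.csv"]}],
}

public, private = split_private(descriptor)
print(private)  # {'_platformVersion': 7}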

Caching of resources

Overview

All Frictionless Data specifications allow for referencing resources over HTTP or via a local filesystem.

In the case of remote resources accessed over HTTP, there is always the possibility that the remote server will be unavailable, or that the resource itself will be temporarily or permanently removed.

Applications that are concerned with the persistent storage of data described in Frictionless Data specifications can use a _cache property that mirrors the functionality and usage of the data property, and refers to a storage location for the data that the application can fall back to if the canonical resource is unavailable.

Implementations

There are no known implementations of this pattern at present.

Specification

Implementations MAY handle a _cache property on any descriptor that supports a data property. In the case that the data referenced in data is unavailable, _cache should be used as a fallback to access the data. The handling of the data stored at _cache is beyond the scope of the specification. Implementations might store a copy of the resources in data at ingestion time, update at regular intervals, or any other method to keep an up-to-date, persistent copy.

Some examples of the _cache property:

{
  "name": "my-package",
  "resources": [
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "my-resource.csv"
    },
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "http://data.registry.com/user/files/my-resource.csv"
    },
    {
      "name": "my-resource",
      "data": [
        "http://example.com/data/csv/my-resource.csv",
        "http://somewhere-else.com/data/csv/resource2.csv"
      ],
      "_cache": [
        "my-resource.csv",
        "resource2.csv"
      ]
    },
    {
      "name": "my-resource",
      "data": [ "http://example.com/data/csv/my-resource.csv" ],
      "_cache": "my-resource.csv"
    }
  ]
}
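
A minimal sketch of the fallback behaviour described above, assuming the resource descriptor has been loaded as a Python dict; read_with_fallback is a hypothetical helper, not part of any Frictionless library.

import urllib.request

def read_with_fallback(resource):
    """Read each entry in `data`; on failure, fall back to the matching `_cache` entry."""
    data = resource["data"] if isinstance(resource["data"], list) else [resource["data"]]
    cache = resource.get("_cache", [])
    cache = cache if isinstance(cache, list) else [cache]

    def fetch(location):
        if location.startswith(("http://", "https://")):
            with urllib.request.urlopen(location) as response:
                return response.read()
        with open(location, "rb") as f:
            return f.read()

    contents = []
    for i, canonical in enumerate(data):
        try:
            contents.append(fetch(canonical))
        except OSError:
            if i >= len(cache):
                raise  # no cached copy to fall back to
            contents.append(fetch(cache[i]))
    return contents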

Compression of resources

Overview

It can be argued that applying compression to data resources can make data package publishing more cost-effective and sustainable. Compressing data resources gives publishers the benefit of reduced storage and bandwidth costs and gives consumers the benefit of shorter download times.

Implementations

Specification

All compressed resources MUST have a path from which the compression type can be inferred. If the compression can't be inferred from the path property (e.g. a custom file extension is used), then the compression property MUST be used to specify the compression.

Supported compression types:

  • gz
  • zip

Example of a compressed resource with implied compression:

{
  "name": "data-resource-compression-example",
  "path": "http://example.com/large-data-file.csv.gz",
  "title": "Large Data File",
  "description": "This large data file benefits from compression.",
  "format": "csv",
  "mediatype": "text/csv",
  "encoding": "utf-8",
  "bytes": 1073741824
}

Example of a compressed resource with the compression property:

{
  "name": "data-resource-compression-example",
  "path": "http://example.com/large-data-file.csv.gz",
  "title": "Large Data File",
  "description": "This large data file benefits from compression.",
  "format": "csv",
  "compression" : "gz",
  "mediatype": "text/csv",
  "encoding": "utf-8",
  "bytes": 1073741824
}
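
A minimal sketch of how an implementation might determine the compression type, preferring an explicit compression property and otherwise inferring it from the path suffix; resolve_compression is a hypothetical helper.

SUPPORTED_COMPRESSION = {".gz": "gz", ".zip": "zip"}

def resolve_compression(resource):
    if "compression" in resource:
        return resource["compression"]
    path = resource.get("path", "")
    for suffix, compression in SUPPORTED_COMPRESSION.items():
        if path.endswith(suffix):
            return compression
    return None

resolve_compression({"path": "http://example.com/large-data-file.csv.gz"})  # 'gz'
resolve_compression({"path": "http://example.com/data.custom", "compression": "zip"})  # 'zip'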

Language support

Overview

Language support is a different concern to translation support. Language support deals with declaring the default language of a descriptor and the data it contains in the resources array. Language support makes no claim about the presence of translations when one or more languages are supported in a descriptor or in data. Via the introduction of a languages array to any descriptor, we can declare the default language, and any other languages that SHOULD be found in the descriptor and the data.

Implementations

There are no known implementations of this pattern at present.

Specification

Any Frictionless Data descriptor can declare the language configuration of its metadata and data with the languages array.

languages MUST be an array, and the first item in the array is the default (non-translated) language.

If no languages array is present, the default language is English (en), and the descriptor is therefore equivalent to:

{
  "name": "my-package",
  "languages": ["en"]
}

The presence of a languages array does not ensure that the metadata or the data has translations for all supported languages.

The descriptor and data sources MUST be in the default language. The descriptor and data sources MAY have translations for the other languages in the array, using the same language code. If a translation is not present, implementing code MUST fall back to the default language string.

Example usage of languages, implemented in the metadata of a descriptor:

{
  "name": "sun-package",
  "languages": ["es", "en"],
  "title": "Sol"
}

# which is equivalent to
{
  "name": "sun-package",
  "languages": ["es", "en"],
  "title": {
    "": "Sol",
    "en": "Sun"
  }
}
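
A minimal sketch of the fallback rule, assuming translated metadata is stored as an object keyed by language code with the empty key holding the default-language value (as in the equivalence above); resolve is a hypothetical helper.

def resolve(value, lang, default_lang):
    """Return the translation for lang, falling back to the default language string."""
    if isinstance(value, str):
        return value  # untranslated property: always the default language
    return value.get(lang) or value.get(default_lang) or value.get("")

package = {
    "name": "sun-package",
    "languages": ["es", "en"],
    "title": {"": "Sol", "en": "Sun"},
}
default = package["languages"][0]
resolve(package["title"], "en", default)  # 'Sun'
resolve(package["title"], "fr", default)  # 'Sol' (no French translation, fall back)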

Example usage of languages implemented in the data described by a resource:

# resource descriptor
{
  "name": "solar-system",
  "data": [ "solar-system.csv" ]
  "fields": [
    ...
  ],
  "languages": ["es", "en", "he", "fr", "ar"]
}

# data source
# some languages have translations, some do not
# assumes a certain translation pattern, see the related section
id,name,name@fr,name@he,name@en
1,Sol,Soleil,שמש,Sun
2,Luna,Lune,ירח,Moon

Translation support

Overview

Following on from a general pattern for language support, and the explicit support of metadata translations in Frictionless Data descriptors, it would be desirable to support translations in source data.

We currently have two patterns for this in discussion. Both patterns arise from real-world implementations that are not specifically tied to Frictionless Data.

One pattern suggests inline translations with the source data, reserving the @ symbol in the naming of fields to denote translations.

The other describes a pattern for storing additional translation sources, co-located with the "source" file described in a descriptor's data property.

Implementations

There are no known implementations of this pattern in the Frictionless Data core libraries at present.

Specification

Inline

Uses a column naming convention for accessing translations.

Tabular resource descriptors support translations using {field_name}@{lang_code} syntax for translated field names. lang_code MUST be present in the languages array that applies to the resource.

Any field with the @ symbol MUST be a translation field for another field of data, and MUST be parsable according to the {field_name}@{lang_code} pattern.

If a translation field is found in the data that does not have a corresponding field (e.g.: title@es but no title), then the translation field SHOULD be ignored.

If a translation field is found in the data that uses a lang_code not declared in the applied languages array, then the translation field SHOULD be ignored.

Translation fields MUST NOT be described in a schema fields array.

Translation fields MUST match the type, format and constraints of the field they translate, with a single exception: Translation fields are never required, and therefore constraints.required is always false for a translation field.
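
A minimal sketch of applying these rules to a CSV header row; partition_header is a hypothetical helper that returns the data fields, the translation fields to keep, and the fields to ignore.

def partition_header(header, schema_fields, languages):
    data_fields, translations, ignored = [], [], []
    for name in header:
        if "@" not in name:
            data_fields.append(name)
            continue
        base, _, lang = name.rpartition("@")
        if base in schema_fields and lang in languages:
            translations.append(name)
        else:
            ignored.append(name)  # unknown base field or undeclared language
    return data_fields, translations, ignored

header = ["id", "name", "name@fr", "name@he", "name@en", "name@zz", "title@es"]
partition_header(header, ["id", "name"], ["es", "en", "he", "fr", "ar"])
# (['id', 'name'], ['name@fr', 'name@he', 'name@en'], ['name@zz', 'title@es'])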

Co-located translation sources

Uses a file storage convention for accessing translations.

To be contributed by @jheeffer

  • Has to handle local and remote resources
  • Has to be explicit about the translation key/value pattern in the translation files
# local
data/file1.csv
data/lang/file1-en.csv
data/lang/file1-es.csv

# remote
http://example.com/data/file2.csv
http://example.com/data/lang/file2-en.csv
http://example.com/data/lang/file2-es.csv
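
Since this part of the pattern is still to be contributed, the following is only a rough sketch of what the file layout above might imply; translation_path is a purely illustrative helper.

import posixpath

def translation_path(source_path, lang):
    """Derive 'data/lang/file1-es.csv' from 'data/file1.csv' and 'es'."""
    directory, filename = posixpath.split(source_path)
    stem, ext = posixpath.splitext(filename)
    return posixpath.join(directory, "lang", f"{stem}-{lang}{ext}")

translation_path("data/file1.csv", "es")  # 'data/lang/file1-es.csv'
translation_path("http://example.com/data/file2.csv", "en")
# 'http://example.com/data/lang/file2-en.csv'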

Table Schema: Foreign Keys to Data Packages

Overview

A foreign key is a reference where values in a field (or fields) in a Tabular Data Resource link to values in a field (or fields) in a Tabular Data Resource in the same Tabular Data Package.

This pattern allows users to link values in a field (or fields) in a Tabular Data Resource to values in a field (or fields) in a Tabular Data Resource in a different Tabular Data Package.

Specification

The foreignKeys array MAY have a property package. This property MUST be either:

  • a string that is a fully qualified HTTP address to a Data Package datapackage.json file
  • a data package name that can be resolved by a canonical data package registry

If the referenced data package has an id that is a fully qualified HTTP address, it SHOULD be used as the package value.

For example:

"foreignKeys": [{
    "fields": ["code"],
    "reference": {
      "package": "https://raw.githubusercontent.com/frictionlessdata/example-data-packages/master/donation-codes/datapackage.json",
      "resource": "donation-codes",
      "fields": ["donation code"]
    }
  }]
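
A minimal sketch of resolving the package property: an HTTP address is treated as a direct link to a datapackage.json, while anything else is looked up in a registry whose layout is assumed here purely for illustration (resolve_package and the registry URL are hypothetical).

import json
import urllib.request

def resolve_package(package_ref, registry_url="https://example.com/registry"):
    if package_ref.startswith(("http://", "https://")):
        url = package_ref
    else:
        # assumed registry layout: <registry>/<package-name>/datapackage.json
        url = f"{registry_url}/{package_ref}/datapackage.json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)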

Data Package Version

Specification

The Data Package version format follows the Semantic Versioning specification format: MAJOR.MINOR.PATCH

The version numbers, and the way they change, convey meaning about how the data package has been modified from one version to the next.

Given a Data Package version number MAJOR.MINOR.PATCH, increment the:

MAJOR version when you make incompatible changes, e.g.

  • Change the data package, resource or field name or identifier
  • Add, remove or re-order fields
  • Change a field type or format
  • Change a field constraint to be more restrictive
  • Combine, split, delete or change the meaning of data that is referenced by another data resource

MINOR version when you add data or change metadata in a backwards-compatible manner, e.g.

  • Add a new data resource to a data package
  • Add new data to an existing data resource
  • Change a field constraint to be less restrictive
  • Update a reference to another data resource
  • Change data to reflect changes in referenced data

PATCH version when you make backwards-compatible fixes, e.g.

  • Correct errors in existing data
  • Change descriptive metadata properties

Scenarios

  • You are developing your data through public consultation. Start your initial data release at 0.1.0
  • You release your data for the first time. Use version 1.0.0
  • You append last month's data to an existing release. Increment the MINOR version number
  • You append a column to the data. Increment the MAJOR version number
  • You relocate the data to a new URL or path. No change in the version number
  • You change a title, description, or other descriptive metadata. Increment the PATCH version
  • You fix a data entry error by modifying a value. Increment the PATCH version
  • You split a row of data in a foreign key reference table. Increment the MAJOR version number
  • You update the data and schema to refer to a new version of a foreign key reference table. Increment the MINOR version number
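
A minimal sketch of applying the rules and scenarios above; bump is a hypothetical helper that takes the current version string and the kind of change ("major", "minor" or "patch").

def bump(version, change):
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"

bump("1.2.3", "minor")  # '1.3.0', e.g. appending last month's data
bump("1.3.0", "major")  # '2.0.0', e.g. adding a column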

Data Dependencies

Consider a situation where data packages are part of a tool chain that, say, loads all of the data into an SQL db. You can then imagine a situation where one requires package A which requires package B + C.

In this case you want to specify that A depends on B and C -- and that "installing" A should install B and C. This is the purpose of the dataDependencies property.

Specification

dataDependencies is an object. It follows the same format as the CommonJS Packages spec v1.1. Each dependency defines the lowest compatible MAJOR[.MINOR[.PATCH]] dependency versions (only one per MAJOR version) with which the package has been tested and is assured to work. The version may be a simple version string (see the version property for acceptable forms), or it may be an object group of dependencies which define a set of options, any one of which satisfies the dependency. The ordering of the group is significant and earlier entries have higher priority. Example:

"dataDependencies": {
   "country-codes": "",
   "unemployment": "2.1",
   "geo-boundaries": {
     "acmecorp-geo-boundaries": ["1.0", "2.0"],
     "othercorp-geo-boundaries": "0.9.8",
   },
}
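
A minimal sketch of checking an installed data package against a single dependency entry, under the reading above that each entry is the lowest compatible version within a MAJOR line and an empty string accepts any version; satisfies is a hypothetical helper.

def parse_version(version):
    return tuple(int(part) for part in version.split(".")) if version else ()

def satisfies(installed, required):
    floor = parse_version(required)
    if not floor:
        return True  # "" accepts any version
    current = parse_version(installed)
    return current[0] == floor[0] and current >= floor  # same MAJOR line, at or above the floor

satisfies("2.3.0", "2.1")  # True
satisfies("3.0.0", "2.1")  # False (different MAJOR line)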

Implementations

None known.

Table Schema: metadata properties

Overview

Table Schemas need their own metadata to be stand-alone and interpretable without relying on other contextual information (Data Package metadata, for example). Adding metadata to describe schemas in a structured way would help users to understand them and would increase their sharing and reuse.

Currently it is possible to add custom properties to a Table Schema, but the lack of consensus about those properties restricts common tooling and wider adoption.

Use cases

  • Documentation: generating Markdown documentation from the schema itself is a useful use case, and contextual information (description, version, authors...) needs to be retrieved.
  • Cataloging: open data standardisation can be increased by improving Table Schemas shareability, for example by searching and categorising them (by keywords, countries, full-text...) in catalogs.
  • Machine readability: tools like Goodtables could use catalogs to access Table Schemas in order to help users validate tabular files against existing schemas. Metadata would be needed for tools to find and read those schemas.

Specification

This pattern introduces the following properties to the Table Schema spec (using the Frictionless Data core dictionary as much as possible):

  • name: An identifier string for this schema.
  • title: A human-readable title for this schema.
  • description: A text description for this schema.
  • keywords: The keyword(s) that describe this schema. Tags are useful to categorise and catalog schemas.
  • countryCode: The ISO 3166-1 alpha-2 code for the country where this schema is primarily used. Since open data schemas are very country-specific, it's useful to have this information in a structured way.
  • homepage: The home on the web that is related to this schema.
  • path: A fully qualified URL for this schema. The direct path to the schema itself is useful for accessing it programmatically (i.e. machine readability).
  • image: An image to represent this schema. An optional illustration can be useful, for example in catalogs, to differentiate schemas in a list.
  • licenses: The license(s) under which this schema is published.
  • resources: Example tabular data resource(s) validated or invalidated against this schema. Oftentimes, schemas are shared with example resources to illustrate them, with valid or even invalid files (e.g. with constraint errors).
  • sources: The source(s) used to create this schema. In some cases, schemas are created after a legal text or some draft specification in a human-readable document. In those cases, it's useful to share them with the schema.
  • created: The datetime on which this schema was created.
  • lastModified: The datetime on which this schema was last modified.
  • version: A unique version number for this schema.
  • contributors: The contributors to this schema.

Example schema

{
  "$schema": "https://frictionlessdata.io/schemas/table-schema.json",
  "name": "irve",
  "title": "Infrastructures de recharge de véhicules électriques",
  "description": "Spécification du fichier d'échange relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",
  "keywords": [
      "electric vehicle",
      "ev",
      "charging station",
      "mobility"
  ],
  "countryCode": "FR",
  "homepage": "https://github.com/etalab/schema-irve",
  "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/schema.json",
  "image": "https://github.com/etalab/schema-irve/raw/v1.0.1/irve.png",
  "licenses": [
    {
      "title": "Creative Commons Zero v1.0 Universal",
      "name": "CC0-1.0",
      "path": "https://creativecommons.org/publicdomain/zero/1.0/"  
    }
  ],
  "resources": [
    {
      "title": "Valid resource",
      "name": "exemple-valide",
      "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-valide.csv"
    },
    {
      "title": "Invalid resource",
      "name": "exemple-invalide",
      "path": "https://github.com/etalab/schema-irve/raw/v1.0.1/exemple-invalide.csv"      
    }    
  ],
  "sources": [
    {
      "title": "Arrêté du 12 janvier 2017 relatif aux données concernant la localisation géographique et les caractéristiques techniques des stations et des points de recharge pour véhicules électriques",      
      "path": "https://www.legifrance.gouv.fr/eli/arrete/2017/1/12/ECFI1634257A/jo/texte"
    }
  ],
  "created": "2018-06-29",
  "lastModified": "2019-05-06",
  "version": "1.0.1",  
  "contributors": [
    {
      "title": "John Smith",
      "email": "john.smith@etalab.gouv.fr",
      "organisation": "Etalab",
      "role": "author"
    },
    {
      "title": "Jane Doe",
      "email": "jane.doe@aol.com",
      "organisation": "Civil Society Organization X",
      "role": "contributor"
    }
  ],
  "fields": [ ]
}

Implementations

The following links are actual examples that already use this pattern, though not 100% aligned with our proposal. The point is to make Table Schema users converge towards a common pattern, before considering changing the spec.

JSON Data Resources

Overview

A simple format to describe a single structured JSON data resource. It includes support both for metadata such as author and title and a schema to describe the data.

Introduction

A JSON Data Resource is a type of Data Resource specialized for describing structured JSON data.

JSON Data Resource extends Data Resource in the following key ways:

  • The schema property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or a string referencing another JSON document containing the JSON Schema

Examples

A minimal JSON Data Resource, referencing external JSON documents, looks as follows.

// with data and a schema accessible via the local filesystem
{
  "profile": "json-data-resource",
  "name": "resource-name",
  "path": [ "resource-path.json" ],
  "schema": "jsonschema.json"
}

// with data accessible via http
{
  "profile": "json-data-resource",
  "name": "resource-name",
  "path": [ "http://example.com/resource-path.json" ],
  "schema": "http://example.com/jsonschema.json"
}

A minimal JSON Data Resource example using the data property to inline data looks as follows.

{
  "profile": "json-data-resource",
  "name": "resource-name",
  "data": {
    "id": 1,
    "first_name": "Louise"
  },
  "schema": {
    "type": "object",
    "required": [
      "id"
    ],
    "properties": {
      "id": {
        "type": "integer"
      },
      "first_name": {
        "type": "string"
      }
    }
  }
}

A comprehensive JSON Data Resource example with all required, recommended and optional properties looks as follows.

{
  "profile": "json-data-resource",
  "name": "solar-system",
  "path": "http://example.com/solar-system.json",
  "title": "The Solar System",
  "description": "My favourite data about the solar system.",
  "format": "json",
  "mediatype": "application/json",
  "encoding": "utf-8",
  "bytes": 1,
  "hash": "",
  "schema": {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "required": [
      "id"
    ],
    "properties": {
      "id": {
        "type": "integer"
      },
      "name": {
        "type": "string"
      },
      "description": {
        "type": "string"
      }
    }
  },
  "sources": [{
    "title": "The Solar System - 2001",
    "path": "http://example.com/solar-system-2001.json",
    "email": ""
  }],
  "licenses": [{
    "name": "CC-BY-4.0",
    "title": "Creative Commons Attribution 4.0",
    "path": "https://creativecommons.org/licenses/by/4.0/"
  }]
}

Specification

A JSON Data Resource MUST be a Data Resource; that is, it MUST conform to the Data Resource specification.

In addition:

  • The Data Resource schema property MUST follow the JSON Schema specification, either as a JSON object directly under the property, or a string referencing another JSON document containing the JSON Schema
  • There MUST be a profile property with the value json-data-resource
  • The data the Data Resource describes MUST, if non-inline, be a JSON file

JSON file requirements

When "format": "json", files must strictly follow the JSON specification. Some implementations MAY support "format": "jsonc", allowing for non-standard single line and block comments (// and /* */ respectively).

Implementations

None known.

Describing Data Package Catalogs using the Data Package Format

Overview

There are scenarios where one needs to describe a collection of data packages, such as when building an online registry, or when building a pipeline that ingests multiple datasets.

In these scenarios, the collection can be described using a "Catalog", where each dataset is represented as a single resource which has:

{
    "profile": "data-package",
    "format": "json"
}

Specification

The Data Package Catalog builds directly on the Data Package specification. Thus a Data Package Catalog MUST be a Data Package and conform to the Data Package specification.

The Data Package Catalog has the following requirements over and above those imposed by Data Package:

  • There MUST be a profile property with the value data-package-catalog, or a profile that extends it
  • Each resource MUST also be a Data Package

Examples

A generic package catalog:

{
  "profile": "data-package-catalog",
  "name": "climate-change-packages",
  "resources": [
    {
      "profile": "json-data-package",
      "format": "json",
      "name": "beacon-network-description",
      "path": "https://http://beacon.berkeley.edu/hypothetical_deployment_description.json"
    },
    {
      "profile": "tabular-data-package",
      "format": "json",
      "path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
    },
    {
      "profile": "tabular-data-package",
      "name": "co2-fossil-global",
      "format": "json",
      "path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
    }
  ]
}

A minimal tabular data catalog:

{
  "profile": "tabular-data-package-catalog",
  "name": "datahub-climate-change-packages",
  "resources": [
    {
      "path": "https://pkgstore.datahub.io/core/co2-ppm/10/datapackage.json"
    },
    {
      "name": "co2-fossil-global",
      "path": "https://pkgstore.datahub.io/core/co2-fossil-global/11/datapackage.json"
    }
  ]
}

Data packages can also be declared inline in the data catalog:

{
  "profile": "tabular-data-package-catalog",
  "name": "my-data-catalog",
  "resources": [
    {
      "profile": "tabular-data-package",
      "name": "my-dataset",
      // here we list the data files in this dataset
      "resources": [
        {
          "profile": "tabular-data-resource",
          "name": "resource-name",
          "data": [
            {
              "id": 1,
              "first_name": "Louise"
            },
            {
              "id": 2,
              "first_name": "Julia"
            }
          ],
          "schema": {
            "fields": [
              {
                "name": "id",
                "type": "integer"
              },
              {
                "name": "first_name",
                "type": "string"
              }
            ],
            "primaryKey": "id"
          }
        }
      ]
    }
  ]
}
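
A minimal sketch of consuming a catalog like the ones above: each resource is either an inline data package or a path pointing at its datapackage.json; load_datapackages is a hypothetical helper.

import json
import urllib.request

def load_datapackages(catalog):
    packages = []
    for resource in catalog.get("resources", []):
        if "resources" in resource:  # inline data package
            packages.append(resource)
        elif "path" in resource:  # remote or local datapackage.json
            path = resource["path"]
            if path.startswith(("http://", "https://")):
                with urllib.request.urlopen(path) as response:
                    packages.append(json.load(response))
            else:
                with open(path) as f:
                    packages.append(json.load(f))
    return packages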

Implementations

None known.
