Dedicated support for HTTP compliant datasets #1086
Labels
- dcat
- DesignPrinciples: somehow related to the design principles, e.g. levels of machine readability, ontological commitment
- feedback: issues stemming from external feedback to the WG
- future-work: issue deferred to the next standardization round
- requires discussion: issue to be discussed in a telecon (group or plenary)
I understand that DCAT 2 content is frozen, so this is a feature request to be considered for a future version.
While working with DCAT data catalogs I came across this challenge: the link between datasets and distributions seems to be used pretty much arbitrarily in practice. For example, picking an arbitrary entry from data.gov, I can see a ZIP file, web resources, and a REST endpoint. In the typical CKAN-DCAT mapping, all of these resources become distributions, and my impression is that the DCAT 2 standard does not (intentionally?) impose many restrictions here.
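To make the problem concrete, here is a small, purely illustrative sketch (the titles and media types are hypothetical, but typical of what a CKAN-to-DCAT mapping produces): every CKAN "resource" becomes a dcat:Distribution, whether it is a data download, a landing page, or an API endpoint, and a consuming application cannot reliably tell them apart.

```python
# Hypothetical distributions of one dataset after a CKAN-to-DCAT mapping.
# Only one of them is actually "the data", but nothing in the model says so.
distributions = [
    {"title": "Full download (ZIP)", "mediaType": "application/zip"},
    {"title": "Project home page",   "mediaType": "text/html"},
    {"title": "REST endpoint",       "mediaType": None},  # often left empty
]

# A naive heuristic an application might use to find a data download;
# it works here only because we happen to know the catalog's conventions.
downloadable = [d for d in distributions if d["mediaType"] == "application/zip"]
print(len(downloadable), "of", len(distributions),
      "distributions are clearly data downloads")
```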
Of course, a little semantics goes a long way, but after nearly two decades of Semantic Web, I think many people in the RDF community want to go a bit further.
And with this lax modeling, it is impossible for an application to refer to a (DCAT) dataset and have it do something smart with it.
So what is a dataset in the first place?
There is section 5.1 DCAT scope, which states:
I would like to make the following proposal:
Dataset descriptions that adhere to these rules can be unambiguously served according to HTTP principles, notably content negotiation, by a DCAT-based HTTP proxy.
As I see it, there is a strong link between how HTTP functions and how datasets, under the strict definition, correspond to HTTP resources that can therefore be served in a standard way based on catalog metadata. In my impression, this aspect is not yet adequately considered in the DCAT spec.
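To illustrate the idea, here is a minimal sketch of how such a DCAT-based HTTP proxy could map an Accept header onto a dataset's distributions. It assumes (as the strict definition would require) that every distribution carries exactly one media type; the dataset, URLs, and function names are all hypothetical.

```python
def parse_accept(header):
    """Parse an Accept header into (media_type, quality) pairs, best first."""
    entries = []
    for part in header.split(","):
        fields = part.strip().split(";")
        media_type = fields[0].strip()
        q = 1.0
        for param in fields[1:]:
            name, _, value = param.strip().partition("=")
            if name == "q":
                q = float(value)
        entries.append((media_type, q))
    return sorted(entries, key=lambda e: -e[1])

def negotiate(accept_header, distributions):
    """Return the downloadURL of the best-matching distribution, or None (406)."""
    for wanted, _q in parse_accept(accept_header):
        for dist in distributions:
            mt = dist["mediaType"]
            if wanted in ("*/*", mt) or (
                wanted.endswith("/*") and mt.startswith(wanted[:-1])
            ):
                return dist["downloadURL"]
    return None  # no acceptable representation: respond 406 Not Acceptable

# Hypothetical catalog entry: one dataset, two well-described distributions.
dataset = [
    {"mediaType": "text/csv",         "downloadURL": "http://example.org/d.csv"},
    {"mediaType": "application/json", "downloadURL": "http://example.org/d.json"},
]

print(negotiate("application/json, text/csv;q=0.5", dataset))
```

With metadata this strict, the proxy needs nothing beyond the catalog itself to answer a content-negotiated GET for the dataset.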