Developing apps that collaborate on data
Levels of Datacore adoption
There are several increasing levels of adoption of the Datacore by an application:
- Merely replacing lists of values, especially for (almost) constant data types such as cities. This is very easy to do, typically by using a heavily cached HTTP client instead of a properties-file-reading library, but it is not yet true Datacore adoption, since local data is not linked, i.e. reconciled, to Datacore data.
- Referring to existing Datacore data Resources, for some data types outside the application's core business. This case differs from the previous one in that the application's database stores, next to the value itself, the URI of the corresponding Datacore data Resource. In other words, this is reusing data. It notably makes it possible to analyze local data according to any other Datacore data that is now indirectly linked to it.
- Replacing application data with Datacore data Resources, at least for some core business data types. More accurately, merging both, so that application data becomes a cache of the Datacore's data Resource, or a draft of its next version waiting to be synchronized. This makes it possible to share and collaborate on data, which is typically the application's core business data.
Obviously, an application never fully adopts the Datacore: there are always some fields (such as technical ones), or even some data types, that are not shared.
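The three levels above can be pictured as an evolution of the application's local data model. The sketch below is purely illustrative; all class and field names are hypothetical and not part of the Datacore API:

```python
from dataclasses import dataclass, field

# Level 1: the local record merely stores a value taken from the Datacore
# (e.g. a city name); nothing links it back to a Datacore resource.
@dataclass
class Level1Record:
    city_name: str

# Level 2: the record keeps the value AND, next to it, the URI of the
# corresponding Datacore data Resource, so local data is reconciled.
@dataclass
class Level2Record:
    city_name: str
    city_uri: str  # URI of the Datacore data Resource (illustrative)

# Level 3: the local record is a cache of the Datacore resource itself
# (or a draft of its next version), tracked by URI and version so it can
# be synchronized.
@dataclass
class Level3Record:
    uri: str
    version: int                     # resource version, for optimistic locking
    fields: dict = field(default_factory=dict)  # cached copy of the resource
    dirty: bool = False              # True while local edits await sync
```

The point of the progression is that each level adds the minimum extra state (URI, then version) needed for the next degree of sharing.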
Reusing existing data
Have a look at example queries for the most common use cases in the live Playground.
Collaborating on data
Last but not least, beyond the business and technical details of using the Datacore APIs, it is very important for applications consuming them to adapt the way they work with data (their data model and workflow) accordingly, especially when they not only reuse but also share and collaborate on data.
In short, the way to do it is to:
- design a collaborative data workflow that minds not only other users but also other apps,
- design the rights policy that goes with it, and set it up.
And when interacting with data, always "think Datacore", that is, keep the Datacore's data in mind at all times, in addition to the application-local data:
- taking into account the fact that current, local data might not be up to date or in sync with the Datacore's,
- and that there may be more data in the Datacore.
Thinking Datacore - the example of Agrilocal and the Portal
Agrilocal uses organisation data from the Datacore. Those data are also used by the Portal, so Agrilocal's organisation data are a cache of the Datacore's, and/or vice versa.
Therefore, the Datacore has to be called to make sure the latest changes to the data are available. This can be done efficiently with a proper use of the GET method with the version in a header, that is to say by using the HTTP client cache with Entity Tags (ETag).
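This conditional-GET flow can be sketched as follows. The "server" here is simulated in memory purely to show the ETag mechanics; the assumption that the ETag carries the resource version is illustrative, and a real application would normally rely on an HTTP client library's built-in cache:

```python
# Minimal sketch of HTTP conditional GET with ETag, simulated in memory.
# Assumption (illustrative): the server uses the resource version as the ETag.

REMOTE = {"/dc/type/org:Organization_0/FR/Acme": {"version": 3, "name": "Acme"}}

def server_get(path, if_none_match=None):
    """Simulated server: answers 304 Not Modified when the client's ETag is current."""
    resource = REMOTE[path]
    etag = str(resource["version"])
    if if_none_match == etag:
        return 304, etag, None          # client cache is still fresh, no body sent
    return 200, etag, dict(resource)    # send the full resource with its ETag

class CachingClient:
    """Caches responses and revalidates them with If-None-Match."""
    def __init__(self):
        self.cache = {}  # path -> (etag, resource)

    def get(self, path):
        cached = self.cache.get(path)
        status, etag, body = server_get(
            path, if_none_match=cached[0] if cached else None)
        if status == 304:
            return cached[1]            # reuse the cached copy, nothing transferred
        self.cache[path] = (etag, body)
        return body
```

The first GET transfers the whole resource; every later GET only revalidates the cached copy, so "calling the Datacore every time" stays cheap.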
Then, the only remaining question is: when, and how frequently, should the Datacore be called?
At least:
- every time up-to-date data is required (for instance, when copying an organisation field into another data resource),
- every time a PUT is issued: catch "obsolete version" errors, then update the local data and inform the user of this update (either asking them to make their changes again or showing them a diff between the two versions),
- if possible, avoid having to deal with obsolete data altogether by checking it more frequently (for instance, when the user starts to edit it and/or at a regular interval while they are editing it).
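The PUT handling described above can be sketched like this. The store is again simulated in memory, and the conflict is represented by a hypothetical `ObsoleteVersionError`; the real Datacore signals it through an HTTP error response, so the same logic would sit in the application's error handler:

```python
# Sketch of optimistic locking on PUT: the server rejects updates whose
# version no longer matches, and the client then refreshes its local data
# and computes a diff to show the user. Names are illustrative.

class ObsoleteVersionError(Exception):
    pass

remote_store = {"version": 3, "name": "Acme"}

def server_put(resource):
    """Simulated server: accepts the update only if the version is current."""
    if resource["version"] != remote_store["version"]:
        raise ObsoleteVersionError(remote_store["version"])
    remote_store.update(resource)
    remote_store["version"] += 1        # each successful PUT bumps the version
    return dict(remote_store)

def synchronize(local, changes):
    """Try to PUT local changes; on an obsolete version, refresh and report a diff."""
    attempt = dict(local, **changes)
    try:
        return server_put(attempt), None
    except ObsoleteVersionError:
        fresh = dict(remote_store)      # re-GET the up-to-date resource
        diff = {k: (local.get(k), fresh.get(k))
                for k in fresh if local.get(k) != fresh.get(k)}
        return fresh, diff              # let the user review the diff and retry
```

A `None` diff means the PUT went through; a non-empty diff means local data was stale and the user should be shown what changed before redoing their edit.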
When synchronising data, do not forget to put the project and version information in the request's headers. For more details, please see the documentation in the Playground and, above in the FAQ, the "Access denied" and "Version of data resource to update is required" entries.
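As a sketch only, assembling those headers might look like the following; the header names used here are assumptions, and the authoritative names are those given in the Playground documentation:

```python
# Illustrative only: the exact header names must be taken from the
# Playground documentation, not from this sketch.
def sync_headers(project, version):
    return {
        "X-Datacore-Project": project,  # assumed name of the project header
        "If-Match": str(version),       # assumed: version sent as an ETag precondition
    }
```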
The version allows optimistic locking, and guarantees the atomicity of an organisation change as well as data consistency. Consistency involving more than one data resource must be checked by the applications themselves.