The vision known as Common Ground emerges from local municipalities in The Netherlands. With information systems increasingly lagging behind society's demands, Common Ground envisions a radical change for the better, breaking free from the status quo in which innovation and the dynamic adoption of new technology have become nearly impossible.
Seen from the perspective of today's technological state of the art this vision brings nothing radically new, yet in the context of governmental information management it introduces some ambitious goals that will require long term commitment of both politicians and technicians to achieve.
The information provision of an organisation is the combination of people, assets and measures aimed at meeting the information demand of that organisation. The information demand of society, and more specifically of government, is increasing at a far greater pace than the current information provision of municipalities can accommodate. The current system has reached its limits and needs to be innovated to keep up with demand. The Common Ground vision aims to innovate both the technical and organisational aspects of the current system. The initial scope of Common Ground is the municipal domain, as this is the domain where direct influence can be exerted. This is also a group of organisations that struggles with challenges introduced by European and local legislation, some of which are too great to tackle with the current information provision.
Examples of legislation forcing municipalities to innovate their information provision are the General Data Protection Regulation (GDPR) and the Dutch “Wet Open Overheid” (WOO). The GDPR forces organisations to “implement appropriate technical and organisational measures” to protect the privacy of personal data and provides citizens with a wide array of rights that can be enforced against organisations that process personal data. These rights may limit the ability of organisations to process the personal data of data subjects, and in some cases they can have a significant impact on an organisation's information provision. The Dutch WOO aims to make governments and semi-governments more transparent in order to better serve public information for the democratic constitutional state, citizens, government and economic development. Even though it is widely recognised that the goals pursued by this law are desirable, its introduction has been delayed because of concerns about the practicality and implementability of the law.
The recently presented coalition agreement of the Dutch government announces that residents of our country should get more control over their own personal data. Even without all these new challenges, the provision of information is lagging behind the pace at which the information needs of our society are increasing. The following innovations are therefore proposed.
Moving from information silos to a layered architecture
In the past decade municipalities have implemented many information systems which are not, or only poorly, interconnected. Each of these information systems has implemented its own software architecture and combines user interfaces, business logic and data storage. These systems are essentially “information silos”. They have been developed as “vertical” systems, making it difficult or impossible for them to work with unrelated systems, and they have for the most part not been designed with privacy and security by design principles in mind. Due to their closed nature, they limit the ability of municipalities to comply with regulations. In a single local municipality one can expect to find hundreds of these information silos. The Common Ground movement advocates a layered architecture in which processes are separated from data and business rules, thereby increasing flexibility, interconnectivity and the openness of data. Common Ground is based on the 'API-first' strategy, which means that the API is the first interface of an application and the people developing against the API are its prime users.
These silos need to be broken down and converted to layers in order to create an application landscape that is robust and flexible. To roughly describe the desired software architecture, five layers are envisioned in Common Ground.
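The separation that distinguishes a layered architecture from a vertical silo can be made concrete with a small sketch. The classes and data below are purely illustrative (not part of any Common Ground specification); they show how an interface can be swapped out without touching business logic or storage when each layer only talks to the next through a well-defined interface.

```python
# Illustrative sketch of layered separation of concerns.
# All names and data here are invented for the example.

class PersonDataSource:
    """Data layer: the only component that touches storage."""
    def __init__(self):
        self._records = {"999990011": {"name": "J. Jansen", "city": "Utrecht"}}

    def get(self, bsn):
        return self._records.get(bsn)


class PersonService:
    """Service layer (API): exposes functionality, hides storage details."""
    def __init__(self, source):
        self._source = source

    def lookup(self, bsn):
        record = self._source.get(bsn)
        if record is None:
            raise KeyError(f"unknown BSN {bsn}")
        return record


def render_text(service, bsn):
    """Interface layer: replaceable (web form, app) without touching
    the service or data layers."""
    person = service.lookup(bsn)
    return f"{person['name']} ({person['city']})"


service = PersonService(PersonDataSource())
print(render_text(service, "999990011"))  # J. Jansen (Utrecht)
```

Replacing `render_text` with, say, a JSON renderer requires no change to `PersonService` or `PersonDataSource`, which is the point of the separation.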
A five-layer architecture model is used by Common Ground to logically separate the concerns of user interfaces, business logic and data access (APIs). The layers are described below.
The bottom three layers of the model (data, services and integration) are the layers where data access, business logic, API provisioning and API access are managed. The top two layers of the model contain the business processes and user interfaces for end-users.
The model has been designed to define clear separations between the layers. Each layer handles a specific aspect of software functionality and interacts with other layers through well-defined interfaces. This principle of “separation of concerns” ensures that changing the user interface does not require changing the business logic code, and vice versa. Each layer of the Common Ground architecture framework addresses specific concerns, and each layer is standardised by specifications and standards. These specifications are especially important for the services and integration layers of the framework. These layers specify both the way applications publish their functionality and how applications can use this published functionality. They provide a uniform method for accessing data (interconnectivity) and offer a standardised authentication and authorisation scheme. The services and integration layers are the vital components for breaking up information silos into manageable applications.
The bottom layer of the model contains the data. These data are standardised by sectoral information models, in which the syntax of attributes, the logical grouping of attributes into objects and the relationships between those objects are defined. The goal is to initially model sectoral information models as fit for purpose depending on business needs. Perfecting information models is a process that will happen step by step in the coming years.
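The three ingredients of such an information model (attribute syntax, grouping into objects, relationships between objects) can be sketched in code. The field names and the postal-code convention below are hypothetical examples, not taken from any actual sectoral model.

```python
# Illustrative sketch of a sectoral information model expressed as types:
# attribute syntax, grouping of attributes into objects, and a
# relationship between objects. All names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Address:
    street: str
    house_number: int
    postal_code: str  # syntax constraint, e.g. "1234 AB"

@dataclass
class Person:
    bsn: str
    surname: str
    # relationship: a person is linked to zero or more addresses
    addresses: List[Address] = field(default_factory=list)

p = Person(bsn="999990011", surname="Jansen",
           addresses=[Address("Stationsplein", 1, "3511 ED")])
```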
The services layer contains the services (APIs) offered by service providers. A service in this context refers to a software functionality, or a set of software functionalities, that different clients can reuse for different purposes, together with the policies that control its usage.
The integration layer contains the elements needed for connectivity to the services offered in the services layer. Through this layer, services (APIs) are made available to both internal and external consumers. The upper two layers contain the business processes and user interaction. These layers will be worked out in detail later. The expectation is that development in those layers will be swift once the bottom three layers are operational and work according to the Common Ground vision.
The integration functionality layer in the Common Ground vision is loosely based on the Estonian X-Road project. The Dutch implementation of the integration functionality is called NLX. Through NLX a standardised method for service providers and consumers to interact is offered using standardised authentication and authorisation protocols.
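The general NLX idea is that a consumer does not call a remote service directly but addresses it through its own local gateway (“outway”), identifying the target by organisation and service name. The host, port, path convention and all names in the sketch below are assumptions for illustration, not the normative NLX specification.

```python
# Sketch of addressing a service through a local NLX outway.
# Host, port and the organisation/service path convention are assumed.
OUTWAY = "http://localhost:8443"  # local outway address (assumption)

def service_url(organisation: str, service: str, path: str) -> str:
    """Build the outway URL for invoking `service` at `organisation`."""
    return f"{OUTWAY}/{organisation}/{service}/{path.lstrip('/')}"

url = service_url("gemeente-utrecht", "personen-api", "/persons/999990011")
# the outway forwards this request to the providing organisation's inway
```

The consumer's application code never needs to know where the providing organisation's servers are; routing, authentication and logging are handled by the gateway.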
As Common Ground envisions a federated system the various layers of the architecture will be implemented and distributed over many different organisations and servers. Some organisations will make their services available through the NLX while others will act as consumers of these services. It is important to keep in mind that in each of these organisations the architecture layers must be implemented separately, with a basic set of technical standards on how these layers interact to ensure manoeuvrability and flexibility in the information provision.
Moving from data duplication to accessing data at the source
Current information systems maintain a database in which all data relevant to their processes are stored. These information systems basically function as “information silos” for a specific problem area. The databases of these information silos often contain redundant copies of data that are maintained in another information silo. Personal data are, for example, formally maintained in the local GBA information system, but exist in dozens of databases in every Dutch municipality. A change in the GBA (e.g. a move) therefore leads to these data being copied to dozens of other databases. This tedious process of data duplication has been slightly optimised over the years by using the StUF-standard (Standard Exchange Format for municipal data).
Common Ground aims to end the practice of duplicating data. The aim is to store data only once at the data source and reuse this data when needed. A future information system in this vision only stores data which originates in its own processes. All other data needed for the execution of the process are retrieved from the original data source when they are needed. When for example, personal data are required, they are no longer stored redundantly in the information system’s database. Instead they are retrieved from the GBA information system using APIs provisioned in layer 2 at the time they are needed.
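The "store once, use many times" principle can be sketched as follows: the information system stores only a reference to the person, and fetches personal data from the authoritative source at the moment it is needed. The `gba_api` stub and all names below are invented stand-ins for a real layer-2 API.

```python
# Sketch: a process record holds only process-owned data plus a
# reference (BSN); personal data is fetched at the source on demand.
# All names and data are illustrative.

def gba_api(bsn):
    """Stub standing in for the authoritative GBA API (layer 2)."""
    registry = {"999990011": {"surname": "Jansen", "city": "Utrecht"}}
    return registry[bsn]


class PermitApplication:
    """Stores only data originating in its own process."""
    def __init__(self, application_id, bsn):
        self.application_id = application_id  # process-owned data
        self.bsn = bsn                        # a reference, not a copy

    def applicant(self, source=gba_api):
        # personal data retrieved at the source when needed,
        # never stored redundantly in this system's database
        return source(self.bsn)


app = PermitApplication("A-2023-001", "999990011")
print(app.applicant()["surname"])  # Jansen
```

If the person moves, the next call to `applicant()` reflects the change immediately; no synchronisation of copies is needed.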
To enable this way of working it is necessary to adopt modern software development and deployment techniques. Today's development environments are aimed at controlling complexity, optimising performance and enabling software developers to get started quickly with an API. These development environments enable developers to quickly gather all data required for the execution of a process step and hide complexity from the end-user.
By accessing data at its source instead of duplicating it, a goal can be reached for which we have been striving for many years: store once, use multiple times. In this scenario the data owner mutates and stores the data, and all other consumers access the data through well-defined services offered by the data owner.
Moving from local authorisation schemes to a federated authorisation scheme
The innovation proposed by Common Ground involves a major change of both technology and organisation. Service providers will be connected to municipalities and other government organisations through a national infrastructure. All organisations connected to this infrastructure can offer services and consume services offered by others. Citizens can use the services offered by these parties and request data in real time. In this infrastructure, it is crucial that authorisation of services and data is securely arranged. To arrange this properly, Common Ground introduces a technical component, the NLX, a so-called API gateway. This gateway offers a standardised way of making services available to consumers, both internal and external. The NLX standardises the way in which services are offered, authorised and logged. Implementation of the NLX in the local network is mandatory for every organisation that connects to the national infrastructure.
A federative authorisation model is envisaged by Common Ground. Internal identities of end-users are not used in service calls; the NLX transforms the internal identity of a person or system into an organisational identity before invoking a service. This organisational identity is included in the service call. Service providers are required to check the access rights of the requesting organisation when a service is invoked. Setting up authorisation at the level of organisations instead of end users removes the need for a central user management scheme and leads to a manageable system. Service providers check the access rights of requesting organisations for every service invocation. The access rights are arranged in a way that is appropriate for that specific organisation.
The role of the employee is used in combination with the purpose binding to determine whether access is authorised (A in the figure). For requests to another organisation (B in the figure), the internal identity associated with the request is converted by the NLX to the identity of the organisation as a whole - the idea is: the employee represents the organisation. In organisation B, the NLX regulates access to the specific service for external requests at the organisation level. All requests are logged so that audits can be carried out.
For example, when an employee of Organisation A uses a business application to retrieve data from Organisation B, the business application invokes a service hosted by Organisation B. Invocation of this service is performed through NLX. The internal identity associated with the request is converted by the NLX to the identity of the organisation as a whole - the idea is: the employee represents the organisation. NLX then invokes the service and logs the invocation in the NLX transaction log. The NLX node hosted by Organisation B receives the invocation, logs it in its NLX transaction log and checks the access rights of Organisation A. The service only knows that it has been invoked by Organisation A; it has no knowledge of the specific employee responsible for the invocation. If the access rights are in order, the endpoint API is invoked. Introduction of the delegated authorisation mechanism has consequences. Service providers must trust the organisations they do business with and must rely on them to perform the correct authorisation procedures for end users. This requires a high degree of trust between organisations. Whether that trust is justified is subsequently determined periodically based on audits. An effect of implementing delegated authorisation is that the data owner arranges the authorisation for external calls at the organisational level while giving organisations the opportunity to organise end-users' authorisations themselves.
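The walkthrough above can be condensed into a small sketch: the sending side strips the internal end-user identity and attaches the organisational identity, both sides log the call, and the receiving side authorises purely at organisation level. All identifiers and the access-control structure are hypothetical.

```python
# Sketch of delegated authorisation: the provider never sees the
# end user, only the calling organisation. All names are illustrative.

def outway_send(internal_user, organisation, service_call, transaction_log):
    """Organisation A side: replace internal identity with org identity."""
    request = {"caller": organisation, "call": service_call}  # no user id
    transaction_log.append(("out", organisation, service_call))
    return request

def inway_receive(request, acl, transaction_log):
    """Organisation B side: log the call, authorise per organisation."""
    transaction_log.append(("in", request["caller"], request["call"]))
    if request["call"] not in acl.get(request["caller"], set()):
        raise PermissionError(f"{request['caller']} not authorised")
    return f"result of {request['call']}"

log_a, log_b = [], []
req = outway_send("employee-42", "gemeente-a", "personen-api/get", log_a)
acl = {"gemeente-a": {"personen-api/get"}}   # B's org-level access rights
result = inway_receive(req, acl, log_b)

assert "employee-42" not in str(req)  # provider never sees the end user
```

Authorising `employee-42` for the business application remains Organisation A's internal responsibility, exactly as the text describes.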
Moving from manual to automated accountability
One of the challenges facing municipalities and other organisations is accountability for the processing of (personal) data by the organisation as detailed in the General Data Protection Regulation (GDPR). The GDPR states that personal data shall be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. This “purpose limitation” principle has been embraced by Common Ground as part of the privacy by design principle. Building on the framework for delegated authorisation, the challenges related to purpose limitation will be solved automatically in the future information provision. Where in current practice purpose binding amounts to justifying on paper the processes in which certain data are used, while actual use cannot easily be made accessible, much more is possible through the Common Ground architecture. In this architecture, every API call is accompanied by a “declaration of purpose” which is provided by the entity calling the API. This declaration of purpose must be selected from a “purpose binding register” by the person or organisation making the API call. This register holds the various purposes for processing, the legal grounds for processing and a description of the categories of data involved. When an API is invoked, a declaration of purpose selected from the purpose binding register is added to the request as part of the metadata of the invocation. The declaration of purpose is logged as part of the metadata by the information system that initiated the call, by the requesting and responding NLX nodes and by the information system that provides the API.
For all invocations of APIs, the declaration of purpose is logged in transaction logs. These logs are kept by all links in the chain and together they form an auditable transaction log through which it is possible to determine, to the individual level, what data have been processed for which purpose. The transaction logs can be made available to the persons (citizens) involved in these calls through standardised APIs. In this way, citizens can check for themselves which organisation uses what data, for what purpose and at what time.
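The mechanism described above can be sketched as a register plus a log: a call is only accepted with a purpose drawn from the register, every invocation is logged with that purpose, and the log can then be queried for audits. The register contents and all names are invented for illustration.

```python
# Sketch of purpose binding as call metadata with an auditable log.
# Register entries, API names and purposes are hypothetical.

PURPOSE_REGISTER = {
    "parking-permit": {
        "legal_ground": "public task",
        "data_categories": ["name", "address"],
    },
}

transaction_log = []

def invoke(api, params, purpose):
    """Invoke an API; the declaration of purpose travels as metadata."""
    if purpose not in PURPOSE_REGISTER:   # must come from the register
        raise ValueError(f"unknown purpose {purpose!r}")
    metadata = {"api": api, "purpose": purpose}
    transaction_log.append(metadata)      # every link in the chain logs this
    return {"status": "ok", "meta": metadata}

invoke("personen-api/get", {"bsn": "999990011"}, "parking-permit")

def calls_for_purpose(purpose):
    """Audit view: which calls were made under a given purpose."""
    return [e for e in transaction_log if e["purpose"] == purpose]
```

A citizen-facing API exposing such an audit view is what would let citizens check which organisation used what data, for what purpose and when.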
Moving from prioritising client interaction to a more holistic approach
The most visible part of any organisation is the part where interaction with customers occurs. This can be interaction between a customer and an employee of the organisation, or a customer using an electronic form or website made available by the organisation. The organisation's provision of information serves these interactions by providing the data needed for them. When the provision of information is not aligned with the customer interactions, this is immediately visible in an organisation: the customer's user experience drops below the desired level and the customer is not serviced adequately. The customer interactions are positioned at layers four and five of the Common Ground architecture. The tendency is to immediately repair any information misalignment in these layers without making the necessary changes in the underlying layers that are responsible for data provisioning. This often leads to complications and complexity in these layers.
This is in fact a paradox. To properly serve the layers where the interaction takes place, the information provision must be aligned for this specific purpose. In today's reality, with many information silos functioning autonomously side by side, this is not feasible. When, for example, client interaction processes are standardised, this now leads to a "spaghetti architecture" in the layers below, as the information needed is stored in different, non-interconnected information silos. In the current best case scenario, the original problem is handled in either the interaction or the process layer, making the information puzzle in the underlying layers even more complicated. This unfortunately has been the way municipal software development has worked for the last decade, and because of it an unworkable situation has arisen.
The solution must be sought in correctly implementing the necessary changes in all architecture layers when a business need changes. This is virtually impossible in the current architecture because the architecture layers are not clearly defined and separated from each other. Using the Common Ground architecture, it is possible to adapt quickly to changing business needs. New APIs can easily be added to serve a specific app or process. The right approach is therefore to remodel the entire information provision for a specific business need when it arises, and to implement a layered architecture model for that specific need. An interesting side effect of this approach is that consolidation and standardisation of processes will take place automatically. After all, many processes run the way they do because the data models of information silos almost compulsorily dictate what is possible in the rest of the architecture - and these data models differ per supplier. An important side note is that the uniformity of processes referred to here is different from the standardisation of chain processes. The latter must always be pursued, for example by clearly defining the various parts of a process and the responsibilities for each part.
Moving from supplier specific data models to unified data models
Currently every supplier uses its own data models, which it has developed over the years. These data models determine the way in which data is recorded in the supplier's own (proprietary) databases. Because all suppliers use different attribute syntaxes and data models, data portability and information system interconnectivity are very low for Dutch municipal information systems. In the future, the information models on which data models are based will be defined by the data owner together with relevant stakeholders. Dutch municipalities will have a leading role with respect to the modelling and standardisation of municipal data.
Standardisation of data models goes beyond describing the syntax of attributes and the coherence of objects. It also deals with the way in which registrations must handle the recording of the history of mutations. It must be possible to request data on any given reference date and to supply for instance the legal reality at that time. The proper recording of history is a major information challenge. If the data owner does not store and provide this information properly, consumers are forced to record copies of the data themselves, for example snapshots of important moments in time for a given process. Without this far-reaching standardisation, the introduction of architecture principles such as querying data at the source is not possible.
Managing the retention of historical data is not as easy as it seems. Retention policies must be part of the business logic of the application using temporal tables. Managing temporal table data retention begins with determining the required retention period for each temporal table. For example, municipal processes have firm requirements in terms of how long historical data must be available for online querying. A potential complication for managing historical data is posed by situations where a court decision changes the legal reality with retroactive effect. The information models and APIs required for informationally correct retention of historical data in an information system have been well thought through in the recently completed operation BRP project. This thinking can possibly be reused.
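The reference-date querying described above can be sketched with validity intervals: each mutation closes the previous interval and opens a new one, and a query supplies the reality that was valid on a given date. The schema and data below are illustrative; real information models (such as those from the operation BRP project) are far richer.

```python
# Sketch of reference-date querying over a registration that records
# the history of mutations as validity intervals. Data is invented.
from datetime import date

# each row: (value, valid_from, valid_to); valid_to None -> still current
address_history = [
    ("Kerkstraat 1", date(2015, 1, 1), date(2019, 6, 30)),
    ("Marktplein 7", date(2019, 7, 1), None),
]

def address_on(history, reference_date):
    """Return the value that was valid on `reference_date`, or None."""
    for value, start, end in history:
        if start <= reference_date and (end is None or reference_date <= end):
            return value
    return None

assert address_on(address_history, date(2018, 3, 15)) == "Kerkstraat 1"
assert address_on(address_history, date(2023, 1, 1)) == "Marktplein 7"
```

If the data owner answers such queries correctly, consumers no longer need to keep their own snapshots of "important moments in time", which is exactly the duplication the vision wants to eliminate.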
Moving from complex all-encompassing services to fit for purpose services
Many of the (StUF) services used in municipal applications are relatively coarse-grained services which can be used for many different business needs. These services pass large amounts of data, including data that is not specifically required for the task. On a technical level this complicates message processing in the endpoint and might in turn harm performance. On the business side, it may mean that (some of) these services are not compliant with privacy regulations as data minimisation principles are not implemented. Municipal software suppliers have indicated that current standards are too complex and complicated to implement.
In the Common Ground vision, we strive for the development of APIs which are easy to implement and conform with privacy regulations. This means limiting the message content to the data which is needed for the task for which the service is used (the purpose binding). It also means reducing the time needed for a developer to make a first successful call to an API, also called the Time To First Successful Call (TTFSC). That means APIs should be platform independent and conform to best-practices.
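Data minimisation per purpose binding can be sketched as a filter on the response: the service returns only the fields the declared purpose requires, never the full record. The field sets, record contents and purposes below are invented for illustration.

```python
# Sketch of a fit-for-purpose response: content limited to the fields
# required for the declared purpose. All names and data are hypothetical.

FULL_RECORD = {"bsn": "999990011", "surname": "Jansen",
               "birth_date": "1980-01-01", "address": "Marktplein 7"}

FIELDS_PER_PURPOSE = {
    "age-check": {"birth_date"},               # needs no name or address
    "parking-permit": {"surname", "address"},  # needs no birth date
}

def minimal_response(record, purpose):
    """Return only the fields the purpose binding allows."""
    allowed = FIELDS_PER_PURPOSE[purpose]
    return {k: v for k, v in record.items() if k in allowed}

minimal_response(FULL_RECORD, "age-check")  # only the birth date
```

Besides satisfying the data minimisation principle, such narrow responses are smaller and simpler to process, which also serves the TTFSC goal.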