
Challenge 5: Legal context

When it comes to regulating the activity of Public Administration, one of the fundamental problems is the balance between the interests of the community and those of the individual. In the field of Artificial Intelligence (AI), ensuring this balance is particularly complex. The most advanced AI techniques require huge amounts of data to be effective. There is therefore considerable economic interest in collecting sensitive data, and it is necessary to analyse some of the main legal issues that AI raises. Among these: the principle of transparency of administrative acts, legal liability, privacy, information security and intellectual property [1].

Within the scope of Public Administration activities, the principle of transparency is cardinal and must therefore also inspire the design of new public services based on AI solutions. For this purpose, the criteria to be adopted undoubtedly include the transparency of the algorithms, the logic by which the databases they operate on are constructed, and the definition of the related responsibilities [2].

Today, AI algorithms can already directly influence public assessments and decisions, as well as the administrative procedures themselves. This poses a problem of accountability, i.e. verifying the actual legal liability upstream of certain decisions or results, which poses a series of challenges for Public Administration:

  • find methods that are uniform and compatible with the current system, so that the administration can justify its actions, including the parts processed by AI systems;
  • indicate the data sources that feed the AI and through which it has made its assessments, and make the managers of administrative procedures aware of the processing methods used by AI systems.

To ensure the utmost transparency, citizens must be enabled to understand the path by which the AI system has reached a given result [3], clearly enough to recognise a possible calculation error and to intervene to correct it.

The use of sensitive data by Public Administration AI systems can compromise citizens’ right to privacy, as well as certain fundamental rights of the individual, in the event that the data collected is used to forecast events of social interest, from traffic management to crime prevention. One of the challenges is to prevent the Public Administration’s use of data from generating pervasive social control that conflicts with citizens’ fundamental rights [4].

As for the possible “threat” to the right to privacy, it may be necessary to implement certain principles and tools provided by the European General Data Protection Regulation (GDPR), such as the Data Protection Impact Assessment and Privacy by Design. The former requires those who use IT tools that may violate the right to privacy to make a prior assessment of the impact of these technologies on the protection of personal data. The latter is based on the idea that the rules on the protection of personal data must already be incorporated in the software design phase, ensuring that citizens’ identification data is anonymised or pseudonymised, reduced to the minimum necessary, and that its use is limited to specific purposes.
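The Privacy-by-Design measures described above — pseudonymisation and data minimisation — can be illustrated with a minimal sketch. This is a hypothetical example, not part of any regulation or official toolkit: the field names, the secret key handling, and the `minimise` helper are illustrative assumptions; real systems would need proper key management and a lawful basis for each purpose.

```python
import hashlib
import hmac

# Hypothetical secret key held only by the data controller. Rotating or
# destroying the key makes the pseudonyms practically irreversible.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a citizen identifier with a keyed pseudonym (HMAC-SHA256),
    so records can still be linked without exposing the real identity."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimise(record: dict, allowed_fields: set) -> dict:
    """Keep only the fields strictly needed for the declared purpose
    (data minimisation), pseudonymising the identifier on the way in."""
    reduced = {k: v for k, v in record.items() if k in allowed_fields}
    if "citizen_id" in reduced:
        reduced["citizen_id"] = pseudonymise(reduced["citizen_id"])
    return reduced

# Illustrative record: only the identifier (pseudonymised) and age survive.
record = {"citizen_id": "RSSMRA80A01H501U", "age": 43,
          "address": "Via Roma 1", "health_data": "..."}
reduced = minimise(record, {"citizen_id", "age"})
```

The design choice here is pseudonymisation rather than plain hashing: an unkeyed hash of a short national identifier could be reversed by brute force, whereas a keyed HMAC ties re-identification to possession of the key.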

The challenge is clearly to find a balance between the effective use of Artificial Intelligence at the service of citizens and respect for their right to privacy, giving them the opportunity to express their informed consent to the processing of data by intelligent systems. To ensure that the AI solutions acquired (or developed) by the Public Administration comply with current regulations, the procurement procedures for goods and services must be carefully monitored. In particular, before the decision to contract, the administration should carry out a comparative assessment of the solutions available on the market, if necessary through appropriate consultations. Furthermore, in the event of a tender, it is recommended that the requirements and characteristics of the AI solutions be precisely defined, with particular reference to compliance with applicable laws, so as to always guarantee the legitimacy of the administration’s activities.


The provisions of the GDPR regulate both the responsibilities of the controller and the rights of the party whose personal data is processed. Regarding AI, the GDPR applies when technological systems are developed using personal data, or when they are used to make decisions that concern people. Article 5 of the GDPR summarizes these principles and states that data must be:

  • processed in a lawful, fair and transparent manner (principle of lawfulness, fairness and transparency);
  • collected and used for a specific and explicitly stated reason (principle of purpose limitation);
  • adequate and limited to the purposes for which it is processed (principle of data minimization);
  • accurate and kept up to date (principle of accuracy);
  • kept in a form that permits identification for no longer than necessary (principle of storage limitation);
  • processed in such a way as to guarantee adequate protection of personal data (principle of integrity and confidentiality).
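Two of the principles above — purpose limitation and storage limitation — lend themselves to a simple automated check. The sketch below is a hypothetical illustration: the purposes, retention periods, and function name are invented for the example, and any real retention schedule would come from the administration’s own legal assessment.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical retention schedule: how long identifiable data may be kept
# for each declared purpose (storage limitation, Art. 5(1)(e) GDPR).
RETENTION = {
    "traffic_management": timedelta(days=30),
    "service_improvement": timedelta(days=365),
}

def must_erase_or_anonymise(purpose: str, collected_at: datetime,
                            now: Optional[datetime] = None) -> bool:
    """Return True when a record has outlived the retention period for its
    declared purpose and must be erased or anonymised."""
    now = now or datetime.now(timezone.utc)
    limit = RETENTION.get(purpose)
    if limit is None:
        # Undeclared purpose: purpose limitation forbids keeping the data.
        return True
    return now - collected_at > limit
```

A batch job could run such a check periodically over stored records, flagging those due for erasure or anonymisation rather than deleting them silently.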


[1] Ref. the “Ethical Challenge”.
[2] For this purpose, the Digital Administration Code (DAC) has established the figure of the digital ombudsman, to whom citizens can send reports and complaints in case of non-compliance or violations related to the use of digital systems by the Public Administration.
[3] Jurisprudence has already established that, where algorithms are used for administrative activities, the right of access to the algorithm must always be guaranteed (Ref. judgment of the Regional Administrative Court (TAR) Lazio-Roma, Sect. III-bis, no. 3769/2017).
[4] Ref. the “Data role challenge”.