
COJ Robotics & Artificial Intelligence

Towards a Study on an Interpretation Mathematical Model of Legal Rules within AI Environments

Seremeti L1,2* and Kougias I2

1School of Education, Frederick University, Cyprus

2Laboratory of Interdisciplinary Semantic Interconnected Symbiotic Education Environments, Department of Electrical and Computer Engineering, University of Peloponnese, Greece

*Corresponding author: Seremeti L, School of Education, Frederick University, Nicosia, Cyprus and Laboratory of Interdisciplinary Semantic Interconnected Symbiotic Education Environments, Department of Electrical and Computer Engineering, University of Peloponnese, Greece

Submission: January 18, 2023; Published: February 10, 2023

DOI: 10.31031/COJRA.2023.03.000551

Volume 3 Issue 1


Abstract

Maintaining a balance between evolving artificial intelligence applications and a stable legal framework has proven impossible in modern times. This short paper presents an interim report of ongoing research on this problematic issue. The research is in line with global efforts to legally embed AI services into society. The highlights of the main problem are briefly presented, and ideas for its solution are given.

Keywords: Artificial Intelligence; Law; Interpretation; Category Theory

Abbreviations: AI: Artificial Intelligence; CT: Category Theory


Introduction

The ultimate purpose of the ongoing research is the design of a mathematical model able to properly interpret the existing rules of law under changing conditions (e.g., environments enriched with Artificial Intelligence (AI) services). The originality of the research rests mainly on the legal gap that exists concerning the regulation of unforeseen situations arising from the use of AI services, in which issues of violation of fundamental human rights are raised [1-3]. The importance of dealing with this specific issue lies in its contribution to the definition of global standards for the protection of fundamental rights [4]. The methodology on which this study is based is bibliographic research, since the study cannot rely on empirical data but rather on critical analysis and clarification of the specific terms that constitute its basic conceptual tools. In this research, an unequivocal interpretation of the applicable rules of law is proposed, through Category Theory (CT) semantics, in the context of technological-social conflicts of legal interest. More specifically, the ongoing research focuses on (a) the consideration of the lawful use of AI services and the lawful development of AI applications, (b) the determination of the legal identity of AI services, as co-formators of social cohesion, and (c) the interpretation of the rules of law under conditions of uncertainty, through their category-theoretical treatment.

Defining the grand challenges

Since the ongoing research aims at contributing to the determination of global standards for AI that are compatible with the preservation of fundamental rights, the study of this topic was based on three pillars: (a) the moral and ethical issues raised by the propagation of AI applications in areas of global interest, such as health, freedom, and the environment; (b) the legal issues of self-determination and hetero-determination of AI systems; and (c) the convergence, in a mathematical sense, between the continuous evolution of AI and the firm human-centered approach to safeguarding fundamental rights. More specifically, a bibliographic review was carried out to document the originality and importance of the research topic. Based mainly on articles of the European Parliament (Special Committee on Artificial Intelligence in the Digital Age, AIDA) [4], the necessity of finding a legal framework for AI systems was identified, one that, on the one hand, allows the development of AI for social, economic or individual benefit and, on the other, anticipates and manages, with a 'sense of justice', the dangers that threaten fundamental rights and democracy. In this context, numerous publications [5-8] document the necessity of resolving issues of a legal nature arising from the use of AI systems in daily activities.

Furthermore, from the investigation of the extent and intensity of the relationship between the two main variables of the research, namely the use of evolving AI systems and the safeguarding of fundamental rights, two issues emerged: (a) the need for self-definition and hetero-definition of the legal nature of AI systems, so that they integrate smoothly into the anthropocentric regulatory normality, and (b) the need to interpret the rules of law in such a way as to allow, on the one hand, the evolution of AI and, on the other hand, to uphold the principles of law. On the first issue, various legal analogues have been proposed for AI systems [9,10]; among them is the proposal to bring artificial intelligence services under Energy Law [11], since AI systems can be considered energy-consuming and energy-generating systems if energy, in this case, is identified with "data". On the second issue, the need to interpret the rules of law in the context of the evolution of AI was investigated through the relevant literature. Here, Category Theory was proposed as the appropriate mathematical background for interpreting rules of law in the circumstances of cases of legal interest. In [12], after presenting the problem of the nonexistence of a legislative framework covering the whole range of possible conflicts in environments where software agents and people coexist, a model for interpreting the existing legal rules was proposed, one adaptable to new circumstances. This model is based on System Theory and Category Theory, the latter being the mathematical tool for modeling complex and multi-level systems, such as the social one.
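The categorical viewpoint can be made concrete with a toy model. The sketch below is a hypothetical illustration, not the formal model of [12]: objects stand for legal contexts, morphisms for interpretation maps between rule sets, and composition for chaining interpretations across contexts; the example contexts ("society", "ai_society", "energy_law") and the textual substitutions are invented purely for illustration.

```python
# Toy category of legal contexts (hypothetical illustration):
# objects are context names, morphisms are interpretation maps on rules,
# and composition chains interpretations across contexts.

from dataclasses import dataclass
from typing import Callable

Rule = str  # a legal rule, simplified to its textual statement


@dataclass(frozen=True)
class Morphism:
    source: str                         # source context (object)
    target: str                         # target context (object)
    interpret: Callable[[Rule], Rule]   # maps a rule between contexts


def identity(ctx: str) -> Morphism:
    """Identity morphism: an interpretation that leaves rules unchanged."""
    return Morphism(ctx, ctx, lambda r: r)


def compose(g: Morphism, f: Morphism) -> Morphism:
    """Categorical composition g ∘ f: first apply f, then g."""
    assert f.target == g.source, "morphisms must be composable"
    return Morphism(f.source, g.target, lambda r: g.interpret(f.interpret(r)))


# Example: reinterpreting a rule for an AI-enriched context, then under
# the (illustrative) Energy Law analogy, where "data" plays the role of energy.
to_ai = Morphism("society", "ai_society",
                 lambda r: r.replace("person", "person or AI agent"))
to_energy = Morphism("ai_society", "energy_law",
                     lambda r: r.replace("data", "energy (identified with data)"))

pipeline = compose(to_energy, to_ai)
rule = "a person must not process data without consent"
print(pipeline.interpret(rule))
# → a person or AI agent must not process energy (identified with data) without consent
```

The point of the sketch is only structural: because interpretations compose associatively and admit identities, a chain of reinterpretations across contexts is itself a single interpretation, which is the property the categorical treatment of legal rules relies on.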


Conclusion

Given the need to find an interpretative framework for the rules of law in AI-based anthropocentric environments, owing to their intrinsic features of uncertainty, unpredictability and fuzzy regulation, the ongoing research seeks to standardize the normative regulation of AI applications and, to this end, proposes CT semantics as the appropriate mathematical tool.


References

1. Pagallo U (2017) The legal challenges of big data: Putting secondary rules first in the field of EU data protection. European Data Protection Law Review 3(1): 36-46.
  2. Mecaj SE (2022) Artificial intelligence and legal challenges. Journal Juridical Opinion 20(34): 180-196.
3. Seremeti L, Kougias I (2020) Legal issues within ambient intelligence environments. Proceedings of the 10th International Conference on Information, Intelligence, Systems and Applications, Patras, Greece.
  4. Voss A (2022) Report on artificial intelligence in a digital age. European Parliament A9-0088/2022.
  5. Axpe MRV (2021) Ethical challenges from artificial intelligence to legal practice. Lecture Notes in Computer Science 12886: 196-206.
6. Bird E, Fox-Skelly J, Jenner N, Larbey R, Weitkamp E, et al. (2020) The ethics of artificial intelligence: Issues and initiatives. Study of the Panel for the Future of Science and Technology for the European Parliament.
  7. Coeckelbergh M (2021) AI for climate: Freedom, justice, and other ethical and political challenges. AI Ethics 1: 67-72.
8. Giuggioli G, Pellegrini MM (2022) Artificial intelligence as an enabler for entrepreneurs: A systematic literature review and an agenda for future research. International Journal of Entrepreneurial Behavior & Research.
9. Zibren J (2018) Legal personhood: Animals, artificial intelligence and the unborn. Masaryk University Journal of Law and Technology 12(1): 81-87.
  10. Nowik P (2021) Electronic personhood for artificial intelligence in the workplace. Computer Law & Security Review 42: 105584.
11. Seremeti L, Kougias I (2021) The legalhood of artificial intelligence: AI applications as energy services. Journal of Artificial Intelligence and Systems 3: 83-92.
  12. Seremeti L, Kougias I (2021) Category theory as interpretation law model in artificial intelligence era. Journal of Artificial Intelligence and Systems 3: 35-47.

© 2023 Seremeti L. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon your work non-commercially.