
Trends in Telemedicine & E-health

Towards a Hybrid Society for Personalized Healthcare

Stefania Costantini1*, Abeer Dyoub1, Claudio Ferri1, Fabio Persia1, Rafael Bordini2, Matteo Cristani3, Daniela D’Auria4, Giancarlo Guizzardi4, Giovanna Costanzo5, Pasquale De Meo5, Rino Falcone6 and Silvia Rossi7

1Department of Information Engineering, Computer Science and Mathematics, Italy

2Faculty of Informatics, Brazil

3Department of Informatics (DI), Italy

4Faculty of Informatics, Italy

5Department of Ancient and Modern Civilizations, Italy

6Institute of Cognitive Sciences and Technologies, National Research Council, Italy

7Department of Electrical Engineering and Information Technologies, Italy

*Corresponding author: Stefania Costantini, Department of Information Engineering, Computer Science and Mathematics, Italy

Submission: January 10, 2023; Published: April 24, 2023

DOI: 10.31031/TTEH.2023.04.000582

ISSN: 2689-2707
Volume 4 Issue 2

Commentary

One of the main achievements of Artificial Intelligence (AI) consists in Autonomous Systems (AS), based on Intelligent Software Agents and Multi-Agent Systems (MAS), possibly endowed with Machine Learning capabilities (we will often refer to AS, agents or MAS interchangeably). A multi-agent system is a computer system composed of multiple interacting intelligent agents, where an intelligent (software) agent is a software component that perceives (to some extent) its environment, reacts to environmental stimuli, takes actions autonomously in order to achieve its goals, and is able to communicate with other agents (cf. [1-3] for surveys on agents and MAS). Such systems are increasingly pushing our technological reach forward, outperforming humans in an ever-growing number of fields. Despite their promise of hugely improving our quality of life in many respects, we may notice that humans take an ambivalent stance with respect to AS. On the one hand, humans fear that AS may escape human control and take decisions that disregard the intentions and goals of humans, or human values themselves.
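To make these notions concrete, the following minimal Python sketch illustrates the perceive-decide-act cycle of an intelligent agent and the message passing among agents in a multi-agent system; it is an illustrative toy example, not based on any specific agent platform, and all class, role and action names are invented for the purpose.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    content: str

class Agent:
    """Illustrative intelligent agent: it perceives, decides, acts and communicates."""
    def __init__(self, name, role):
        self.name = name
        self.role = role           # e.g. "monitor" or "caregiver" (invented roles)
        self.beliefs = {}          # the agent's current view of its environment
        self.inbox = []            # messages received from other agents

    def perceive(self, environment):
        # Update beliefs from a (possibly partial) view of the environment.
        self.beliefs.update(environment)

    def decide(self):
        # Autonomously choose an action based on beliefs and the agent's role.
        if self.role == "monitor" and self.beliefs.get("alarm"):
            return "notify_caregiver"
        return "continue_monitoring"

    def act(self, action, society):
        # Acting may include communicating with other agents of the MAS.
        if action == "notify_caregiver":
            society.send(Message(self.name, "patient needs attention"), to="caregiver")

class MultiAgentSystem:
    """A set of interacting agents sharing an environment and a message bus."""
    def __init__(self):
        self.agents = {}

    def add(self, agent):
        self.agents[agent.name] = agent

    def send(self, message, to):
        self.agents[to].inbox.append(message)

    def step(self, environment):
        # One round: every agent perceives, decides and acts.
        for agent in self.agents.values():
            agent.perceive(environment)
            agent.act(agent.decide(), self)

# Example: a tiny society of two agents; an alarm in the environment triggers a message.
mas = MultiAgentSystem()
mas.add(Agent("monitor", role="monitor"))
mas.add(Agent("caregiver", role="caregiver"))
mas.step({"alarm": True})
print(mas.agents["caregiver"].inbox)
```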

Such concerns are emphasized by the usually low explainability of these systems. On the other hand, the growing success of AS in facing complex problems and the perceived benevolence of such systems may lead to uncritical acceptance of their decisions. The authors believe that, especially with regard to healthcare applications, issues concerning the relationship between humans and AS must be properly addressed by elaborating a novel notion of “Hybrid Society” (HS), much more involved and profound than existing ones. The term “Hybrid Society” denotes a complex ecosystem in which humans and autonomous intelligent entities coexist. In fact, since AS such as, e.g., self-driving cars, drones, health monitoring devices, etc., are starting to play a fundamental role in our lives, significant research efforts are being devoted to harmonizing interactions between human beings and artificial entities. The Collaborative Research Centre “Hybrid Societies: Humans Interacting with Embodied Technologies” (https://hybrid-societies.org/) is funded by the German Research Foundation, and aims to gather an international group of scientists with the objective of studying the conditions for successful coordination between humans and machines in public spaces, within projects concerning Embodied Sensor and Motor Capabilities, Artificial Bodies, Shared Environments, and Intentionality in Hybrid Societies.

These projects aim to cope with all aspects related to the physical interaction between humans and (embodied) machines, including self-driving cars. Aspects of Complex Event Processing are to some extent covered, while ethical aspects, trustworthiness and explainability are not considered. A small international workshop on “Methods for Self-Organizing Distributed Systems” was held in Laubusch, Germany, in 2015. Its outcome is reported in [4], from which we quote what we found to be the most significant statements: “hybrid societies are made of different components instead of having a homogeneous identity. We call them “societies” because the components possess individual agency and interact persistently. Such societies can be comprised of both natural and artificial agents, or of different types of artificial agents only”.

Concerning the application of ICT technologies to healthcare, for the European Union (http://ec.europa.eu/health/ehealth/policy/index_en.htm), “Digital health and care [eHealth] refers to tools and services that use Information and Communication Technologies (ICTs) to improve prevention, diagnosis, treatment, monitoring and management of health-related issues and to monitor and manage lifestyle-habits that impact health. [This] can improve access to care and the quality of that care, as well as increase the overall efficiency of the health sector.” The relevance and potentially huge impact of the synergy between ICT and Artificial Intelligence (AI) in this field are demonstrated by the IBM Watson initiative, which led to the implementation of the Watson Health platform; however, as is well known, that platform was discontinued after being shown to be unreliable. The provision of quality healthcare in a cost-effective way is in fact a critical issue in all countries, due to the aging population, the reappearance of diseases that were considered eradicated, and the emergence of challenging new threats, such as the Ebola and Covid-19 outbreaks.

Intelligent healthcare systems, if carefully developed, can help cope with the above-mentioned issues in the interest of patients, doctors, personnel and all other parties involved, including patients’ families. Wearable devices for detecting patients’ health-related data are gradually becoming affordable [5], and, in view of “assistive robotics”, many inexpensive robotic hardware solutions are now available on the market; moreover, many experiments have found that “care robots” have a positive cognitive and emotional impact on senior adults and hospitalized children [6]. Software applications managing these devices can be interfaced with several medical information systems (e.g., patient databases, medical archives). Health support systems have therefore been developed. In general, they are based on scalable architectures covering a wide spectrum of roles and models. AI, and in particular Agent and MAS architectures, are a natural choice for implementing such systems [7], and in fact they have been profitably employed (cf., e.g., [9] and the review [8], with useful references therein).
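As a purely illustrative sketch of how data produced by wearable devices might be interfaced with medical information systems, the following fragment normalizes a raw sensor sample and stores it in a stand-in patient-record component; no real device API or hospital information system is assumed, and all names and formats are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VitalSign:
    """A normalized wearable reading, ready to be stored in a medical information system."""
    patient_id: str
    kind: str        # e.g. "heart_rate", "spo2"
    value: float
    unit: str
    timestamp: str

def normalize_reading(patient_id, raw):
    # Turn a raw wearable sample (hypothetical format) into a uniform record.
    return VitalSign(
        patient_id=patient_id,
        kind=raw["type"],
        value=float(raw["value"]),
        unit=raw.get("unit", "bpm"),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )

class PatientRecordStore:
    """Stand-in for a medical information system (patient database, medical archive)."""
    def __init__(self):
        self.records = []

    def store(self, vital: VitalSign):
        self.records.append(vital)

# Example flow: a heart-rate sample from a wearable ends up in the patient record.
store = PatientRecordStore()
store.store(normalize_reading("patient-42", {"type": "heart_rate", "value": 71, "unit": "bpm"}))
```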

However, issues arise with the growing popularity of eHealth, and with the proliferation of “apps” that offer various kinds of services to patients: quoting from a recent review [10] (cf. also the references therein), “There is a lack of studies about the ethics of eHealth services from the service users’ perspective.” That reference also examines how the relevant literature on the main ethical aspects of digital health has evolved over time. From [11], we quote: “New technologies promise to improve healthcare by … new approaches to cope with the limitations of current practice. However, … ethical implications … [must be] discussed, as well as potential questions arising from the utilisation of Artificial Intelligence in the healthcare settings. The importance of pre-evaluating ethical implications before implementation of new digital solutions into clinical practice [must be] highlighted...” What doctors, patients, and administrations need is not proprietary closed systems, but rather interpretable systems able to provide explanations, so that users perceive that they have sufficient guarantees that such systems act always and exclusively for the good of the patients and that their decisions do not conflict with the patients’ moral principles. Thus, dimensions such as trust, ethics, explainability, and levels of autonomy must be properly taken care of.

The scenario that we advocate is a novel notion of “Hybrid Society” (HS for short), different from and more general than what is found in the current literature. In HS, humans and Autonomous Systems (AS) should be coupled at multiple levels, based on shared, agreed-upon principles and standards which must by definition enforce tight constraints on the behaviour of agents. Such principles should include values and social norms, as well as the ethical and professional conduct codes relevant to the healthcare field. They should be flexible, and capable of evolving over time according to changes in context, needs or norms (we refer to the High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’, Brussels: European Commission, 2019, https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethical-guidance-for-research-with-a-potential-for-human-enhancement-sienna_he_en.pdf).

We are especially interested, as a basis for the HS, in agent-oriented approaches based on Computational Logic [3], because these technologies enable trustworthiness, in the sense that agents can be relied upon to do what is expected of them while not exhibiting unwanted behaviour. So, agents should not behave in improper/forbidden/unethical ways, and they should not devise new behaviours that might be in contrast with their specification or, in any case, with the user’s expectations. They should be transparent, in the sense of being able to explain their actions and choices when required. Trustworthiness can be ensured by various a-priori and run-time verification techniques (cf. [12] and the references therein). The design of the HS should be mainly aimed at machine-supported assistance of persons with special needs (in particular, the ill, older adults and the disabled) to improve their independence and general quality of life. A prototype system of this kind may, in perspective, constitute a building block of long-term solutions in the everyday management of an aging population, even in dramatic and pressing circumstances such as a pandemic.
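As a minimal sketch of the general idea of run-time verification mentioned above (and not of the specific techniques of [12]), the following fragment checks every action an agent proposes against a set of explicitly forbidden behaviours before it is executed; the rules, action names and logging format are invented for the example.

```python
# Explicitly forbidden behaviours, e.g. derived from ethical or professional conduct codes.
FORBIDDEN = {
    "administer_drug_without_prescription",
    "share_data_without_consent",
}

def verify_and_execute(agent_name, proposed_action, execute, log):
    """Run-time guard: execute an action only if it violates no known prohibition."""
    if proposed_action in FORBIDDEN:
        log.append(f"{agent_name}: action '{proposed_action}' blocked (norm violation)")
        return False
    execute(proposed_action)
    log.append(f"{agent_name}: action '{proposed_action}' executed")
    return True

# Example: the guard blocks a forbidden action and records the violation for auditing.
trace = []
verify_and_execute("pa_agent", "share_data_without_consent", execute=lambda a: None, log=trace)
verify_and_execute("pa_agent", "remind_medication", execute=lambda a: None, log=trace)
print(trace)
```

The recorded trace is what makes violations auditable after the fact, which is the point of run-time verification as opposed to purely a-priori checks.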

Within the hybrid society, AS can usefully play the role of special actors, taking care of humans and promoting interactions for their benefit. On the one hand, this happens on the purely utilitarian side, e.g., by adjusting the dosage of drugs or identifying the best specialist to treat certain symptoms; on the other hand, in a wider perspective, it happens by eliciting and promoting the user’s interests and social and affective needs, by helping to build new useful social connections and, eventually, by steadily providing company, help and assistance. In our vision, each human user will be enhanced by a Personal Assistant Agent (PA) which will represent the user’s entry point into the hybrid society. The PAs will be equipped with a detailed and continuously evolving knowledge of the user’s needs, preferences, and expectations (note that, due to the implications of continuous learning on vulnerable groups’ personal data, legal experts should produce, for such a system, a careful Data Protection Impact Assessment, to be regularly updated).
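The following sketch suggests one possible, purely hypothetical shape for the evolving knowledge a PA might maintain about its user (needs, preferences, expectations), including a consent flag that reflects the data-protection concerns just mentioned; all field names are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class UserProfile:
    """Hypothetical, continuously updated knowledge held by a Personal Assistant Agent."""
    user_id: str
    needs: dict = field(default_factory=dict)          # e.g. {"mobility_support": True}
    preferences: dict = field(default_factory=dict)    # e.g. {"reminder_channel": "voice"}
    expectations: list = field(default_factory=list)   # free-text goals stated by the user
    consent_given: bool = False                        # no profile evolution without consent
    last_updated: str = ""

    def update_preference(self, key, value):
        # The profile evolves over time only if the user has consented to data processing.
        if not self.consent_given:
            raise PermissionError("cannot update profile without user consent")
        self.preferences[key] = value
        self.last_updated = date.today().isoformat()

# Example: a consenting user's preference is recorded and timestamped.
profile = UserProfile(user_id="user-7", consent_given=True)
profile.update_preference("reminder_channel", "voice")
```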

Thanks to the PAs, users in the hybrid society will perceive themselves to be, to some extent, free of the limitations of body, health state, space and time. This is particularly important for people who are to some degree impaired: they may wish to be taken care of, but also, often just as strongly, they may wish to be enabled to transcend their contingent problems and limitations. In order to be able to play such a crucial role, PAs should understand and follow social norms (forbidding AS to exploit vulnerabilities of humans) and earn trust from other humans/AS in the hybrid society.

The proposed notion of Hybrid Society (HS), and the envisaged approach to the development of AS and HS, should be grounded on three main pillars:
a) Verifiability (trustworthiness). This notion involves mechanisms to ensure that all components of the hybrid society are compliant with respect to their expected behaviour and, in case of violations, that suitable measures will be enacted, independently of the size and complexity of the hybrid society.
b) Trust. The user’s perception of the level of trust and of faith in the ethical behaviour attributable to the PA has an impact on the motivations and intentions of humans to use and to embrace such a system.
c) Explainability. What drives the decisions of AS are, initially, the goals instilled by system designers and, later on, the beliefs, intentions and goals developed during their operation (according to the well-known BDI (Belief-Desire-Intention) model, on which the languages used to program such systems are usually based). Therefore, their behaviour and decisions may be hard for humans to understand. Human users would (rationally) place more trust in AS that can provide an intelligible explanation of their behaviours and choices. Numerous studies have linked trust to the availability of a system that is verifiable and can provide explanations understandable by each specific category of users. Agents based on Computational Logic are better able to provide explanations, as logical inferences are (relatively) easy to transpose into natural language (cf., e.g., [13]); a toy illustration of this idea is sketched right after this list.
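As a toy illustration of pillar c), the following fragment records a BDI-style decision (beliefs and a goal leading to an intention) and renders it as a plain-language explanation; it merely illustrates the idea that inference steps can be transposed into natural language, and does not implement any specific agent programming language.

```python
def explain_decision(beliefs, goal, intention):
    """Turn a BDI-style decision (beliefs + goal -> intention) into a readable explanation."""
    belief_text = "; ".join(f"{k} is {v}" for k, v in beliefs.items())
    return (
        f"I chose to '{intention}' because my goal is '{goal}' "
        f"and I currently believe that: {belief_text}."
    )

# Example: a Personal Assistant Agent explains why it suggested contacting a doctor.
beliefs = {"blood pressure": "elevated for 3 days", "prescribed check-up": "overdue"}
print(explain_decision(beliefs, "keep the patient's condition stable", "suggest booking a check-up"))
```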

For all three pillars above, there is a need to embed adequate levels of accountability and traceability in the design, development and deployment of the PAs. Accountability and traceability duties find cornerstones in several regulations (e.g., the GDPR, in terms of accountability and of ways to balance the protection of fundamental rights with the pursuit of innovation) and in the high standard of care required of developers and researchers (with reference to national and international standards). The envisaged system should be designed to generate personalized care plans where doctors from different specialties may cooperate to improve patients’ health, with a positive impact on the well-being of patients while allowing doctors to better organize their work. Existing health management systems include some useful features, such as the ability to continuously monitor health conditions and to check whether the patient is actually following the prescribed therapy. The present proposal advances the state of the art, because plans devised by doctors for personalized medical and psychological care are meant to be managed by a Personal Assistant Agent (PA) entertaining a trust relationship with the patient. The PA can thus unobtrusively monitor whether a patient adheres to the prescribed protocol, can provide company and support in achieving nutritional and physical goals that involve some effort and potential discomfort, and can check whether the expected benefits are actually achieved. This should result in better patient satisfaction and in perceived higher well-being. The PA can also provide support in case of serious illnesses involving invasive and painful therapies. In any case, doctors can rest assured that they will be consulted if necessary, and that the PAs will provide them with punctual and reliable reports on the patient’s conditions. Thus, our envisaged system is useful for alleviating the workload of physicians, allowing, on a large scale, a more productive use of the human and financial resources of national health systems. A minimal sketch of such adherence monitoring and reporting is given below.
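As a closing illustration of the care-plan scenario, the following sketch lets a PA compare a patient’s daily actions with the prescribed protocol and produce a compact report for the treating doctor; the protocol format and the attention threshold are invented for the example.

```python
def adherence_report(patient_id, prescribed, performed):
    """Compare prescribed protocol steps with what the patient actually did,
    and summarize the result for the treating doctor."""
    missed = [step for step in prescribed if step not in performed]
    adherence = (1.0 - len(missed) / len(prescribed)) if prescribed else 1.0
    return {
        "patient": patient_id,
        "adherence": round(adherence, 2),
        "missed_steps": missed,
        "needs_doctor_attention": adherence < 0.8,   # illustrative threshold
    }

# Example: one day of a (hypothetical) care plan.
prescribed = ["morning_medication", "30min_walk", "blood_pressure_measurement"]
performed = ["morning_medication", "blood_pressure_measurement"]
print(adherence_report("patient-42", prescribed, performed))
```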

References

  1. Bordini RH, Braubach L, Dastani M, Fallah AE, Gomez-Sanz JJ, et al. (2006) A survey of programming languages and platforms for multi-agent systems. Informatica 30(1): 33-44.
  2. Garro A, Muhlhauser M, Tundis A, Baldoni M, Baroglio C, et al. (2019) Intelligent agents: Multi-agent systems. In: Ranganathan S, Gribskov M, Nakai K, Schonbach C (Eds.), Encyclopedia of bioinformatics and computational biology. Reference Module in Life Sciences 1: 315-320.
  3. Calegari R, Ciatto G, Mascardi V, Omicini A (2021) Logic-based technologies for multi-agent systems: A systematic literature review. Auton Agents Multi Agent Syst 35: 1.
  4. Hamann H, Khaluf Y, Botev J, Soorati MD, Ferrante E, et al. (2016) Hybrid societies: Challenges and perspectives in the design of collective behavior in self-organizing systems. Front Robot AI 3: 14.
  5. Lauretis LD, Costantini S, Pallotta E, Balsano C (2022) An ontology of medical wearables. IEEE 1-4.
  6. Rossi S, Staffa M, Tamburro A (2018) Socially assistive robot for providing recommendations: Comparing a humanoid robot with a mobile application. International Journal of Social Robotics 256-278.
  7. Deters R (2001) Scalability & multi-agent systems. In: Proc. of Workshop "Infrastructure for scalable multi-agent systems" at Agents.
  8. Iqbal S, Altaf W, Aslam M, Mahmood W, Khan MU (2016) Application of intelligent agents in health-care: Review. Artif Intell Rev 46: 83-112.
  9. Gupta S, Pujari S (2009) A multi-agent system (MAS) based scheme for health care and medical diagnosis system. In: International Conference on Intelligent Agent & Multi-Agent Systems. IEEE, Chennai, India.
  10. Jokinen A, Stolt M, Suhonen R (2021) Ethical issues related to eHealth: An Integrative Review. Nurs Ethics 28(2): 253-271.
  11. Caiani E (2020) Ethics of digital health tools. e-Journal of Cardiology Practice 18, N°
  12. Costantini S (2022) Ensuring trustworthy and ethical behaviour in intelligent logical agents. Journal of Logic and Computation 32(2): 443-478.
  13. Martinich A (2009) The philosophy of language. (5th edn), Oxford University Press, New York, USA, pp. 1063.

© 2023 Stefania Costantini. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and building upon the work non-commercially.