Medicina y ética

Online version ISSN 2594-2166; print version ISSN 0188-5022

Med. ética vol. 34 no. 2, Ciudad de México, Apr./Jun. 2023. Epub 30-Jun-2023

https://doi.org/10.36105/mye.2023v34n2.01 

Articles

Rome call for AI Ethics: the birth of a movement

*Chancellor of the Pontifical Academy for Life, Vatican City, Italy

**Press Office of the RenAIssance Foundation, Vatican City, Italy. E-mail: elisabetta@gmail.com.


Abstract

The use of Artificial Intelligence (AI) and its development in recent years, as well as the role it played during the 2020 pandemic, have aroused great interest in its applications in favor of life on the one hand and, on the other, fears regarding its adherence to ethical criteria that promote and defend human dignity, justice, and the principles of sociability and subsidiarity. Hence the intention of Pope Francis and Monsignor Paglia, from the Pontifical Academy for Life, to launch a call to reflect on the incorporation of ethics into AI. At the same time, the RenAIssance Foundation hosts this call for the incorporation of ethical principles into the different applications of AI, from a global perspective and in all sectors.

Keywords: artificial intelligence; ethics; human dignity; justice; health; rome call

Resumen

El uso de la Inteligencia Artificial (IA) y su desarrollo en los últimos años, así como el papel que jugó durante la pandemia en 2020 ha suscitado, por un lado, un gran interés en sus aplicaciones en favor de la vida y, por el otro, temores respecto a su apego a criterios éticos que promueven y defienden la dignidad humana, la justicia, el principio de sociabilidad y de subsidiaridad. Por ello surge la intención del papa Francisco y de monseñor Paglia, desde la Academia Pontificia para la Vida, de lanzar el llamado a reflexionar sobre la incorporación de la ética en la IA; a la par, la fundación RenAIssance alberga el llamado para que, desde una perspectiva global y en todos los sectores, se incorporen principios éticos en las diferentes aplicaciones de la IA.

Palabras clave: inteligencia artificial; ética; dignidad humana; justicia; salud; rome call

1. AI: applications and risks

In the words of the Holy Father Francis, “the digital galaxy, and specifically AI, is at the very heart of the epochal change we are experiencing” (1). The circumstances that on 28 February 2020 led to the first signing of the Rome Call for AI Ethics (2), and to all the actions and reactions that followed, are very complex: in order to clarify the reasons that led to this call, it is necessary to illustrate its historical context and its objectives.

According to the official definition of the European Union, which is devoting attention and ample resources to the topic (3), AI “is the ability of a machine to display human-like capabilities such as reasoning, learning, planning and creativity”. AI enables a technological system to understand its environment, relate to what it perceives, solve problems and, above all, act to achieve a specific goal. The mechanism seems simple enough: a computer receives data (either already prepared or collected through specific sensors, such as a camera), processes it and provides a response. The core of the debate surrounding AI, and what makes this specific technology unique and enormously powerful, is its ability to act on its own: AI adapts its behavior according to the situation, analyses the effects of its previous actions and works autonomously.
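
To make this perceive-process-act-adapt cycle concrete, the toy sketch below simulates a trivially simple agent that adjusts its own behavior in response to the effects of its previous actions. Everything in it (the thermostat scenario, the class and function names) is a hypothetical illustration and not a description of any system mentioned in this article.

# A minimal, purely illustrative sketch of the perceive-process-act-adapt loop.
import random

class ToyThermostatAgent:
    """Keeps a toy room temperature near a target by adjusting a heater."""

    def __init__(self, target: float):
        self.target = target
        self.heater_power = 0.5  # initial action parameter

    def perceive(self, temperature: float) -> float:
        # Receive data from a "sensor" (here, a plain number).
        return temperature

    def act(self) -> float:
        # Act toward the goal: how much heat to add this step.
        return self.heater_power

    def adapt(self, new_temp: float) -> None:
        # Analyse the effect of the previous action and adjust behavior.
        if new_temp < self.target:
            self.heater_power = min(1.0, self.heater_power + 0.1)
        else:
            self.heater_power = max(0.0, self.heater_power - 0.1)

def simulate(steps: int = 20) -> None:
    agent = ToyThermostatAgent(target=21.0)
    temperature = 15.0
    for _ in range(steps):
        sensed = agent.perceive(temperature)
        heat = agent.act()
        # A toy environment: the room loses some heat and fluctuates slightly.
        temperature = sensed + heat - 0.3 + random.uniform(-0.1, 0.1)
        agent.adapt(temperature)
    print(f"final temperature ~ {temperature:.1f} C")

if __name__ == "__main__":
    simulate()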

In itself, AI is not a modern-day novelty (4), but advances in computer power, the availability of huge amounts of data and the development of new algorithms in recent years have allowed it to make leaps forward on an epoch-making scale.

Are there risks? Many, and differing in nature. A first aspect to consider is its pervasiveness, of which few are fully aware. AI is used daily in advertising and online shopping, to provide suggestions based, for example, on previous purchases, searches and other behaviors recorded online; in machine translation; in the development of self-driving vehicles; in cybersecurity, to recognize trends in the continuous flow of data; in the detection of fake news and disinformation, to identify suspicious words or expressions; and again in transportation, manufacturing plants, the agricultural and food supply chain, and in public administration and services.

All this does not take place in science fiction scenarios or far-fetched predictions, nor does it belong to those niche skills (winning over human champions in a game of chess (5), writing a theatrical script (6), creating works of art (7)) that provide the media with attractive headlines. The uses mentioned so far are already possible and real in many parts of the planet.

However, there are three areas among others in the application of AI that deserve special attention, as they are particularly close to the dimension of the individual and make what is happening in the world of technology more tangible.

2. Security, senior care, and health

The first area concerns security, or rather what in 2019 Shoshana Zuboff called “surveillance capitalism” (8). According to Zuboff, a professor at Harvard Business School, the main concern in the field of AI emerges from the lack of defined and perceivable boundaries, which would allow the big players in Silicon Valley to build “a new economic logic” (9), according to criteria that “will shape the moral and political milieu of 21st-century society and the values of our information civilization” (9). The AI giants, starting from Google, says the scholar, are trying to redefine the global market through their

ability to find data that users had opted to keep private and to infer extensive personal information that users did not provide. These operations were designed to bypass user awareness and, therefore, eliminate any possible friction (9).

Among the many pieces of data provided freely and unknowingly to the Web through social networks or electronic devices, Zuboff mentions information about one’s acquaintances, sleeping habits, the decibel level of the music played in one’s living room, the steps of one’s running shoes, one’s location (which some apps detect every two seconds), floor mapping by robot vacuum cleaners, and much more. In her book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Zuboff also mentions a Facebook company report from 2017 which talks about “detailed psychological insights” that can “pinpoint the exact moment when a teenager needs a ‘confidence boost’ and is, therefore, most vulnerable to a specific configuration of advertising nudges and cues” (9).

In this perspective, for technology the value of the human being lies in the constant flow of data he or she produces. The huge amount of data that is constantly collected is itself a point of reflection, and it clearly raises sensitive topics that are worth mentioning. In this context, we will briefly discuss two among others: i) confidentiality and privacy; and ii) informed consent.

The confidentiality and privacy of this constant flow of data is a major problem that must be considered. What happens to these data? What is the proper way to store them in order to preserve users’ privacy? The General Data Protection Regulation of the European Union (GDPR), also known as Regulation 679/2016, is a good starting point. In Italy, the operability of the GDPR is guaranteed by Legislative Decree 101/2018, which amends the Privacy Code 196/2003. However, it is not evident how to manage the data coming from AI, especially because of the ever faster development of AI itself. As an illustration of this complexity, the right to be forgotten (RTBF), or right to oblivion (art. 17), which establishes the right to have private information about a person removed from the Internet, is anything but obvious to apply in the context of AI. Especially in the sensitive context of health, new technologies based on AI collect information about patients that is strictly confidential, and it is even more important to understand how to store these pieces of information properly. E-health is the new frontier of medicine and brings consequences. An interesting example is a relatively recent application of AI at the Mayo Clinic called Sensely (10), a virtual assistant that interacts with patients and helps them monitor their health. On the official website, we can read the motto of this conversational platform: “Increasing access. Lowering costs. Improving health”; nonetheless, as already mentioned, it is not free from problems in terms of data privacy. The same applies to AI used in human genetics in general (11).
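
As a purely illustrative sketch of why the right to be forgotten is harder to honour for AI than for ordinary record keeping, the toy example below (hypothetical data and invented names) deletes a stored record on request, yet a “model” already derived from that record keeps reflecting it unless it is recomputed or otherwise made to unlearn the data.

# Illustrative only: hypothetical records and a toy "model" (a simple average).
from statistics import mean

# Hypothetical stored records: patient id -> a sensitive measurement.
records = {"p1": 120, "p2": 135, "p3": 150}

# A "model" derived from the data (here, just the average used for predictions).
model_estimate = mean(records.values())

# Honouring an erasure request for patient "p2" removes the stored record...
del records["p2"]

# ...but the derived model still reflects the erased data unless it is
# recomputed (retrained) or otherwise made to "unlearn" it.
print("records after erasure:", records)
print("estimate still influenced by p2:", model_estimate)
print("estimate after recomputation:", mean(records.values()))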

Informed consent, a delicate topic in itself, can be explained in terms of people’s right to know and, even more delicate, to understand what is happening to themselves and to what relates to them. In some circumstances, it may be difficult to properly inform users about the processing of their data; sometimes this information is made fuzzy on purpose. For example, the so-called “cookies” we are invited to accept every time we browse are often difficult to manage or avoid. As highlighted by Gefenas (12), an interesting example of the change of paradigm brought about by emerging technologies and informed consent is provided by biobanks, which are “instrumental in reaching the ambitious goals of […] medicine as they are crucial to discover different types of biomarkers to diagnose, treat, or prevent various human diseases” (12). It remains undoubtedly sensitive, however, how i) to properly store the collected data (similarly to the above-mentioned application of AI in the healthcare system) and ii) to inform patients about the process of data collection, its storage, its treatment, and their rights throughout the process. Emerging technologies are challenging the right of patients (and citizens in general) to be properly informed and to have their personal data protected from the public domain; it is thus evident that AI, a further step in terms of new technologies, calls for specific regulation in order to preserve individuals’ right to know.

A second area for reflection is that of so-called senior care, the medical care of the elderly. According to a recent report in the British medical journal The Lancet, in the context of long-term care of the elderly, AI-enhanced interventions are “promising innovations that could reshape the global landscape” (13).

The situation is the following: in more advanced countries, the population is ageing, unpaid caregivers (i.e. family members or volunteers) are fewer and fewer, and paid caregivers are increasingly expensive. In this context, the adoption of technological services that “assist people with activities of daily living and encourage social participation and management of chronic health conditions” (13) is welcome, even when non-human. Wearable biometric trackers, capable of detecting sudden physical changes or falls and issuing an alarm, have been on the market for quite some time. There are different systems and different techniques: AiCure makes use of smartphone cameras and an AI algorithm to monitor the taking of medication by the elderly who may have problems with eyesight, dexterity, cognition or memory loss (14); ElliQ, developed by the Israeli company Intuition Robotics, can entertain the elderly with easy conversations, remind them to take their medication, accompany them in light physical activities, and be integrated with messaging and social media platforms, allowing the family to remotely monitor the elderly person’s condition (15). The proposals of Californian Cherry Labs, which aim at “more safety, more productivity and less costs” (16), support elderly persons in managing the last part of their lives, provided they are willing to equip themselves with six AI cameras with sound recorders, agree to the use of facial recognition, and accept that family members and caregivers may monitor in real time how and where the patient is (17).

For AI, the elderly person is a subject to be studied in order to develop sustainable solutions. This leads to sensitive topics, such as the effort to increase people’s longevity and to diminish their physical and mental impairments and disabilities.

In fact, it must not be forgotten that, even in developing AI, “we need the human touch”, as suggested by Takahashi (18). The danger of de-personalization of medicine due to the application of AI is tangible and may have repercussions on the quality of care. In particular, the patient-doctor relationship may suffer from the implementation of AI, as “doctors are losing their monopoly over medical skills, and patients may not respect doctors as much as they used to. Considering the professional responsibility which is part of the medical profession, erosion of the close relationship between doctors and patients is a crisis which cannot be ignored. Doctors must be aware of the impact of this crisis and look for ways to avoid such a disaster” (18). And this is true especially in elder care, in which patients are frail subjects not only because they are patients, but also because they belong to a vulnerable group of the population.

From this point of view, we must pursue an anthropocentric approach from a legal perspective (19), too: not only on the theoretical level, but also on the practical one, it is important not to forget the centrality of human dignity even when the decision-making process is automated and delegated to AI. As suggested by Spiller, this assumption is a key element for analysing two main principles:

on the one hand, there is the principle of digital by default: a strategy based on the presumption that technology may positively contribute to the efficiency of decision-making procedures so that to make it a new right. On the other, instead, there are the issues concerning the so-called non-exclusivity principle: processes, ensuring the right to challenge data-driven decisions before a human expert operator (19).

Moreover, it is not obvious that the remarkable discoveries made thanks to AI will contribute to a more equitable healthcare system. In fact, as stated by Rigoli (20), on the one hand

the gradual advances using artificial intelligence in many aspects of health services enable to expand the reach and benefits of knowledge and cure […]. At the same time, a disquieting number of studies begin to show how this potential is also an amplifier of biased policies [and] may reflect the trends towards exclusion and discrimination, or alternatively, serve as a tool for facilitating and improving access of the vulnerable population both for care and prevention (20).

Also, AI has the potential “for amplifying existing injustices, the lack of transparency (even for its own designers) of the internal working of most algorithms as well as the global production and distribution of AI application” (20), making its application particularly delicate when dealing with vulnerable populations.

A final example concerns one of the fields that involves individuals in their most tangible dimension: health. According to PricewaterhouseCoopers (21) (a multinational network of professional services firms, operating in 158 countries), AI is clearly effective in so-called early detection, where accurately identifying diseases such as cancer in the early stages makes all the difference. The case of breast cancer is emblematic: AI almost completely eliminates false positives, the accuracy achieved is 99%, and the costs of unnecessary biopsies are eliminated (22).
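
To clarify what figures such as “99% accuracy” and “almost no false positives” refer to, the short calculation below works through accuracy and the false-positive rate for a hypothetical confusion matrix of 1,000 screened patients; the numbers are invented for illustration and are not taken from the cited study.

# Illustrative only: invented screening numbers, not the cited study's data.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy and false-positive rate from a confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "false_positive_rate": fp / (fp + tn),  # healthy cases wrongly flagged
    }

# Hypothetical results for 1,000 screened patients.
print(screening_metrics(tp=48, fp=5, tn=942, fn=5))
# -> accuracy 0.99, false_positive_rate about 0.005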

The leading players in the sector include DeepMind Health, a British company controlled by Alphabet (Google’s parent company), which initially rose to prominence for the accuracy of its diagnoses. Trained to identify ten ocular pathologies from optical scans, its system provided exact indications in 94% of cases: percentages worthy of the best specialists. At the same time, however, it acquired the sensitive data of 1.6 million patients without giving them any prior information. The data collected by the system included details of admissions, discharges, accidents, illnesses and critical care, but also diagnoses of HIV or depression, overdoses and other emergencies.

In response to the scandal, DeepMind Health established an ethics committee; then, in 2018, the company was fully absorbed into Google and the issue was put aside. Last year, the company added the so-called “protein folding problem” to its solution portfolio. The achievement, celebrated by the American Association for the Advancement of Science as the breakthrough of the year for 2021 (23), consists in identifying the 3D projections, i.e. the structure, of more than 200,000 known proteins. This should lead to a deeper and faster understanding of diseases and the creation of new drugs (24).

As in so many other fields, also in health, medicine and biology the fundamental problem of artificial intelligence is its amazing efficiency. AI processes mammoth amounts of data in humanly inaccessible times. It creates connections and relations among diverse elements. It analyses and profiles users with astonishing precision. Above all, it is a powerful tool that accurately carries out orders to obtain results. At the same time, however, it fragments both the human being and the relationship between doctor and patient into mini-problems: a person is taken care of through an algorithm, an app, sensors, data analysis and much more.

3. Rome Call and development for a new Algor-ethics

In the context described so far, it is perhaps this last example, concerning the care of the individual, that illustrates most clearly a risk which emerges in every field: the hyper-specialist fragmentation produced by the most advanced technologies can cause us to lose sight of the human dimension.

How can this fragmentation be overcome, and how can we embrace the urgency expressed by Pope Francis in the following words: “Solid reasons need to be developed to promote perseverance in the pursuit of the common good, even when no immediate advantage is apparent” (25)? This is the question that in 2020 led the Pontifical Academy for Life to organize the conference “RenAIssance. For a Humanistic Artificial Intelligence”, and to jointly promote, on 28 February of the same year in Rome, the signing of a call to responsibility.

This document, named the Rome Call for AI Ethics, was first signed by Monsignor Vincenzo Paglia, President of the Pontifical Academy for Life; Brad Smith, President of Microsoft; John Kelly III, Executive Vice President of IBM; Qu Dongyu, Director-General of FAO; and the then Minister for Technological Innovation and Digitisation, Paola Pisano, on behalf of the Italian government, with the presence and endorsement of the then President of the European Parliament, David Sassoli (2).

The idea behind the Rome Call stems from the realization that new technologies, however powerful they may be, cannot be regarded as mere tools to perform certain functions more quickly and efficiently.

The real novelty is that this new wave of information technology classifies itself not as a specific technology, but as a general technology, that is, as a type of technology that does not perform a single specific task, but that changes the way we do all things (26).

By operating in this way, AI changes the way in which we understand reality and ourselves, and poses radical questions about the identity of the human subject.

In order to steer AI’s challenges towards respecting the dignity of every human being, the Rome Call proposes an algor-ethics (27), that is, an ethics of algorithms, not as an instrument of restraint but to provide direction and guidance. In the words of the Pontiff, algor-ethics is aimed at

ensuring a competent and shared review of the processes by which we integrate relationships between human beings and today’s technology. In our common pursuit of these goals, a critical contribution can be made by the principles of the Church’s social teaching: the dignity of the person, justice, subsidiarity and solidarity. These are expressions of our commitment to be at the service of every individual in his or her integrity and of all people, without discrimination or exclusion. The complexity of the technological world demands of us an increasingly clear ethical framework, to make this commitment truly effective (28).

The target audience is society, organizations, governments, institutions, international tech companies: everyone is needed to share a sense of responsibility that will guarantee all mankind a future in which digital innovation and technological progress place the human being at the centre.

With the aim of promoting a new algor-ethics and sharing its values, signatories pledge to demand the development of an AI that does not merely aim to make more profit or gradually replace humans in the workplace; those who sign also commit themselves to six fundamental principles.

The principles of the Rome Call are: Transparency: as a matter of principle, AI systems must be comprehensible; Inclusion: the needs of all human beings must be taken into account, so that everyone can benefit and all individuals can be offered the best possible conditions to express themselves and develop; Accountability: those who design and implement AI solutions must do so with accountability and transparency; Impartiality: AI must not create or act according to prejudice, thus safeguarding fairness and human dignity; Reliability: AI systems must be able to operate in a reliable manner; Security and privacy: AI systems must function securely and respect the privacy of users (29).

An important contribution to the development of ethical principles for AI and digital systems is the article by Sinibaldi and colleagues published in Nature Machine Intelligence, which encourages a transdisciplinary approach and discussion (30).

4. Rome Call and RenAIssance Foundation

The first signing of the Rome Call in 2020 was a historic event, but also a beginning. Just as in bioethics different branches of knowledge are involved in an ongoing debate aimed at finding how best to take care of human beings in the age of technology, so the Rome Call brought together its first partners. The call was answered by institutional partners, as bearers of values; technological ones, which implement solutions; and political ones, which regulate the limits of use and management of the digital world. The Rome Call took shape on the basis of this background: sharing aspirations of personal dignity, justice, subsidiarity and solidarity with partners capable of making a difference is a fundamental first step.

The Rome Call, however, is not only a symbolic moment of encounter and endorsement; it is first and foremost a cultural movement that aims to bring about change, as demonstrated during the pandemic despite the countless difficulties caused by forced distancing and the impossibility of physical encounters among different realities.

Covid-19, which paralyzed the entire planet, highlighted the prophetic value of the Rome Call. At a time when meeting in person became impossible, issues such as data management, the relationship between doctor and patient, the role of human beings in carrying out work, privacy, and many other aspects captured the attention of a range of actors who until recently had shown little interest in the urgency of an ethics of AI. Many companies have tried to make this vision their own. It is interesting to note that even the world of academia recognized that it lacked the intellectual tools needed to train its members; as a consequence, universities decided both to join the Call and to develop curricula capable of filling this educational gap.

Confirmation of the Holy See’s interest in a universal dialogue at the boundary between humanity and technology can also be found in the establishment of the RenAIssance Foundation. On 12 April 2021, the Holy Father Francis, upon the proposal of His Excellency Msgr. Vincenzo Paglia, President of the Pontifical Academy for Life, established this institution with public canonical juridical personality. Located in the Vatican City State at the Pontifical Academy for Life, to which it belongs and on whose behalf it acts, the non-profit RenAIssance Foundation aims to support the anthropological and ethical reflection on the impact of the new technologies on human life, and is registered in the NPOs (Non-Profit Organizations) list with the Governorate of the Vatican City State.

In order to achieve these goals, the RenAIssance Foundation aims to promote an anthropological and ethical reflection on AI and the new technologies among people qualified for their scientific, ecclesial, cultural, entrepreneurial and professional commitment in society; to encourage scientific initiatives and collaboration with International Bodies, Sovereign States, universities, research centres, private and public companies that develop activities, services and studies in the field of AI in order to disseminate the Rome Call for AI Ethics; to promote fundraising to support these activities.

The Call continues to attract commitment from many. In the two years since it was first signed, the Rome Call has been the object of careful and constant dissemination in organizations such as Oracle, Facebook, the Mozilla Foundation, UNESCO, the Bioethics Institute of the Catholic University of Buenos Aires, the Human Technology Lab of the Catholic University of Milan, MIT and countless others.

Alongside the debate with the various stakeholders, interest in the concrete application of AI became apparent as early as September 2020 with the organization of the online event AI, Food for All. Dialogue and Experiences. In this context, FAO, Microsoft and IBM joined the Pontifical Academy for Life in relaunching efforts to develop inclusive forms of AI and promote sustainable ways to achieve food and nutrition security (31). The high-profile event focused on identifying concrete ways in which AI can contribute to the goal of feeding an estimated world population of nearly 10 billion by 2050, while safeguarding natural resources and addressing challenges such as climate change and the impact of global events such as the COVID-19 pandemic.

5. Perspectives

The Call’s media impact, that is, its ability to perceive and give shape to a universal sense of urgency, was rewarded in 2021 by Stanford University, which included the Rome Call in its AI Index Report, ranking it as one of the top five topical issues of the previous year in the field of the ethical use of AI (32). The AI Index Report provides prestigious international acknowledgment: produced by the Institute for Human-Centered AI at the same university, it is a comprehensive document that annually outlines, collects, analyses and presents data on AI, with the aim of providing impartial, rigorously audited and international data to politicians, researchers, managers, journalists and the general public, with a view to developing, reflecting on and expanding knowledge of this specific technology. Furthermore, particular attention should be paid to the application of AI in the healthcare system, which is extremely sensitive due to its direct relationship with frail subjects.

Also of note in 2021 was the presentation of the Rome Call to the Freedom Online Coalition, an association of 32 governments committed to working together to support internet freedom and protect fundamental human rights (freedom of expression, association, assembly and online privacy) worldwide.

With the much hoped-for weakening of the COVID-19 pandemic and the consequent possibility of traveling again, some key events are on the horizon. At the end of October 2022, at the University of Notre Dame in Indiana (USA), representatives of universities from all over the world will sign the Rome Call as part of the Global University Summit.

In January 2023, the first interfaith signing of a document on the ethics of AI will take place in Rome, with representatives of the three Abrahamic religions (Judaism, Christianity and Islam) joining together to call for the development of an AI that respects the principles of the Call. A discussion and signing of the Call by the major Asian religions is planned in Tokyo for the end of 2023.

The idea of confronting shared universal values, moving away from a purely Western model, represents the new horizon: a horizon grounded in respecting, in every step, the uniqueness and dignity of the human being. With this goal in mind, the ambition is to extend the agreement to include further signatures from different religions and cultures.

References

1. Holy See Press Office. Address prepared by Pope Francis read by H.E. Msgr. Vincenzo Paglia, President of the Pontifical Academy for Life, on the occasion of the meeting with the participants in the Plenary Assembly [Internet]. 2020 [accessed 30 September 2022]. Available at: https://press.vatican.va/content/salastampa/it/bollettino/pubblico/2020/02/28/0134/00291.html

2. RenAIssance Foundation. Text of the Rome Call for AI Ethics [Internet]. [accessed 30 September 2022]. Available at: http://www.romecall.org/

3. European Parliament. What is artificial intelligence and how is it used? [Internet]. [accessed 30 September 2022]. Available at: https://www.europarl.europa.eu/news/it/headlines/society/20200827STO85804/che-cos-e-l-intelligenza-artificiale-e-come-viene-usata

4. Stanford University. Appendix I: A Short History of AI [Internet]. [accessed 30 September 2022]. Available at: https://ai100.stanford.edu/2016-report/appendix-i-short-history-ai

5. Wikipedia. Deep Blue (chess computer) [Internet]. [accessed 30 September 2022]. Available at: https://en.wikipedia.org/wiki/Deep_Blue_(chess_computer)

6. Science. Kinky and absurd: The first AI-written play isn’t Shakespeare, but it has its moments. Artificial intelligence generates a story about a robot trying to understand humanity [Internet]. [accessed 30 September 2022]. Available at: https://www.science.org/content/article/kinky-and-absurd-first-ai-written-play-isn-t-shakespeare-it-has-its-moments

7. The New York Times. An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy [Internet]. [accessed 30 September 2022]. Available at: https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html

8. Wikipedia. The Age of Surveillance Capitalism [Internet]. [accessed 30 September 2022]. Available at: https://en.wikipedia.org/wiki/The_Age_of_Surveillance_Capitalism

9. Financial Times. Shoshana Zuboff: Facebook, Google and a dark age of surveillance capitalism [Internet]. [accessed 30 September 2022]. Available at: https://www.ft.com/content/7fafec06-1ea2-11e9-b126-46fc3ad87c65

10. Sensely [Internet]. [accessed 30 September 2022]. Available at: http://www.sensely.com

11. Lacobucci S. Privacy e sanità tra Regolamento 679/2016/UE e Codice privacy come modificato dal d.lgs. 101/2018. Il pacchetto europeo protezione dati per quanto riguarda privacy e sanità. In: Mariani L, Pegoraro R, Ruggiu D, editors. Salute della popolazione, Big Data e sistemi integrati. Una proposta etica. Piccin; 2019. p. 1-41.

12. Gefenas E. Biobanking as a case study for changing paradigms of ethics and governance in the context of emerging technologies. In: Caenazzo L, Mariani L, Pegoraro R, editors. Convergence of new emerging technologies. Ethical challenges and new responsibilities. Piccin; 2017. p. 73-82.

13. The Lancet. Artificial intelligence for older people receiving long-term care: a systematic review of acceptability and effectiveness studies [Internet]. [accessed 30 September 2022]. Available at: https://www.thelancet.com/journals/lanhl/article/PIIS2666-7568(22)00034-4/fulltext

14. AiCure [Internet]. [accessed 30 September 2022]. Available at: https://aicure.com/

15. ElliQ [Internet]. [accessed 30 September 2022]. Available at: https://elliq.com/

16. Cherry Labs [Internet]. [accessed 30 September 2022]. Available at: https://www.cherrylabs.ai/

17. The Guardian. The future of elder care is here - and it’s artificial intelligence [Internet]. [accessed 30 September 2022]. Available at: https://www.theguardian.com/us-news/2021/jun/03/elder-care-artificial-intelligence-software

18. Takahashi Y. The Clinical Consequence of AI. In: Paglia V, Pegoraro R, editors. The good algorithm? Artificial intelligence: ethics, law, health. XXVI General Assembly of Members. Rome: Pontifical Academy for Life; 2020. p. 103-115.

19. Spiller E. In Tech we Trust… but we need Human as a Right. In: Paglia V, Pegoraro R, editors. The good algorithm? Artificial intelligence: ethics, law, health. XXVI General Assembly of Members. Rome: Pontifical Academy for Life; 2020. p. 270.

20. Rigoli F. Artificial Intelligence in the road of Health for All. Perils and Hope. In: Paglia V, Pegoraro R, editors. The good algorithm? Artificial intelligence: ethics, law, health. XXVI General Assembly of Members. Rome: Pontifical Academy for Life; 2020. p. 123-140.

21. PricewaterhouseCoopers. No longer science fiction, AI and robotics are transforming healthcare [Internet]. [accessed 30 September 2022]. Available at: https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/transforming-healthcare.html

22. Wired. This AI software can tell if you’re at risk from cancer before symptoms appear [Internet]. [accessed 30 September 2022]. Available at: https://www.wired.co.uk/article/cancer-risk-ai-mammograms

23. American Association for the Advancement of Science. Science’s 2021 Breakthrough: AI-powered Protein Prediction [Internet]. [accessed 30 September 2022]. Available at: https://www.aaas.org/news/sciences-2021-breakthrough-ai-powered-protein-prediction

24. The New York Times. A.I. Predicts the Shape of Nearly Every Protein Known to Science [Internet]. [accessed 30 September 2022]. Available at: https://archive.ph/F12W4

25. Holy See Press Office. Address prepared by Pope Francis read by H.E. Msgr. Vincenzo Paglia, President of the Pontifical Academy for Life, on the occasion of the meeting with the participants in the Plenary Assembly [Internet]. 2020 [accessed 30 September 2022]. Available at: https://press.vatican.va/content/salastampa/it/bollettino/pubblico/2020/02/28/0134/00291.html

26. Benanti P. Le macchine sapienti. Bologna: Marietti; 2018.

27. Benanti P. Oracles: Tra algoretica e algocrazia. Rome: Sossella; 2018.

28. Holy See Press Office. Address prepared by Pope Francis read by H.E. Msgr. Vincenzo Paglia, President of the Pontifical Academy for Life [Internet]. [accessed 30 September 2022]. Available at: https://www.vatican.va/content/francesco/it/speeches/2020/february/documents/papa-francesco_20200228_accademia-perlavita.html

29. Paglia V, Pegoraro R, editors. The Good Algorithm? Artificial Intelligence: Ethics, Law, Health. XXVI General Assembly of Members 2020. Vatican City: Pontifical Academy for Life; 2020.

30. Sinibaldi E, Gastmans C, Yáñez M, et al. Contributions from the Catholic Church to ethical reflections in the digital era. Nature Machine Intelligence. 2020;2:242-244.

31. Pontifical Academy for Life. AI, Food for All. Dialogue and Experiences [Internet]. [accessed 30 September 2022]. Available at: https://www.academyforlife.va/content/pav/en/news/2020/international-conference-ai-food-for-all.html

32. Stanford University. Artificial Intelligence Index Report 2021 [Internet]. [accessed 30 September 2022]. Available at: https://aiindex.stanford.edu/wp-content/uploads/2021/11/2021-AI-Index-Report_Master.pdf

Received: January 12, 2023; Accepted: January 25, 2023

Creative Commons License: This is an open-access article published under a Creative Commons license.