

Pope Francis Talked About AI & Ethics at The G7

Last Updated on June 18, 2024 by Editorial Team

Author(s): Harriet Gaywood

Originally published on Towards AI.


Credit: Generated by Dall-E 3

This week, Pope Francis addressed the Group of Seven (G7) Summit in Southern Italy about AI, highlighting the importance of ethics during his address (see full version). He was speaking as part of Italy's bid for the Presidency of the G7.

Pope Francis said, "At a time when artificial intelligence programs are examining human beings and their actions, it is precisely the ethos concerning the understanding of the value and dignity of the human person that is most at risk in the implementation and development of these systems. Indeed, we must remember that no innovation is neutral. Technology is born for a purpose and, in its impact on human society, always represents a form of order in social relations and an arrangement of power, thus enabling certain people to perform specific actions while preventing others from performing different ones. In a more or less explicit way, this constitutive power dimension of technology always includes the worldview of those who invented and developed it."

He added, "We cannot, therefore, conceal the concrete risk, inherent in its fundamental design, that artificial intelligence might limit our worldview to realities expressible in numbers and enclosed in predetermined categories, thereby excluding the contribution of other forms of truth and imposing uniform anthropological, socio-economic and cultural models. The technological paradigm embodied in artificial intelligence runs the risk, then, of becoming a far more dangerous paradigm, which I have already identified as the 'technocratic paradigm'. We cannot allow a tool as powerful and indispensable as artificial intelligence to reinforce such a paradigm, but rather, we must make artificial intelligence a bulwark against its expansion."

So what are the other forms of truth? Which parties or stakeholders should be engaged to ensure that AI develops in a way that is 'beneficial' to society? This is another way of asking which organizations are best placed to govern AI, and how we can define good decision-making. How can we ensure that AI is ethical, or judge what is right and wrong? The G7 is united on some issues, but on AI its members and other governments are not aligned, because their positions reflect their individual geopolitical concerns, while global corporations are struggling to take a 'global approach' and running into cultural and national obstacles. So, what kinds of organizations should lead in defining the values that can shape 'good' decision-making in AI?

The Vatican's Rome Call for AI Ethics has attracted signatories including Microsoft, IBM, Cisco, the FAO, and the Pontifical Academy for Life. Under the auspices of the RenAIssance Foundation, established by and run out of the Vatican City, "it aims to promote a sense of responsibility among organizations, governments, institutions and the private sector with the aim to create a future in which digital innovation and technological progress serve human genius and creativity and not their gradual replacement." This sounds like a laudable and suitably universal ambition that most people would support, but to achieve it, the representation needs to be broad. If one religious organization is leading discussions on this topic, then other religious voices must be included to create a diversity of views. This has happened to some extent: in January 2023, Jewish and Muslim religious leaders signed the Rome Call. So, which other religions should be included if religious leaders are going to steer the direction and contribute truths?

If Italy is successful in its bid for the Presidency of the G7, there has already been a hint of what to expect. In September 2023, Italy's Prime Minister Giorgia Meloni spoke at the UN, saying, "We cannot make the mistake of considering this domain [AI] a 'free zone' without rules. We need global governance mechanisms capable of ensuring that these technologies respect ethical barriers and that the evolution of technology remains at the service of humans and not vice versa. We need to give practical application to the concept of 'algorethics,' that is, giving ethics to algorithms." So, can we define values and ethical behavior that transcend religions? Or will certain governments and politics always dominate?

Business leaders are also being challenged at a national level. In April 2024, the US Department of Homeland Security (DHS) created an AI Safety and Security Board, which includes corporate leaders, academics, policymakers and civil society organizations (see the full list of current DHS board members). Corporations include Adobe, Alphabet, Amazon Web Services, AMD, Cisco, Microsoft, and Nvidia. So, how should a global corporation balance its business interests with protecting national security?

The US is clear about its concerns. It has released guidelines to mitigate AI risks to critical infrastructure, along with a report about AI misuse in the development and production of chemical, biological, radiological, and nuclear (CBRN) threats. This builds on the AI Risk Management Framework (RMF) from the National Institute of Standards and Technology (NIST).

Contrast this with the UK's AI Bill (just seven pages), which includes the establishment of an AI Authority and defines AI as follows: "In this Act 'artificial intelligence' and 'AI' mean (i) technology enabling the programming or training of a device or software to (a) perceive environments through the use of data; (b) interpret data using automated processing designed to approximate cognitive abilities; and (c) make recommendations, predictions or decisions; with a view to achieving a specific objective; and (ii) generative AI, meaning deep or large language models able to generate text and other content based on the data on which they were trained." This feels very vague and limited in its reach, given the number of types of AI not included. A research briefing about the AI Bill in the House of Lords (18 March 2024) states that "at present, it is too soon in the evolution of AI technology to legislate effectively and to do so now may be counterproductive." A 'voluntary' approach of self-governance is therefore encouraged. A 2017 report by PwC (still cited in connection with the recent AI Bill) suggests there are economic benefits, including AI adding GBP 232 billion by 2030 (around 10% of the UK's GDP).

The UK approach contrasts with the recently released EU AI Act, which is more regulatory across all areas of AI and "aims to provide AI developers and deployers with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises (SMEs)."

So the question is, what should be the sources of our truths? What kinds of voices should contribute to deciding whether a decision made by AI is good or bad? Our answer is, of course, based on our experiences, our education, our culture, our families, our subject expertise, and the resulting values. Algorethics is simply how we reflect on the ethical use of algorithms. How should human empathy, experience and emotional intelligence play a role in decision-making and shape algorithms? Can our values be considered in a logical manner to guide the development of AI? What are the challenges of this technological paradigm?

I would love to hear your thoughts about the truths we need to confront to ensure the ethical development of AI.

#AI #Ethics #Algorethics #G7 #RomeCall

Join thousands of data leaders on the AI newsletter. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor.

Published via Towards AI
