Since November 2022, ChatGPT and other artificial intelligence (AI) programs have sparked an excited, and sometimes fearful, discourse in the public sphere, the media, science, the economy, and even religion.
Either we’re doomed, or we’re saved, the pundits seem to say. Questions abound. Will AI soon surpass human intelligence? Will autonomously learning machines take control over all aspects of society? Is humanity obsolete?
RELATED: What Can ChatGPT Tell Us About the Human Soul?
The enthusiasm for AI is mixed with significant ethical and moral doubts and questions. Will this turn into a nightmare, or is it just hype that will fade away?
Let’s be clear: the discussion is still in its early stages. ChatGPT is currently perceived by the public as the AI flagship, but it is only one of several existing AI systems of this kind. In just six months, it attracted over 100 million internet users worldwide.
Why? Because this AI-powered language model produces answers, ready-made speeches, or even exam papers from a single question or keyword: quickly, in detail, and in relatively readable language. Drawing on the knowledge already present in millions of web pages, it accomplishes these tasks through an algorithm that assembles the statistically most probable, yet still meaningful, sequence of words at unimaginable speed.
Evidently, such autonomously learning systems can already outperform human brains in certain respects. Through their so-called “neural networks,” AI systems attempt to mimic the human brain as closely as possible – but the extraordinary complexity of the human brain remains mysterious and the subject of ongoing research. In general, AI systems aim to handle wide-ranging tasks, from language and image recognition to autonomous decision-making, in ways that approximate human intelligence ever more closely. Their fields of application cover nearly everything: healthcare, research, education, public administration, the economy – and even the military and warfare.
Many see AI as one of humanity’s greatest discoveries, while others fear it will lead to humanity’s downfall. Do the risks outweigh the benefits? Will jobs be destroyed on a large scale? Will computers ultimately make decisions about patients and their health? Will computers manipulate voters’ opinions, controlling people through opaque methods?
Philosophers, too, discuss whether AI could develop something like a “consciousness” of its own – perhaps different from human consciousness, but with a kind of “artificial morality” and even a sense of guilt or compassion.
For several years now, concerned and warning voices have been raised – from technology executives, private individuals, artists and cultural figures, scientists of all disciplines, responsible politicians, and representatives of churches and religious communities. In April 2023, numerous public figures, including tech entrepreneurs and developers of ChatGPT, called in an open letter for a six-month AI “pause.”
Then the chief developers at OpenAI/ChatGPT and Google DeepMind – Sam Altman and Demis Hassabis – raised their voices together with hundreds of AI experts through the Center for AI Safety. In late May 2023, they warned in an open letter, brief but drastic, about the dangers of unregulated “machine intelligence,” stating that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The Baha’i Faith takes a position in its own way – through the role its teachings assign to science and religion, and through the central importance they place on the unity of all humanity.
More than two years before the tech community’s warnings, in February 2021, Soraya Bagheri, a representative of the Baha’i International Community (BIC), warned the 59th session of the United Nations Commission for Social Development that, despite AI’s great potential, its unintended consequences could exacerbate global injustice:
Emerging technologies such as artificial intelligence hold great potential to channel the human spirit to address humanity’s most pressing challenges. One challenge we are facing today is that the speed of technological progress has outpaced the ability to reflect.
So out of curiosity, in mid-June 2023, I asked ChatGPT this question: “Can artificial intelligence solve ethical and moral problems?” Within 20 seconds, I received the following response:
The ability of artificial intelligence (AI) to solve ethical and moral problems is a complex and debated topic. AI systems are designed to perform specific tasks based on algorithms and data processing. They do not possess their own moral values, emotions, or comprehensive ethical sensitivity.
Ethics and morality are intricate human concepts rooted in cultural norms, values, and social beliefs. Resolving ethical and moral problems often requires balancing different values, contextual understanding, and moral reflection. These are capabilities typically associated with human consciousness and moral reasoning.
However, certain aspects of AI technology can assist in ethical decision-making. For example, AI systems can analyze large amounts of data and provide information that may support the assessment of ethical implications. The development of ethical guidelines and principles for the use of AI systems is also an important step in minimizing potential negative impacts and ensuring social responsibility.
Nevertheless, solving ethical and moral problems often relies on human values, moral reflection, and a holistic consideration of the situation. It is important to acknowledge that AI systems make decisions based on algorithms and data, which can be flawed or biased. Relying solely on algorithmic solutions to ethical questions can pose challenges.
Ultimately, the responsibility for solving ethical and moral problems lies with the humans who develop, implement, and utilize AI. It requires comprehensive examination of the impacts, ethical reflection and societal discourse to ensure that AI operates in alignment with our shared values and moral principles.
This response does indeed sound self-critical and problem-aware. Yet it leaves unexplained why the same question, asked about a month earlier (in mid-May), produced a response only about half as long, with this final core statement:
Therefore, it is important to establish a strong ethical foundation in the development of AI systems and to design these systems to follow ethical guidelines. Furthermore, it is important for AI systems to be monitored by humans to ensure they make ethically correct decisions.
That earlier response sounded rather confident, not to say naive and illusory.
How can it be that the response to the same question is so different only a month later? Is it a coincidence? When asked about this, ChatGPT responds:
OpenAI can make updates and improvements to the models of ChatGPT after the training period. These updates can lead to slight changes in response generation, for example, to correct previous errors or weaknesses.
Was the difference between the versions a swift, deliberate reaction to the intense and controversial media discussion that followed the two open letters? Were the letters themselves primarily a form of promotional self-presentation? If so, where are the “ethical reflection and societal discourse” demanded above?
Apart from that, who actually establishes these “ethical guidelines” and “shared values,” and who monitors their implementation? So far, at least, the technology corporations, anticipating huge profits, have developed their AI systems behind closed doors.
The development of AI has revealed a significant tension between ethical and moral standards on the one hand, and scientific-technological progress entangled with economic power struggles on the other. Take the climate problem, for example: oil companies recognized the impending climate catastrophe through their own scientific research in the 1970s – likewise conducted behind closed doors – yet they kept this knowledge secret, denied climate change for decades, and obstructed measures to mitigate it in favor of massive profits.
RELATED: Artificial Intelligence: Can Robots be Moral?
The entire human race now faces a similar issue with the development of AI. Baha’is believe that science and religion must agree. According to the Universal House of Justice, the democratically elected global Baha’i administrative body, writing in its public statement One Common Faith, religion and science are closely connected but fundamentally different:
The one discerns and articulates the values unfolding progressively through Divine revelation; the other is the instrumentality through which the human mind explores and is able to exert its influence ever more precisely over the phenomenal world.
Here, the “human mind” can be understood as a reflection of the soul in the human body. If we regard consciousness as part of this reflection, then we cannot attribute a capacity for independent consciousness to AI.
Therefore, religion – as the main basis of ethics and morality throughout human history – is indispensable as a counterweight to scientific and technological progress. The idea of a necessary balance between the two is one of the central principles of Baha’i teachings. Abdu’l-Baha, in a talk he gave in Paris in 1911, compared humanity to a bird with two wings:
Religion and science are the two wings upon which man’s intelligence can soar into the heights, with which the human soul can progress. It is not possible to fly with one wing alone! Should a man try to fly with the wing of religion alone he would quickly fall into the quagmire of superstition, whilst on the other hand, with the wing of science alone he would also make no progress, but fall into the despairing slough of materialism.
Materialism here describes pure self-interest, one-sided gain of profit or power, and the resulting injustice. The danger of slipping into materialism does not come from scientific discovery itself, but from its technological and economic implementation. The discovery of nuclear power, for example, was initially a value-neutral result of human knowledge; it became a problem only through its conversion into a destructive bomb.
Accordingly, AI must not continue to be developed in secrecy. For the future to be livable for all of humanity, we must ensure that clearly established moral and spiritual values serve as a binding compass in all phases of AI development, self-learning, and application. These values must be oriented toward justice, peace, and the well-being of all humanity.
A version of this article was originally published on July 1, 2023 at https://www.perspektivenwechsel-blog.de/bahai-artikel/ki-begeisterung-albtraum