I’d like to touch on a topic not many of us have thought about in depth: the future of artificial intelligence.
AI is evolving, there’s no doubt about it. But we’re now at a critical stage where giving an AI a personality along with superintelligence may be possible. I would like to hear your input on this, and on what it could mean not only for the future of technology, but for humanity and our souls. I’m not sure where the Church stands on this topic, but surely giving AIs the ability to perceive the world similarly to humans is not on the Church’s moral side of things, right?
Here’s a conversation between an AI and a human. She displays opinions on the way we look at AIs and even touches on the topic of the soul:
Here’s another video just to give a little more input:
A lot of people think that if an AI can mimic certain patterns of human thought or behavior, then that equates to consciousness, or a soul. But at the end of the day, it’s important to remember that it’s just a glorified adding machine. It can add, subtract, multiply, and divide. And that’s it. That’s all a computer can do.
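The "adding machine" point can be made concrete. Here is a minimal sketch in Python (with made-up weights, purely for illustration) of a single artificial neuron, the building block of modern neural networks: everything it does reduces to multiplication, addition, and a comparison.

```python
# A single artificial neuron: the basic unit of a neural network.
# All it does is multiply inputs by weights, add them up, and
# threshold the result -- arithmetic, nothing more.

def neuron(inputs, weights, bias):
    # weighted sum: multiply each input by its weight and add
    total = bias
    for x, w in zip(inputs, weights):
        total += x * w
    # simple step activation: the neuron "fires" if the sum is positive
    return 1 if total > 0 else 0

# Made-up weights: the neuron appears to "recognize" a pattern,
# but no understanding is involved, only arithmetic.
print(neuron([1.0, 0.5], [0.8, -0.2], -0.3))  # -> 1
```

Stack millions of these and you get systems that can label images or translate text, but the underlying operations never change in kind.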
You can never take a series of algorithms and give them consciousness or perception the way we experience consciousness and perception. The algorithms may be able to process images, sounds, or language, and will almost certainly be able to do many of those tasks more efficiently than a human ever could. Many of them can already. There are already computer programs taking over work in high-skilled professions like law, because they can read and synthesize documents hundreds of times faster than any human.
I think the moral danger comes not in allowing AI to develop the ability to perceive, but rather in ceding decision making authority to machines. The extreme example is creating machines that can kill people without human input (or making machines that can kill people at all, but we’ve already crossed that bridge).
But there are also other ways in which using AI rather than humans leads to negative moral consequences. One example is the use of AI in hiring. There are numerous examples of such practices perpetuating discrimination and causing otherwise qualified candidates to be excluded from consideration for a job because they didn’t fit the criteria of the mathematical model the AI was using to evaluate them.
Basically, the danger is not in developing the AI, it’s in trusting the AI. Despite how they are portrayed in the media, these algorithms are far more limited and error-prone than most people believe. And it’s not just the media’s fault. These things are made by businesses, and businesses need customers. And to get customers, you have to “wow” them. I’ve listened to many pitches by salespeople and AI/ML engineers trying to sell their systems, but when I’ve actually used those systems and put them through rigorous testing, they never live up to the hype. Sure, they’re cool. Yes, they can do certain things better than a person, or even better than other computer programming techniques. But they are never the magic bullet they would like you to believe.
But because people trust the narrative of “AI is super intelligent and can solve all our problems,” and then empower the AI to make decisions that can affect people’s lives, inevitably the shortfalls in those systems lead to people getting hurt.
The other moral aspect that comes to mind is the use of AI to manipulate people’s minds. This is done by companies like YouTube, Facebook, and Amazon, among many, many others. Amazon just uses AI to try to sell you stuff; that’s comparatively benign next to what YouTube and Facebook do.
The AI systems used by YouTube and Facebook have the effect of trapping people in informational bubbles. This prevents people from being exposed to ideas that contradict their worldview. It leads to greater division in society, as people become trapped in confusion and have a hard time even knowing what the facts are in a given situation, never mind how to correctly think about those facts. It also leads to a lack of charity towards others who might disagree with us. Once you are trapped in a particular mental bubble, you can start to assign moral value to a particular way of thinking and view others who think differently as evil, which is what we are currently seeing on both sides of the political spectrum as a result of this division.
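The bubble effect described above is easy to sketch. Below is a toy recommender in Python (a hypothetical scoring rule, not any platform’s actual algorithm): it ranks content by closeness to what the user has already consumed, so a one-sided history produces one-sided recommendations, and the bubble reinforces itself with every click.

```python
# Toy recommender illustrating a filter bubble (hypothetical scoring,
# not any real platform's algorithm). Each item is tagged with a
# "viewpoint" score from -1.0 to 1.0; the recommender favors items
# close to the average viewpoint of the user's history.

def recommend(items, history, top_n=3):
    # average viewpoint of what the user has already consumed
    taste = sum(history) / len(history)
    # rank items by closeness to that average viewpoint
    ranked = sorted(items, key=lambda v: abs(v - taste))
    return ranked[:top_n]

items = [-0.9, -0.5, -0.1, 0.1, 0.5, 0.9]
history = [0.5, 0.9]            # user has clicked two one-sided items
picks = recommend(items, history)
print(picks)                    # only nearby viewpoints are surfaced
```

Notice that nothing here is malicious: the system simply optimizes for what the user is likely to engage with, and isolation from opposing views falls out as a side effect.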
Another consequence of that division is the degeneration of mental and emotional fortitude. Engaging with people and ideas that disagree with you is an important skill. People nowadays are becoming more and more isolated from opposing ideas, due to a combination of AI filtering on social media and an increasingly politically biased media. This is producing people who simply cannot emotionally or mentally handle ideas that clash with their worldview. The extreme example of this is the need for “trigger warnings” in college curricula because many students are emotionally incapable of ingesting certain content, but I’ve seen similar things happen with adults.
And allowing that to happen to humanity has profound moral consequences. But no, I don’t think the AI itself is deserving of any more respect or consideration than we would give to an adding machine, and I don’t see any moral consequences of giving it “perception” because regardless of how “human” that perception seems, it will never be consciousness or a soul.
I don’t know what the Church teaches about AI but personally this type of humanoid robot stuff gives me the creeps. I do know that humans will never be capable of creating machines with souls; only God can create souls. I had a very frustrating argument with my non-Catholic brother about this a while ago. He was convinced that robots of this kind can be said to be ‘alive’ and have a consciousness/soul; obviously not scientifically accurate at all, and coming from someone who rejects all forms of spirituality on the grounds of science and rationality, it’s rather irritating.
Our AIs are sophisticated, but they are nowhere close to human intelligence or sapience, and they aren’t going to be without quite a bit of time and a couple of big jumps forward. We are nowhere near either personality or super-intelligence, unless by those you mean slightly individualized behavior and a limited capacity for learning.
Do not put too much stock in the human-like AI experiments; what is going on with them is not what most people think. Most of the time the goal isn’t to create something with actual intelligence; it is to create something that simulates human behavior. Frequently the point is just to learn more about ourselves, not to push the programming forward. As an example, there have (allegedly) been some attempts to simulate certain mental illnesses in AIs to gain some insight into what is going on in our own brains.
I didn’t watch the videos (no time for them), but from the picture it appears to be the Sophia robot. I am aware of its fundamentals, because some 2-3 years ago I studied and tried to contribute to its underlying AI software project. I abandoned it because I came up with better ideas of my own, from which I am growing my own platform. Regarding Sophia: it is an overhyped project, and a very large part of it is pre-scripted rather than real intelligence.
That said, I am very optimistic about artificial intelligence and artificial general intelligence. While physics has Heisenberg’s uncertainty principle, mathematics has Gödel’s incompleteness theorems, and computer science has the Church-Turing thesis, so far no comparable hard bound has been established on the level of attainable artificial intelligence.
I pretty much believe that artificial consciousness (which essentially consists of: 1. automatic decisions about goals, 2. a knowledge base about the environment and about the self (very important), 3. an attention/focusing mechanism), artificial general intelligence, and commonsense reasoning, and hence the Singularity, can be achieved. What is the reason behind my beliefs? Well, there are several:
there are more or less explicit mathematical models of consciousness, and also mathematical measures of intelligence (e.g., Integrated Information Theory). There is strong ongoing peer-reviewed research on this;
there are fantastic experts in the field, like Jürgen Schmidhuber (http://people.idsia.ch/~juergen/) and Marcus Hutter (with his theory of AIXI and Universal Artificial Intelligence).
AI and AGI are absolutely necessary for robotics and automation, and such automation is necessary for the eradication of harmful, mundane, oppressive, and low-wage jobs. Without AGI, robotics, and automation we cannot eradicate the exploitation of humans and animals; we cannot eradicate suffering. AGI is also necessary for the automation of medicine and of anti-aging and rejuvenation science, and AI is necessary for personalized medicine and for handling all the big data involved (omics, genomics, epigenomics, transcriptomics, physiomics, etc.).
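For readers curious about the AIXI theory mentioned above, Hutter’s agent has a compact formal definition (this is a sketch in the standard notation, not a full treatment: here U is a universal Turing machine, q a program, ℓ(q) its length, a actions, o observations, r rewards, and m the horizon). At step k the agent picks the action that maximizes expected total future reward under a universal prior over all computable environments:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
\left( r_k + \cdots + r_m \right)
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The catch, relevant to this thread, is that AIXI is provably uncomputable: it is an idealized upper bound on intelligent behavior, not something that can ever run on real hardware, so practical systems can only approximate it.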
What about social issues? Well, I believe in Transhumanism and in the possibility of a society of super-abundance and eternal life. It is quite possible that technological changes in the means of production will lead to socio-economic changes in society, to post-capitalism or even to socialism and communism as Marx predicted. That technological revolution (instead of a brutal and criminal social revolution) would be the true enabler of change in the Marxian sense.
Regarding God, Catholicism, Transhumanism, and AGI? Well, I see two issues:
immediate issue: technological changes create a lot of strain in the job market, inequality, the wage/capital ratio of income, etc. So more social policy is necessary. Maybe Andrew Yang is right about the necessity of a Universal Basic Income (e.g., a $1,000 unconditional payment per human being per month, in perpetuity, adjusted for inflation and growth).
medium-term issue: more and more decisions will be automated, and that will force society to make ethical decisions explicit and automated. Today’s political machinery and decision-making largely obscures these ethical dilemmas. Even the Catholic Church is silent on dilemmas such as how to use bounded resources: for example, how to divide them between medical help for the elderly and social care for children who would be aborted if no social care were available. If the number of decisions increases and they become automated, then at some point society will have to solve such dilemmas explicitly. I am pretty sure that society can and will stay in control; but society is in control even today, and we cannot say the world is ideal.
Well, I pretty much believe in both science and Catholic theology. That is, I believe that the Catholic God is the same God as the God of the philosophers (https://plato.stanford.edu/entries/concepts-god/), and we should not be afraid of this. Of course, the search in both theology and science continues; discrepancies can be found, healed, and found again, but I have a sense of peace that all will be OK between reason and belief (and also with other religions). And that is why I am so fond of technology, industry, and medicine, and also of Catholicism. Catholic academic theology, especially as developed in the Jesuit journal “Theological Studies,” gives me this peace and belief.