Robotics and Artificial Intelligence

Has the Church promulgated any teaching on, or has a faithful Catholic theologian addressed, the development of robots and robotics and/or the computing field of artificial intelligence?

How so?

What would they address?

I guess I didn’t really finish my thought there. What I’m looking for are ethical guidelines and such. For example, reflections on the line between legitimate development and venturing too close to God’s domain, similar to the area of cloning.

Computers are not alive. They have no souls. So, I guess I don’t understand what you are looking for.

Cloning involves living beings.

AI is something entirely new and does not involve anything living.

I think the real danger is that scientists may decide to utilize real human brain tissue coupled with a computer to try to achieve ‘AI’.

But I doubt that achieving AI by wholly artificial means raises any ethical concerns.

I don’t believe machines can ever be developed to the point that they are “alive” and have a soul. It is a non-issue.

Now, Heaven forbid, if a human being is actually cloned someday, I imagine they would be considered a whole person just like people created the “old-fashioned” way.

I know where you are with this thought. Computer scientists are actually thinking along the lines of eternal life by using robotic body parts and somehow putting your “existence” into an indestructible robot frame. It is just sad to think that a lot of money is being put into this kind of research. Instead of thinking about the soul, they treat the brain as the carrier of life. I know I am not capable of understanding this kind of “progress”, but if children are raised with this concept, it will be a normal thought process to them. I would hope for more from our society for their sake.

Only God can create a soul.
So man will never be able to create an AI with a soul.

We may be able to simulate it, but never duplicate it.

Machines don’t have souls, but many people (especially in the sciences) don’t believe in a soul either.

There certainly is a risk of anthropomorphizing robots with AI, and this topic has been treated in numerous works of popular media. If a robot is created with persuasive AI, how long until the debate rages over whether or not it is sentient and should be given human rights?

Thus the question: has anyone yet treated these topics… how to address such things… whether it is even prudent to pursue such ventures given their inherent risks… what are all the inherent risks…

I believe someone, somewhere has probably already cloned a human; it’s only illegal in certain countries, so nothing is stopping it in some places. Plus, if such a law was created, then it must be possible.

What I question is what kind of being it would turn out to be. Obviously it would not have a soul, but it would be conscious and could learn, interact with other humans, etc. I’m also curious whether one could tell if a being had a soul or not; would there be some way to tell?

This is scary to think about, because such beings would not be subject to God’s laws or His wrath. Man would be this being’s “God”. That alone is scary to think about.

Google “qualia” and see where it takes you.

Peace

Why would he or she “obviously” not have a soul? :ehh:

As for the OP’s question, I think it’s interesting that I ran into this while taking a break between episodes of Time of Eve, a show that addresses this very scenario of near-sentient computers. But honestly, that technology is so far ahead of what we can do right now that it’ll remain a non-issue outside of science fiction for at least the next several decades (if it ever becomes an issue at all).

Now, I have to admit, it is a fascinating question… just not one that demands an answer yet. Right now, I think we’re totally fine treating robots and computers as machines, and nothing more. I don’t think that the risk of people treating machines like humans is a good enough reason to discontinue research.

I wasn’t aware that this was an issue with cloning. Cloning would obviously involve IVF (or a similar process with the same problems), so I knew it was immoral to clone humans, but why is it “venturing too close to God’s domain”? Does that mean it’s immoral to clone animals? Why?

EDIT: Also, if anybody has a document that addresses cloning, that might be useful, too.

I merely mentioned cloning as an illustrative example. When I mention that it may venture too close to God’s domain, I think of the intent, which happens to be close to the intent of many pursuing research in cloning. It is the attitude of “I can be God and create life by my own means.” I think many of the issues here are ones of attitude and intention. Perhaps it is in the same manner as those who ascribe human emotions, motives, and rights to animals. The animal isn’t a human, but a real human thinks it is, more or less. Make any sense?

I think we’re a lot closer than that to creating computers that people might think have unique personalities. Just look at IBM’s Watson and how the popular media portray it or how many people think Apple’s Siri really is alive.

Siri is just annoying. Not that I’ve used it much, since it annoys me. :o

Anyone who thinks that such programs are sentient simply needs to be assured that this isn’t the case. I would say the same assurance should be given to anyone who mistakes animal behavior for human emotions, says they should be given human rights, etc. It’s simply not in line with what’s actually going on. Are you looking for a statement from the Church that affirms this?

For me, creating a machine that can learn, adapt to new environments and tasks, and create new things is when it’ll get interesting. I’m a science fiction fan, so it’s pretty easy for me to imagine humans building something that mimics human interaction, communication, and reasoning. At some point, it’ll come into question how much is mimicry and how much can be called actual thought, and what that means for the programs and for people who interact with them. But right now, as far as I’ve seen, the programs we write just aren’t there yet, and I don’t think you’re going to see ones that are for a while. (That opinion isn’t based on much, though, just what I’ve seen so far.)

I really don’t think that “people might get confused” is a valid reason for discontinuing research. That’s a problem that needs to be solved by educating people. :shrug:

[quote=Lost_Sheep]I don’t believe machines can ever be developed to the point that they are “alive” and have a soul. It is a non-issue.
[/quote]

It is good that you used quotation marks around the word “alive”. What is “alive” and what is not is a thorny question. The best definition is that life is “complex reaction to complex stimuli”, regardless of the material the entity is “made of”. Computers may not be biologically alive, but they can be intellectually alive.

[quote=vz71]Only God can create a soul.
So man will never be able to create an AI with a soul.
[/quote]

The word “soul” has several definitions. One of them is that the soul is supposed to “give” us rational thinking. Building a very good AI will make that assumption useless and superfluous, just like the discovery of microbes made the assumption that certain illnesses are caused by “demonic possession” useless and superfluous. Of course, some higher apes exhibit very human-like behavior; they are able to conduct a conversation via sign language. And they are not supposed to have a “rational” soul. So this whole “soul thing” is just nonsense.

[quote=Kamaduck]Anyone who thinks that such programs are sentient simply needs to be assured that this isn’t the case.
[/quote]

Correct. These programs are still very simple.

[quote=Kamaduck]For me, creating a machine that can learn, adapt to new environments and tasks, and create new things is when it’ll get interesting.
[/quote]

Such systems already exist, in a simple form. Medical diagnostic programs, stock market analysis programs, and a whole lot more. There are self-modifying programs, too. These programs attempt to optimize their own performance. Sometimes they fail, other times they succeed. In the end, not even their creators can figure out how they achieved their superior performance.

[quote=Kamaduck]At some point, it’ll come into question how much is mimicry and how much can be called actual thought, and what that means for the programs and for people who interact with them.
[/quote]

This is the crux of the problem. Where does emulation end and reality begin? Let’s consider a few examples.

Suppose that a perfect copy machine is built, which can make an atom-for-atom replica of the Mona Lisa. Obviously, one of them, the “original”, was touched by the hands of da Vinci, while the “copy” was not. The problem is: how can you tell? This “difference” is not material, and there is no way it could be detected. As such, the question “what is the difference?” is meaningless.

The other copy machine is not that perfect. It “only” copies the colors, but not the canvas. There is a discernible difference here… but it is not significant. The two pictures look the same, and that is all that counts.

What about an excellent actor, who can emulate all sorts of emotions? How can one find out whether those emotions are “real” or contrived? The point is that a “perfect” or “semi-perfect” emulation cannot be distinguished from reality. Of course, this is expressed in the old duck principle.

[quote=Kamaduck]But right now, as far as I’ve seen, the programs we write just aren’t there yet, and I don’t think you’re going to see ones that are for a while.
[/quote]

Correct again. I have a suspicion that a “full blown” artificial human will not be built for quite a long time, since it has no practical use. But, then again, humans have a lot of curiosity, so who knows? Since you like science fiction, I suggest you read the books of Stanislaw Lem, especially “The Cyberiad”. They are philosophically “deep” and very entertaining.

Made up terminology.
I would encourage you to use terms everyone knows and is familiar with, not terms that appear made up for the specific purpose of making vague what is absolutely clear.

You are on a Catholic website participating in a discussion on a Catholic forum.
Context of my post dictates that Soul is defined based upon the traditions of the Catholic faith.
And I, author of that post, verify it. Only God can create a soul.
Man will never be able to create a soul.

Indeed. I have seen such programs for nearly a decade.
It would seem man is creative enough to simulate a type of evolutionary process where random changes are made, performance measured, and the change either kept or removed.
Such a method could easily make incredibly complex programs, but complexity does not equate to intelligence or life.
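For anyone curious, here is a minimal sketch in Python of the kind of keep-or-discard loop described above: make a random change, measure performance, and keep the change only if it improves things. The objective function and numbers are invented purely for illustration; no real system is this simple.

```python
import random

def fitness(params):
    # Toy objective (invented for illustration): how close the
    # parameters are to a hidden target.
    target = [3.0, -1.5, 0.5]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(steps=10000, step_size=0.1):
    params = [0.0, 0.0, 0.0]   # start from an arbitrary guess
    best = fitness(params)
    for _ in range(steps):
        # Make a small random change...
        candidate = [p + random.gauss(0, step_size) for p in params]
        score = fitness(candidate)
        # ...measure performance, and keep the change only if it helps.
        if score > best:
            params, best = candidate, score
    return params, best

if __name__ == "__main__":
    final_params, final_score = hill_climb()
    print(final_params, final_score)   # ends up near the hidden target
```

Loops like this can produce very complicated programs whose inner workings nobody fully traces afterward, which is exactly the point made above: complexity, by itself, does not equate to intelligence or life.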

A meaningless example.
“Let’s imagine one is two; now the concept of one is meaningless.”
The problem is, one is not two.
And no imagination can make it so.

Agreed.
There is no point to an artificial human.
It is more likely our machines will become more and more complex, but never take on human appearance.
Current trend is that they will be fashioned in a way suited to their function.
Their complexity will eventually exceed our understanding, and the more liberal parts of our society will begin campaigning for the inalienable rights of the machine.
Of course, this leaves out that the rights endowed by the creator of the machine can just as easily (and with the same moral justification) be taken away.
Man-made rights can never be inalienable.

[quote=vz71]I would encourage you to use terms everyone knows and is familiar with, not terms that appear made up for the specific purpose of making vague what is absolutely clear.
[/quote]

There is no “clear” definition. Even biologists cannot agree on where the dividing line is between “living” and “inanimate” matter. Viruses can be considered either one. When we talk about artificial intelligence, being “biologically” alive is not relevant; it is being “intellectually” alive that matters.

[quote=vz71]Context of my post dictates that Soul is defined based upon the traditions of the Catholic faith.
[/quote]

There are at least three different definitions of a “soul”, and there is no agreement on which one is correct. But I agree, there will never be a robot with a “soul” – because the soul is an irrelevant concept.

[quote=vz71]Such a method could easily make incredibly complex programs, but complexity does not equate to intelligence or life.
[/quote]

Why not? Where is the line of “intelligence”? If part of intelligence is to be able to decipher vague and possibly misleading information, then IBM’s Watson is more intelligent than even the best Jeopardy champions.

[quote=vz71]A meaningless example.
[/quote]

Only if you fail to see its significance. The distinction between “real” and “emulated” is only possible if the difference can be measured – in any way.

[quote=vz71]It is more likely our machines will become more and more complex, but never take on human appearance.
[/quote]

“Never” is a long time. I doubt that either one of us will live that long.
