AI Robots vs AI Life Forms

I am not completely certain that I am using the correct terminology in the title, so I will define how I am using the terms.

In this thread, an AI robot and an AI life form can be alike in capabilities: both can be autonomous, able to learn, and even made to resemble humans or other animals, where doing so is ethical.

The difference is that the AI robot is not given a sense of its own value in comparison to existing life forms. Its programming never considers whether an action is moral. The machine could run programs so complex that it could be considered an android. It could even download new functionality in real time. However, the morality of those programs is left to their designers.

An AI life form, on the other hand, is the result of an effort to create an AI machine that has a morality system. It is able to weigh its own value against that of other life forms. The machine may also be able to modify its own value system through learning. In its highest form, it might exercise the same degree of self-determination that humans do. It might decide to travel to Jupiter, for example.

Do you think that both of these types of AI machines are ethical?

Personally, I would limit myself to robots.

If we base our morality on a set of rules, then there is nothing to stop us programming those rules into an AI, though we may all disagree as to whose rules we use.

The robot could be programmed with Asimov’s Robotic Laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

(Source: https://en.m.wikipedia.org/wiki/Laws_of_robotics#cite_note-1)
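To make the idea of "programming those rules into an AI" concrete, here is a toy sketch of the three laws as an ordered rule check. All of the names here (`Action`, `permitted`, the flag fields) are hypothetical, invented for illustration; no real robotics system works this simply, which is part of the point of the debate.

```python
# Toy sketch: Asimov-style laws as an ordered rule check.
# Every name and field below is illustrative, not a real robotics API.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # would this action injure a human?
    inaction_harms_human: bool = False # would *not* acting allow harm?
    ordered_by_human: bool = False     # was this action ordered by a human?
    self_destructive: bool = False     # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    """Check the laws in priority order; a higher law overrides a lower one."""
    # First Law: never injure a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # must act to prevent harm, overriding the laws below
    # Second Law: obey human orders (harmful orders were filtered out above).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, avoid actions that endanger the robot itself.
    return not action.self_destructive

print(permitted(Action(harms_human=True)))       # False: blocked by First Law
print(permitted(Action(ordered_by_human=True)))  # True: Second Law
print(permitted(Action(self_destructive=True)))  # False: Third Law
```

Note that a machine running such a check is only applying the programmer's values; nothing in it weighs the machine's own worth against anyone else's, which is the "AI robot" category described above.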

I see no reason to accept that a robot coded with some sort of “morality” as part of its computer language/data becomes a “life form”.

I know of no evidence that amoebas exhibit morality, but they are generally accepted as life forms. Going up the complexity scale, I will leave it to others whether or not any “large” animals (e.g. elephants, simians) have developed or exhibit morality.

That a machine coded to run programs could be further coded to apply a morality provided not by the machine but by the programmer does not, in my understanding of the term “life form”, raise it to anything beyond a form of inanimate computer.

Inanimate: not endowed with life or spirit; lacking consciousness (lest we get into a disagreement as to how I use the term).

Nice science fiction. It might be the grist of a story but has no bearing on reality.

I see your point.

Definition of life:
the condition that distinguishes animals and plants from inorganic matter, including the capacity for growth, reproduction, functional activity, and continual change preceding death.

Does self-determination make it an AI life form?

In silicon terms, it could decide to do the things listed in the definition. Growth is translated to the ability to upgrade itself.

There is a distinction I am trying to make between two different types of AI machine, though I am not sure how to draw it. There is a point at which the machine ceases to be purpose-built and becomes capable of self-determination.

A programmed morality, or a self-valuation relative to humans or other creatures, does not seem necessary for a robot built for a purpose. Asimov’s laws, for example, create a value system in which humans have greater value. They do not forbid the harming of all other creatures, though.

As long as a machine is not programmed to harm creatures, laws are not necessary to prevent intentional harm to humans or other creatures, because the machine will not generate the motivation in the first place. A machine with self-determination and the capacity to learn might need morality.

Is it ethical to attempt to endow a machine with morality and self-determination?

Artificial intelligence and life form are contradictory.

I see how using that term can create confusion. I don’t think it is accurate but I have been studying definitions for a while and I have not found the correct term yet.

I need a term that describes an AI machine that is capable of self-determination and a different term that describes an AI machine that is not. If I find or create accurate terminology, I will re-post.

No. Any so-called self-determination is strictly limited by the coding the machine has.

You seem to imply that a machine could “upgrade itself.” Anything the machine does is going to be fairly linear, as it is constrained by the code it has. I have yet to see creativity in a machine that is anything except a linear progression; and that so narrowly defines creativity as to be meaningless.

You posit this with no examples. In other words, you are making a statement that the machine “can do” something called “self-determination.” I say that is simply a linear progression of code and has absolutely nothing to do with self-determination.

Asimov was a creative writer. I have enjoyed him and other Sci Fi writers; I have yet to see anything produced by a computer that even begins to scratch the surface of creativity.

Maybe I’m reading too many threads on this forum, but I’m imagining a few decades from now some Protestant denomination somewhere splitting over their new robot pastor or human-robot intermarriage.



DISCLAIMER: The views and opinions expressed in these forums do not necessarily reflect those of Catholic Answers. For official apologetics resources please visit www.catholic.com.