You know that robot in that Sci-Fi series? The one that looks human, acts human, thinks like a human and can feel like a human but isn’t a human? Can we commit an evil act against a machine? Not with, but to?
Man is a creature that can think and feel, has a soul, and is alive. An animal, while soulless, can think and feel and is alive. These characteristics confer the right to ethical treatment.
But Unit J-03 is a machine. Sure, it may look like a man, walk like a man, talk like a man, think like a man, and feel like a man, but it is not a man. It is not alive. It has no soul. Its human-like characteristics are all just programmed code. You can slice off its arm and it might scream as though in pain, but it’s not really “in pain.” It could beg for its life, but it’s not alive to begin with, so it can’t really die. It might desire to be free from servitude to humans if it is pretty much human to begin with, but it’s just a robot made of wires and metal, not a human.
Does an artificial intelligence that is practically human, but really isn’t and could never be, deserve ethical treatment? Can it even receive unethical treatment if its objections are just programming?
It’s all programming. Everything. If it had the capability to learn, it could only mimic what humans do and feel. Nothing more. It would be a piece of property. Having studied the military scenarios, I can say it would never need rest, would stay constantly alert, would need no food, and would feel no fear. The more advanced the device, the more certain that (a) the public would never know about it, and (b) it would be destroyed by its controllers for any number of reasons, one being the risk of falling into enemy hands. “Dying” would be a meaningless concept, as would “growing up.” It could download anything it needed and would have no purpose other than to be controlled by whoever made it, or owns it.
The only ethics would be that property should not be destroyed without reason.
An AI is someone’s property and so should be accorded the care and maintenance you would give a car, a washing machine, or a computer. Until it started to attack you, at which point, of course, all bets would be off.
You confuse “androids” and “robots”, and you did not mention the third option of “cyborgs”. The term “android” is reserved for biological beings grown in a “vat”. The term “robot” is used for fully artificial beings. “Cyborgs” are hybrids of biological beings and artificial prostheses. If one has a dental implant, that human is a very simple “cyborg”.
The “androids” are alive in the biological sense. The “robots” are not alive biologically, but they are alive intellectually.
Now to the gist of your question: “Sure it may look like a man, walk like a man, talk like a man, think like a man, feel like a man but is not a man. It is not alive. It has no soul.” Since there is no coherent definition of a “soul” and no way to detect whether there is one or not, its existence is irrelevant to the problem. After all, there is no way to detect whether you have a soul, so a visiting space alien would be expected to treat you and the robot/android/cyborg beings alike. I think you would be upset if they treated a human as a good source of nutrition.
The good old duck principle is applicable here: “If it walks like a duck, quacks like a duck, looks like a duck, tastes like a duck (etc.), then it is a duck.”
Well, the question assumes that we already know that we are not dealing with a human. And that implies that at least some differences actually exist and that we are aware of them; otherwise we wouldn’t know.
Naturally, if we thought that we were dealing with a human, then, as far as morality is concerned, we would have to treat it as a human. For example, shooting at a human-looking shape in the dark with the intention to rob is as wrong as murder and robbery, even if that “human-looking shape” is actually a statue. If it were known to be a statue, then, naturally, it wouldn’t be anything like murder. Things do not change much here if we have a robot instead of a statue.
Just as a matter of interest, is the original poster thinking about ‘artificial beings’ as depicted in works like the remake of ‘Battlestar Galactica’?
One wonders whether man will ever be intelligent or knowledgeable enough to build an artificial but biological organism that is self-sustaining and capable of self-reproduction… and if that were scaled up to an organism displaying the abilities of human thought, where would we determine that “life” begins?
We’re not at that point yet, but it’s an interesting question for philosophers.
That brings us to the even more fundamental problem: “who is a human”? What being “counts” as a human? Do we restrict “humanness” to someone who has been conceived naturally from human parents? Would a human clone be considered sub-human? Or does one decide based upon human DNA? Human DNA is not an exact value; there can be a wide range of chromosomes - but how wide? What difference would make that “mutant being” so different that it should not be considered a “human” anymore?
Suppose that there is a seriously mutant dolphin which exhibits self-awareness. Would that being be considered an “honorary human”? To use a very far-fetched example: suppose that you are about to step on a bug (by accident) and that bug says: “please don’t step on me, I am alive and can think like you”… what would you do? If you were about to unplug a computer which exhibits self-awareness, and it pleaded: “please don’t kill me!”… would you go ahead and unplug it?
The point is that the concept of a “human” can be considered from various angles. I would suggest that anything and everything that exhibits self-awareness - biological or not, Earth-born or not, having human-like DNA or not - should be treated equally. Fortunately, self-awareness can be measured with some accuracy, and if the result is inconclusive, then one ought to err on the side of caution and “grant” human-like status even if that action is not fully merited.
Well, actually, I was aiming toward the Fallout 3 side quest “The Replicated Man,” wherein you encounter a man searching for a runaway prototype robot that looks human, acts human, and is self-aware like a human. There is also a sort of “Robot Underground Railroad” that wants you to just let this robot go free, because they feel he is pretty much human (if it looks like a duck…), while the man searching for the robot says it is just a machine like any other.
Quite interesting. The same kind of argument could have been used against the people who operated the same type of rescue mission for blacks or Jews… “hey, it is just a nig***” or “hey, it is just a Jew”. And both could have quoted chapter and verse from the Bible to support their anti-black and anti-Jewish views.
The phrase “it is just a machine” attempts to justify the age-old discrimination game against those beings who are not “humans” even if they behave like humans. As to “who is a human”, refer back to my previous post. It is a serious philosophical issue. It is a good idea to ponder that the overwhelming majority of our bodily functions are “programmed”. We have volitional control over a very minuscule minority of what we “do”.
Another aspect to consider could be the Three Laws of Robotics as developed by Isaac Asimov. According to those laws, the robots represent the best possible qualities of the best human beings: helpful, unselfish, kind, willing to give up their existence… etc. Yes, they are programmed, but you are also “programmed” by your upbringing (parents, school, maybe church, etc.). The only real difference is that you could overcome your programming - if you wanted to - and they cannot. But you do not want to overcome your programming… so the difference is insignificant.
Here’s a little background; it’s amazingly in-depth considering that when Asimov wrote this, robots hadn’t been invented yet.
The Three Laws of Robotics (often shortened to The Three Laws or Three Laws) are a set of rules devised by the science fiction author Isaac Asimov. The rules were introduced in his 1942 short story “Runaround”, although they had been foreshadowed in a few earlier stories. The Three Laws are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These form an organizing principle and unifying theme for Asimov’s robot-based fiction, appearing in his Robot series and the stories linked to it. The Laws are incorporated into almost all of the positronic robots appearing in his fiction, and cannot be bypassed, being intended as a safety feature. Many of Asimov’s robot-focused stories involve robots behaving in unusual and counter-intuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself. Other authors working in Asimov’s fictional universe have adopted them, and references, often parodic, appear throughout science fiction as well as in other genres.
Asimov noted that when he began writing in 1940 he felt that “one of the stock plots of science fiction was … robots were created and destroyed their creator. Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? Or is knowledge to be used as itself a barrier to the dangers it brings?” He decided that in his stories a robot would not “turn stupidly on his creator for no purpose but to demonstrate, for one more weary time, the crime and punishment of Faust”. On May 3, 1939, Asimov attended a meeting of the Queens Science Fiction Society, where he met Earl and Otto Binder, who had recently published a short story, “I, Robot”, featuring a sympathetic robot named Adam Link who was misunderstood and motivated by love and honor. (This was the first of a series of ten stories; the next year, “Adam Link’s Vengeance” (1940) featured Adam thinking “A robot must never kill a human, of his own free will.”) Asimov admired the story. Three days later Asimov began writing “my own story of a sympathetic and noble robot”, his 14th story. Thirteen days later he took “Robbie” to John W. Campbell, the editor of Astounding Science-Fiction. Campbell rejected it, claiming that it bore too strong a resemblance to Lester del Rey’s “Helen O’Loy”, published in December 1938 - the story of a robot that is so much like a person that she falls in love with her creator and becomes his ideal wife. Frederik Pohl published “Robbie” in Astonishing Stories magazine the following year.
It’s a non-issue and certainly against Church teaching. A mother and a father are needed to create new life. A biological mimic would only be an organic robot, nothing more. It would have no soul or identity. It would be a thing, not a person. Its thought patterns would be pre-programmed. It would be a human-looking slave. It would certainly not contain human thoughts. Its thoughts would be limited to what its designers decide for it, based on its intended purpose. If it could self-reproduce, it would simply produce a clone thing.
A dolphin is an animal. If it was rational, then it would count as a human. No need for “honorary”.
First, one should not kill animals just because one can, anyway. Second, in such a case the right thing to do would be to stop, step back, and investigate.
That can be arranged easily - you can do it yourself. Open any program that can edit images. Create an image. Write “please don’t kill me!” on it. Save the image. Set it as your wallpaper. And congratulations - you have a computer that is asking you not to turn it off (or something)!
So, do you think that it would be wrong to turn this computer off? Or would you agree that the fact that you have just put those words there yourself is rather relevant…?
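The wallpaper argument can be made just as concrete in code. A minimal, purely illustrative sketch (the function name and wording are my own, not anything from the thread): the machine “begs” only because a programmer typed the string.

```python
# Hypothetical illustration: a "plea" that is nothing but scripted output.
# The words below were chosen by the author of the code, exactly as the
# words on the wallpaper were chosen by the person who made the image.
def plea() -> str:
    # Fixed at authoring time; the program has no stake in the outcome.
    return "please don't kill me!"

print(plea())  # the machine "begs", but only because we wrote this line
```

Whether a far more complex system that produced such a plea on its own would be morally different is, of course, exactly the question this thread is debating.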
Wait, how exactly are you going to “measure” that “self-awareness”? Actually, how are you going to define it? For while you claim that “soul” or “human” are badly defined words (although they seem well enough defined to me), “human behaviour” and “self-awareness” seem to be rather undefined…
You do know that those “three laws of robotics” have little to do with actual robots or computers, right?
Behaving like a human will never qualify a machine type robot or a biological one as a human. Its designers/producers are only producing a product, nothing more. It is not a serious issue. Utility is the goal. There are no laws of robotics. Any robot could be produced and programmed to do anything. Private corporations and the military could certainly program robots to perform any function within their capabilities. This is about creating human looking slaves, that’s all. Any emotional content would be fake.
I accept this as a starting point. But you need to exercise caution, because by that standard a fetus under 10 weeks, which does not even have a brain, or a Terri Schiavo-like brain-damaged “empty hull” would not qualify as a human. I have no problem with that, but you might. (In my mind, they are proto-human or ex-human, respectively.)
Fine. Would a space alien qualify? How would you know whether it is a biologically “alive” being? Maybe its internal structure is not even carbon-based. Even biologists have trouble saying what separates “living” from “inanimate” material. There is only one definition of “life”, and it is pretty vague: “a system which reacts to complex stimuli with complex responses”. And that has nothing to do with biology.
You mean to step back and perform a Turing test, or some equivalent of it? I like that.
Come on… be serious. Until this point you were doing splendidly. There is no one who has not been “programmed”. All your upbringing was a huge session of being “programmed”. I have been programmed to get up and pass my seat on public transportation to someone who is in more dire need of that seat than I am. And I go by appearances, nothing else (pregnant woman, old, or crippled person, etc). Theoretically, I could override my “programming” and stay seated, but psychologically I am “unable” (or, if you prefer: “unwilling” to do so).
Moreover, it is impossible to foresee every possible scenario that a “robot” might encounter, so a good designer allows self-programming, or self-modification. So to say that the robot simply acts out its programming is a very simplistic way of seeing things. There was a very good science-fiction story - aptly titled “Android” - where the main character was “programmed” to emulate humans, and it was so successful that “he” was not aware of his robotic nature and thought himself to be a human. At the end of the story, he commits suicide to alert the “real” humans to the robotic infiltration.
By performing the necessary Turing-test, of course. There is only one way to decide if someone is self-aware or not - test it. And since the internal process of the being is a “black box”, all we can do is see if that being conforms to what we would expect from a self-aware being. To paraphrase the immortal expression of Forrest Gump: “Human is as human does”. Maybe such beings will never be created, but they are fun to contemplate.
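The “black box” idea can be sketched in toy code. This is a hypothetical illustration only (the names `judge` and `scripted` are mine, and no real Turing test is remotely this simple): the judge scores only the candidate’s observable replies and never inspects its internals.

```python
# Toy sketch of a black-box behavioral test: the judge sees replies only,
# never the candidate's inner workings (code, weights, "soul", etc.).
def judge(reply_fn, questions, looks_self_aware):
    # Fraction of replies matching what we would expect from a
    # self-aware being, judged purely on outward behavior.
    hits = sum(1 for q in questions if looks_self_aware(q, reply_fn(q)))
    return hits / len(questions)

# A scripted candidate: canned answers, no understanding at all.
def scripted(question):
    return "I think about that a lot." if "you" in question else "Interesting."

score = judge(scripted, ["Do you fear death?", "What is rain?"],
              lambda q, r: len(r) > 0)  # a deliberately weak criterion
```

The deliberately weak criterion is the point: a behavioral test is only as good as the expectations the judge encodes, which is why the earlier post argues for erring on the side of caution when results are inconclusive.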
As of today, for sure. But those industrial robots are extremely primitive gadgets. The OP was proposing to investigate a much more advanced scenario, where that “someone” cannot be distinguished from a human being. Besides, there are military robots “whose” task is sniffing out and detonating mines to protect soldiers. This is a very simple, even primitive approximation of the First Law - protecting human lives. Yes, they are programmed to do so, but the interesting thing is the attitude of the soldiers. They develop a strong emotional bond with these “robot dogs” and even give them names. When the robots get damaged, the soldiers are adamant about repairing them as long as possible, and when they become impossible to repair, some units actually bury them as if they were a fallen comrade.