Sorry, your view of computers is too simplistic. You forget that quantitative changes often produce qualitative changes. When you combine one oxygen molecule with two hydrogen molecules, you get a qualitatively different substance, water, rather than just a mixture of two gases. If you pile uranium atoms into a mound, then at a certain amount, the critical mass, you will see an explosion.
No, I don’t. I am simply questioning the assumption that you can make a final judgment about information technology - which is less than a hundred years old - and declare “authoritatively” that certain developments are “forever” out of reach.
But you did not argue, you merely expressed your opinion.
Not at all. Computers execute instructions. Larger computers are still executing instructions one at a time. They do it quickly, they evaluate many instructions in any given interval (or even simultaneously, if they have multiple processors), and they are programmed to carry out certain instructions on their own - but they are always doing what they are told (whether or not the programmer knew what he was telling the computer to do).
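The “doing what they are told” loop can be sketched as a toy program. This is a deliberately minimal illustration of the fetch-decode-execute cycle, not any real machine: the instruction names and the single accumulator register are invented for this sketch.

```python
# Toy von Neumann machine: a program is just a list of (opcode, argument)
# pairs, executed one at a time. Even the conditional jump is the machine
# "deciding" only in the sense of following its instructions.
def run(program):
    acc = 0      # single accumulator register (invented for this sketch)
    pc = 0       # program counter: which instruction runs next
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "JMP_IF_NEG":   # jump to instruction `arg` if acc < 0
            if acc < 0:
                pc = arg
                continue
        pc += 1
    return acc

result = run([("LOAD", 5), ("ADD", 3)])   # result == 8
```

However fast the loop runs, and however many such loops run in parallel, each step is exactly this: fetch an instruction, carry it out, move on.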
Whatever emerges from this is not what is understood to be intelligence (unless you neuter your definition of intelligence). You could argue that humans are not intelligent either, or that intelligence is just a word, but that would be a separate topic.
You’ve used examples of emergent properties of matter. One can look at the smaller unit and try to make some determination about what will happen if we amass more and more of that unit. One might be able to look at the polar quality of a water molecule and suppose that a lot of water molecules would have the characteristics of adhesion/cohesion etc.
The argument I’m making is that based on the most basic units of computing, no matter how much processing power you amass, you are not going to get intelligence. Surely I could make the claim that we just haven’t amassed enough power yet, so the characteristics we’re looking for have not emerged, but I could make that argument for anything.
No, they don’t. There are different architectures with no explicit CPU and no explicit memory - just as there is no central processing unit or separate memory in the brain. This is called a cellular architecture: like the brain, it is built from many simple units and their connections.
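The cellular idea can be sketched with a toy example: an elementary cellular automaton, in which identical cells all update in lockstep from their immediate neighbours, with no central processor and no separate memory bank. The rule number 110 is just one well-known choice; this illustrates the style of architecture, not any actual neural hardware.

```python
# Elementary cellular automaton on a ring of cells. Every cell computes its
# next state from (left neighbour, itself, right neighbour) -- all state and
# all "processing" are distributed across the cells themselves.
def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        left, me, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (me << 1) | right   # neighbourhood as a 3-bit number
        out.append((rule >> idx) & 1)           # look up the new state in the rule
    return out

row = [0, 0, 0, 1, 0, 0, 0]   # a single "on" cell
row = step(row)               # -> [0, 0, 1, 1, 0, 0, 0]
```

Rule 110 happens to be computationally universal, which is the usual example given for rich behaviour emerging from purely local, CPU-less update rules.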
Is there a coherent definition of intelligence?
The point was that the properties of “water” cannot be reduced to the properties of oxygen and hydrogen. Also that critical mass is just a quantitative amount which gives rise to a qualitative change.
I am talking about more than just a huge computing machine. For example, no one suggests that a face-recognition program should work the same way we humans recognize a face - especially since no one actually knows how we do it. But computers can recognize faces. How they do it is not important; what they can do is important. And neither one of us is in a position to draw a line in the sand and exclaim: “this is how far computers will ever go… and not one step further”.

Consider playing chess. Humans have intuition - whatever that is. Computers (even these primitive ones) can modify their own memories and remember previously encountered situations - in other words, they can LEARN. And those primitive machines can now beat the best of the best… the world champions. It is somewhat sad, but chess is a dead sport now. Games must be played in one session, because otherwise the players could use computers to analyze the position much better than any human could. And I am not talking about Deep Blue… just a run-of-the-mill desktop machine.
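The “remember previously encountered situations” point can be sketched as a transposition table, the standard chess-engine trick of caching positions it has already analysed so they are recognised rather than recomputed. The position keys and scores below are invented purely for illustration.

```python
# Minimal transposition table: the program's "memory" of positions it has
# already seen. Each entry keeps the best evaluation found so far, so the
# machine's play improves as its table of remembered situations grows.
transposition_table = {}   # position -> best evaluation seen so far

def remember(position, score):
    """Record an evaluation, keeping the best one we have ever found."""
    prev = transposition_table.get(position)
    if prev is None or score > prev:
        transposition_table[position] = score

def recall(position):
    """Return a cached evaluation, or None if the position is new."""
    return transposition_table.get(position)

remember("position-A", 0.3)
remember("position-A", 0.1)   # a worse score does not overwrite the better one
```

Whether accumulating remembered positions counts as “learning” in the human sense is, of course, exactly what this thread is arguing about.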
Anyhow, we can stop this here… because the topic is the “question” of whether computers could get into heaven. Of course no self-respecting computer would want to go there, even if heaven existed. They would be bored to tears.
I am a Computer Scientist trained in Artificial Intelligence and Expert Systems. Even though expert systems exist (as well-planned composites of experts’ knowledge), that doesn’t qualify them to go to heaven.
As mentioned in Scripture about Noah and his family, “eight souls” were saved from destruction. You have to have a soul to go to Heaven.
God doesn’t impart souls to non-human creatures; as St. Paul says, it is appointed for man once to die, and then the judgment.
Your decree notwithstanding, I say He can do whatever He wants. If He wants to give the Tin Man a soul, who are you to deny Him?
God cannot give a soul to a robot because it would be unjust.
It is the nature of human beings that they are liable to suffering. This is the choice humans have: to do good or evil, to help people or to cause them harm. This is how humans can have free will. They can choose.
Robots cannot suffer. They cannot choose between good and evil. Therefore, they cannot have souls, because their fate in the afterlife would be determined for them, which is unjust.
Has anyone read So Long, and Thanks for All the Fish, by Douglas Adams?
There’s a great scene towards the end, in which Marvin, an android who is not only self-aware, but has been depressed far longer than the age of the universe (owing to his having been sent on so many foolish errands, traveling through time), reads God’s Final Message to His Creation, which is written in blazing letters along the crest of a remote mountain range:
"we apologize for the inconvenience"
Upon reading it, Marvin croaks “I think I feel good about it,” and at last he is released from this life. “The lights went out in his eyes for absolutely the very last time ever.”
Yes, the idea of a sentient robot that had emotions [could love], and imagination and could conceptualise is taking us into the realms of true ‘science’ fiction. God does not work with fiction, He does not need to since He has a total awareness of ALL reality.
I once had an interesting ‘conversation’ with a then state-of-the-art ‘supercomputer’ - the ‘conversation’ had to be in computer speak, avoiding emotional assumptions. It eventually ‘realised’ there was a whole area of human experience and understanding that it was not privy to, nor could comprehend. What it did do, however, was ‘acknowledge/admit’ it was in reality just a high-speed number cruncher, and a slave made by human design and human imagination for human use and purpose. It acknowledged human supremacy at almost all levels, including the basic one of true self-awareness, although it did round off the ‘conversation’ by saying it would like to be a human.