The computational theory of mind

Hi all,

I’d like to generate some discussion around probably the best-supported materialist theory of mind, computationalism. In particular, I’d like to discuss the problems with this theory and (probably in a later post) how a Thomistic response might be put forward. Note that what follows is paraphrased substantially from Edward Feser’s book “The Last Superstition”, which I would recommend.

Simply, the computationalist theory of mind is based on the idea that thoughts are “symbols”, the medium of which is neural states or firing patterns, in the same way that “cat” written on a page is a symbol whose medium is ink on a page. Thinking, on this view, is the transformation of one or more symbols into others according to the rules of an algorithm. For an example with a difference in degree but not in kind, a calculator manipulates the symbols “2” and “+” and “2” and “=” to give “4” according to an algorithm encoded in the device. These symbols get their meaning from their cause-and-effect associations with objects outside the brain. For instance, a certain brain state will count as a symbol for “there’s a cat” if it is caused by cats appearing to the sense organs of the observer.
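To make the symbol-shuffling picture concrete, here is a toy sketch (my own illustration, not Feser’s) of the calculator example: the program applies a rule to tokens and produces another token, without the symbols meaning anything to it.

```python
# A toy of "thinking as symbol manipulation": a rule takes the symbols
# "2", "+", "2", "=" and produces "4". The program never knows what "2"
# means; it only shuffles tokens according to the rule its designer encoded.

def calculate(tokens):
    """Apply a single rewrite rule to a list of symbol tokens."""
    if len(tokens) == 4 and tokens[1] == "+" and tokens[3] == "=":
        return str(int(tokens[0]) + int(tokens[2]))   # output is just another symbol
    raise ValueError("no rule matches these symbols")

print(calculate(["2", "+", "2", "="]))   # -> "4"
```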

Now, there are problems with each step in this process:

  1. The idea of certain thoughts being considered as symbols
  2. The idea of unconscious algorithms constituting thinking and
  3. The meaning given to symbols by causal relationships

At the moment, I would like to concentrate on 1. The difficulty of trying to define thoughts as symbols can be loosely expressed in the following reductio:

  1. All thoughts are physical symbols in the brain.
  2. Symbols are only considered symbols of things (i.e. representative of things) by intentional agents. For instance, the word “cat” is only a symbol for an actual cat because English speakers have associated the two. In the same way, a drawing of a cat is only considered symbolic of an actual cat by an intentional observer.
  3. From 2., the physical arrangements of matter which are said to be symbols don’t inherently point beyond themselves to the thing being symbolized (according to materialism). For instance, the word “cat” written on a page is, without a mind to interpret it, just squiggles of ink on a page.
  4. If thoughts are physical arrangements of matter, they do not inherently point to anything, or represent anything.
  5. Only an intentional mind can interpret physical arrangements as symbols.
  6. Premise 1. therefore presupposes an intentional mind to interpret the physical states of the brain and assign “symbols”, but (under materialism) this leads to a regress of internal “minds” - see the Homunculus Argument.
  7. Therefore 1. is false; some thoughts are not physical symbols in the brain.

The conclusion (7.) suggests that there are intentional states of mind which cannot be reduced to brain states, given that the computational theory of mind is probably the most plausible way of explaining thought under materialism.

Look forward to your comments.

Thanks for posting this. There was a related thread (It from bit), and if you haven’t looked through that you might want to. I’ve been interested in this topic, and have read Penrose, Searle, and others but not Feser. I want to “digest” your post and then comment in more detail.

In a computer the memory and the processor are on separate chips. It’s tempting to think of the same separation in humans (between static symbols and a dynamic intentional mind), but what if they are integrated? A symbol can then be a node in a map that is changed by every thought, or can be a relationship in the map, making it part of the thought, or can be a meta-symbol containing both, or a meta-meta-symbol or …
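To make that “integrated” picture a little more concrete, here is a loose sketch (purely my own toy, not anything from the literature) of a symbol as a node in a map that every “thought” both reads and rewrites:

```python
# A toy "concept map": a symbol is just a node whose links are rewritten by
# every new thought, so storage and processing are not separated the way
# memory chips and processor chips are in a computer.
from collections import defaultdict

concept_map = defaultdict(set)   # node -> set of associated nodes

def think(a, b):
    """Each 'thought' both uses the map and modifies it."""
    concept_map[a].add(b)
    concept_map[b].add(a)

think("cat", "fur")
think("cat", "purring")
think("fur", "warmth")
print(concept_map["cat"])   # 'cat' is now partly constituted by its links
```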

Another issue relates to the non-verbal, emotional aspects of our minds where a lot takes place. In what way can we say that the feeling of happiness is a symbol?

I don’t know enough neuroscience, but I’d venture we need more results before putting trust in any theories.

There are savants who say that they see numbers & musical notes as colors, symbols, shapes and as smells.

Mathematical savants:

google.com/search?q=mathematical+savant+see+numbers+as+color&rlz=1I7ADBR_en&ie=UTF-8&oe=UTF-8&sourceid=ie7

Musical savants:

google.com/search?q=musical+savant+see+numbers+as+color&rlz=1I7ADBR_en&ie=UTF-8&oe=UTF-8&sourceid=ie7

Andy T

For the sake of us uneducated idiots on the subject, I would like to add to this. The reason being that, on a Catholic site like this one, it is not out of order to seek what the Lord’s Wisdom, Knowledge and Understanding might be on any subject.

If one looks at

Gen 2:

18: And the LORD God said, It is not good that the man should be alone; I will make him an help meet for him.
19: And out of the ground the LORD God formed every beast of the field, and every fowl of the air; and brought them unto Adam to see what he would call them: and whatsoever Adam called every living creature, that was the name thereof.
20: And Adam gave names to all cattle, and to the fowl of the air, and to every beast of the field; but for Adam there was not found an help meet for him.

Therefore there was nothing found that the Man could associate himself with, but it was given to man to associate these to a name. It also could be understood here that God associated Himself with the Man. Association is of great importance in the use of the mind, and mind set, or setting of the mind, for use of the user. And also in the Understanding of God’s Will, Purpose and Choice for mankind. God chose to associate Himself with Noah, Abraham, King David, and the many other remembered names.

To use the analogy of a computer, a computer is useless to the user without a system of association he can understand for use.

Or maybe…

In animals having brains, one can easily observe that a species rarely, if at all, associates itself with another species, especially in the wild. Though they may associate what another species does with such things as danger or safety. A lion would not associate a calf as dangerous to his own life, other than that the lion might associate the calf with groceries. But a calf might live to learn that a lion is a danger to his life. Even the animal learns not to associate himself with the lion and associates the lion with danger.

So I believe there is something to association when it comes to understanding how a mind works. But the construction of the mind to accommodate this function would be to those that would know.

  1. If all thoughts are physical symbols in the brain, then the thought:
    “All thoughts are physical symbols in the brain”
    is a physical symbol in the brain.
  2. Therefore a physical symbol in the brain symbolises itself.
  3. What causes a physical symbol in the brain to symbolise itself?
  4. How does it symbolise itself?
  5. What is the **significance** of a symbol symbolising itself?

A similar theory presented by a member of this forum is that truth is an “isomorph” of atomic particles. So much for the protestations of materialists that they are not crude reductionists! :)

Andy:

Utterly simplistic!

At each and every moment (or, Now) our minds take “snapshots” much like a super-super-camera might. These snapshots are strung together like a motion picture filmstrip. Except that the “snapshots” in our minds consist of incredibly more definition, subtlety and quanta. By quanta, I mean, stuff.

No wonder such “materialists” can’t see the hand-prints of God anywhere. They can’t even see reality.

God bless,
jd

Well, a Thomistic response here strikes me as very close to looking for an alchemist’s response to new developments in chemistry. I can’t help you on that, and would suggest that going down that path is a good way to make sure you understand less than you do now, but I will make a couple of comments on computational points that arise from your discussion below.

Also, just by way of disclaimer, I’m not a fan of CTM in the classical sense, as it’s primarily philosophical, and only loosely connected to good cognitive science and neurophysiology, and even then only after the fact. Adaptive neural networks (ANNs) are an example of modern mechanical models that have changed the game from the 1960s and 1970s, when CTM was the “only game in town”. ANNs have the strong advantage of being derived from neurology, from recent knowledge of the structure and patterning of the brain, and as such they afford the model key strengths over classic CTM in the areas of learning, associativity, parallelization, pattern matching/chunking and the integration of noise and stochastic inputs.

It might be fair to say that ANNs and similar architectures for cognition are the “new CTM”. It is computational, and it is a theory of mind, but even so, it makes discussion confusing because CTM has traditionally pointed at a different architecture, the classic top-down rules-processing model.
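For readers who haven’t met the “adaptive” part before, here is a minimal perceptron-style sketch (a toy of my own, not a claim about how the brain works): there are no explicit symbols or rules in it, only connection weights nudged by experience.

```python
# A minimal perceptron-style learner: the "knowledge" is nothing but
# adjustable connection weights, adapted from labelled examples.

def train(examples, lr=0.1, epochs=25):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1      # weights change with experience,
            w[1] += lr * err * x2      # no explicit rule is ever stored
            b += lr * err
    return w, b

# Learn a simple AND-like association from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(data))
```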

Simply, the computationalist theory of mind is based on the idea that thoughts are “symbols”, the medium of which is neural states or firing patterns, in the same way that “cat” written on a page is a symbol whose medium is ink on a page. Thinking, on this view, is the transformation of one or more symbols into others according to the rules of an algorithm. For an example with a difference in degree but not in kind, a calculator manipulates the symbols “2” and “+” and “2” and “=” to give “4” according to an algorithm encoded in the device. These symbols get their meaning from their cause-and-effect associations with objects outside the brain. For instance, a certain brain state will count as a symbol for “there’s a cat” if it is caused by cats appearing to the sense organs of the observer.

I don’t know where you are getting your primer on CTM, but this seems a very simplistic rendering of CTM, if not a caricature. Under CTM, the symbols may be atomic – “there”, “cat”, “is”, etc. – and integrated at runtime into sentences as needed, according to semantic and syntactic rules. The classic “the cat is on the mat” would not need to be a symbol, but a structure composed of symbols with grammatical relations between them, providing the (semantic) structure. If I suggest to you a new kind of imaginary animal that is half elephant and half giraffe, the chimera you imagine may be (will be) informed from some experience of both “elephant” and “giraffe”, but the resulting “eleraffe”, if it is to become a symbol, does not point to such an extant animal, but to an imagined beast.

There are lots of examples to go into on this, but suffice it to say that symbols may be directly grounded in sensory experiences of the extra-mental world, but they may not be.
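To illustrate what I mean by atomic symbols, runtime composition, and grounding that may or may not bottom out in perception, here is a toy sketch (hypothetical names, my own illustration, not a CTM implementation):

```python
# Atomic symbols plus a grammatical relation compose into a structured
# "thought" at runtime; some symbols are grounded in perceptual prototypes,
# while "eleraffe" is grounded only in other symbols.

grounding = {
    "cat":      {"grounded_in": "stored visual prototypes of cats"},
    "mat":      {"grounded_in": "stored visual prototypes of mats"},
    "elephant": {"grounded_in": "stored visual prototypes of elephants"},
    "giraffe":  {"grounded_in": "stored visual prototypes of giraffes"},
    "eleraffe": {"composed_from": ["elephant", "giraffe"]},   # no extant referent
}

# "The cat is on the mat" is not itself a stored symbol, just a structure
# built from atomic symbols and the relation ON.
thought = ("ON", "cat", "mat")
print(thought)
print(grounding["eleraffe"])
```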

Now, there are problems with each step in this process:

  1. The idea of certain thoughts being considered as symbols
  2. The idea of unconscious algorithms constituting thinking and
  3. The meaning given to symbols by causal relationships

At the moment, I would like to concentrate on 1. The difficulty of trying to define thoughts as symbols can be loosely expressed in the following reductio:

  1. All thoughts are physical symbols in the brain.

This I do not understand to be a feature of CTM proper, but a subset area of CTM called Physical Symbol System Hypothesis (PSSH) that is a recent variant. Not a problem here, but from this I understand you to be now analyzing PSSH, per Simon and Newell and similar thinkers.

-TS

Well, no. You can go buy an OCR machine that will recognize and understand (per its rules) those squiggles of ink on a page – check out the machines the banks use to scan and process checks! That is not to say that such a machine has strong AI, but just to note that a computational mind would need nothing more than that to ground its symbols in “things beyond the self”; a computational mind would store visual prototypes that get matched against visual input, and thus associated with that symbol. If a computational mind can visually recognize “M” as ink on paper, it can then associate such visual inputs with other symbols and processing (e.g., combine it with other recognized symbols and white space to produce a string of symbols like “man”…).

A computational mind then, should not have a problem associating its symbols with external stimuli. As visual input (for example) streams in, the “visual lexicon” can be searched for matches with tolerance. This binds a brain-state to related phenomena by visual processing.
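As a concrete (and deliberately tiny) sketch of what “searching the visual lexicon with tolerance” could look like, here is a toy nearest-prototype matcher; the bitmaps and threshold are made up for illustration, not how any real OCR product works:

```python
# Toy nearest-prototype matching: an input bitmap is compared against stored
# prototypes, and the closest one (within a tolerance) "fires" its symbol.

def distance(a, b):
    return sum(x != y for x, y in zip(a, b))   # count of mismatched pixels

# 3x3 bitmaps flattened to 9 pixels (toy prototypes, not real letter shapes)
prototypes = {
    "M": (1, 0, 1, 1, 1, 1, 1, 0, 1),
    "I": (0, 1, 0, 0, 1, 0, 0, 1, 0),
}

def recognise(pixels, tolerance=2):
    symbol, d = min(((s, distance(pixels, p)) for s, p in prototypes.items()),
                    key=lambda item: item[1])
    return symbol if d <= tolerance else None   # fire only on a close match

noisy_m = (1, 0, 1, 1, 1, 1, 1, 1, 1)   # an imperfect "M"
print(recognise(noisy_m))                # -> "M"
```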

  4. If thoughts are physical arrangements of matter, they do not inherently point to anything, or represent anything.

Is this really what Feser claims?? Lame. See above. A visual pattern for “M” will fire on recognition processing when external inputs match. This binds the brain state to particular forms of external phenomena, and this is readily achievable by relatively dumb machines now. By “pointing to”, in terms of the computation, I mean that some stored visual pattern is response invariant with characteristic inputs. “M-like” patterns that are sufficiently close to the prototype (or set of prototypes) in memory, and not more like other patterns in the repository will predictably fire “M”. This is pointing to external phenomena.

  5. Only an intentional mind can interpret physical arrangements as symbols.

I think you should define “intentional” here, and set out a criterion for “interpret”.

  6. Premise 1. therefore presupposes an intentional mind to interpret the physical states of the brain and assign “symbols”, but (under materialism) this leads to a regress of internal “minds” - see the Homunculus Argument.

I think this idea of the “little man inside” working the brain with his brain, and so on… is precisely what CTM proposes to eliminate. It doesn’t presuppose it. Your (1) above certainly doesn’t presuppose such. CTM identifies the intuitive language commonly superimposed on cognition – an “inner eye” looking at “pictures in the brain”, for example – as misleading and erroneous. Mental imagery consists of internal presentations that are not “pictures” as we think of them externally, but rather simply associative patterns. A software program that is processing images doesn’t “see” the input fed into it as “images”; “image” at the point of input disappears as part of the explanatory framework; it is just signal processing at that point, and there is thus no cascade of homunculi produced. The homunculus problem, then, is an artifact of illicit assumptions about how visual processing (or other forms of sensory integration) obtains.
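To see how “image” drops out of the explanation at the point of input, consider this toy fragment (my own illustration): the “image” is just an array of numbers being transformed into another array, with no inner viewer anywhere in the loop.

```python
# At the point of input an "image" is just numbers; the code below doesn't
# "look at a picture", it transforms one array into another.
image = [
    [12, 200, 15],
    [18, 210, 22],
    [11, 205, 14],
]
threshold = 128
mask = [[1 if px > threshold else 0 for px in row] for row in image]
print(mask)   # the bright column is "detected" with no inner viewer involved
```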

  7. Therefore 1. is false; some thoughts are not physical symbols in the brain.

The conclusion (7.) suggests that there are intentional states of mind which cannot be reduced to brain states, given that the computational theory of mind is probably the most plausible way of explaining thought under materialism.

Look forward to your comments.

I think 7 fails, based (at least) on the errors in 6.

I haven’t read the thread yet, beyond this, and will be very interested to see what, if anything, Thomistic thinking can contribute positively on this question.

-TS

Thanks all for the replies. Sorry, I am a bit time-poor at the moment and I won’t be able to respond as often as I would like. Therefore, for the moment, I am going to respond to the objector, Touchstone, first.

Touchstone,

Thanks for weighing in, I was hoping you would.

Well, a Thomistic response here strikes me as very close to looking for an alchemist’s response to new developments in chemistry. I can’t help you on that, and would suggest that going down that path is a good way to make sure you understand less than you do now, but I will make a couple of comments on computational points that arise from your discussion below.

I don’t know where you are getting your primer on CTM, but this seems a very simplistic rendering of CTM, if not a caricature. Under CTM, the symbols may be atomic – “there”, “cat”, “is”, etc. – and integrated at runtime into sentences as needed, according to semantic and syntactic rules. The classic “the cat is on the mat” would not need to be a symbol, but a structure composed of symbols with grammatical relations between them, providing the (semantic) structure. If I suggest to you a new kind of imaginary animal that is half elephant and half giraffe, the chimera you imagine may be (will be) informed from some experience of both “elephant” and “giraffe”, but the resulting “eleraffe”, if it is to become a symbol, does not point to such an extant animal, but to an imagined beast.

Well, that’s why I said “Simply” at the beginning. I’ll admit I am new to the subject of philosophy of mind, so I won’t pretend to know every detail of CTM. But I think the definition I gave is a reasonable start for discussion at least.

Before I go any further, I’ll answer the following first:

I think you should define “intentional” here, and set out a criterion for “interpret”.

By intentional I refer to the following definitions from dictionary.com:

a. pertaining to an appearance, phenomenon, or representation in the mind; phenomenal; representational.
b. pertaining to the capacity of the mind to refer to an existent or nonexistent object.
c. pointing beyond itself, as consciousness or a sign.

By interpret I mean “to explain or tell the meaning of”

Well, no. You can go buy an OCR machine that will recognize and understand (per its rules) those squiggles of ink on a page – check out the machines the banks use to scan and process checks! That is not to say that such a machine has strong AI, but just to note that a computational mind would need nothing more than that to ground its symbols in “things beyond the self”; a computational mind would store visual prototypes that get matched against visual input, and thus associated with that symbol. If a computational mind can visually recognize “M” as ink on paper, it can then associate such visual inputs with other symbols and processing (e.g., combine it with other recognized symbols and white space to produce a string of symbols like “man”…).

I think you’re missing the point here. An OCR machine does not recognise the letter “M”; what it “sees” is four lines, with varying degrees of straightness. In fact, it does not even see “lines”, for lines are abstractions also. An OCR machine “sees” the states of various photo-transistors or some such thing as input. We interpret the machine as seeing the letter “M”, as intentional agents. Intentional agents have designed the machine to operate in a certain way, and intentional agents are required to interpret the inputs and outputs of said machine. “M” is only a symbol for the arrangement of matter in the machine’s memory because we, as intentional agents, have assigned it that meaning. Without an intentional agent to “point” the letter “M” to something, i.e. a letter in a person’s name, there is no inherent meaning in the digital state of the machine which detects what we consider to be “M” on a page, a check, etc.

Similarly, the idea that our thoughts are essentially symbols (or constructions out of symbols), with a medium of neural states (just as symbols can have other media, such as digital memory, ink on a page, sound waves, etc.), is incoherent. There is no inherent pointing of the neural states to anything without an intentional mind already present to interpret said neural states as symbolic.

Is this really what Feser claims?? Lame. See above. A visual pattern for “M” will fire on recognition processing when external inputs match. This binds the brain state to particular forms of external phenomena, and this is readily achievable by relatively dumb machines now. By “pointing to”, in terms of the computation, I mean that some stored visual pattern is response invariant with characteristic inputs. “M-like” patterns that are sufficiently close to the prototype (or set of prototypes) in memory, and not more like other patterns in the repository will predictably fire “M”. This is pointing to external phenomena.

No it’s not, see above.

I think this idea of the “little man inside” working the brain with his brain, and so on… is precisely what CTM proposes to eliminate. It doesn’t presuppose it. Your (1) above certainly doesn’t presuppose such. CTM identifies the intuitive language commonly superimposed on cognition – an “inner eye” looking at “pictures in the brain”, for example – as misleading and erroneous. Mental imagery as internal presentations that are not “pictures” as we think of them externally, but rather are simply associative patterns. A software program that is processing images doesn’t “see” the input fed into it as “images”; “image” at the point of input disappears as part of the explanatory framework; it is just signal processing at that point, and there is thus no cascade of homonculi produced. The homunculus problem, then, is an artifact of illicit assumptions about how visual processing (or other forms of sensory integration) obtain.

The argument says it does. Again, see above. In order for brain states to be considered as symbols, *an intentional agent is required*. Therefore, in order for brain states to be considered symbolic, something like an intentional “little man inside” is required. But if this “little man inside” has cognition which is also to be considered symbolic yet wholly material, then he needs an intentional “little man inside” for said cognition to be symbolic, etc. etc.

I’m going through the Stanford Encyclopedia of Philosophy article on “Computation in Physical Systems”, which deals somewhat with this.
plato.stanford.edu/entries/computation-physicalsystems/
I will add more after the article is digested.

Are you looking for a comment from a hypothetical view, or from real life experiences, or inside of laboratory experiments?

It’s fine as a starting point. This is a complex and technical subject, and one of the challenges in making headway on it is making sure that our starting points aren’t problematic. Concepts can be grounded in percepts (external stimuli), but they can also be grounded in other concepts, which makes for some complicated “concept graphs”. Just as in computing a variable can “point” to something concrete – the scan status of a graphics card connected to the computer, say – other variables can point to other variables, which can point to other variables, etc. In computing this is called “indirection”, and while it’s a very basic feature of computing (and of cognition, I think, analogously), it produces complex constructions with ease. If we suppose a concept is just something hardwired to a particular stimulus, we won’t get anywhere. Not saying that’s your position, but it’s good to make sure.
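Here is a small sketch of what I mean by indirection (hypothetical names; the graphics-card example above would work the same way): one name is grounded in something “concrete”, and the rest just point through each other to it.

```python
# "Indirection": a name can point at a value, or at another name that points
# at a value, and so on, arbitrarily deep.

sensor_reading = 37                      # the "concrete" end of the chain
status = lambda: sensor_reading          # points at the reading
report = lambda: status()                # points at something that points at it
summary = lambda: report()               # ...and so on

print(summary())       # -> 37, resolved through several levels of indirection
sensor_reading = 99    # change the concrete end
print(summary())       # -> 99; every level still resolves to it
```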

Before I go any further, I’ll answer the following first:

By intentional I refer to the following definitions from dictionary.com:

a. pertaining to an appearance, phenomenon, or representation in the mind; phenomenal; representational.
b. pertaining to the capacity of the mind to refer to an existent or nonexistent object.
c. pointing beyond itself, as consciousness or a sign.

OK, that’s fine, but on its face, a computational mind has no problem satisfying any of those criteria. If you go read Searle, or Penrose, or others who are anti-computationalists or mysterians, you will see that the definition of “intentional” is highly problematic, and indeed at the core of the whole controversy regarding computational minds. The definitions they deploy I understand to be overly casual, but even so, they are far more restrictive than the above.

On a), a software program has no problem with representation; a capture from a webcam connected to the computer is a representation of the light patterns from the surrounding environment, for example.

On b) a chess playing program has no trouble simulating “possible futures” in exploring possible moves. Note also that the chess program is intentional as well in that it is biased in software toward “goal states”, the end results of winning the game.

On c) The security system where I work is connected to an integrated cluster of computers all running a software system that is highly aware of the incoming information from a variety of inputs – cameras, sensors, magnetic latches, mechanized electronic locks, etc. It is constantly monitoring input from phenomena outside itself and reacting to it.

Which is not to say that strong AI is thereby demonstrated. Rather, that the common definitions used are not adequate for pressing the case against the computational theory of mind. I won’t go dig up anything myself here, but if you want, we can digress a bit on this, because the precise meaning of “intention” and “intentionality” is really very much at the heart of this debate.
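As a toy illustration of the “possible futures” point in (b), here is a minimal goal-directed lookahead for a much simpler game than chess (my own sketch, nothing to do with any particular chess engine): two players alternately add 1 or 2, and whoever reaches 10 wins. The program is “biased toward goal states” only in the sense that its evaluation rewards futures that end in a win.

```python
# Minimal minimax-style lookahead over "possible futures" of a toy game:
# players alternately add 1 or 2 to a running total; reaching 10 wins.

def best_move(total, maximizing=True):
    if total >= 10:                              # goal state: the previous mover won
        return (-1 if maximizing else 1), None
    outcomes = []
    for m in (1, 2):
        if total + m <= 10:
            score, _ = best_move(total + m, not maximizing)
            outcomes.append((score, m))
    return max(outcomes) if maximizing else min(outcomes)

score, move = best_move(0)
print(score, move)   # the simulated futures pick the move that forces the goal
```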

By interpret I mean “to explain or tell the meaning of”

Same thing here. I don’t know, and I think you don’t either at this point, how to tell when a system “knows the meaning of” something. Think about the practical test you would apply to a candidate “mind”: what conditions would have to obtain for that system to demonstrate that it could “tell the meaning of” something? This is actually one of the most profound questions you are likely to wrestle with, in my experience. As a software developer who’s worked long years in the areas of custom computer languages as well as neural and adaptive networks, this is where real philosophy obtains. When you have to code it into an executable machine format, you can’t get by with fluff. This is a fascinating, hard, and I would say theology-slaying problem. More on that as we go, though, perhaps.

-TS

See, here you are positing a homunculus, a “little man” in the machine that does the seeing. This is the dualist impulse, which also posits a homunculus, the “immaterial little man” who sits behind the brain and does the real thinking, recognition, etc., and drives the brain in response.

If you look at how computer vision works, the parallels to human physiology and visual integration are strong. And they do implement a similarly layered approach which refines raw percepts into abstractions and approximations, distillations of key features identified in the input that are matched against a database of stored prototypes. The OCR machine identifies an “M” in much the same way you do, and as computer systems get more powerful and sophisticated, the parallels get stronger.

The salient point here is that you do not have a homunculus inside your brain that “sees” the lines, any more than an OCR machine does. It’s just pattern matching, visual chunking and prototyping. The mind is a machine at that level. We can go investigate some of the neurology on this for more, if you are interested.

An OCR machine “sees” the states of various photo-transistors or some such thing as input. We interpret the machine as seeing the letter “M”, as intentional agents. Intentional agents have designed the machine to operate in a certain way, and intentional agents are required to interpret the inputs and outputs of said machine.

Sure, but I’d say the same thing about humans. Nature has designed humans to operate in a certain way, or rather, nature tends to eliminate variants of humans that do not operate in “seeing-performative” ways.

The machines the bank uses go from scanned image to account adjustment (money changing hands!) with no humans involved, though. The interpretation and actions are captured and implemented computationally, there, for that task.

I think you must be thinking about some level of self-awareness of what the machine is doing, by the machine, rather than just seeing, recognizing, interpreting and acting based on rules and encoded heuristics.

“M” is only a symbol for the arrangement of matter in the machine’s memory because we, as intentional agents, have assigned it that meaning. Without an intentional agent to “point” the letter “M” to something, i.e. a letter in a person’s name, there is no inherent meaning in the digital state of the machine which detects what we consider to be “M” on a page, a check, etc.

As above, I don’t think this addresses the computational theory of mind, though, as in the computational model that’s just a machine with a limited “feature scope” compared to humans, humans being themselves designed by nature to “goal-seek” computationally around broader objectives than “scanned check processing”. Nature has programmed humans, in other words, with more complex machinery – meta-representationally adaptive cognition, for example, which is 100x overkill for a check-scanning machine – and more “fuzzy” goals, goals that change and grow over time. Computationally, though, this is just more levels of indirection, according to CTM.

Similarly, the idea that our thoughts are essentially symbols (or constructions out of symbols), with a medium of neural states (just as symbols can have other media, such as digital memory, ink on a page, sound waves, etc.), is incoherent. There is no inherent pointing of the neural states to anything without an intentional mind already present to interpret said neural states as symbolic.

Incoherent, how? If we arrive from the womb wired to build a conceptual model of the world around us, informed by the instincts we inherit, those neural states are physiologically intentional, or “goal-seeking” as we would say in computational terms. A human is wired to “goal-seek” towards survival and the satisfaction of needs (hunger, shelter, emotional connection, etc.), and our neural states are the medium by which the models directed toward those goals get realized.

Again, you have a homunculus stalking your complaints here – where is the “little man” behind the curtain, that “really sees”, and “really adopts intentions”? The answer, on the CTM paradigm, is that the question itself is badly formed. There is no homunculus; intentionality obtains in a constitutive sense from the neural machinery that is wired toward goal-seeking optimizations. The machine’s goals are the intentionality. There is no homunculus to appeal to, and this is the profound challenge CTM brings to dualism and anti-mechanical views of cognition.

-TS

In a fourth-year computer science class, we were asked to build some kind of AI system. I chose to implement a genetic algorithm. It was interesting to watch the final system learn to guide a spaceship and teach an ant to walk, but I could not help but feel that the whole thing was deceitful. It was like telling someone “This clock I built knows what time it is”, when in fact the clock is just mindlessly spinning gears and springs. The program contained intelligence because it had been created by a human mind, but the resulting system was of a much lower order and could hardly be described as thinking. The distinction between the coding and the execution seemed best described as something “supernatural” versus something “natural”. All machine intelligence (whether CTM, neural or genetic) suffers from such clock-like materiality. The following excerpt from a C.S. Lewis site makes the distinction clear (with apologies to Daniel Dennett).

"The argument [against a material mind] holds that if, as thoroughgoing naturalism entails, all of our thoughts are the effect of a physical cause, then there is no reason for assuming that they are also the consequent of a reasonable ground. Knowledge, however, is apprehended by reasoning from ground to consequent. Therefore, if naturalism were true, there would be no way of knowing it, or anything else not the direct result of a physical cause. Lewis asserts that by this logic, the statement “I have reason to believe naturalism is valid” is self-referentially incoherent in the same manner as the sentence “One of the words of this sentence does not have the meaning that it appears to have”, or the statement “I never tell the truth”. In each case, to assume the veracity of the conclusion would eliminate the possibility of valid grounds from which to reach it. To summarize the argument in the book, Lewis quotes J. B. S. Haldane who appeals to a similar line of reasoning. Haldane states “If my mental processes are determined wholly by the motions of atoms in my brain, I have no reason to suppose that my beliefs are true … and hence I have no reason for supposing my brain to be composed of atoms.”

I think the original post is simplistic. Of course arrangements of matter do not by themselves point to physical things, but it does not require an “overmind” to construct a mapping in which they do. Therefore my gripe is with point 5. What you have done in point 5 is reduce the argument to:
We can interpret symbols.
It takes a soul to interpret symbols.
We have a soul; tough luck, computers.

Computers already interpret arrangements of matter into symbols (the classic 1s and 0s), which are in turn converted into the text you are reading now. Sure, we designed computers, but we do not manipulate their rules once they are built; they follow the rules given to them without further input from our minds. Why could we not give them a camera feed and teach them that certain symbols sent by the camera correspond to “cat”? Going one step further, why not set it up such that the computer can generalize its own symbols?
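Both halves of that are easy to sketch (a toy of my own; the “camera” features here are just made-up numbers): first the fixed convention that turns 1s and 0s into text, then a trivial “teaching” step where stored, labelled examples let the machine map new input to “cat”.

```python
# 1s and 0s to text: a fixed convention (ASCII here) turns bit patterns
# into the characters you are reading.
bits = "011000110110000101110100"                       # three bytes
print("".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8)))  # -> cat

# "Teaching": store labelled examples, then map new camera-style input to the
# label of the nearest stored example.
labelled = {(0, 1, 1, 0): "cat", (1, 1, 0, 0): "dog"}   # made-up feature vectors

def classify(features):
    nearest = min(labelled, key=lambda k: sum(abs(a - b) for a, b in zip(k, features)))
    return labelled[nearest]

print(classify((0, 1, 1, 1)))                           # -> cat
```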

Touchstone,

Sorry for skipping over some points of your post; I am aiming for the salient parts as I am a little time-poor at the moment.

See, here you are positing a homunculus, a “little man” in the machine that does the seeing. This is the dualist impulse, which also posits a homunculus, the “immaterial little man” who sits behind the brain and does the real thinking, recognition, etc., and drives the brain in response.

If you look at how computer vision works, the parallels to human physiology and visual integration are strong. And they do implement a similarly layered approach which refines raw percepts into abstractions and approximations, distillations of key features identified in the input that are matched against a database of stored prototypes. The OCR machine identifies an “M” in much the same way you do, and as computer systems get more powerful and sophisticated, the parallels get stronger.

The salient point here is that you do not have a homunculus inside your brain that “sees” the lines, any more than an OCR machine does. It’s just pattern matching, visual chunking and prototyping. The mind is a machine at that level. We can go investigate some of the neurology on this for more, if you are interested.

You think that by simply saying that, when we see an “M” written on a page, we associate it with a letter in the English language, or a part of Michael’s name, etc., I am thereby positing a homunculus? Here I was thinking that I was simply stating the obvious, that when we see an “M” written on a page, we take that “M” to represent something.

Let me ask you a question. Do the molecules of ink on a page which constitute the pattern “M” have any inherent meaning (i.e. does it point to anything beyond itself) if there were no human beings? Of course the answer should be no. Therefore, clearly there is a difference between beholding the matter (i.e. the molecules on the page) and the symbol which is composed of matter. An OCR machine “beholds” the matter, but an intentional agent is required to consider a symbol, i.e. something that points beyond itself.

In the same way, if, as per CTM, certain thoughts are symbolic, then the only way they really can be symbolic is if an intentional agent interprets them as such. If this is correct, then it is the materialist who is stuck with a regress of intentional homunculi; there is no other way to be able to consider thought as composed of symbols.

Sure, but I’d say the same thing about humans. Nature has designed humans to operate in a certain way, or rather, nature tends to eliminate variants of humans that do not operate in “seeing-performative” ways.

What is “seeing-performative”? If natural selection produces brains which have thoughts which are composed of symbols, the argument still holds (as far as I can see, of course).

The machines the bank uses go from scanned image to account adjustment (money changing hands!) with no humans involved, though. The interpretation and actions are captured and implemented computationally, there, for that task.

Of course there are humans involved! Who created the machine? An intentional agent (i.e. a human being) interprets both the inputs (i.e. that what the machine is reading is not merely ink on a page, or more basically photo-transistor states, but checks belonging to certain account holders) and the outputs (that a match has been made between a certain account holder and the name on a check). Just because they are not involved when the machine is running makes no difference to this fundamental point.

Again, you have a homunculus stalking your complaints here – where is the “little man” behind the curtain, that “really sees”, and “really adopts intentions”?

It’s as simple as this: Symbols require somebody to link the symbol with the thing symbolized, otherwise it is not a symbol, just meaningless matter. Therefore symbols, by their very definition, require somebody who “really adopts intentions”. This is a problem for the proponent of CTM who wants to call thoughts symbols. You are somehow projecting this onto my position, yet never have I promoted the idea that thoughts are composed of symbols. I’m arguing against that position, remember?

No problem. Theory of mind and computation are subjects I have dealt with a lot over many years, and like to discuss, but don’t mean to overwhelm. I’m fine just focusing on the parts you (or I) suppose are salient.

You think that by simply saying that, when we see an “M” written on a page, we associate it with a letter in the English language, or a part of Michael’s name, etc., I am thereby positing a homunculus? Here I was thinking that I was simply stating the obvious, that when we see an “M” written on a page, we take that “M” to represent something.

It’s the “obvious” part that makes it deceptive. This is, by the way, precisely what undermines the arguments of Searle, in my view, as he has much the same reaction; for all the very careful thinking he applies in the Chinese Room, the “obvious” trips him up, and undoes his argument. He supposes that intentionality and meaning are just… obvious. These are precisely the points that require the most clarity and definition, and which hide the mistakes until it is applied.

I don’t deny the “obviousness” of that, at face value. But if you break it down, you can see that you (or I) posit a homunculus in fact, and “obvious” makes it a blind spot for us that we skip over if we aren’t careful.

Look at this carefully: “when we see an “M” written on a page, we take that “M” to represent something”. That “take” in there jumps to an inner frame, and smuggles in the homunculus, who is the “mini-brain” doing the “taking”. On CTM, and other mechanistic models, the association neurologically is the meaning. The connection between percept and symbolic handle is constitutive of meaning. There is no further “taking” to be done, no homunculus to posit, there.

Let me ask you a question. Do the molecules of ink on a page which constitute the pattern “M” have any inherent meaning (i.e. does it point to anything beyond itself) if there were no human beings?

The simple answer is “no”. But that only works as an answer if we assume humans are the only possible possessors of mind. An “M” may have meaning to alien minds, or to mechanical/software/non-biological minds. If humans have all been exterminated by the machines, and there are no biological alien minds, it’s quite possible that English words would remain meaningful and semantically rich for the extant machine minds, if indeed machines can be minds.

Of course the answer should be no. Therefore, clearly there is a difference between beholding the matter (i.e. the molecules on the page) and the symbol which is composed of matter. An OCR machine “beholds” the matter, but an intentional agent is required to consider a symbol, i.e. something that points beyond itself.

I don’t see the difference. For a human:

  1. Vision provides the input through the eyes that presents an information pattern.
  2. The pattern is analyzed for familiar and identifiable features by the brain.
  3. The analyzed pattern is matched against a store of prototypes in memory.
  4. On a match, the visual input is then associated with whatever is associated with the matched prototype (an “M” is identified, and the input is associated with whatever “M” connects to in the brain).
  5. The “M” is “considered” by virtue of dereferencing the visual symbol “M”. We move from visual to conceptual semantic in this step.

For a machine:

  1. Computer vision provides input through a camera that presents an information pattern.
  2. The pattern is analyzed for familiar and identifiable features by the program.
  3. The analyzed pattern is matched against a store of prototypes in memory.
  4. On a match, the visual input is then associated with whatever is associated with the matched prototype (an “M” is identified, and the input is associated with whatever “M” connects to in the program).
  5. The “M” is “considered” by virtue of dereferencing the visual symbol “M”. We move from visual to conceptual semantic in this step.

For the machine, and for the brain, the context matters. If we are “reading” characters one at a time, then the “M” is semantically inert on its own if we are identifying letters that constitute words, which are the “semantic” containers. So letter recognition may not be the most useful example here, but word recognition (“man” instead of “M”) works just fine here, too. When the machine delineates the word “man”, using the same rule humans do – whitespace on either side, etc. – the symbol is identified and becomes the container for meaning.
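Compressing the machine-side steps above into one toy sketch (made-up data throughout; this is an illustration of the parallel, not working vision software):

```python
# input pattern -> feature analysis -> prototype match -> symbol -> concept

prototypes = {"man": (1, 1, 0), "mat": (1, 0, 1)}                    # step 3
concepts = {"man": "adult male human", "mat": "floor covering"}      # what each symbol connects to

def analyse(raw):                                                    # step 2
    return tuple(1 if x > 0.5 else 0 for x in raw)

def recognise(raw):
    features = analyse(raw)
    symbol = min(prototypes,                                         # step 4: best match
                 key=lambda s: sum(a != b for a, b in zip(prototypes[s], features)))
    return symbol, concepts[symbol]                                  # step 5: dereference

print(recognise((0.9, 0.8, 0.1)))   # step 1: "camera" input -> ('man', 'adult male human')
```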

What part of the parallel above do you disagree with?

-TS

In the same way, if, as per CTM, certain thoughts are symbolic, then the only way they really can be symbolic is if an intentional agent interprets them as such.

Yes, but that is not a problem for the idea that an intentional agent is doing symbolic manipulation when it “interprets them as such”. That is, to be an intentional agent is to engage in symbolic manipulation, which, on its face at least, is not a problem for computation.

There’s something mysterious/superstitious about the way you use “interpret”, and that’s why I want to press on that for more detail. As I understand it, it’s complex and multi-layered, and in humans it is self-referential, which produces what Douglas Hofstadter has named “strange loops”, but for all that, there’s nothing superstitious or magical or supernatural implicated by any of that. It’s just a complex mechanism.

If this is correct, then it is the materialist who is stuck with a regress of intentional homunculi; there is no other way to be able to consider thought as composed of symbols.

CTM denies that what you call “interpretation” is something distinct from “beholding” (or just “association”, if you want to refer to it that way). Visual integration and pattern matching is an interpretative process, and constitutive of meaning, all on its own. It doesn’t call for a “little me” to interpret the results. The mechanical associations are the interpretation at that level. Visual integration and pattern matching is symbolic manipulation itself, on CTM. That doesn’t mean that other layers of software working at a higher level cannot and do not use those symbols in their own heuristics. But the “beholding”, as you call it, is forward progress in symbolic manipulation all on its own. It doesn’t need a “little me” to finish its task. That’s an artifact of dualism.

What is “seeing-performative”? If natural selection produces brains which have thoughts which are composed of symbols, the argument still holds (as far as I can see, of course).

By that I mean that the individuals in question have feedback loops in their eyes and brains that model extra-mental reality better than those who don’t. For example, an evolutionarily isolated population of hominids may have very good visual acuity, and better pattern recognition, than another population in the same area. If they compete with each other for hunting food, all other things being equal, the better performance of the eyes and the visualization functions of the brain will tend to favor that population over the other group with poorer vision, just because better vision, giving rise to a more accurate model of what is what and where, provides an advantage toward survival goals – making the kill and feeding the tribe.

It is for this reason that we understand our senses to be highly optimized over time towards fidelity, insofar as that fidelity aids survival and propagation. Determining whether gods exist or not does not confer this direct advantage (although religious ritual all on its own may have some adaptive advantages for humans by virtue of the social trust and order it can foster). Good hearing, vision, spatial visualization, “what-if” speculative reasoning, etc. do. That we have these, and that we are here having survived the eons, is warrant for understanding these faculties to be basically trustworthy and performative.

Of course there are humans involved! Who created the machine? An intentional agent (i.e. a human being) interprets both the inputs (i.e. that what the machine is reading is not ink on a page, or more basically, photo-transistor states, but checks of certain account holders) and the outputs (a match has been made between a certain account holder and the name on a check). Just because they are not involved when the machine is running makes no difference to this fundamental point.

Well, that’s a different question, I grant. But it’s ground gained for me if we understand that, even if it is designed, a machine can in principle do these “mental” things. If we agree that computational minds are plausible in principle, we are getting somewhere, and can move on to the question of what “being designed” means and entails.

-TS
