A primer on intentionality and the mind/body problem

There’s been a flurry of posts over on SHAFT’s website, all having some connection to the question of whether science can disprove God’s existence, and I muddied the waters further by bringing up the problem of intentionality. But I did not take the time to explain what the problem is. So I thought I’d make some attempt to do so here.

A cool feature of mentality — and maybe the crucial one — is that ideas can be “about” things. They can be representational. But how does an idea, or a thought, or a word, or a sentence, manage to be “about” something else? That, fundamentally, is the problem of intentionality. (Note: it has nothing to do with “intending to do something,” or having “good/bad intentions.” Different matter altogether.) It’s a question of meaning.

Initially, it seems like intentionality is a problem for materialists. For how does some hunk of matter ever come to be “about” some other hunk of matter? We can complicate the hunks by putting them into activity, and into causal relation with the hunks of matter they are supposed to be about, but it is still puzzling how one dynamic system can be about another. It is for this reason that some philosophers have thought materialism can’t handle intentionality, and so they have posited something special (special properties, capacities, or substances) in order to explain aboutness.

In the early 1960s, W.V. Quine worked through a careful thought experiment meant to show that determinate meanings, or intentionality, cannot simply surface out of physical behavior. His thought experiment featured two linguists who encounter a group of people speaking a language no outsider has come across before. He argued that the two linguists could come up with two very different translation manuals, each of which does a perfect job of capturing what the people say and do. (So, for example, the term “Gavagai” could be equally well translated both as “Lo, a rabbit!” and as “Look — undetached rabbit parts!”). But Quine didn’t take the conclusion to be that we need to invent some special stuff to settle the matter, since he was a hard-headed materialist (except when it came to logic). Instead, he concluded that there was no fact of the matter about which translation was right. This result is called “the indeterminacy of translation.”

One may agree or not with Quine’s conclusion. But his thought experiment seems quite sound: no amount of physical behavior can be interpreted in only one way. There are always alternative and equally apt interpretations.

The same goes for computers. You can’t read a unique program off the behavior of the machine. (You can always come up with some program, but you can always come up with more than one.) The question about what the program really is cannot be settled empirically. And the same, it seems, goes for human beings and their behavior, since human beings are just like super complicated computers.
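
Here is a toy illustration of that underdetermination (a sketch in Python, with both functions invented for the example): they agree on every input you could ever test, so observed behavior alone never settles which program the machine is “really” running.

    # Two different "programs" with identical observable behavior: for every
    # non-negative integer n they return the same number, yet one "squares n"
    # and the other "adds up the first n odd numbers."
    def square_directly(n):
        return n * n

    def sum_of_odds(n):
        return sum(2 * k + 1 for k in range(n))

    # Watching inputs and outputs alone cannot tell them apart.
    assert all(square_directly(n) == sum_of_odds(n) for n in range(1000))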

This seems like a puzzling conclusion, since don’t we all actually mean something when we say something or think about something? The indeterminacy of translation does not seem to jibe with first-person experience. So how do we get at least the appearance of determinacy of meaning out of a fundamental indeterminacy (if materialism is true, and we don’t call in special stuff to solve the mystery for us)? Indeed: how do we get any appearance of meaning at all?

That’s the question materialists have to answer. I’m not saying they can’t answer it, but I am saying it is a toughie.

The question was dramatized by John Searle’s “Chinese Room” thought experiment. So: you are in a room, and your job is to take inputs in the form of written Chinese, look them up in a great big rule book, find an appropriate Chinese response, and return it as output. Anyone outside the room says “Hey! The guy in there understands Chinese!” But there is no real understanding of Chinese anywhere in the room. So, Searle concludes, the behavior of “understanding” does not constitute genuine understanding.
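
For concreteness, here is a minimal sketch of the room’s mechanics (the rule-book entries are invented placeholders): the lookup produces sensible-looking replies even though nothing in the code understands a word of Chinese.

    # A toy "Chinese Room": a rule book pairs incoming messages with canned
    # replies. Nothing here understands what any of the strings mean.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "Nice today."
    }

    def the_room(message):
        # Pure symbol shuffling: match the input, hand back the paired output.
        return RULE_BOOK.get(message, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(the_room("你好吗？"))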

There’s more to say, maybe in a part 2 of this post, but I’ll leave it at that for now.

Author: Huenemann

Curious about the ways humans use their minds and hearts to distract themselves from the meaninglessness of life.

22 thoughts on “A primer on intentionality and the mind/body problem”

  1. “The same goes for computers. You can’t read a unique program off the behavior of the machine. (You can always come up with some program, but you can always come up with more than one.)”

    If two programs behave in the same way (produce the same output when given a certain input), it really doesn’t matter that they are not unique. Mathematically, I can produce an infinite number of functions that map set A to set B in the same way. (As an aside, computer scientists further distinguish between equivalent programs by considering the time and space efficiency of the algorithms involved, obviously preferring those that are faster or use less memory – some algorithms would literally take until the heat death of the universe to finish). A small sketch of two such behaviorally equivalent functions appears at the end of this comment.

    As for the Chinese Room, one possible response is that even though the guy in the room doesn’t understand Chinese, the guy plus the room constitute a larger system that does. Together, they form a “mind” in a sense that can understand and respond in Chinese, effectively forming an artificial system that passes the Turing test. (Another aside–if the system were literally built of a guy looking up symbols and rules in filing cabinets and drawers, he’d probably need an enormously huge room, and take millions of years to produce an answer to a question).

    At least with regard to the question “can computers have true understanding?” it comes down to the problem of subjective experience. I have a favorite quote by the computer scientist Edsger Dijkstra that I usually bring up in this discussion: “The question of whether machines can think is about as relevant as the question of whether submarines can swim.”
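
    Going back to my first point, here is the small sketch of two behaviorally equivalent functions (in Python; both functions are invented for illustration): they compute exactly the same mapping, and only their efficiency tells them apart.

        # Two behaviorally equivalent functions: both return 0 + 1 + ... + n.
        def sum_by_formula(n):
            # Constant time: Gauss's closed form.
            return n * (n + 1) // 2

        def sum_by_loop(n):
            # Linear time: adds the terms one by one.
            total = 0
            for k in range(n + 1):
                total += k
            return total

        # Identical input/output behavior; only efficiency distinguishes them.
        assert sum_by_formula(10_000) == sum_by_loop(10_000)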

  2. “If two programs behave in the same way (produce the same output when given a certain input), it really doesn’t matter that they are not unique.” – right, that’s Quine’s response. Probably good enough for practical purposes, but an evasion of philosophy, imho.

    To see how that reply to Searle fares (as well as others), see http://plato.stanford.edu/entries/chinese-room/.

    Does Dijkstra have more to say, or is it just a quip?

  3. “The question about what the program really is cannot be settled empirically.”

    In a narrow sense, it can. When a programmer writes a program and compiles it to machine code, the machine code is stored in memory as a specific pattern of bits. To the machine executing them, these bits have one and only one meaning, and always produce the same result (assuming the program doesn’t have any random elements or require outside input).

    These bits can be decompiled to the higher-level language again, and we can see something very nearly like what the original programmer wrote. We can tell whether he used a merge sort or a quick sort, for example. We can tell if he’s multiplying numbers directly, or using repeated addition. We can distinguish between otherwise equivalent solutions, and I might even be able to tell you whether he was a bad programmer or not. (A sketch of the multiplication example appears at the end of this comment.)

    I guess what you’re saying we can’t do, though, is decide “this program multiplies two numbers, then returns the square root of that result” without figuring out what is meant by “multiply” or “square root” or even “two”. Or why the original programmer used one method instead of the other?

    I suppose what I’m saying is that I’m still having a hard time seeing the problem.
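
    As promised above, here is a rough sketch of the decompilation point (it uses CPython’s standard dis module in place of real machine code, and both functions are invented for illustration): the outputs agree, but the compiled form still shows which method was used.

        import dis

        # Two ways to compute a * b. Their outputs agree, but the bytecode the
        # interpreter actually executes is visibly different for each.
        def multiply_directly(a, b):
            return a * b

        def multiply_by_repeated_addition(a, b):
            result = 0
            for _ in range(b):  # assumes b is a non-negative integer
                result += a
            return result

        assert multiply_directly(6, 7) == multiply_by_repeated_addition(6, 7)

        dis.dis(multiply_directly)              # a handful of instructions
        dis.dis(multiply_by_repeated_addition)  # a loop; clearly a different method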

  4. In a sense it’s a quip (I haven’t read any of his books), but I understand him to mean something a little deeper.

    He worked on systems that generated mathematical proofs, and some of them produced novel proofs to theorems that human mathematicians had been struggling with. I think what he’s effectively saying is “whether or not the computer is actually thinking, you can’t argue with the results.”

  5. “To the machine executing them, these bits have one and only one meaning” — or they have no meaning at all, and we’re just talking about electromagnetic forces at work. Meanings come into the picture, again, when we interpret the significance of that work. “Oh, it has just turned n into n+1.” “No it didn’t,” we can imagine someone else saying, “I didn’t see any adding. Some electrons just moved from A to B.”
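
    To make the point concrete, here is a small sketch (in Python, with four arbitrary bytes): the very same bits can be read as an integer, a floating-point number, or text, depending entirely on the interpretation we bring to them.

        import struct

        raw = b"\x42\x28\x00\x00"  # four bytes, chosen arbitrarily

        # The same bits under three different interpretations:
        as_int = struct.unpack(">i", raw)[0]    # big-endian signed integer -> 1109917696
        as_float = struct.unpack(">f", raw)[0]  # IEEE 754 single-precision -> 42.0
        as_text = raw.decode("latin-1")         # four one-byte characters

        # The bytes themselves don't "mean" any one of these; we do the interpreting.
        print(as_int, as_float, repr(as_text))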

  6. or how does some hunk of matter ever come to be “about” some other hunk of matter?

    I might be too literal here, but it doesn’t seem to me that there is matter that is about other matter. The examples you gave – language and computer programs (which are very similar) – aren’t matter (or, more precisely, a physical quantity, since energy would be equally valid; for shorthand I shall just use “matter”) but rather interpretations of arrangement that use matter as a medium (energy can similarly be used, hence the photons that illuminate the characters on my monitor).

    If I am understanding your problem definition correctly, I think that distinction is important, because understanding arrangement is a somewhat distinct (though in many cases closely related) set of questions from understanding matter. I guess I am independently asserting a dualism similar to Form/Shape (or Form/Matter), but using different (I’d posit, more current) vernacular.

    Arrangement, as opposed to matter, is context dependent. For example, simply stumbling across the word “desert” isn’t horribly helpful, since it could mean a place with little precipitation, or it could mean to be in dereliction of duty – either a noun or a verb, but either way meaning almost nothing in isolation (it could also mean my cat walked over my keyboard in a very specific and unlikely way). However, if I said “the Kurdish peacekeeper chose to desert while on patrol in the western desert,” the combined words do give context, and you can have a high degree of certainty about the meaning. The greater the lack of context, the less we can determine.

    What I have a hard time with, at least with all of the examples of intentionality given so far, is that they seem to ignore the fact that context has utility. Sure, simply looking at a diagram of a circuit in a particular state tells me almost nothing about the arrangement, which is why in scenarios like that I would endeavor to learn more and more about the context (the states leading up to the snapshot, whether the circuit was part of a bigger system, what environment that system was used in, etc.) until I could speak to the arrangement.

    Now I will grant a level of recursion – as I seek more meta-data, more context, I then have to understand the arrangement there, and so on. That said, unless we embrace the notion of Solipsism (I understand the point, but I wonder about its utility) we do end up with a common body of knowledge that we can draw from – knowledge where the confidence level in it is close enough to true that the margins of error or uncertainty have limited effect on the probability of correctness further down the chain. As we build and revise that common body of knowledge we can progress in understanding of further arrangements (and matter, for that, er, matter).

    Back to my example above – “the Kurdish peacekeeper chose to desert while on patrol in the western desert” – the interesting thing is that each word has its ambiguity removed by its context while simultaneously serving as context to remove the ambiguity of the other words that make up its own context. That basic premise is what makes up Google. (A crude sketch of it appears at the end of this comment.)

    Ultimately I think the notion of certainty (or conversely, margin of error) is how science addresses the Problem of Induction, and it is pertinent here as well. The distinction between scientific and theological or philosophical explanations of reality is that science does not try to establish an absolute truth, but rather a probable truth. I think the fallacy in the Problem of Induction is that while it asserts that science can’t give you absolute confidence (which is true), it doesn’t follow that you can derive no confidence. There are, after all, infinitely many numbers between zero and one, and just because your confidence isn’t one doesn’t mean it is anywhere near zero.
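
    Here is the crude sketch promised above (the context-word lists are invented for illustration): score each sense of “desert” by how many of its typical neighbors appear in the sentence, and the surrounding words settle the ambiguity.

        # A deliberately crude word-sense disambiguator for "desert".
        SENSES = {
            "arid place (noun)": {"sand", "dry", "western", "precipitation", "rainfall"},
            "abandon duty (verb)": {"chose", "duty", "peacekeeper", "soldier", "patrol"},
        }

        def guess_sense(sentence):
            words = set(sentence.lower().split())
            # Pick the sense whose typical context words overlap the sentence most.
            return max(SENSES, key=lambda sense: len(SENSES[sense] & words))

        print(guess_sense("the kurdish peacekeeper chose to desert while on patrol"))  # verb
        print(guess_sense("little precipitation falls in the western desert"))         # noun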

  7. I believe Searle is also a materialist, and his Chinese Room is mostly meant to make the case that whatever consciousness is, it cannot be reproduced through rote symbol manipulation (though it might be simulated), as digital computers running high level language programs do. He remains agnostic about whether another type of artificial mechanism might be able to do it.

    Thus the Chinese Room, simulated using an unimaginably complex Perl program, even one running on, say, a Linux cluster with 10,000 machines, zillions of classes, objects, and threads, would remain a simulation. There would be no “there there,” no sentience. If asked to translate “redness” would the Perl program “know” what red was (or any other “qualia”)?

    I recently read “On Intelligence” by Hawkins, and I’ve become convinced that the process we call “thinking” or intelligence will in fact be reproduced by machines in the not-too-distant future. The question is will these machines remain “zombies,” able to think but not feel.

  8. Just as an addendum to what I just said…

    There remains the possibility that we ARE all just Chinese Rooms, and that there is in fact no there there (or here here). Hence “feeling” would merely be an elaborate feedback mechanism between brain and body, and the fact that we cannot convey the meaning of “redness” without experiencing it would just be a cognitive inability on our part.

    …not that I particularly enjoy being a zombie.

  9. The question is will these machines remain “zombies,” able to think but not feel

    My wife is a neuroendocrinologist and a behavioral ecologist, so that certainly colors my perspective, but how much of feeling is anything more than awareness of our physiological reaction? For example, when our brain observes something of immediate risk and signals the adrenal glands to flood our bloodstream with adrenaline, how much of our resulting sense of fear is actually the reaction of our increased heart rate, our muscles tensing for fight or flight, the increased activity of our amygdala and various sensory components, etc.? We tend to romanticize our emotions as being abstract thought, but much of what we experience is entirely physiological. To that end I certainly think a computer would not feel in the same way unless specifically designed as such (I also don’t think there is a great deal of point to that – some of the worst human decisions are the result of emotional behaviors rather than logical thinking, why emulate that?)

  10. One final note and I’ll shut up.

    I agree with Quine that word translation has no “fact of the matter,” but I would argue that this implies qualia are all in our heads; this is exactly what is to be expected. Red to you is not red to me, although since color is such a fundamental aspect of our visual perception, we are all in close enough agreement to suppose that “redness” might be a “thing” (which confuses us). The further you get from primary perception, the more ambiguity is introduced, which paradoxically is easier for us to deal with. Take my favorite example, beauty: is that a “quale”? Beauty is so ambiguous that it is easier for us to dismiss it as pure relativism.

  11. Josh,
    For the most part, I agree with your intuition.

    Why emulate that? Because we’re humans and we’re egotists, of course. :-)

  12. As far as qualia go, I’m pretty much on the side of thinking they’re entirely relative and emergent, a subjective shorthand or brain-bookmark for a packet of sensory stimuli.

    A dog’s sensory world consists of all sorts of qualia I can’t imagine having, like smelling every item of open food in the whole house, or experiencing the scent of an argument, or hearing the buzzing of the wires in the walls. Bees, octopi, and plenty of other animals can see into the ultraviolet, picking up color qualia I have no reference for. To me that seems entirely relative, given the limits of your own sensory organs and past experience.

    1. These examples of qualia all have measurable frequencies and are produced by the oscillation of electromagnetic fields – the very fields that produce light and matter (including the human brain). This is a kind of relativity – special/general relativity – but the “entirely relative” you are alluding to seems to be more connected with evolutionary phenomena than with the existence of the continuum of frequencies themselves. The electromagnetic spectrum continues to ‘expand’ as technology allows us increased perceptive ability to detect these various frequencies. The fact that you see red seems to be a result of your physical ability to detect that frequency; the fact that there is no evolutionary fitness benefit for a dog who sees red doesn’t change the fact that photons align themselves and transmit energy at a particular frequency. Red is red whether you or I see it or not.

  13. “To that end I certainly think a computer would not feel in the same way unless specifically designed as such (I also don’t think there is a great deal of point to that – some of the worst human decisions are the result of emotional behaviors rather than logical thinking, why emulate that?)”

    This is complete speculation, but an arbitrarily advanced mobile robot would benefit from having some sort of “pain” mechanism to avoid damage – basically just an internal signal of “hey, that thing is dangerous.” That’s pretty much why pain evolved in the first place. (A bare-bones sketch of such a signal appears at the end of this comment.)

    Whether it would have any experience of *fear* in response to potential damage is a completely open question.
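
    The bare-bones sketch mentioned above (every name and threshold is invented, and this is only a cartoon of the idea): an internal “pain” score that rises with damage-indicating sensor readings and overrides the current task once it crosses a threshold.

        PAIN_THRESHOLD = 0.8

        def pain_level(temperature_c, motor_current_amps):
            # Squash a couple of hypothetical sensor readings into a 0..1 score.
            heat = min(max((temperature_c - 40.0) / 40.0, 0.0), 1.0)
            strain = min(max(motor_current_amps / 10.0, 0.0), 1.0)
            return max(heat, strain)

        def choose_action(planned_action, temperature_c, motor_current_amps):
            # "Pain" acts as a reflexive override, nothing more.
            if pain_level(temperature_c, motor_current_amps) > PAIN_THRESHOLD:
                return "back_off_and_cool_down"
            return planned_action

        print(choose_action("continue_task", temperature_c=95.0, motor_current_amps=2.0))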

  14. To put it another way, emotions are basically evolved shortcuts for quickly making certain kinds of common decisions in the face of common situations, especially where slower, more measured thought would likely get you killed. A component of emotion is experienced as a shift in internal state, both physiological and mental. In modern society, most emotional responses are the exact opposite of positive survival utility, but at one point they were useful.

    To end this comment in pretty much the same way as my last one, whether a robot or AI would need any sort of shortcut decision-making system, and whether “emotion” is the right word to use for that, is up for debate. (Robots that are meant to socially interact with humans will at least need to fake it, and the best way to fake emotional responses is to actually have them).

  15. I just read this in a blog post:

    “. . . For example, mentally rotating an imaginary object in your head uses some of the same brain machinery as actual perception of a real rotating 3D object. That came as a shock to some brain scientists, who previously assumed that the representations and operations were fairly different, rather than being nearly identical.”

    If actual sensory input first molds a neural subnetwork to recognize certain objects (sounds, emotional states, etc.) and recalling the memory later triggers the very same network in a similar way, that seems significant. It might be a small part of explaining how the brain generates “meaning” or forms conceptual attachments from one arbitrary concept or symbol to another. A specific neural pattern is a roughly direct mapping onto a physical experience (although I bet it’s not unique – my network for recognizing, say, chairs is likely different than yours). Once I have a neural net for a concept, it can be transformed, filtered, or attached to other specialized concept networks, generating a practically infinite number of recombinations.

    1. Me too. Though I would make a plug for Huenemann further motivating the problem – it seems to me some are still not quite getting it – before offering possible materialist solutions to it (though I think Huenemann is skeptical of materialist solutions anyway).

  16. Some quick observations, from my point of view.

    Josh is willing to call in something special — form, arrangement — to do the “meaning.” That’s Aristotle’s move, too. But I’m not sure it’s easier for form to “mean” than for matter to mean, unless we just build into form (by definition) the capacity to represent. But then aren’t we just saying that we are capable of meaning because we have this thing that renders us capable of meaning? That’s hardly an explanation!

    Hunt seems relatively untroubled by the problem of intentionality, and more troubled by the problem of qualia. Josh and James aren’t troubled by the problem of qualia, since they think that if you can see what the brain/body is doing when people experience qualia, you’ve explained the existence of qualia. Sandi, I think, has the same view, though she’s more interested in the underlying physics than the underlying physiology. (Though she’s also interested in what red means — and I don’t know how to really answer that question without pointing inward, with my invisible pointer, and saying “It means THIS!” James and Josh would probably give a story about brain states, but the question is whether any such story will explain why THIS (the thing I’m pointing to inside) is correlated with those brain states rather than some other THIS (“blue”). But, anyway, that’s all the problem of qualia, not the problem of intentionality.)

  17. (though I think Huenemann is skeptical of materialist solutions anyway)

    When it comes to things like consciousness, intentionality, qualia, etc. there are a lot of people who are skeptical of materialist solutions, and even the ones who claim to be hardcore materialists actually presuppose a meeting point between perception and experience, something Dennett ridicules with his Cartesian Theater. These are not actually solutions to anything; they amount to philosophically passing the buck.

    I’m also skeptical at this point, but if I had to commit, I would probably take the “hard” option and say that the Chinese Room actually does understand Chinese — as much as it can be said that we understand Chinese. For those who find this absurd, can you find an obvious contradiction that would disprove the claim? Note that incredulity that understandable parts working in concert can combine to form complex emergent phenomena does not constitute a contradiction. As I noted back on the SHAFT site, this is the fallacy of division.

    But, anyway, that’s all the problem of qualia, not the problem of intentionality.

    Perhaps intentionality vs. qualia could be related as something analogous to verb and noun? The two may be distinct concepts, but I suspect that to “solve” one is to solve the other.

  18. Josh is willing to call in something special — form, arrangement — to do the “meaning.”

    I don’t know if what I am doing is calling in something *special* per se; it is rather an observation that all of the examples you have been giving to try to explain unmeasurable meaning are distinct from a physical form. You mentioned matter about matter, but your examples don’t bring matter into play, hence the distinction.

    Essentially, it seems that the “meaning” being thrown around in the Quine and calculator examples is nothing special – an arrangement conveys a specific type of information if paired with the appropriate context. Outside of that context its meaning can change (or be removed completely). If you don’t have a sufficient grasp of the context, you can’t determine meaning. If you don’t have a grasp of the language, it is hard to interpret; if you don’t know the context of a circuit, it could do many things – that doesn’t really seem deeply problematic to me. Those aren’t problems that can’t be quantified – in fact computer science tackles issues like that regularly (albeit not revolutionarily – not a solved problem, but not an unsolvable problem). Anyway, I think I was caught up in those and spent a great deal of time discussing them, when they aren’t really demonstrative of the question.

    The Chinese room, and the question of “what is red,” are different questions to me; they seem to suggest the notion that what we think can’t actually be quantified. I see no evidence that that is actually the case, though. The notion that the person in the room doesn’t know Chinese seems entirely semantic to me – is the physical process he is employing to look up the response in books fundamentally different than the neural process that would take place in the brain (forget philosophy – do you have any empirical evidence to suggest a fundamentally different process in the brain)? Think of the process you go through when a word is on the tip of your tongue – are you not essentially indexing your memory looking for a hit? Is that really different than flipping through pages (or executing SQL wildcard queries, if you want to update the Chinese room problem to a modern setting)? I think the Chinese room is a romantic picture of what it is to know something.

    Per “what is red” – is this a deep philosophical question, or a very imprecise biological mechanism? We don’t think of red as an EM wavelength around the 650 nm range, but is that because we know that the meaning of red is more than that, or because we are really not good at actually knowing red as a result of our physical organization (neural tissue and how it records and recalls wavelengths of light)?

    I am a completely un-visual person – when I think about red I think about the word, and I can’t mentally visualize it at all (but boy can I remember gobs of information and correlate them, and visualize geometric and mathematical problems incredibly easily, and far more quickly than most people – I would hypothesize that my brain is optimized for speed of recall, at the expense of storing visual data). To me red really means nothing unless the photoreceptors in my eye absorb the correct wavelengths and my brain then correlates that with the word. My wife, however, has synaesthesia, and her knowledge of red is completely different – to her, thinking about the color invokes the color, and a host of other sensory memories she has associated with the color (red has a sound, a smell, a texture to her – she adds all sorts of extra information that has nothing to do with red to the meaning of red). The three-letter arrangement r-e-d invokes an entirely different set of mental responses, and the sound the word red makes when spoken is associated with a similarly different set of mental responses. Thirty seconds after seeing something red and having it hidden from me, I would be lucky to pick a color swatch at all close to the appropriate hue – she could do so with scary accuracy a year later, and tell you what she was wearing at the time.

    To me, red means almost nothing outside of active visual distinctions; to her it can mean a whole host of things depending on how she recalls it. Does the fact that we can’t convey what red means to each other indicate a deep, unquantifiable meaning, or is it actually indicative of weaknesses in how we observe, process, store, and recall certain information, and the fact that each of us exhibits that weakness in different ways? It is not at all clear to me that the supposed meaning of objects is anything more than biological tradeoffs in our brain that we are not conscious of.
