Intentionality and Meaning

In the previous post, I put forth the question of whether Husserl’s phenomenology could be of use to AI, weak or strong. This is a genuine question that I put out there to discuss…I have no thesis to support. Just curious to hear what you think.

In writing this post, I realized I’d have to break this down into several segments. From now on, I’ll be using Husserl for the most part, not Heidegger, to explain aspects of phenomenology…although I do like Heidegger’s readiness-to-hand and presence-at-hand distinction. But I prefer the bracketing methodology of Husserl for these purposes. I could see Maurice Merleau-Ponty coming into the picture, especially on the issue of AI embodiment, but I haven’t read him. (Perhaps those of you who have can weigh in. I’d love that.)

I might stray from Husserl too, setting out on my own. In other words, not everything here will be a lesson on Husserl. I don’t want to be encumbered by referring back to his works to verify what I’m saying, because that would make what should be a simple blog post an academic enterprise. I’m not feeling that game right now.


Conditions of experience

Phenomenology allows us to describe experience as it’s actually experienced. In doing so, we look for conditions that make experience possible—the constitution of meaning. These “rules” are not likely to be revelatory in describing what happens inside a biological brain. However, phenomenology could run parallel to neuroscience. After all, in order to know what’s going on in the brain, we must know what brain states correspond to—the so-called “subjective” experience, i.e. 1st person accounts. One might argue that 1st person accounts tend to miss the mark and fall into error, but we can’t assume that all 1st person accounts err on a grand scale. There must be a back and forth here, perhaps only a preliminary one at the outset. There is no mapping of the brain without knowing what it is we’re mapping.

Why should we care about a philosophy that sounds very much like navel-gazing? Well, this navel-gazing isn’t about the stuff we ordinarily think of as “subjective”: our favorite ice cream, the personal feelings we get when we listen to music…that stuff we generally agree is “a matter of taste.” Husserl’s direction is actually scientific (like, Wissenschaft scientific, “the sciences” scientific) in the sense that we are looking for elements of experience that are essential to it.

For example, those of you familiar with Kant’s Critique of Pure Reason may remember that space is the a priori outer form of experience, and time, the inner form. Causality was explained in this way too; everything we experience will be shaped by the categories because these are necessarily presupposed. (Kant also believed there were inexperience-able things “out there”—noumena—which cause phenomena. Let’s leave this aside.) Husserl goes further than Kant by setting forth a philosophy that seeks to ground the content of experiences individually, on a case-by-case basis. We’ll see how this works in later posts. Let’s just say for now that Husserl’s phenomenology is a lot more detailed and specific.

The very fact that phenomenology seeks out “rules” makes me wonder if it could apply to AI in some capacity, especially in areas that have to do with perception and learning. It might actually be preferable to bracket the “natural world”: “objective” reality, Kantian “things in themselves.” In a way, we’re looking at our own experience as if it were virtual reality. Like a computer.

However, phenomenologically speaking, we live in an environment that is not closed, which seems to imply that computers just aren’t like us. It seems that AI would have to progress significantly to allow for open-ended possibilities if we want to achieve those hard-to-accomplish tasks that for us seem basic. Does that which allows for creativity and learning in us preclude algorithmic AI? Maybe, maybe not. I’m not well-informed in this area, but it seems at the very least we’d have to know what makes our experience what it is in order to answer the question. Do we really take in new information just as it comes to us, spontaneously, or do we have to synthesize that information onto pre-existing charts? I suspect the latter, and I suspect if we could “crack the code” that allows us to understand our own learning methods, we’d be better able to do the same for AI (even if only in weak AI, or for certain specific goals).

In my last post I told you I’d explain how phenomenology operates by exploring Husserl’s intentionality. Let’s do phenomenology.


Intentionality

Husserl’s Intentionality is at the heart of his phenomenology. Intentionality is our directed-ness toward things, and it’s basically this: Consciousness is always consciousness about or of something. Pause here for a moment. Really stop and give this consideration. Much of phenomenology is reflection on experience. If you don’t do it, if you read articles on phenomenology and look for ways to summarize the logic, to relate to it only on the level of mere verbal cohesiveness, you’re missing a crucial aspect of it. The process is intuitive. You analyze the veracity of such statements as “consciousness is always consciousness about or of something” via intuition, reflection on your own experience.

Try not to think about anything. You might think you’ve experienced something like this once: a dreamless sleep, a coma perhaps. But were you conscious? No. So right now do this: Really try not to perceive anything, not to be aware of anything. You can close your eyes, close the windows, block out the sound, but time goes by. What happens? Well, if you’re like me, perhaps even more happens in your consciousness now that the senses are closed off. Ideas, daydreams, random thoughts…these are included as content, “about-ness.”

Those of you who meditate may raise objections, and these will be well taken. I, for my part, have never found myself to be conscious while being conscious of nothing, absolutely nothing.

It is the nature of our experience to be directed towards things or about things. (What I’m loosely calling “things” are not just objects of sense perception, but can include thoughts, ideas, memories, etc.) Intentionality is always there. In other words, it plays a pervasive role in every kind of experience: perceiving, judging, remembering, dreaming, screwing up, etc.

Imagine an omniscient camera (or recorder of some sort) that captures the infinity of experiences, all sense data, equally, without any directedness toward things, without signifying any particular experience. We are not even a time-limited “subjective” version of such a camera. We can speak of this omniscient experience just as we can speak of a square circle, but we can’t really picture it. That’s because, in an a-logical—non-logical—way, it is nonsense. Through intuition we know that in such a world, there would be no objects. No objects, no intentionality. No intentionality, no objects.*

You might’ve guessed by now that intentionality is broader than what we mean when we say, “I intend to fix this,” but includes such statements and meanings. Plus, intentionality is not attention, necessarily, but includes attention.

What intentionality does is acknowledge that there is always a foreground and background to experience. The background is a vague summation of the world. This world may not be the world of science, may not include the world ‘in itself’ (or it may, phenomenologically, but let’s not get too complicated here). Let’s say for now that, at a minimum, it’s a world that’s available for us, and therefore it coheres in a loose sense—it must. This background is what Husserl calls the “horizon.” It can be thought of as a potential experience, past or future, which has not yet shown itself or is not now in view. The horizon is also infinite (more on this later).

Intentionality is mostly passive as we go about our everyday lives, and on philosophical-phenomenological reflection we can “see” it operating, to some extent.

We quickly disregard what isn’t relevant to us at the moment while simultaneously knowing that those things that are currently irrelevant or out-of-focus—on the horizon—are possible experiences that could come into the foreground. Those background possibilities constitute our foreground experiences. We know what’s behind us in a loose sense. We have expectations about what’s behind us and those inform our foreground experiences.

I repeat, these foreground experiences are not necessarily “paying attention.” More often than not, we’re not trying to focus.

We grasp content in its context, leaping ahead to the most likely meaning or its totality, its unity, often unaware of other possible meanings or interpretations of the content, although further investigation may warrant a change. This is all done in a flash due to the intentional nature of our experience. The horizon, the background, is operating at the same time that we make the leap. The meanings of words/objects are constituted in time and situation, and this constitution is holistic, yet adaptable and subject to constraints.

Furthermore, the object or content of the experience is the way we look at it. Here’s a good example found in this article:

Consider the plight of poor Oedipus Rex. Oedipus despised the man he killed on the road from Delphi although he did not despise his own father; he desired to marry the Queen although he did not desire to marry his mother; and he loathed the murderer of King Laius before he came to loathe himself. But of course the man he killed was his father, the Queen was his mother, and he himself was the King’s murderer. How shall we describe the intentionality of such acts? Oedipus’ desire, for example, seems to have been directed toward Queen Jocasta, but not toward his mother. But Queen Jocasta and Oedipus’ mother were the very same person…Oedipus’ desire was therefore not simply “for” Jocasta: it was for Jocasta as conceived in a particular way. And the same sort of thing is true, not only of Oedipus’ mental states, but of everyone else’s as well…The intentionality of an act depends not just on which object the act represents but on a certain conception of the object represented.

The intentional conception of an X is not just an imposition of our minds on “facts” and therefore subject to error. (Remember, intentionality is always there, and it doesn’t always err. Error is just a clear way of showing the difference between fact and intention.) The example above demonstrates how meaning is constituted, but also how new conceptions can arise from new evidence. The meaning of Oedipus Rex would be entirely lost on us if we did not understand Oedipus’ intentions and the context which guided those intentions.

*Here I’m combining “object” and “content” for the sake of avoiding pedantry. We’ve established we’re not talking about noumena, so I hope you’ll excuse my sloppy language.


 

Meaning Constitution

Let’s look at our intentionality, our guiding mental behavior, linguistically.

Consider the sentence: The pig is in the pen.

I would be incredulous if you interpreted this sentence to mean, “There is a pig that is inside a writing instrument.” (Unless you happened to look down at the picture first, and you probably did because images tend to command attention. And there’s another topic for discussion…but anyway. Pretend you didn’t.)

The truth about the world, the background—that pigs don’t fit in writing instruments—informs your foreground interpretation. Yet you did not (I hope) have to analyze the sentence and determine all possible meanings of the word “pen” in order to arrive at your interpretation. You probably didn’t even think of writing instruments.

Consider the sentence: The pig is in the pen. Then imagine someone pointing to this while saying the sentence:

[image: a pig on a pen]

The pig is in the pen?

You might laugh and say, “Well, the pig is on the pen, or maybe the pig’s relationship to the pen is something about which we don’t wish to speculate.” Whatever the case may be, the sentence now has a different meaning constitution. You might wonder…why would the speaker say, “The pig is in the pen?” Does this person speak English? Is this person having a prepositional brain fart?

And the best question: Would you have considered “pen” in this case as signifying “an enclosure for animals”? Probably not in this situation.

Or maybe the speaker of the sentence is a moderately funny, punny person who has this whole theory about truth and language and you two have discussed this pig in the pen example on many occasions.* In this case, you might grasp both meanings of “pen” simultaneously to get the joke. You might only get the joke because you know this person makes this sort of joke on a regular basis.

As you can see, the holistic interpretation is adaptable and situational; even as it “runs ahead of itself,” it is subject to all sorts of constraints. In other words, intentionality is not just some willy-nilly imagining of the world, some sort of act of creation from nothing.
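The constraint-driven “leap” can be caricatured in code. Below is a deliberately crude sketch—not Husserl’s method, and nothing like real natural-language processing; every word list is invented for illustration. Each sense of “pen” carries its own little “background” of associated words, and the interpreter leaps to whichever sense the current context best supports.

```python
# Toy sketch: each sense of "pen" has an invented "background" of
# associated words; the interpreter leaps to the best-supported sense.

SENSES = {
    "enclosure": {"pig", "sheep", "farm", "gate", "mud"},
    "writing_instrument": {"ink", "paper", "write", "desk", "cap"},
}

def interpret(word_context: set[str]) -> str:
    """Choose the sense of 'pen' with the most contextual overlap."""
    return max(SENSES, key=lambda sense: len(SENSES[sense] & word_context))

# "The pig is in the pen": the farm background wins.
print(interpret({"pig", "is", "in", "the"}))  # → enclosure
```

Notice that the toy breaks down exactly where we don’t: point at a pig sitting on a ballpoint pen and the foreground evidence should override the lexical background, but the sketch has no way to take that in or to revise itself. That openness to correction is part of the adaptable, situational constitution described above.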

Also, this “leaping ahead” applies to all experience of objects, not just language interpretation. In my next post, I’ll go into further detail on this topic. Be on the lookout for eidos…

Ha ha. (Okay, not funny. But you’ll “see” what I mean later.)

*This is my husband’s example, which he used in a different context in his unpublished book on language and generosity (Donald Davidson’s “charity”).

Thoughts?

 

65 thoughts on “Intentionality and Meaning”

  1. The way “intentional” is used here seems similar to the models of awareness I’ve read in psychology books, and also similar to the attention schemas of Graziano’s attention schema theory of consciousness. Would you say that we have an intentional conception of ourselves? If so, we may be looking at confluence.

    I do think this is very relevant to AI research. Understanding how we model concepts, objects, people, etc, is something AI research is still working at. From what I’ve read, the human mind comes with a lot of pre-wired functionality. For example, infants prefer focusing on faces. We often can learn and model some things easier than others. The problem is that this doesn’t map cleanly; it tends to be programming that is heavily contingent on common experiences.

    The foreground / background distinction is interesting. I agree that as we learn things, we figure out ways to map them to our background. Put another way, we map incoming patterns as much as possible to existing patterns, sometimes more than is possible.

    All together a fascinating post Tina. Looking forward to the next one.


    • “Would you say that we have an intentional conception of ourselves?”

      I wasn’t sure what the answer was, so I looked it up online. I mostly focus on Husserl’s intentionality in terms of external objects/content. I found this article which sounds right, and says that the self is mostly given as pre-reflective self-awareness (although we can also be self-aware, which would mean reflection on ourselves as objects):

      http://plato.stanford.edu/entries/self-consciousness-phenomenological/

      To summarize, here’s an excerpt (the article is long and I didn’t read it all…the first few paragraphs should give you the idea):

      “To have a self-experience does not entail the apprehension of a special self-object; it does not entail the existence of a special experience of a self alongside other experiences but different from them. To be aware of oneself is not to capture a pure self that exists separately from the stream of experience, rather it is to be conscious of one’s experience in its implicit first-person mode of givenness. When Hume, in a famous passage in A Treatise of Human Nature, declares that he cannot find a self when he searches his experiences, but finds only particular perceptions or feelings (Hume 1739), it could be argued that he overlooks something in his analysis, namely the specific givenness of his own experiences. Indeed, he was looking only among his own experiences, and seemingly recognized them as his own, and could do so only on the basis of that immediate self-awareness that he seemed to miss. As C.O. Evans puts it: “[F]rom the fact that the self is not an object of experience it does not follow that it is non-experiential” (Evans 1970, 145). Accordingly, we should not think of the self, in this most basic sense, as a substance, or as some kind of ineffable transcendental precondition, or as a social construct that gets generated through time; rather it is an integral part of conscious life, with an immediate experiential character.

      One advantage of the phenomenological view is that it is capable of accounting for some degree of diachronic unity, without actually having to posit the self as a separate entity over and above the stream of consciousness (see the discussion of time-consciousness in section 3 below). Although we live through a number of different experiences, the experiencing itself remains a constant in regard to whose experience it is. This is not accounted for by a substantial self or a mental theater. There is no pure or empty field of consciousness upon which the concrete experiences subsequently make their entry. The field of experiencing is nothing apart from the specific experiences. Yet we are naturally inclined to distinguish the strict singularity of an experience from the continuous stream of changing experiences. What remains constant and consistent across these changes is the sense of ownership constituted by pre-reflective self-awareness. Only a being with this sense of ownership or mineness could go on to form concepts about herself, consider her own aims, ideals, and aspirations as her own, construct stories about herself, and plan and execute actions for which she will take responsibility.”

      Without referring to Husserl, I think this article makes sense and is in line with the phenomenological epoche (bracketing) of the natural world. If we exclude everything outside of experience in doing phenomenology, we can’t posit the self as some sort of noumenal substance; the self, if it exists, must be revealed within experience itself. It looks like various phenomenologists have different opinions on the matter of whether or not there is a self, but I think Husserl would say there is a self of sorts, but it’s tied to the “stream of consciousness” as a “mine-ness.” Although that term, “mine-ness,” sounds like Heidegger…I can’t remember. I don’t want to put words into Husserl’s mouth here…I’m not sure what he says and I’m loath to crack open “Ideas Pertaining to…” right now. (See, I can’t even bother myself to write out the full title.) Anyways, that article seemed fairly clear.

      “From what I’ve read, the human mind comes with a lot of pre-wired functionality. For example, infants prefer focusing on faces.”

      It is SO weird that you mention infants preferring to focus on faces. In the first draft of my last post on this AI-phenomenology series I talked at length about children’s artwork. I decided to cut all that because I realized I was confusing my art metaphor and the post was too long. The point I’d tried to make was that children leap to that which is meaningful to them, which is pretty bare and essential. Almost like an emoticon.

      The odd thing about this children’s art example is that I didn’t do any research into the matter at first, but I just supposed that children drew stick figures. I’d planned on using that fact to try to explain how the bare essence of a thing—its meaning—comes first, realism later. My example was a tree, represented by a circle with two straight lines for a trunk. Then I doubted myself and decided to look into the matter. I’m glad I did. When I googled children’s art to see what others had found, I was surprised to learn that very young children didn’t start out with stick figures, but instead drew faces with large eyes and—the weird part—limbs coming directly out of the head:

      https://en.wikipedia.org/wiki/Child_art

      That lack of attention to realism went much further than I would have guessed. My tree example was actually fairly sophisticated! I was so excited about it that I rambled on and on, the way I’m doing now…so I’ll stop.

      “Put another way, we map incoming patterns as much as possible to existing patterns, sometimes more than is possible.”

      I want to get into this more in my next post. I have a very elaborate (but hopefully fun) example of how “mapping to existing patterns” can be corrected by further investigation, so that the mapping itself is like a fluid process, to some extent.

      Thanks for reading! I hope there is confluence.


      • Thanks for the article excerpt. This looks like something I’m going to have to re-read carefully with frequent lookups of the terminology.

        I do think the self exists. It’s one of the few times I disagree with Hume. I think people are tempted to dismiss the self because it is not indivisible, but can be isolated into components and studied.

        Your mention of stick art is interesting. Before you mentioned the research, I remembered that when I was a child, the idea of drawing stick figures had to be shown to me. It didn’t occur intuitively. (Assuming of course this isn’t a false memory. I’m often surprised how inaccurate many of my early life memories are.)

        On mapping and children, my friends with young kids often complain because their kids like rewatching the same movie over and over again. It drives the parents crazy. I suspect that what the kids experience from the movies is very different from what we experience. We watch a movie, map everything to existing patterns, filtering out most of the details, and then we’re usually done with it. But kids are still building those background patterns. They continue to get more information on each viewing. I suspect it’s one reason they prefer cartoons, because there is less detail to take in.

        I recently saw a study that showed that newborns take in far more detail when they see something than we do, although the effect rapidly diminishes as they get older. Apparently a 9-month-old takes in only a fraction of what the newborn sees. Of course, this has to happen, otherwise we’d be overwhelmed with detail all the time.

        Looking forward to the next post!


        • I remember watching Alice in Wonderland over and over. I also remember getting the video in May because my parents forgot they’d bought it for me for Christmas. It turned out to be the best not Christmas Christmas present ever. It got to the point where I could “play” the movie in my head, and speak the lines before the characters. I still have that “painting the roses red” tune in my head. I remember speculating about weeds and whether that was just a label…(I liked dandelions, of course.) I don’t know what that fixation was about, but I hope it’s not a grand explanation of How I Am What I Am. 🙂

          Very true about remembering childhood. Who knows what really happened.

          “I recently saw a study that showed that newborns take in far more detail when they see something than we do, although the effect rapidly diminishes as they get older.”

          Fascinating. I wonder, how did they figure that out?


          • On memorizing Alice in Wonderland, the only movie I really had that experience with was the original Star Wars, which at the age of 10, I saw numerous times in the theater. I didn’t really get the opportunity as a child to do it with any other movie or TV show because VCRs didn’t come around until I was a teenager. But I do remember being able to recite every line in the original Star Wars movie. My friends only saw it as confirmation of my nerdiness.

            On babies, it has to do with how long they look at things. It’s been established from previous studies that they look at new or novel things longer. So to see what they find interesting, they watch to see what they look at and what they only glance at. I can’t find the original article (I thought for sure I had tweeted it, but I’m not finding it in my list), but here’s a HuffPost article on it: http://www.huffingtonpost.com/entry/babies-perceptual-constancy_us_56b8ac2ce4b08069c7a7e99e


            • If it makes you feel better, I have a friend who’d find your ability to recite every line in the original Star Wars movie very very very cool. Several friends, actually. But one in particular would probably make hysterical squealing noises while clapping her hands. And then you two could play a game over who could say the lines first.

              On that weird re-watching thing that children do, I wonder what the significance of that is. I suspect it has a lot to do with language learning through mimicking, or even behavior learning. But why do they care? That’s a mystery to me.

              I had these books that came with a cassette tape, and you could listen to the narrator read while reading the book yourself. She’d tell you when to turn the page, so you could really keep track and teach yourself to read. At the end of the tape, the narrator tells you to flip over the tape and record your own story. I never made my own story. I’d record myself reading the same story, then I’d listen to myself and check to see if I sounded like the narrator. If I didn’t know a word, I’d make up something in the moment and continue reading. Over time, I honed in on words that were difficult. I think that taught me how to read at an early age. My parents never read to me, but I suspect that multimedia aspect was an even greater learning experience since you don’t just sit there passively hearing a story, but are asked to participate. Then you can double check yourself as many times as you want without boring some poor adult. Same goes for watching a video over and over. You get to the point where you can recite the thing in your head, and just think of all the things you’ve learned. Of course, in those cases, there’s desire to repeat, which seems insane to us as adults, but to very young kids it’s fun I guess. As I said, I have no idea what compelled me to want to match the narrator’s voice. I hear that nowadays, Mr. Rogers and Sesame Street are boring to kids. Too slow. I don’t know if you ever watched those shows, but those were what I grew up watching. They were highly repetitive, especially Sesame St. which had a lot of songs in which various characters shouted out numbers and letters over and over. Why did I love the show? I have no idea. (And come to think of it, certain obnoxious TV commercials repeat things three times. You end up remembering them by sheer accident, including the phone number. How convenient.)

              I know I was “reading” the newspaper at age 4 (meaning, I could phonetically sound out the words, which my father made me do). I actually enjoyed doing it. (This was the same time I was doing the books on tape.) I hear reading at 4 is supposed to be an extraordinary thing, but I don’t think it is. There’s a school here in Tucson that teaches kids at this age (actually, they start even younger, preschool level, I think) various subjects in different languages. So for instance, they’ll do math in French or even Chinese. I saw it in person; it was pretty amazing. Kids at that age have no idea that what they’re doing is hard. They’re just little sponges.

              Another little anecdote about that age. In college there was a one day field trip which you could apply for to teach kids something. I decided to teach origami, and I got to go on the trip (I did it for the $). I had all levels from K-middle school. I taught them how to make an origami turtle and two different kinds of swans. The kindergarteners were by far the most capable of learning. One kid even memorized the steps after one run through, and we’re talking about many steps. He started teaching the other kids on the second run through, like my little teacher’s aide. Then we went through all three little creatures, no problem. They were so focused and just took it all in. The only problems they had were with dexterity, getting the folds clean. (And if you botch that up too much, the subsequent steps will be impossible, so I’d help them with that.) But my, they were so interested in pleasing me and in getting things right. It was wonderful. The older kids (2nd grade and up) were not so quick. We never made it past the easy swan, and I could tell they really wanted to make these, but just couldn’t remember the steps (they did better with dexterity, however). We spent most of the time discussing the steps: “So what happens next? A triangle, yes. What kind of triangle?” (This was supposed to be a geometry lesson, by the way.) The middle school kids were by far the worst in every regard. They were too busy flirting with each other and dealing with their evil little self-imposed hierarchies. They actually complained, “This is too hard.” Then I tried to shame them by showing them the turtle the kindergartener made after seeing me do it once. They didn’t care.

              On that infant study, I wonder how they determine that B and C are “most alike”? It seems to me that A and B might be different in terms of pixels, but B and C are also different in a different respect.


              • Sadly, my memorization of Star Wars faded long ago. Although even today if I watch that first movie, I still know exactly what’s going to happen every second, but I’ve lost the ability to mimic any scene on demand.

                On children re-watching, I do think it’s because they’re getting new information each time. It might be in how people act, or in the presented settings, or any other aspect of what’s there. Why do they care? At the risk of sounding reductive, because they’re evolutionarily programmed to learn new information that might benefit them in the future.

                I definitely watched Sesame Street and Mr. Rogers growing up. As to it being boring, kids have way more interesting things to watch now. When I was a kid, other than on Saturday morning, it was watch those shows or go outside and play. We didn’t have video games, video on demand of any type, or more than three channels.

                I personally learned to read with comic books. I tried to read before then, but if the book didn’t have pictures I had a hard time staying interested. But after a few years of reading comics, I suddenly found novels approachable and tore into them. My parents rarely ever read to me (that I can recall), although they did encourage me to read on my own.

                That’s interesting about the younger kids learning the easiest. I suspect the middle schoolers were also worried about losing face. Everyone’s insecure at that age and terrified of anyone else finding out about it.

                On the study images, don’t know. Reading about methods in popular science articles can be frustrating at times since it’s going through a reporter that often doesn’t really understand what they’re relaying.


  2. I like what you said about phenomenology as a complement to neuroscience. There is a third discipline, cognitive science, which looks at the “pre-existing charts” on which we pattern the information we take in. It seems that all three of these are needed if we are to make progress in understanding the mind, or to create an AI which possesses consciousness.
    Great post–hope you are feeling well these days.


  3. Is driving a car a good example of this? My attention (intention?) is normally concentrated on the road in front of me, but I am aware of a background of other objects (things that I have seen in my rear-view mirror, pedestrians walking along the side of the road, traffic lights that may change to red, …) There must be background processes in my mind constantly watching these objects, ready to switch my attention to them if something changes.


    • Yeah, that’s a good metaphor. In fact, I’m about to bring in a similar example. The rules of driving are sort of like “the background”…you know what to expect in a loose sense, but you’re also able to adapt to rule-breaking. (Hopefully.)


  4. Tina says: “I, for my part, have never found myself to be conscious while being conscious of nothing, absolutely nothing.” – Would you say, Tina, that to be conscious of something is to be within a memory of sorts – i.e. a memorised percept modelled onto some selected sensory phenomenon so as to become part of a (now amended) memory? The words are clumsy, but I think make sense. If so, then that implies our entire wakened life is a memory of what occurred ‘just then’. I think that conception is wrong, and that the reason you are not aware of nothing (there’s not nothing wrong with a double-negative!) is because awareness can indeed persist outside of being aware of, or about, any object. Yes, many meditators would (gently) argue with you, because they can’t deny their direct experience, their own non-phenomenal phenomenology. They know they can look back (in memory) at an objectless lucidity and know by inference that it persisted in time, and by inference that it was apprehended as itself, not as a representation of itself – the stuff of consciousness.

    Stanford says: “There is no pure or empty field of consciousness upon which the concrete experiences subsequently make their entry. The field of experiencing is nothing apart from the specific experiences.” – Well, I obviously have to say that I think that is entirely incorrect. There is a Tabula Rasa of awareness upon which consciousness is inscribed, so to speak. It can’t be known in ‘everyday’ consciousness, nor arrived at by thought. It is well documented in, say, Buddhist Psychology, for example, and the argument that introspection is an unreliable witness to itself doesn’t apply here, because nothing is being witnessed – there is just that objectless lucidity I mentioned above.

    Great article Tina, sorry to be such a bloody contrarian. And don’t you and Mike get me started on all this ‘self’ nonsense! 😉


    • “And don’t you and Mike get me started on all this ‘self’ nonsense! ;)”

      Haha…I imagined you coming in here with objections. Well, all I can do is clarify and hope you see what I mean. By “object” I don’t mean empirical object or remembered object necessarily; content can be a state of mind or something intangible. And if you’ve had a conscious experience of nothing, of course there’s nothing I can say to that. The whole thing with phenomenology is that it’s intuited, not argued. If you are given an experience of nothing, or of God, or whatever, there’s no way I can argue about it. In fact, I’m not sure phenomenologists would all agree on this whole issue of self, so some might agree with you. It’s an issue up for grabs.

      I’m starting to have issues with that quote, “The field of experiencing is nothing apart from the specific experiences.” In the course of the article, I had already grasped the point, so didn’t spend time scrutinizing it. In fact, I didn’t even read the whole thing. But that line poses the issue of the self in terms that seem too atomistic to me. Later, I believe, the article explains that the self is not specific experiences ‘added together,’ but that one line seems to imply the opposite.

      On memory. There are different kinds. I don’t know how many, but I imagine I could cook up quite a few categories. The immediate continual experience of some physical object while not being in a mode of reflection or introspection on your own experience of it…a great deal of experience, in other words…this isn’t what I’d call memory. “Damn, this apple is mealy. Should I throw it out? But that’s wasteful. Should I offer it to my husband?” That’s not memory of ‘apple’ ‘just then.’ That’s an experience of a mealy apple and, more to the point, an experience of a kind of mundane moral dilemma about wasting food vs. pawning off bad apples. I don’t mean to sound glib, because there’s also the horizon in which so much is going on. More on this in a sec.

      If I wanted to do phenomenology and talk about memory, I’d specify that it’s a sort of story-telling of some event that isn’t given to sense perception now. This sounds like a lot of technical talk, but if you think about it, it jibes with the common sense use of the word “memory.” I might have to get more specific here and this would take a lot of time, but basically, it’s a specific type of experience in which under ordinary circumstances I would not say out loud, “This apple is mealy,” then make a face, since I’m not currently eating the mealy apple. It could be a reflective philosophical story-telling, or it could be just, “I saw your glasses on the kitchen counter.” There could be an entire phenomenology of memory, of all different kinds, but for my part I wouldn’t include all of experience as memory. We can look at our entire past experience on the whole and say, “It’s all memory of what happened ‘just then.'” But that doesn’t ring true. That’s actually missing most of experience as it’s actually experienced. To say, “All experience is experience of what happened ‘just then'” is to miss the non-reflective mode we were in during the experience, plus a lot of other things. This is where the term “horizon” comes into play.

      The background of experience is all there at once, loosely, like a haze. The “just then” is too limiting and linear. We can’t add up all of our “just thens” and say, voila, there I am.

      It’s probably helpful to think of time in a Kantian way here…time isn’t “space-time” or measured. It’s not a clock. It’s not something we experience atomistically. (My spell check says that’s not a word.) 🙂 Let’s just say time is the internal form of experience. It’s kind of a perplexing way of saying that things happen…move, even when we close our eyes and block out the world. In the horizon, there’s all of time, all at once. We can break up time and objectify it: Yesterday, 3,000 years ago, 20,000 years into the future, etc., but in phenomenology, we should be aware that we’re doing this. In those cases we’re not talking about time in some constitutional way, but referring to it in some objectifying way (which is actually experienced all the, um, time. Like when I’m late for an appointment. Or making dinner plans. And this can be phenomenologically described.) Plus, the horizon as background is not something we’re consciously aware of. It’s available, it’s operating, and it’s possible to be made aware of something in the background in reflection. Objects that were in the background could in reflection come to the foreground as intended objects…like when I suddenly just happen to know where my husband’s glasses are when he asks for them, even though I didn’t consciously think about them. But the horizon as such is not the intended content.

      Whew. I hope I didn’t send you running. Sorry about that.


      • “Content can be a state of mind or something intangible. And if you’ve had a conscious experience of nothing, of course there’s nothing I can say to that.”

        Perhaps that first assertion may sound a bit loose? Do phenomenologists even bother talking about ‘minds’, let alone the ‘state’ of them? Anyway, if the content of mind can be synonymous with its state as you suggest, then one has to ask how one knows the state. And if one does know the state, then one knows an object – a representation – which itself is what we think of as consciousness i.e. being ‘with knowledge’/’con science’ of the state. That is obviously not what I mean when talking about an objectless awareness in which there is no knowledge or representation of anything. One may ask how one knows there is no object, and the unhelpfully simplistic answer is that it is not there! – mentation (and consciousness) always comes along with a feeling, so even in pre-verbal thought, there’s something like a vibrational feeling (like the subtle feel of running on railway tracks during a train journey) that something is going on. None of those feelings are there in the ‘state’ I’m talking about.

        “And if you’ve had a conscious experience of nothing, of course there’s nothing I can say to that.”

        No, I’ve not had that, because all conscious experience is knowledge – knowledge of objects – and I’m talking about what I call an objectless awareness. That must seem anathema to phenomenologists, particularly those from Stanford. It seems you would call this a ‘state of mind’, but again, what is a mind, or a state of mind, when known, outside of being a mental representation? So it’s impossible, by my lights, to have a ‘conscious experience of nothing’. Still, there is a lucid nothingness (or so I maintain), and it seems fitting to use a different term for it, such as objectless awareness, or objectless lucidity. Bear in mind that when you say I’ve ‘had’ this experience, that it’s something that’s been repeated countless times over three decades for me, and by countless others in different times and cultures too. It’s not anything special really, and it’s not confabulated, but it’s also not something that one can become familiar with in a lecture theatre or whilst reading a book on phenomenology. It’s as I jokingly said, a non-phenomenal phenomenology.

        “But that line poses the issue of the self in terms that seem too atomistic to me.”

        It’s probably unwise to venture the question right here and now, but what do you mean by a ‘self’ Tina? I take it you mean something that endures over time as a constant? I know you don’t mean a social construct – we all have those. Do you mean something other than a sort of morphing narrative construct that we hold in memory for the most part? What is your notion of a self?

        “Damn, this apple is mealy. Should I throw it out? But that’s wasteful. Should I offer it to my husband?” That’s not memory of ‘apple’ ‘just then’.

        Quite, it’s a verbalised memory of a pre-verbal thought which itself happened ‘just then’. If those words appeared silently in your or my head, they didn’t just come out of nowhere – yes? They’re bringing a liminal seed-thought up to a more conscious representation in verbal form as part of a wider endogram of consciousness i.e. me or you being here wondering about this mealy apple.

        “for my part I wouldn’t include all of experience as memory”

        You seem to be saying that memory is necessarily a kind of narrative Tina. We differ here, because I see percepts as memory, and they can be non-narrative (unless a single noun can be deemed as narrative). There can be the pure perception of colour or form, for example, which are not narrative constructs. But actually, if one accepts a representational model of that ‘mind’ thing – and I suspect you do – then all so-called ‘experience’ must be some mode of memory i.e. the representation (or conscious endogram) being a time-shifted, priority ordered and focused attention given to events already having occurred. That’s just what memories are.

        “Let’s just say time is the internal form of experience.”

        Yes, psychological time as against clock time – agreed. But to be tediously contrarian, I would say that it’s impossible to experience time. In the same way, it’s impossible to experience gravity. Time and gravity can be inferred, but never directly experienced outside of phenomena which we definitely don’t deem to be either time or gravity.

        Shall I sod off now? 😉


        • “Perhaps that first assertion may sound a bit loose?”

          It sure is. I didn’t mean for it to be a rigorous term in phenomenology. I meant it in the common sense way. “Happiness is a state of mind,” that sort of thing. Keep in, um, mind, though, that even here a “state of mind” could be many things which would be talked about in phenomenology: a mood (which Heidegger talks about in Being and Time, specifically angst and “bringing death closer”), an emotion, a lack of emotion or mood…(which is there in Heidegger: angst is a kind of anxiety ‘about’ nothing, the mother of all moods which discloses our authenticity…this is all stuff I’ve never gotten into much, so I’ll leave it at that). Basically, directedness toward something we’d ordinarily in common everyday language call non-external. So looking for the right answer to a math question is not something I’d call a “state of mind” even though surely there’s something in that activity that is a state of mind, strictly speaking. (For me, frustration, perhaps anger.)

          You’re right that in these cases there is content: the mood or emotion. All I meant here was to clarify that an object or content is not necessarily something like a cup or flower.

          Content of mind is not synonymous with “state of mind” when you take the phrase in the ordinary sense. There’s something very specific in this phrase when taken in the ordinary sense, and I think you’re right in questioning whether phenomenologists would use that phrase. The ordinary sense of the phrase relies on an ordinary philosophy—if you can call it that—which is more or less like dualism. Perhaps it’s not a worthy phrase to bring up in this context since it sounds very much like something that belongs on a bumper sticker. I probably should have stuck with “mood,” since that word is used.

          On phenomenologists talking about mind, I don’t see why not. At least, it’s not so clear to me that such a word would be off-limits in the way the phrase “state of mind” would be. But mind would not necessarily be “brain.” And it would not necessarily be something held up as consciousness that’s over and above the brain, qua immaterial substance. I doubt the word would be off-limits, though. The main thing that’s off-limits is posing some sort of causal theory based on the theorized existence of an external world that we don’t experience. But as you are probably aware, philosophers can use these words like ‘mind’ and turn them into a particular term with a particular meaning in the context of their work. I doubt the German philosophers would like using some word like ‘mind’…not long enough, no hyphens. 🙂 No, really, it’s a word that has too many different meanings for different people. I can see myself using the word in some strict way when writing something philosophical, but I don’t know if that would be wise of me. It wasn’t until I started blogging that I came to realize that quite a few people see the word “mind” and automatically think “brain.”

          “And if one does know the state, then one knows an object – a representation…”

          Here I’d want to pause over the word “representation”. An object or content is not necessarily a representation. This computer before me is not a representation. That’s one of those sticky words like ‘mind’ that can mean different things. Perhaps phenomenologists could use the word in a more specific way, but they’d have to be careful to avoid using it in a way that suggests noumena. I can see it being used in a very literal way, re-present, but I don’t have a clear idea of what that would be in phenomenology.

          “So it’s impossible, by my lights, to have a ‘conscious experience of nothing’.”

          So then we agree!

          “Still, there is a lucid nothingness (or so I maintain), and it seems fitting to use a different term for it, such as objectless awareness, or objectless lucidity.”

          I think Heidegger might agree with you on this, although you might not like the way he expresses what you’re calling ‘lucid awareness’. He calls this kind of awareness Dasein’s disclosure of authentic self (which turns out to be nothingness and so may fall in line with what you’re saying). For some reason, I’ve never been terribly interested in this part of Heidegger, so I can’t tell you much more about it, but you might look it up and find it interesting. (If you decide to do this, beware of people who confound Sartre and Heidegger.) I’ve always preferred to think of really basic stuff, like how we come to know an ashtray is an ashtray when there are infinite perspectives of it…but more on that later.

          “Do you mean something other than a sort of morphing narrative construct that we hold in memory for the most part? What is your notion of a self?”

          I have no idea what the ‘self’ is.

          From a phenomenological POV, the ‘self’ is not the narrative we give ourselves through memory (or mis-memory). That’s why the whole issue of memory doesn’t really make sense in phenomenology of self. This is what I was trying to explain with that long-winded explanation of internal time vs. memory of ‘just then,’ but I don’t think I was clear. Neither internal time nor memory gives us ‘self.’ Memories of ‘just thens’ cannot be added up and called the self, AND therefore this whole idea of a remembered narrative being the self is even further removed (since here, the ‘just thens’ are not added up, but sometimes fabricated).

          So what is the self in phenomenology? I’m not sure. I think people would have varying opinions.

          “I would say that it’s impossible to experience time.”

          If you add: “directly, in itself, by itself” then I agree. Time is like gravity in that way, with caveats.

          Kant calls internal time (or rather Time…he wouldn’t say ‘internal time’) the ‘form’ of internal experience. Think of ‘form’ here as more fundamental than concepts or categories…it’s a necessary condition of experience in a way that gravity is not. We can’t experience internal time as a particular thing or content, since it is a form of all things and all content. That would be sort of like trying to pull your eyes out in order to look at them.


          • This is really lovely of you Tina, to go to all this trouble to respond so knowledgeably to my (perhaps) rather irritating and pedantic observations. A couple of points I take issue with, but then this could go on forever, and be futile from both our points of view I suspect, as well as drifting outside the scope of your article, which I think I’ve already caused us to do. Sorry. Make your next post about music or dogs for chrissakes! Hope Geordie is thriving – as usual, a tickle behind the ear from H across the pond if you will.


            • No trouble at all! You’ve made me think more about what I’m saying, which is a good thing. I’m afraid I’m the one who’s been pedantic.

              Music or dogs, for sure. I’ve cracked the code for Joan Armatrading’s song, “No Love For Free,” but I haven’t memorized the chord patterns yet. They’re sort of strange. I’ve been considering starting up a new blog for guitar…just little tutorials for people who aren’t Serious Musicians. But that would require video and editing, which sounds like a lot of work. Geordie is much easier to write about.


          • Lovely woman, Joan Armatrading; I had the pleasure of meeting her once. Is she using open tunings? Quite possibly. I used to play her early albums over and over like a nutcase.

            But just to creep back into the room like a chin-scratching Columbo, why are you so convinced there is a self somewhere or other within or about us when you say “I have no idea what the ‘self’ is”?


            • You met her? Wow. I missed an opportunity to see her final tour…she came to Tucson not too long ago.

              Open tuning, yes. That was what I got stuck on for a while. That plus a capo on the 1st fret. Once those two things were solved, the song didn’t seem nearly as complicated.

              On the self, I wasn’t convinced of anything. The point was to dodge the point.

              But since you won’t let me dodge it, I do think there’s a self, although we’ve been through that one. It may be scaffolding for experience. I don’t equate it with a sort of personal narrative and then say, well, that’s nothing but a fabrication or construct. It may be “nothing more” than a concept that’s useful (and concepts are not nothing, in my opinion), it may be something necessary a la the Transcendental Unity of Apperception, it may be a “mineness”, it may be a way to avoid a schizoid breakdown…it may be something else…


  5. D’oh! It’s bad enough I’m still chewing on the article you wrote. Now I see I have to work through all the very interesting comments!

    So far I don’t have much to contribute. I do have a question about whether your use of “intention” is related to the “intension” and “extension” of words? The intension of a word is basically its abstract meaning. The extension is all the concrete instances.

    One passing comment about kids watching the same show over and over (and over and over… it really is amazing they can do it). FWIW, my guess has always been that the world is so filled with new input that watching a known quantity provides a respite from all that new input.

    Even older kids like hearing the same bedtime story over and over again, and from what I’ve seen it provides comfort. They know what to expect, and they will correct you when you get it wrong! 🙂

    [I read an interesting SF short story about an alien race that saw all of time as a whole. They remembered the future as well as the past. The story makes the point (even referring to kids repeating their stories) that knowing what will happen doesn’t “ruin” things, because sometimes everything is in the execution. Think about all the times you’ve heard the same poem or song.]

    It’s also very true about kids’ attention to detail. I’ve seen it over and over that a kid will walk into a room in their own house and immediately (and I mean immediately) spot something that’s changed. Makes it really hard to hide birthday presents.

    Adults? They often don’t notice it. Ever. Having a mind full of thoughts is apparently very distracting. 😮

    Back to chewing…


    • “I do have a question about whether your use of “intention” is related to the “intension” and “extension” of words? The intension of a word is basically its abstract meaning. The extension is all the concrete instances.”

      I don’t think so. The word might be traced to something like that, but I wouldn’t say Husserl uses it to distinguish abstract from concrete. I believe Husserl borrowed the term from Brentano who used ‘intentional’ in a very different way. I don’t recommend looking up Brentano’s version or you might get confused. There, from what I understand—and keep in mind I haven’t read Brentano—the intentional was very much ‘in the head’. Husserl is far from that.

      “…sometimes everything is in the execution. Think about all the times you’ve heard the same poem or song.”

      That is a possibility. I’d been thinking of the repetition in terms of learning, but there are obviously other motivations.

      Chew away! 🙂


      • Does anyone have a child we can borrow? 🙂

        We need to ask what they learned from the 137th viewing. I’m still betting the attraction is more the familiarity than the newness. Even adults enjoy the same songs and poems repeatedly. (Some watch movies, or read books, repeatedly.)


  6. What I eventually want to come back with is something along the lines of this:

    Okay, so I’m an AI researcher and you’re a philosopher, and we happen to share an airplane ride, and we get to talking. My question then becomes, how can this help me to explore AI? What might the first steps be?


    • Very good question. I think if we’re sitting on an airplane and you’re an AI researcher, I’d want to know what problem you’re working on, what type of AI you’re dealing with. I don’t know that phenomenology can apply at all, but I think it depends on the specific problem. And it may be hard to communicate how to deal with that problem in terms that an AI researcher could use. So I’d give you a description or example of experience (after all the preliminaries of methodology to get you to this sort of thinking) and see if you can make something of it.


    • I hope that doesn’t sound like I’m simply passing the question back to you. I really don’t know how phenomenology would fit into things with AI, especially since I don’t know how computers work. So even at that level—which, from what I understand, is pretty basic in terms of AI research—there’s a lot that I don’t know.


  7. Wow, what a mind bender. I like it. 🙂

    I think you’re probably correct that phenomenology is a big part of the difference between machine “cognition” and human cognition, but I need to think about that quite a bit more before I go farther.

    “Try not to think about anything.”

    This example of intentionality is fascinating. I am not claiming this is outside the realm of intentionality, but I think there might be a sort of meta-intentionality that comes with really intense experience. One of the reasons I like driving race cars and partaking in violent sports is that I find the experience extremely pure, almost to the point of dissociation.

    Let me give you an example of the single most exciting thing I’ve done in a car (mind out of the gutter, at least). There’s a racetrack in Northern Nevada with a long front straightaway. In my car, I was able to get to about 118 mph before I reached the first set of corners, a gentle s-turn that starts with a flick to the left immediately followed by a flick to the right. I discovered that I could take these turns at more than 110 mph if I quickly jumped off the throttle at the entrance to the first corner, waited for the rear wheels to start sliding and then jumped back on the throttle before the car could spin out.

    The consequence of failure was exiting the road backwards at 110 mph. The consequence of success was sweetness beyond description. My consciousness almost felt as if it was floating above my head, observing with complete indifference a person below who, without consciousness, was overflowing with sensation. The sound of my own heartbeat, the individual shocks from the rough edges of the track surface flowing through the steering wheel, the squirming of each tread block on each tire – it felt like time had slowed and left me in a state of concentration so profound that sensation and consciousness separated.

    I hope this isn’t a pointless rabbit hole, but your discussion of intentionality reminded me of this feeling.


    • “I think there might be a sort of meta-intentionality that comes with really intense experience.”

      Maybe. Or it may be that the intense experience has other objects than normal experiences—the dissociation itself is one. Because in a way, you’re hyper-aware of certain things: the sound of your heartbeat, the individual shocks from the rough edges of the track surface, etc. The things you’re not aware of, the things that are “on the horizon” of your awareness (but maybe shouldn’t be) are seen in retrospect, things like possibly dying, etc. 🙂 So what makes this experience unique is that those things that are usually not intended—hyper sensations—are now in the foreground. This is rare.

      And yet, suppose you didn’t have this experience. You would be able to infer things about the rough edges of the track surface, etc., almost theoretically, if you were inclined to do so. You might not experience them directly, but only in retrospect, maybe in memory or maybe just in thought, theoretically. Like, say, when I suddenly know where my husband’s glasses are because I either saw them or I know his behavior and the sort of places his things would be located if he were to ask me where something is. (I have an uncanny ability to locate HIS stuff, even when I didn’t see anything. Not mine, alas.) It’s the background knowledge of him, plus the fact that he can’t find X, plus the logical places he’d leave X (glasses not in the toilet, for instance). All these things are sort of hovering in the background, not intended, until he asks me where something is. Then I recall this background knowledge and bring it into the foreground.

      That feeling of being outside of yourself is a specific kind of experience that doesn’t seem to cross into a meta-intentionality as I see it, but simply a different experience. The normal stuff that would be intended is now the background, and the background the foreground. That said, I’ve never had such an experience. The only time I ever drove in a death-defying way was up icy hills in Vermont with a cliff on my side. I can’t say I felt any sort of euphoria as I slammed on the gas (which I had to do, otherwise I’d slide backwards, which I’d done before…luckily not over a cliff.) Usually at the top of the hill I’d feel my heart pounding, but not while driving.


        • Interesting article. And on second thought, I did have a time-slowing experience once. I remember when my husband rear-ended someone in the middle of nowhere in Navajo land. He’d just told me he was getting sleepy, I’d just told him to pull over at the next available opportunity so I could take over, I’d just turned on the radio (to wake him up)…but there was a lot of construction and he couldn’t pull over. This woman dealing with traffic control suddenly flipped the sign to “stop” and the person in front of us slammed on his brakes. We were all going about 25 mph, so the whole thing was pretty slow anyways. I can remember my husband slamming on the brakes and then time seemed to slow down. We were going so slow anyways that I had time to think all kinds of things before we hit the person in front of us. I remember thinking, “Oh, we’re totally gonna hit that truck, but it won’t be a big deal. We’re barely moving. It’ll be a minor dent. I hope we have our updated insurance papers. This is gonna suck. Ugh. Why is this person stopping?” Then BOOM. The sound seemed to come out of nowhere. The front of our car just crunches like a wad of paper, smoke is spilling out from the sides and I can’t see out of the front windshield anymore because our hood is blocking it. I didn’t even feel the impact, but apparently the reason our car crumpled like that was due to the hitch on the back of the truck in front of us. That truck, by the way, had no damage. None at all!

          So the strange thing about that experience was the feeling that we were gliding very slowly into the truck in front of us, and I wasn’t the least bit worried about it except insofar as I knew it would be a pain in the butt. (And it was. We had to get towed to Flagstaff which took hours…the tow truck had no air conditioning and I ended up sitting on my husband’s lap panting out the window in the middle of the summer.)

          On the other hand, maybe I was frightened and wasn’t aware of it? That’s an annoying thought.


          • Well, being in a life-threatening situation and feeling frightened are two different things. It’s the awareness of the former that causes (what I once heard called but can’t find in Google) hypertachia.

            People who watch scary movies get frightened, but don’t generally experience slowing of time.

            Just thought of this: It might require suddenness. I never experienced it during skydiving (when I was both in a life-threatening situation and pretty frightened about it). But it’s not sudden. You’re getting ready on the ground (thinking about it). You’ve got the plane ride to altitude (thinking about it the whole way). You’ve got the jump run (really thinking about now)…

  8. “The very fact that phenomenology seeks out ‘rules’ makes me wonder if it could apply to AI in some capacity,…”

    Yes, I think it might. Rules and computers go together rather well. Rules are essentially logical statements. Logic is math, and computers have a single skill: they compute numbers very fast and very precisely.

    “…especially in areas that have to do with perception and learning.”

    Not sure about those areas. Computer perception might be too different from human perception (both in terms of sensory systems and in the representative data models perceptions would create).

    I’m not clear on how learning ties in with phenomenology, yet, so hard to say either way on that one.

    “However, phenomenologically speaking, we live in an environment that is not closed, which seems to imply that computers just aren’t like us.”

    I’m not certain I understand you exactly…

    That computers act essentially like lookup tables and can’t do anything they aren’t programmed to do (i.e. something not in the “table”)?

    The argument that humans aren’t the same is a hard one to make (either way), but those who believe in hard AI are making that argument. Hard AI is possible because it is (on some level) indistinguishable from what goes on in a human brain.

    Conversely, if the distinction is meaningful, (software-based) hard AI may not be possible.

    “Does that which allows for creativity and learning in us preclude algorithmic AI?”

    I’m increasingly convinced it (and many other factors) does. But I could be wrong. Here, I’ll assume it doesn’t.

    “Do we really take in new information just as it comes to us, spontaneously, or do we have to synthesize that information onto pre-existing charts?”

    This is where I got a little confused about your meaning. Assuming Kant’s empirical realism, I’d say we definitely take in new information as we experience the physical world.

    I think I understand you to mean how we process that input — according to what rules, innate and learned? What is synthesized and what is analytical?

    If it turned out there was a set of universal codifiable rules governing how we framed experience, that would indeed be helpful. It’s pretty much what some AI researchers are doing (but without referencing phenomenology perhaps (or maybe they are, for all I know)).

    So,… re our airplane conversation, I’d probably start by telling you that (hard) AI breaks down into several general areas:

    {1} Attempts to replicate the physical network of the human brain in some sort of hardware. I don’t know of anyone attempting or planning this. Ironically, it seems the most likely to work since it only seeks to replicate meat with metal.

    {2} Attempts to replicate the physical network of the brain in some sort of software. In this case, neurons and their synaptic connections are simulated with a software model. Neural networks are this type, and they are a common form of research.

    {3} Attempts to replicate the function of the brain (in software) without regard to replicating its structure in any way. Here’s where phenomenology might be especially useful, because as you said:

    “These ‘rules’ are not likely to be revelatory in describing what happens inside a biological brain.”

    So, yes, if phenomenology produced a codifiable set of rules regarding experience and perception, they could be very useful!

    So the next question is: What kind of rules can it offer?

    [To avoid a mondo-comment, I’ll stop now. (I’ve only skimmed the intention sections so far, anyway. Seems like a distinct topic.)]

    • “Not sure about those areas. Computer perception might be too different from human perception (both in terms of sensory systems and in the representative data models perceptions would create).”

      I’m not sure either, not being a computer expert by any stretch of the imagination. If a camera is combined with temperature sensors, etc., and all of this combined with locomotion, and all of this combined with a system that processes that information to produce the same outcome (remember, weak AI is on my radar) then maybe?

      But your point about representative data models is well taken. In fact, I have a mega monkey wrench to throw into the whole phenomenology-helping-AI premise.

      “I’m not clear on how learning ties in with phenomenology, yet, so hard to say either way on that one.”

      The way we learn might have a lot to do with what I’ve been calling “background information”…but Husserl would call this the horizon. This is where the monkey wrench comes in.

      Quoting you quoting me: ““However, phenomenologically speaking, we live in an environment that is not closed, which seems to imply that computers just aren’t like us.”

      Quoting you: I’m not certain I understand you exactly…

      That computers act essentially like lookup tables and can’t do anything they aren’t programmed to do (i.e. something not in the “table”)?

      The argument that humans aren’t the same is a hard one to make (either way), but those who believe in hard AI are making that argument. Hard AI is possible because it is (on some level) indistinguishable from what goes on in a human brain.”

      As I see it, the human brain, as such, could be considered a closed system. But maybe we’re wrong in looking at it that way? After all, the brain is impacted by its environment in ways we have yet to understand.

      Besides, phenomenology isn’t necessarily looking at the brain, per se, but at the 1st person POV experience. Which most definitely is tied to environment.

      The idea that we are like computers—lookup tables or not, you’d have to tell me—might be true. But what we experience when we ‘leap ahead’ to the answer is not a matter of looking things up. That’s what the pig-pen example was meant to show. It may be that our brains are processing all these things “behind our backs,” that somewhere in our brains is a dictionary of the word ‘pen’ and all possible meanings, and our brains are simply processing this information so quickly that we’re not even aware of it. But even if our brains are doing this sort of processing, our experience is not of that. Which is in itself revealing.

      Suppose we want to replicate the way we think in AI, and we decide that the brain does in fact have a lookup table for “pen,” and we’re looking up everything at a rapid-fire speed. But we don’t want the AI to start spouting out wrong answers or stalling or misbehaving in any way…we don’t have this speed in AI yet. Then maybe we’d want to program it to use the lookup table only in circumstances in which the AI has reason to believe it has the wrong answer or can’t come up with a meaning on the fly. But how would that occur? I think AI would have to be able to take in the context, its environment, and build on that information to optimize its results (like Googling something, I suppose, although I’m not sure how relevancy works there or if it would be the same as what we’d want).

      So if our AI is on a farm, it would know that fact and “the pig is in the pen” would have an obvious correlation because we’d give it that correlation (animals, farms, pens, etc.). It wouldn’t have to search hard…(I’m thinking of something like having a page already up, saved in some way, like a link in my “Favorites” bar. It doesn’t have to Google since this info is saved and made readily available.)

      But in the instance of someone holding up a writing instrument, the AI would have to find a different correlation, which might mean ‘looking things up.’ It might have to be told “people generally don’t refer to pictures or representations as such, but instead refer to the thing being represented in the picture and assume you’ll know it’s a picture” (since the pig is obviously not a real pig). There might have to be a lot of this kind of linguistic analysis to help our AI understand exactly what is being referred to. So “pig” gets turned into “too small to be a real pig” or “this is a picture/representational object,” which gets turned into “representation of a pig.”

      And maybe once AI gets familiar enough with the fact that we don’t usually call pictures pictures, that TV is not just TV, but what’s being represented, etc., then maybe unnamed representations will become objects in the “Favorites” bar. So for instance, AI has camera-eyes or whatever, and is able to identify television sets and picture frames. Things inside these objects are what we talk about, not the frame. Then other representations get added on to this: “This is like when people talk about what’s on TV, but don’t mention the actual TV.” Etc. I hope I’m making sense?

      After all, we don’t want our AI to have a dialogue like this:

      Child: “Check out my bear!”
      AI: “That’s not a bear.”
      Child: “Is too. His name’s Berry Bear.”
      AI: “That is a plush toy, but it is not an animal (blah blah blah with a tedious definition)”
      Child: “…”

      I could be missing the mark here entirely. Like I said, I have no idea if computers work this way or could be made to work this way.

      “Conversely, if the distinction is meaningful, (software-based) hard AI may not be possible.”

      Hm. I dunno. And I guess the question of hard AI hasn’t been on my mind, necessarily. I also suppose and assume that this AI (above) has some sensory mechanisms that may not be exactly like ours, but that do similar work. Seeing and hearing seem important for many tasks. Not sure about being able to feel hot and cold, etc…I suppose it depends on what the AI is designed to do, what purpose it has.

      “Attempts to replicate the function of the brain (in software) without regard to replicating its structure in any way. Here’s where phenomenology might be especially useful…”

      Exactly what I was thinking (although I don’t know about software or how important that is). The idea is that it will take a long time to figure out the way our brains work. It seems we could do some soft AI work in the meantime.

      Quoting you quoting me: ““Do we really take in new information just as it comes to us, spontaneously, or do we have to synthesize that information onto pre-existing charts?”

      Quoting you: This is where I got a little confused about your meaning. Assuming Kant’s empirical realism, I’d say we definitely take in new information as we experience the physical world.

      I think I understand you to mean how we process that input — according to what rules, innate and learned? What is synthesized and what is analytical?

      If it turned out there was a set of universal codifiable rules governing how we framed experience, that would indeed be helpful. It’s pretty much what some AI researchers are doing (but without referencing phenomenology perhaps (or maybe they are, for all I know)).”

      I meant the question in a phenomenological sense, so Kant is applicable up to a certain point (until he gets into noumena). So the answer I would give is, yes, we do have to synthesize the information onto pre-existing charts. But the charts themselves are somewhat fluid. This I’m basing off of reflection on my own learning. In philosophy, we’re asked to think differently and it’s very uprooting. Each time I had to learn a new philosophy or way of thinking, I had to compare it to something, be given examples, etc. And I’ll tell you a secret—each time I learned a new philosophy, I almost always thought, “By God, he’s right!” Then later, “By God, he’s wrong! This new guy’s right!” It’s funny. We like to think we’re so skeptical and logical, but, in my experience anyways, learning begins in belief. Which corresponds, by the way, to the idea of synthesizing and looking for patterns, making correlations…even when there are none or even when the correlation is idiosyncratic. (Like my “backwards D shape but over here on the seventh fret and with this finger added” totally inefficient way of learning new chords).

      “So the next question is: What kind of rules can it offer?”

      Well, I’m not entirely sure I know the rules, but for instance, the linguistic one about representations. We don’t hold up a painting and say it’s a painting, generally. (Funny, I was just writing about something in my next post which brings this issue up in a sideways manner, but has nothing to do with this point.) Another rule could be: When on a farm, assume that the statement “The pig is in the pen” means an animal is inside an enclosure for animals, until evidence (like holding up a writing instrument) makes you reassess the meaning.

      “When on a farm…” That would have a set of rules too. What’s a farm?
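      For what it’s worth, a rule like that could be sketched as a toy program. Everything here—the contexts, the senses, the “evidence” trigger—is invented purely for illustration, not a claim about how any real system works:

```python
# A toy disambiguation rule for "pen": default to the sense the
# current context suggests, and reassess only when contrary
# evidence (like someone holding up a writing instrument) appears.
# Contexts, senses, and evidence strings are made-up examples.

SENSES = {
    "farm": "enclosure for animals",
    "office": "writing instrument",
}

def meaning_of_pen(context, evidence=None):
    # Contrary evidence overrides the contextual default.
    if evidence == "writing instrument shown":
        return "writing instrument"
    # Otherwise, fall back to the context's default sense.
    return SENSES.get(context, "unknown")
```

      So on the farm the answer comes “pre-loaded” (the Favorites-bar idea), and only the surprising case forces a reassessment.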

      The first example (the representations) is a matter of a common feature of intentionality in humans, really. Dogs too, actually. Geordie can recognize that the dog on the TV is not real (at least Geordie has learned to do so by trying to get behind the screen), but at first he ‘leapt ahead’ to the dog, that which was being represented. We can leap directly into the story in a different way, suspending disbelief, and still have background awareness that what we’re viewing isn’t actually happening. When we talk about representations, we rarely identify them as such. This is kind of complicated when you think about it. There are so many ways in which we talk about TV or paintings or representations of any sort. AI would have to be instructed about that, otherwise statements like, “Look! Sherlock just told Watson blah blah blah…but behind him there’s this blah…” would have to be restated for our stupid AI: “Sherlock, the television character on the television series titled, ‘Sherlock,’ which is derived from…(and so on)…is represented on television right now and he just told Watson, another character on television…” Etc. We’d want AI to ‘leap ahead’ with us, to get the meaning the way we do (or at least as fast as we do). This requires unearthing a lot of our assumptions, stuff we don’t think about often.

      The other day I tried to tell Siri to “text Alysha,” but Siri didn’t recognize that spelling. I tried saying it several times, as clearly as possible. Then I texted manually, and Siri pronounced “Alysha” correctly when it advised me that it would send the text. That last bit sort of annoyed me. If Siri could pronounce the name, why couldn’t it find the contact? (It pulls up my brother’s Korean name on a regular basis, but can’t pronounce it correctly!)

      So rules…there are just too many to enumerate here. But I hope this gives you an idea. I hope I’m making sense…can’t be sure…I woke up at 4am and I’m not sure I’m clear-headed.

      • “If a camera is combined with temperature sensors,…”

        Which don’t act like biological ones; none of those systems do. It’s possible we might invent sensory systems that operate like biological ones do, but the ones we have now do not.

        More to the point, how qualia are modeled in current AI systems is very different.

        We’re talking here about the crucial difference between a software model of a thing and the thing itself.

        “Quoting you quoting me:”

        Oh, dear! There’s a hall of mirrors! XD

        “As I see it, the human brain, as such, could be considered a closed system.”

        I’m still not entirely clear what you mean by “closed” and “open.”

        “The idea that we are like computers—lookup tables or not, you’d have to tell me—might be true.”

        I can’t tell you. It’s a, perhaps the, fundamental question in AI: Is the human mind enough like a software process that it can be replicated in software?

        A central point of my series last fall involves lasers. There is no software model that, on its own, can produce laser light. Some physical material capable of lasing is necessary.

        The idea that mind can be produced by software is a belief that — so far — is without fact (or much foundation, frankly).

        IF (and only if) it can be shown the human mind is essentially a computation, essentially a lookup table (but more on that in a bit), then software AI is possible.

        “But what we experience when we ‘leap ahead’ to the answer is not a matter of looking things up.”

        We don’t know that one way or the other. I agree it doesn’t “seem” that way to us consciously, but there’s no proof that isn’t exactly what’s happening under the hood.

        Your long example, in every case, just shows different levels of looking things up.

        Again (and this is not my opinion; this is basic computer science as it relates to AI), if (software-based) hard AI is possible, then the human mind has to essentially be a “lookup table,” but I can see that needs to be unpacked a bit.

        Consider a computer program that adds two numbers (which the user supplies). The program has no idea what numbers it will get.

        One way to implement such a program is to create a gigantic table with infinite rows and columns. The first number given the program indicates a row, the second a column. The intersection of the row and column is a cell of the table containing the answer. No math, just a lookup.

        The other way is to determine how “adding two numbers” works and to design an algorithm that implements the adding function. Effectively we pack the contents of an infinite table into a calculation, a process that acts identically.

        And because those act identically they are seen as essentially the same thing.

        So when we say (all) computers are just lookup devices, we’re including algorithms that accomplish looking things up through calculation.
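        To make that concrete, here’s a toy sketch of the two approaches. (A real table can’t literally be infinite, so this one only covers 0–99—that finiteness is my simplification, not part of the original point.)

```python
# Way 1: a precomputed lookup table. No math at run time --
# the first number picks a row, the second picks a column,
# and the cell at their intersection holds the answer.
TABLE = [[row + col for col in range(100)] for row in range(100)]

def add_by_lookup(a, b):
    return TABLE[a][b]

# Way 2: an algorithm that computes the answer on demand.
def add_by_algorithm(a, b):
    return a + b

# From the outside, the two are indistinguishable.
assert all(add_by_lookup(a, b) == add_by_algorithm(a, b)
           for a in range(100) for b in range(100))
```

        Since no caller could ever tell the two apart, it’s fair to treat the algorithm as a compressed lookup table.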

        Which brings us back to the main question: Does the human mind work like that? We don’t know. If it does, then AI ought to be eventually possible. If not, then it probably isn’t.

        “After all, we don’t want our AI to have a dialogue like this:”

        You can see now, perhaps, how that’s just a bad lookup function, one that doesn’t include the fact that animals have toy representations beloved by children. Just a matter of bad programming.

        “Like I said, I have no idea if computers work this way or could be made to work this way.”

        [grin] It does make the conversation a bit more involved. XD

        “Hm. I dunno.”

        That assertion (of mine) wasn’t opinion, but CS fact. If (hard) AI is possible, then the human mind has to be algorithmic (and hence just a (really awesome) lookup function).

        “And I guess the question of hard AI hasn’t been on my mind, necessarily.”

        If we’re talking about software with intention and meaning, we’re already there! 🙂

        “Exactly what I was thinking (although I don’t know about software or how important that is).”

        Software is the whole point here. The whole AI field is about software. (As I said, I don’t know of any project that seeks to replicate the mind by literally replicating the brain. That would be a strictly hardware-based project. All AI I know about involves software, so, yeah, pretty important! 😀 )

        “The idea is that it will take a long time to figure out the way our brains work. It seems we could do some soft AI work in the meantime.”

        And hard AI work, but soft AI is growing in leaps and bounds.

        “I meant the question in a phenomenological sense, so Kant is applicable up to a certain point (until he gets into noumena).”

        Yeah, I can see that. Perhaps his noumena fit in as the basic programming, the inherent a priori knowledge?

        “We like to think we’re so skeptical and logical, but, in my experience anyways, learning begins in belief. Which corresponds, by the way, to the idea of synthesizing and looking for patterns, making correlations…even when there are none or even when the correlation is idiosyncratic.”

        Heh, yeah. You ever have that flash where someone says something, and it’s about something you never really thought about, but when they say whatever it was it suddenly just locks in like, “D’oh! Yeah! That’s soooo true!”

        You just did that to me. I never thought about how we do embrace a new knowledge toy fairly uncritically at first. That rush of “love” focuses our attention perhaps? Then as we get to know it we begin to see its flaws.

        Kind of like how relationships with people work, actually. The ones we keep around are the ones we like and think aren’t too flawed.

        “We’d want AI to ‘leap ahead’ with us, to get the meaning the way we do (or at least as fast as we do).”

        Nothing you’ve described as ‘leap ahead’ is really anything more than an improved lookup engine. Google looks ahead as I type in a search phrase (and it always astonishes me that apparently a bunch of other people have searched for [how many grains of rice in a [cup|pound|bowl]] or whatever other bizarre thing I’m looking up).

        Siri is (from what I’ve heard) even better at it. Computers do lookup real good, and it’s a well-studied (computer) science at this point (searching and sorting are two major fields in CS, and they actually pre-date physical computers).

        “The other day I tried to tell Siri to ‘text Alysha,’ but Siri didn’t recognize that spelling. […] Then I texted manually, and Siri pronounced ‘Alysha’ correctly when it advised me that it would send the text.”

        She probably heard that verbal comma after “Alysha” and got confused… XD

        Seriously, though, the algorithms for pronouncing a known word are much, much easier than those for identifying spoken words. We’ve had the former for quite a while, but the latter has only recently gotten very good.

        My car has a voice interface that interacts with my iPod to play music. I was impressed that it played the album “11:11” (by Rodrigo y Gabriela) on verbal request. It also got Bruce Cockburn correct, which really impressed me (’cause it’s pronounced “coh-burn”).

        “So rules…there are just too many to enumerate here.”

        And that may be a problem. Early AI work tried to code lots and lots of rules into a rule-based system. But, as you indicate, that’s an almost impossible job because there are so many rules and so many variations and so many exceptions.

        Yet the human brain manages it, which suggests it’s not impossible.

        • “I’m still not entirely clear what you mean by “closed” and “open.””

          Hehe…I’m not sure I fully get it myself. I guess I meant that since we have this infinite horizon, and since we are able to know that and incorporate new knowledge from that, we’re open. As far as I know, computers don’t have that infinite horizon. They have a really big one (certainly the internet is huge), but infinite?

          And this is all in terms of what we’d probably call in ordinary language, the world or the universe.

          That said, if we’re talking about weak AI, like a robot that cleans the house and does chores, it might not need an infinite horizon. It might be enough to establish certain linguistic rules (like an improved Siri) and have it learn the space of the house. But this task in itself seems monumental.

          The ‘leap ahead’ is mostly like a lookup engine in the way I’ve described it, but I’m not sure that’s the way it works with us. Still, a lookup engine might be good enough for certain purposes.

          Gotta run…the opera calls. (Don Giovanni…)

          • “[S]ince we have this infinite horizon, and since we are able to know that and incorporate new knowledge from that, we’re open. As far as I know, computers don’t have that infinite horizon.”

            You seem to take as axiomatic something hard AI assumes is false, and I’m not sure if speaking phenomenologically muddies the waters here or not.

            The problem I’m having is that any useful definition of open-as-infinite applies pretty equally to both humans and computers. Likewise, any definition of closed-as-limited also equally applies to both.

            Computers are capable of incorporating new information, and the set of things they can be programmed to do is infinite, so they are “open” in those senses. There is an infinite number of apps I can load on my computer, and those apps can generate information they didn’t come with.

            A definition of “closed” might be that computers are just lookup devices; that is all they can do. (Anything that can be calculated can be looked up in a pre-calculated table, although I’m not sure this doesn’t dip below the phenomenological horizon into how computers work.)

            Both are “open” in a sense, and both are “closed” in a sense. The main reason there appears to be a difference is that human minds have evolved for millions of years, but computers aren’t even 100 years old, yet.

            A central premise of the belief in hard AI (such as is shared by several of your guests here) is that the human brain is really not different than a computer and currently only seems different due to our limited development of computers.

            “The ‘leap ahead’ is mostly like a lookup engine in the way I’ve described it, but I’m not sure that’s the way it works with us.”

            If that’s true, then (hard) AI may be impossible!

            Any claim that human minds work in a way computers cannot denies AI. (Of course, I wrote that whole series of posts making that exact claim and trying to support it, so I’m entirely sympathetic to the claim! 🙂 )

            As you mentioned, in soft AI, none of this applies. Soft AI is definitely nothing more than a lookup engine.

            But I keep running afoul of what’s phenomenological and what’s not… maybe it’ll clear up for me in your replies below…

            • “A central premise of the belief in hard AI (such as is shared by several of your guests here) is that the human brain is really not different than a computer and currently only seems different due to our limited development of computers.”

              Yeah, I can see that’s a prevalent belief. I’m agnostic on it. I certainly don’t know enough about the human brain or computers to compare the two.

              “You seem to take as axiomatic something hard AI assumes is false, and I’m not sure if speaking phenomenologically muddies the waters here or not.”

              I don’t know either, but I don’t mean to take anything about computers being “closed” as axiomatic. That was more of an assumption based on what I’ve heard elsewhere about computers, which I just took to be a fact. But not knowing about it, I’ll just say, whoops.

              “A definition of “closed” might be that computers are just lookup devices; that is all they can do. ”

              I’d assumed the “closed” aspect had to do with computers not spontaneously incorporating new information from their environment; in other words, whatever they “know” and can come to “know” must be programmed in advance. Maybe I’m saying the same thing but without the phrase “lookup device”?

              Whereas presumably we are more flexible in that we live in a world that basically never ends (until WE do, of course…but then who knows what happens.) Of course, there are those who say we are the same as computers with more advanced lookup devices. I don’t know. Maybe. Maybe not.

              Your definition of “closed” is probably better than mine, since you know computers.

              All I know is that if I’m a lookup device at every level and every state, that’s definitely really far under the hood, a hood that’s fairly well locked. It could be true, but it’s definitely not in my experience of the world.

              I would take issue with computational theories that don’t take into account that our environment and bodies impact our brains/minds. As if we’re simply programmed and that’s it (with the exception of brain damage causing system failure, etc.) As I see it, it seems it’s a two-way street, perhaps on a fundamental level. And if that’s the case, then how will we create hard AI without taking those factors into account? We could, theoretically, but it seems like we’d be overlooking a great part of what makes us who we are. But here I’m just spouting out ill-informed opinions. I don’t really know what I’m talking about. (And this is way outside of phenomenology, just to be clear.)

              Which is why I focus on soft AI here. I have no problem with trying to make a lookup device that does cool stuff. I’m not sure I want a conscious robot cleaning my house. On the other hand, it’s not clear that our experience (phenomenologically understood) can be translated into lookup devices that are so advanced, but who knows.

              • “I’m agnostic on it.”

                The idea that computers are “closed” and humans are “open” — counter to the claim of hard AI that they’re the same — is what I’m getting at. That’s not an agnostic position; that’s a position against AI.

                Maybe a problem here is that the current state of AI is way (way, way) too young to really know how different (or not) brains and computers are. Given the over-blown calculators we call computers today, yes, there’s a huge difference.

                But we’re comparing a model millions of years in development with one less than 100 years in development. It’s like comparing a paper airplane to a modern fighter jet.

                And, if I’m following (which I may well not be), phenomenology seeks to unpack what we experience consciously, so there might be a chicken-egg problem here.

                Phenomenology might apply once AI is actually conscious, but trying to apply it to produce consciousness in a metal brain might be too circular.

                Consider a simple system consisting of a camera, a computer, and software that analyzes the image sent to the computer. This software uses a reference image of a static, empty frame to detect movement. If the image changes from the reference, the software emits some kind of alarm.

                Does this system “watch” for movement? (In a sense, yes.) Does it experience anything? All it’s doing is comparing two sets of numbers for a mismatch exceeding a certain threshold.

                It’s not much different from your thermostat “experiencing” that it has gotten cold enough to turn on the furnace (or hot enough to turn on the A/C).
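                That comparison really is just arithmetic on two grids of numbers. A minimal sketch of the system described above, with made-up pixel values and an arbitrary threshold:

```python
# An image is just a grid of brightness numbers. "Watching" for
# movement means summing the pixel-by-pixel mismatch between the
# current frame and a reference frame of the empty scene, and
# raising an alarm when the mismatch exceeds a threshold.
# The 3x3 size, values, and threshold are arbitrary examples.

REFERENCE = [
    [10, 10, 10],
    [10, 10, 10],
    [10, 10, 10],
]

def movement_detected(frame, threshold=50):
    diff = sum(abs(p - r)
               for frow, rrow in zip(frame, REFERENCE)
               for p, r in zip(frow, rrow))
    return diff > threshold

# An unchanged scene: no alarm. Something bright enters: alarm.
assert not movement_detected([[10, 10, 10]] * 3)
assert movement_detected([[10, 10, 10], [10, 200, 10], [10, 10, 10]])
```

                Whatever “watching” amounts to here, it’s nothing but subtraction and a comparison.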

                Maybe we’re just too far away from a system capable of experiencing phenomena in any meaningful way? According to hard AI, the only difference between these systems and humans is a matter of degree.

                “I’d assumed the ‘closed’ aspect had to do with computers not spontaneously incorporating new information from its environment; …”

                Depending on how you define “spontaneously” and several other words in that sentence, they do. Or they don’t, but then neither do we. The problem here is that we don’t know if the human mind is just a computer (it might be). If it is, then all these differences are just matters of degree. They exist because computers are so young compared to humans.

                One might say a thermostat “spontaneously” decides to turn on the A/C because it has incorporated the new environmental information that the house has gotten too warm. If we imagine a newly installed device that’s never done that before, it’s a new action possible because of the innate and designed properties of that system.

                One might say the same thing about babies, although they tend to come with more features than your average thermostat. (On the other hand, thermostats are available in a wider range of colors and shapes.)
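                The thermostat’s whole repertoire fits in a few lines; the setpoints here are arbitrary examples:

```python
# A thermostat as a minimal "open" system: it incorporates new
# environmental information (the temperature) and acts on it
# according to its innate, designed properties.
# The setpoints are arbitrary example values in Fahrenheit.

def thermostat(temp_f, cool_above=78, heat_below=65):
    if temp_f > cool_above:
        return "A/C on"
    if temp_f < heat_below:
        return "furnace on"
    return "idle"
```

                The first hot day after installation, it “spontaneously” turns on the A/C—an action it has never taken before, driven entirely by new input meeting built-in rules.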

                “All I know is that if I’m a lookup device at every level and every state, that’s definitely really far under the hood, a hood that’s fairly well locked.”

                It’s possible AI is too crude so far (paper airplanes!) for there to be anything above the hood. We’re very much in the position of trying to figure out what’s even going on under the hood — what the pieces are.

                The question might be whether phenomenology can help us work backwards. Start from conscious states and try to work back towards mechanisms that cause them.

                Which, I think, is your point. 😀


                • “The question might be whether phenomenology can help us work backwards. Start from conscious states and try to work back towards mechanisms that cause them.
                  Which, I think, is your point. :D”

                  The working backwards part definitely is. The idea I had in mind was that, even if we don’t operate computationally in all cases or even at all, maybe we can translate what we experience into computation. Like an artist who has to look at things differently in order to reproduce a realistic image, but not reproduce the thing itself. So the aim is to dismiss what we think we see in order to get at the true experience. Does that metaphor fly?

                  I don’t assume that working from our conscious states through phenomenology will give us conscious AI. I’m not sure about the possibility of conscious AI. That’s where I’m agnostic. On closed vs. open, I didn’t realize that was a claim against hard AI…if so I’ll take those words back!

                  As far as phenomenology’s application, I don’t really know if it applies anywhere here. I see it applying more in the soft AI realm, since so much about us and consciousness is unclear. But maybe I’m wrong…I’m leaving that up to the experts. 🙂


        • Hi, I’m back…and I don’t think I’ll see Don Giovanni for a long time. They should leave off the subtitles so we have no idea what the characters are saying. At least it wasn’t Wagner.

          Anyways, there were a few things I didn’t address:

          ““But what we experience when we ‘leap ahead’ to the answer is not a matter of looking things up.”

          We don’t know that one way or the other. I agree it doesn’t “seem” that way to us consciously, but there’s no proof that isn’t exactly what’s happening under the hood.”

          I was speaking from the phenomenological POV. There, we don’t have any ‘under the hood.’ (That’s not technically correct, but I don’t want to get confusing.) So everything is experience-able. This is a premise of phenomenology that we’ve assumed, not asserted.

          “Perhaps his noumena fit in as the basic programming, the inherent a priori knowledge?”

          No, because noumena are by definition beyond experience. So phenomenology can’t touch them. However, the stuff about not knowing things about the world IS experienced in some capacity. Phenomenology broadens the scope of experience. Consider that a unicorn may not be experienced as a physical thing, but it can be experienced as an idea or in our imaginations or as a cartoon, etc. We say, “Unicorns don’t exist.” Phenomenology would say, “They do, they’re just not experienced in the same way as, say, a rose.”

          I’ll get into this stuff more in my next post. I think I probably need to.


          • “They should leave off the subtitles so we have no idea what the characters are saying.”

            It’s fun to watch foreign language TV that way!

            “So everything is experience-able. This is a premise of phenomenology that we’ve assumed, not asserted.”

            (Are you saying it’s an axiom not a thesis or conclusion? English language is so vague sometimes. One can assert things one assumes, so the difference isn’t clear. 🙂 )

            The thing is, don’t we know that isn’t true (that humans have a sub-conscious mind, that we don’t experience everything)? (It’s a key assumption in the Johari Window, although admittedly the Window is just a metaphor.)

            I’m confused! I’m starting to think I should just STFU until I read your next post(s)!


            • Ah, okay. I’ll try to be less vague. How about “everything is experience-able” because we’ve bracketed the “world in-itself” (Kant’s noumena). It’s essentially a tautology, but an important one. We’ll find that things we say we don’t experience are things we DO experience, in some sense, otherwise we wouldn’t be talking about them. It’s a matter of teasing out HOW we experience the thing that we suppose isn’t real or is beyond us.

              But wait. I don’t want to go there yet.

              Back to: we aren’t making claims about what’s real and what’s not. No ontological claims at all. There might in fact be stuff that exists “out there” forever un-experience-able. But we won’t worry ourselves about it. (And I really like this aspect of phenomenology. I’ve always felt the whole noumena thing was like wondering whether rocks are talking to us, but we just can’t hear them. Kind of boring.) So we have a description of experience as it’s experienced. In the context of the history of philosophy, it’s a methodological alteration that helps us understand what phenomenology is about.

              Consider phenomenology’s foundation placed on an “as if” basis. It’s not even a hypothesis because we aren’t seeking to find out ontological truths (in Husserl anyways). It’s a way of thinking which makes claims about the 1st person POV without making claims about reality in-itself. Does that make sense?

              On the sub-conscious mind, I could get into that but I’m afraid I’ll just confuse things. That’d be seriously advanced phenomenology. I’m really tempted. But I think I’ll STFU as well.

              The next post is taking forever because I’m trying to figure out how to organize the information so it’s not overwhelming. I tend to just dump a load of stuff into one post and expect everyone to jump through hoops. I’ve got to learn how to spread things out a bit.


              • “We’ll find that things we say we don’t experience are things we DO experience,”

                Oh, totally with ya on that! (That McDonald’s you didn’t notice as you drove past, for example. And, of course, also unicorns.)

                “No ontological claims at all.”

                Okay. I think I’m confused about noumena. Maybe I’m giving too much emphasis to the older definition of noumena that aligns them with Plato’s forms (and hence with the “charts” you mentioned). Kant seems to align them with his “thing-in-itself” as an inaccessible object but at the same time seems to link them as objects of pure reason.

                Philosophy is hard! 😮

                “Consider phenomenology’s foundation placed on an ‘as if’ basis.”

                That’s helpful, I’ll focus on that. It’s a concept often used in computer programming. System ‘A’ should work as if it were System ‘B’ — despite being a completely different system. It’s an approach based purely on system function, not mechanism.
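                A toy illustration of that as-if idea, sticking with the thread’s thermostat example (the class and function names are invented for illustration): two systems with completely different internal mechanisms that are interchangeable from the outside:

```python
class BimetallicThermostat:
    """Mechanism A: a (simulated) physical coil bends past a setpoint."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def too_warm(self, temp):
        return temp > self.setpoint

class DigitalThermostat:
    """Mechanism B: sample a sensor repeatedly, average, compare numbers."""
    def __init__(self, setpoint):
        self.setpoint = setpoint
    def too_warm(self, temp):
        readings = [temp, temp, temp]   # pretend repeated sensor reads
        return sum(readings) / len(readings) > self.setpoint

def cooling_needed(thermostat, temp):
    # This caller cares only about function, not mechanism:
    # either system works "as if" it were the other.
    return thermostat.too_warm(temp)

print(cooling_needed(BimetallicThermostat(72), 80))  # → True
print(cooling_needed(DigitalThermostat(72), 80))     # → True
```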


                • “Kant seems to align them with his “thing-in-itself” as an inaccessible object but at the same time seems to link them as objects of pure reason.”

                  Yeah, that’s it. Very different from Plato indeed. So to make it more concrete, noumena for Kant is stuff like freedom and God and the world as it exists in-itself. He considered these completely unknowable but important because we have to believe in them, according to him.

                  I know. Philosophy is hard. But mostly it’s the terminology which changes according to whom you’re talking about.

                  Also, I want to make sure you get that not all phenomenology is placed on an as-if basis. Heidegger’s is not. Husserl, in my opinion, made a perfect move in avoiding ontological claims. His method helps to clear away confusion, whereas Heidegger muddies the water. But that’s just me. 🙂


                  • Very different from Plato, but connected in the sense of being objects of reason?

                    The “charts” of perception you wrote about; they don’t connect at all with this, is that right?


                    • Well, for Plato these are objects of reason in a positive sense; for Kant in a negative. Plato didn’t put “the Good” (or God) out of reach, just really far away. 🙂

                      As far as charts of perception connecting, I don’t think so. As far as noumena goes, it’s helpful if you know what it is in the Kantian sense because then you can explain phenomenology as putting noumena aside.


  9. So,… Intention. I’m seeing two levels in your description.

    Steve’s driving a car analogy is a good example of the first level. A nice link there is that, when driving, you are always involved in some aspect of driving. There is always an object of intention.

    It made me think of a CPU. It might be devoted to this application or that application, but even when “idle,” it’s chugging along doing something. Mine is semi-aware even when “hibernating” since it responds to keyboard or mouse clicks (and makes the power light flash a slow sexy orange instead of steady kryptonite green).

    The “focus” of the CPU is always on (with computers, “in” is more accurate, but same thing) some function.

    I’ve heard the term “focus” used in AI to describe this level of intention. Exactly as with a CPU, an AI system is always doing something, even if that something is “just hanging out doing nothing” (which computer AI might be way better at than we are — modern CPUs can pretty much pull off entering a “totally meditative” state XD ).

    This kind of intention is fairly trivial from an AI perspective.

    The other level, as I grasp it, is the structuring of experience within intention (call it interpretation), and this supervenes on the background (which I take to be both innate and learned).

    Per the Oedipus Rex analysis, it also supervenes on immediate experience.

    [As an aside, a big part of my definition of sanity is the degree to which one’s background matches external reality.]

    The human brain is an awesome pattern-matching machine. (Trained to resolve subtle clues in nearby movements of shadow and leaf into: ARG! I’m being stalked by a tiger! Run away!)

    That’s what’s happening with that pig pen example, right? Words have multiple meanings; we know that. Sentence grammar usually disambiguates multiple word meanings. But not always. (“Time flies like an arrow. Fruit flies like a banana.”)
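    A crude sketch of that disambiguation (toy code; the senses and cue words are invented for illustration): context words vote for a sense of “pen,” and with no cues the word stays ambiguous:

```python
# Toy word-sense picker: "pen" resolved by surrounding words, the way
# sentence context usually (but not always) disambiguates for us.
SENSES = {
    "pen": {
        "writing instrument": {"ink", "write", "paper", "sign"},
        "animal enclosure": {"pig", "fence", "barn", "escaped"},
    },
}

def disambiguate(word, context_words):
    # Score each sense by how many of its cue words appear in context.
    scores = {sense: len(cues & context_words)
              for sense, cues in SENSES[word].items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "ambiguous"

print(disambiguate("pen", {"the", "pig", "got", "out"}))  # → animal enclosure
print(disambiguate("pen", {"sign", "in", "ink"}))         # → writing instrument
print(disambiguate("pen", {"free"}))                      # → ambiguous
```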

    [I had a friend who once sent out a resume that came with a “Free Pen!” And, sure enough, drawn right on each one was a nicely done drawing of a pig pen. So that popped into mind during your example. 😀 ]

    In AI, neural networks (which seek to replicate a brain’s network in software) are holistic pattern-matching machines very much in the fashion of the human brain. These are generally trained by inputs, just like humans are. (And, just like humans, they have innate programming, too.)
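    As a minimal example of being “trained by inputs,” here’s a toy single-neuron perceptron (a sketch, not any particular library’s API) that learns the AND pattern purely from examples:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """A single artificial neuron whose weights are adjusted by examples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
            err = target - out          # learn from each mismatch
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Train on the AND pattern: fire only when both inputs are on.
and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
learned = [1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in and_gate]
print(learned)  # → [0, 0, 0, 1]
```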

    So, yeah! The structure of intention-interpretation is definitely an AI challenge. As I said before, to the extent phenomenology can provide codifiable rules, it seems quite helpful.


    • “I’ve heard the term “focus” used in AI to describe this level of intention.”

      Interesting. That makes sense. I wonder if it would be more useful for AI to adopt phenomenological vocabulary to allow for more layers of “focus”?

      “[I had a friend who once sent out a resume that came with a “Free Pen!” And, sure enough, drawn right on each one was a nicely done drawing of a pig pen. So that popped into mind during your example. 😀 ]”

      Or the old joke, “I’ll draw you a bath.” Then you pull out a piece of paper and a pen…or should I say, ‘writing instrument’? 🙂

      “The other level, as I grasp it, is the structuring of experience within intention (call it interpretation), and this supervenes on the background (which I take to be both innate and learned).”

      I’d say there are many many many layers of intentionality. So the car driving metaphor is really good because most of us aren’t really paying attention in a “focused” way. In other words, we’re not thinking, “I should press my foot on the gas pedal just so to accelerate.” The pig-pen example is similar. We don’t think about the meaning of the word “pen” consciously, unless its meaning is for some reason brought to our attention. The word “focus” seems more like “pay attention” in an active way. So I’d say we focus on “pen” when someone gives us a context that isn’t familiar. Interpretation is usually on the gas-pedal level.

      On the background…we might not want to get into “innate” here. The background, as I’ll discuss later, is infinite. It’s a tricky one. It’s basically everything. Or maybe I should say Everything. There are things in the background which I don’t know, but know I can possibly know if I were to attend to the matter. For instance, what is the temperature of my right foot? What is a CPU? What is the time in Japan? Etc. The background is the whole world. There are things in the background that I know I don’t know anything about, things I don’t even know exist. So these not-yet-existing things for me are in the background as not-yet-existing, a very fuzzy way of saying I know my limitations (and that the horizon is infinite…there will always be something I don’t know, and I know that). Don’t kill me, please. 🙂

      Then there’s what we care about, what we are likely to find in the foreground as humans, on the whole, and this I haven’t heard much about. That’s where we might get into “innate.” (The TV example in the other response, for instance, and our tendency to refer to images by their content.) But “innate” would have to be grounded in a phenomenological way as meaning something like, “Generally true for us.” Not as “It’s in my DNA.” I’d probably abandon that word. It conjures up “in my head” too much.

      “The human brain is an awesome pattern-matching machine. (Trained to resolve subtle clues in nearby movements of shadow and leaf into: ARG! I’m being stalked by a tiger! Run away!)”

      Totally agree. And I think, while our pattern-matching instinct can lead us into error, it’s also the mechanism that allows us to learn. It’s also a synthesis of information which allows us to do things quickly, “leap ahead,” not look up everything and assess every possible meaning, every possible temperature, every possible shade, etc.


      • “Or the old joke, “I’ll draw you a bath.””

        Or recently on Angie Tribeca their police captain had Angie and her partner into his office where he says, “Take a chair.” And they both pick up a chair and start to leave… 😀

        (If you liked the Police Story TV series, or the Naked Gun movies, you’d like Angie Tribeca.)

        “I wonder if it would be more useful for AI to adopt phenomenological vocabulary to allow for more layers of ‘focus’?”

        It would alter the meaning, since only one thing can have the focus.

        “I’d say there are many many many layers of intentionality.”

        Well, yes, clearly! I was distinguishing between the idea of intention-focus and intention-interpretation because those sound very different to me (or am I missing a crucial point here?).

        “So the car driving metaphor is really good because most of us aren’t really paying attention in a ‘focused’ way.”

        I think you’re using the term ‘focus’ in the sense of: “I was really focused on getting my dissertation done.” I mean it more in the sense used in software as “the main or primary thing.” Even if you’re not doing anything and the app is just sitting there idle, there’s always one that has the focus.
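        A minimal model of that software sense of focus (hypothetical class and app names): many apps may be running, but exactly one holds the focus at any moment:

```python
class Desktop:
    """Minimal model of UI focus: many apps run, exactly one has the focus."""
    def __init__(self, apps):
        if not apps:
            raise ValueError("something must hold the focus")
        self.apps = list(apps)
        self.focused = self.apps[0]   # even when everything is idle, one has it

    def give_focus(self, app):
        if app not in self.apps:
            raise ValueError("unknown app")
        self.focused = app            # granting focus implicitly revokes it elsewhere

desktop = Desktop(["editor", "browser", "terminal"])
print(desktop.focused)        # → editor
desktop.give_focus("browser")
print(desktop.focused)        # → browser
```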

        There is a topic of conscious versus unconscious thinking. As you said:

        “In other words, we’re not thinking, ‘I should press my foot on the gas pedal just so to accelerate.'”

        Right, but at some point in your training, you did. That knowledge is part of your unconscious thinking now. Muscle memory, so to speak. Most of our training and knowledge sinks into an unconscious level, although, as you say, we can bring it into consciousness.

        Which makes it different from other mental processes that are forever below our horizon of perception. (In computer science, it’s referred to as reflection. Some languages have access to (some of) their internal states — they are reflective, whereas others have none. Humans are not reflective in this sense; we have no access to our internal states.)
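        Python happens to be a strongly reflective language, so the point is easy to demonstrate (a toy illustration; `Sensor` and `where_am_i` are invented names):

```python
import inspect

def where_am_i():
    """A function that reports on its own execution -- reflection in action."""
    return inspect.currentframe().f_code.co_name

class Sensor:
    threshold = 25.0
    def read(self):
        return 0.0

print(where_am_i())  # → where_am_i

# The program can also enumerate its own structure at runtime:
print([name for name, _ in inspect.getmembers(Sensor, inspect.isfunction)])  # → ['read']
print(getattr(Sensor, "threshold"))  # → 25.0
```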

        “The background, as I’ll discuss later, is infinite.”

        It seems to be kind of a re-statement of realism? Yeah, the external world is out there and you can learn things about it?

        “So these not-yet-existing things for me are in the background as not-yet-existing, a very fuzzy way of saying I know my limitations (and that the horizon is infinite…there will always be something I don’t know, and I know that).”

        Okay. I’m not sure how to use this… Or is it just a case of stating the obvious to make sure we’re all on the same page about it? A starting foundation, so to speak?

        “Then there’s what we care about, what we are likely to find in the foreground as humans,…”

        I’m getting seriously lost here… If the background is all the stuff you don’t know, the foreground must be all the stuff you do?

        But I’d gotten the impression the foreground was your intentional focus. Where does all the stuff you’ve learned and internalized, but aren’t thinking about, go?


        • Ah, okay. I didn’t realize “focus” was a technical term.
          “The background, as I’ll discuss later, is infinite.”

          “It seems to be kind of a re-statement of realism? Yeah, the external world is out there and you can learn things about it?”

          Emphasis on “kind of.” The ‘external’ world comes back post-bracketing in the phenomenological epoché as it’s experienced. So not realism, because we’re not doing ontology. We’ve bracketed that (unless we’re talking about Heidegger, and I’m leaving him aside).

          “I’m getting seriously lost here… If the background is all the stuff you don’t know, the foreground must be all the stuff you do?”

          The background includes stuff you don’t know (but CAN know, at least given enough time and in theory), but also includes things you just experience peripherally. If you’re driving into a town you’ve never seen before, you’re experiencing all sorts of new things, but may not be directed toward them. For instance, yet another McDonald’s on the side of the road. You don’t care about that because you’re looking up an address and maybe you don’t like McDonald’s, and maybe they’re everywhere so you’ve stopped noticing them. So you aren’t thinking about McDonald’s. But there is McDonald’s, hovering in the background along with the rest of the world.

          “But I’d gotten the impression the foreground was your intentional focus. Where does all the stuff you’ve learned and internalized, but aren’t thinking about, go?”

          I’d say in the background. But that background is filled with things besides your learning. Thinking about it visually might help. I’ll be giving an example of this later, and hopefully that will help.


          • “The background includes stuff you don’t know (but CAN know, at least given enough time and in theory), but also includes things you just experience peripherally.”

            It’s becoming clear I may be outta my depth here and talking ignorant trash, but that sounds like a major category conflation there!

            That McD’s I didn’t notice still became part of my mental landscape. Unlike the McD’s I’ve never driven past. Or like the ones on alien planets I’ll never drive past.

            The example of driving rules as background. The rules are there and presumably available to recall, but the rules for driving on Alpha Centauri are not.

            So it’s like we need foreground, midground, and background?

            “Thinking about it visually might help. I’ll be giving an example of this later, and hopefully that will help.”

            Looking forward to it! 😀


            • Maybe I’m just giving terrible examples. Maybe the original term “horizon” is more helpful…the things “on the horizon” might be in my peripheral vision, or they might be hidden. Whatever’s on the other side of the mountain is “on the horizon”…the McDonald’s I drive past but don’t quite notice is “on the horizon.” The hot guy crossing the street is an intentional object.

              Well, okay, I’ll try better next time. 🙂


              • The problem may be my difficulty wrapping my head around an approach that conflates “things I’ve taken in but haven’t actively noticed” with “all the things I might someday notice (or not).”

                Those seem like such vastly different categories to me, even approached from the outside and not considering what’s going on under the hood.

                Maybe I’ll figure it out as you explore the topic more! 🙂


                • That’s a really good point. I don’t think we should conflate them. Maybe Husserl doesn’t, but I just can’t bring myself to read him.

                  I’d be okay with not conflating them, or at least having them both under the category of “horizon” and then maybe sub-categories for the different types of “horizon.” That would be totally fine by me.


                  • I suppose it depends on what relevance the horizon objects have in what follows. If they really don’t need any differentiation (unless and until they’re no longer horizon objects), then the conflation doesn’t much matter.

                    Kind of like how you store all this undifferentiated stuff in your attic. Its only common property is that you don’t care about it right now.


  10. “In a way, we’re looking at our own experience as if it were virtual reality.”

    Who is the “we” here? I cannot control half the things that pop into my head. When asked about my favorite films, three pop into my head and I have no idea why I chose them rather than hundreds of others I could have chosen. Furthermore, I was not in control of the fact that I could not recall all of the films that I had seen in my life. I cannot even control what my next thought will be. There seem to be so many independent ‘agents’ working in my body/mind that I am not sure it makes sense to talk about someone looking at their own experience. Does Husserl’s phenomenology depend on us being aware of who exactly is doing the perceiving or experiencing?


    • “Does Husserl’s phenomenology depend on us being aware of who exactly is doing the perceiving or experiencing?”

      Not necessarily. It’s something he talks about later on, but it’s not a precondition for doing phenomenology. It’s not as if we have to define the self in some concrete way before we can do phenomenology. We’ve bracketed reality for methodological purposes (in other words, we’re not doing ontology), so we certainly don’t have to define the self in terms of science either. Husserl does have an explanation of the self, but I don’t see his version as necessary to doing phenomenology. We might even find we disagree with Husserl’s “self” and still do so within the parameters of his methodology.

      What I meant by that statement was we’re looking at our experience as it appears and unfolds, without questioning the causal relationship of it to something outside of experience. In other words, we’re dodging the mind-body problem. (And remember, we’re not talking about Heidegger here…he DOES do ontology. I don’t think that would be helpful for our purposes, although maybe some parts of his philosophy might be. Still, I’d rather avoid that.)

      Really it doesn’t matter whether you are in control of whatever pops into your mind. That aspect of it would simply be folded into the description of the experience. Intentionality is usually uncontrolled. We don’t have to say to ourselves, “I’m going to place this ottoman in the foreground of my experience” in order to do so. Tons of random things appear as given. A lot of what we experience isn’t a matter of choosing to experience x. Same goes for thoughts, even random ones. They may constitute the foreground of our experience (being an intentional object-content), but we don’t have to will them or control them. I’d wager that most of the time we don’t. I know I’d go mad if I had to control my thoughts!


  11. Pingback: Eidos and AI: What is a Thingamajig? | Diotima's Ladder

  12. Pingback: The Natural Attitude | Diotima's Ladder
