Eidos and AI: What is a Thingamajig?

To understand this post, you might have to read part I and part II on phenomenology and artificial intelligence.

The question I’m asking is not: Can computers think? Or: Can AI have consciousness? But: Can meaning “run ahead” for AI the way it does for us? Can we program intentionality, the “about-ness” or “directed-ness” toward things, as well as the horizon that makes things/objects possible? And, most of all, does it matter what processes are involved in arriving at the correct response or behavior?

For the last question, I don’t know. I see efficiency in the way we experience, but a specific kind of efficiency. Our efficiency is not in grasping everything equally and homing in on the correct answer or response. When we make mistakes, it’s often not just computational error. Error sometimes comes from grasping meaning and relevance in context, grasping it in a plausible and maybe reasonable way, but not necessarily in the technically or scientifically correct way. Can a lookup device be designed to act as we do (in a timely manner)?

I’m starting to break free from the well-known philosophers here. If you hope to learn about phenomenology as it appears in the history of thought, in a technically precise way, you might not want to read this. I’d recommend my other post on Husserl as a starting point (which has been checked by the in-house philosopher).

Things might get messy, but hopefully not messy in a pedantic, overly-hyphenated-German-philosophy way.

Also, I’d promised some of you I’d bring up things in this post that I’m not actually going to bring up now. I realized I’d crammed too much into one post and this is not the platform for long discourses. Speaking of brevity…

The Ashtray Example


What do you see above?

This isn’t a trick question. It’s an ashtray. Or you might say it’s a representation of an ashtray, being an image on a blog post. In any case, let’s pretend it’s a physical ashtray sitting before you, one in which you can put out a cigarette if you so desire.

The act of calling an ashtray an ashtray may not seem particularly amazing, but consider this: the ashtray has an infinite number of perspectives. You are looking at it right now from one perspective, and you will never in fact see with your eyes or feel with your fingers the entire ashtray in all of its possible states. (We won’t talk about the smells or tastes…) You could spin the thing around and around all your life, but hopefully you won’t—in this case one glance gives you all you need: ashtray. More importantly, one unified object.

First of all, let me make clear that we are not talking about a priori ideas in the usual way. There is no ashtray-form sitting in your mind and some ashtray-ish-stuff ‘out there’ pushing the impression buttons of your senses, which then get interpreted by the mind. We’re still doing phenomenology and we’re still confining ourselves to experience as it’s experienced. There’s no mind vs. objective mind-independent stuff in our investigation. There’s only experience.

Plus, we needn’t compare various ashtrays and wonder how it is that from this multitude of ashtrays, each one of which is not exactly identical to any other, we are able to label them all the same: ashtray. We’re not looking at what every single possible ashtray has in common. We’re not talking about ashtray-ness. We’re talking about one particular ashtray. This ashtray. (Okay, strictly speaking, the hypothetical physical one before you.) How is it that this ashtray, despite its infinity of perspectives, is perceived as one unified self-same object?

Take another example, an object you’ve never seen before. Let’s imagine it’s a solid plastic wad. You have no idea what its function is, but you still experience that nameless plastic as a unified object.

You could argue that we err in leaping to this unity, that we shouldn’t say we actually experience a unified object, but instead particular moments of the ashtray. When we see a particular moment of a particular ashtray in a particular way, we theorize about the rest of the ashtray. The unity of the ashtray is nothing but leaping to conclusions, a story we tell ourselves to get by, a quick synthesis, perhaps subconscious. We impose unity. Since the unity itself is never something we actually see (with our eyes), it’s “just” a theory. Like gravity. Like causality. Like necessity. Completely invisible and possibly not really there. We had a sense impression yesterday that the sun rose, and the same impression the day before that, but who’s to say the sun will rise tomorrow? (If you start having apocalyptic nightmares, you can blame David Hume). In other words, there is no visible or perceivable necessary connection between events/impressions. We see event A, then event B. That’s it. Like constellations in the night sky, it’s we who connect the dots and make up stories about them.

Kant comes in here to say something like: “Wait. The sun’s rising is not just a theory! Necessity, causality, synthesis of the manifold of experience, etc. are indeed ‘in our heads,’ but they cannot be taken off like a pair of sunglasses. We couldn’t experience anything at all without these a priori conditions.”

I’d argue that neither has hit upon experience as it’s experienced. Hume errs in supposing that experience is equal to or derived from sense perception. Kant errs in making this same presupposition, but he adds that knowledge is derived from both experience (sense perception) and the a priori conditions which make experience possible. Kant nobly tried to bridge the rationalist-empiricist divide, but maybe a bridge wasn’t needed. Perhaps experience itself needed to be re-examined. It seems we’ve made ‘experience’ too narrow.

Here is where you must decide for yourself by ‘looking at’ your own experience.

The ashtray’s unity comes first in most ordinary experience, and this entails assuming properties about the object that are not strictly visible with the eyes in the moment (I use the word “assume,” but this is not meant to be taken as an active thought process or a matter of logic…it’s grasped immediately, intuited wordlessly.) The object appears to us all at once in its past and possible states, maybe only in a vague way, but it’s all there in that moment. We don’t experience these disjointed perceptions—a certain temperature + a certain color + a certain shape + a certain weight, etc.—and then add on unification, except when we theorize about experience in analysis. But in that case, when we theorize, we experience a theory, not the disjointed perceptions that we suppose we’ve experienced, at least not directly and “in the order in which they were received” (to quote telephone answering services, which may be a faulty analogy, but I couldn’t resist.)

In other words, when we theorize about experience by analyzing it, we change the experience from a naive ordinary one to a conceptual one. I repeat, this sort of theorizing is also within experience as a certain kind of experience, and therefore it’s possible to study phenomenologically too…but that’s a complex matter that I don’t want to get into. That’s advanced phenomenology, and we’re in phenomenology 101. Here we’ll stay with this: the “adding up” of sense data doesn’t quite fit the bill as an explanation of the ordinary, original experience.

Much of what we experience as we experience it isn’t given as sense data. 


Husserl uses the term “eidos”—literally “seen,” but here we’ll go with: shape, form—in a way that’s similar to what I’ve called “leaping ahead” in previous posts. His term is way better than mine for technical reasons, but I thought “leaping ahead” might make more sense in earlier contexts, as a means of preparation and to avoid scary words.

So, eidos = form, like Platonic ideas. However, Husserl does not use eidos in a fully Platonic sense; he does not (and cannot) posit a world of forms separate from the world we experience, but rather, eidos is constrained by its particular manifestations. I think of Aristotle here, but I hesitate to make that comparison…so take that with a unified self-same grain of salt.

With eidos Husserl seeks to do a different kind of analysis, one which he thought would uncover the basic elements of phenomena.

The Eidetic Reduction is described in the IEP, which I’ll quote here:

The eidetic reduction involves not just describing the idiosyncratic features of how things appear to one, as might occur in introspective psychology, but focusing on the essential characteristics of the appearances and their structural relationships and correlations with one another. Husserl calls insights into essential features of kinds of things “eidetic intuitions”. Such eidetic intuitions, or intuitions into essence, are the result of a process Husserl calls ‘eidetic’ or ‘free’ variation in imagination. It involves focusing on a kind of object, such as a triangle, and systematically varying features of that object, reflecting at each step on whether the object being reflected upon remains, in spite of its altered feature(s), an instance of the kind under consideration. Each time the object does survive imaginative feature alteration that feature is revealed as inessential, while each feature the removal of which results in the object intuitively ceasing to instantiate the kind (such as addition of a fourth side to a triangle) is revealed as a necessary feature of that kind. Husserl maintained that this procedure can incrementally reveal elements of the essence of a kind of thing, the ideal case being one in which intuition of the full essence of a kind occurs. The eidetic reduction complements the phenomenological reduction insofar as it is directed specifically at the task of analyzing essential features of conscious experience and intentionality.

In other words, in the eidetic reduction, we seek to determine whether the “actual thing” (not thing in itself, remember) qualifies as an instance of the eidos we assign it. What we seek is whether or not the particular instance meets the essential qualifications of, say, a triangle, or a building. The eidetic reduction is a process in phenomenology which is indeed descriptive, but on the more theoretical side, being analysis. So what, then, makes this sort of analysis truer to experience as it’s experienced? I don’t have the answer. The use of the term “eidos” seems fine, but then to go on and try to create a science out of it seems to be a stretch. All I can say is my inner Plato lover is completely biased in favor of such an exploration, but I’ll admit that few have taken this “science of essences” stuff seriously. Perhaps this is the particular juncture at which people turn away from Husserl. It’s not quite Plato reincarnate, but it’s close enough.
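The “free variation” procedure quoted above is surprisingly algorithmic: vary each feature, and see whether the object still counts as an instance of the kind. Here’s a toy sketch of that loop, with the big (and very un-Husserlian) assumption that a kind can be modeled as a membership predicate over a dictionary of features; the names `is_triangle` and `essential_features` are my own illustrative inventions, not anything from Husserl:

```python
# Toy model of "free variation in imagination": a feature is essential
# if no variation of it leaves the object an instance of the kind.

def is_triangle(obj):
    """Stand-in for eidetic intuition of the kind 'triangle'."""
    return obj.get("sides") == 3 and obj.get("closed", False)

def essential_features(obj, is_kind, variations):
    """Vary each feature; collect those whose every alteration
    makes the object cease to instantiate the kind."""
    essential = []
    for feature, alternatives in variations.items():
        survives_any = False
        for alt in alternatives:
            varied = dict(obj, **{feature: alt})  # imagine the altered object
            if is_kind(varied):
                survives_any = True  # kind survived; feature is inessential
                break
        if not survives_any:
            essential.append(feature)
    return essential

triangle = {"sides": 3, "closed": True, "color": "red", "size": 2.0}
variations = {
    "sides": [4, 5],             # a fourth side destroys triangle-hood
    "closed": [False],
    "color": ["blue", "green"],  # color never matters
    "size": [0.5, 100.0],
}
print(essential_features(triangle, is_triangle, variations))
# -> ['sides', 'closed']; color and size are revealed as inessential
```

Of course, the sketch presupposes exactly what makes the philosophical question hard: someone has to supply the membership predicate and the space of variations, which for Husserl is the work of intuition, not code.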


Science of Essences: why we should resurrect Husserl

It seems to me that eidetic intuition applies everywhere in ordinary experiences, including those cases in which we experience something novel. Taking this as given, we might then use analysis to find out more about essences, a science of essence for a specific purpose. We might find out general things about essences; for instance, there’s an infinite number of them, given that each particular is unified in eidetic intuition. The plastic wad is a unity by virtue of being one thingamajig, and there can be an infinite number of such thingamajigs (that’s my technical term). Then there are named unified objects that we classify either according to likeness or some other classification system. Trees, bushes, flowers, vegetables, etc. might have a different classification system than plate, chair, ashtray or 3.14, -5, 1/2 or justice, truth, God. Plus, objects that were designed for one purpose can be used for other purposes, and often are (those of you who’ve taken a sip from a beer bottle-turned-impromptu ashtray know this all too well). The difference may not be so much in the material, but in the function. Function is an important part of the way we classify things. Other times the classification will depend on the material. There are so many ways of adjusting our lenses here to suit our purposes.

For soft AI, perhaps a “science of essences” could be applied in a particular environment in which we can predict and control the objects within that environment according to essence classification alongside image identification (which already exists to some degree*). The assumption of eidetic intuition is not to be taken lightly in philosophy, but in AI, it seems to make sense of the problems AI research has faced by explaining that there’s this bizarre unity of the manifold in our experience. It’s a tangible problem, regardless of how it arises in the human brain or whether it arises there or whether it has something to do with self-awareness or consciousness. The mere fact of this “transcendence within immanence” might be enough to outline a strategy to be taken in replication.

A quick Google search showed me that algorithms for object identification have come a long way. The difficulty lies in speed of object recognition. I’d guessed that there must be some sort of way to eliminate unlikely possibilities, to cut corners, but apparently that process is not as good as random sampling. Weird.

Claire, the Robotic Maid

Let’s get concrete. Let’s create a maid robot and name her Claire. This way we start small: the confines of a house. We don’t have an infinite horizon—otherwise known as the entire universe—on top of an infinite number of perspectives of each individual object. That’s just too hard.

Also let’s assume either: a) we don’t need an infinite number of perspectives to have Claire identify a self-same object* or b) we can figure out how to replicate an infinite number of perspectives in a unified way, which sounds impossible, but maybe it isn’t.

And let’s assume the robot mechanisms work fine. Maybe she’ll be better than human in terms of mechanics. Now it’s a matter of getting her to see objects as we see them—to know when that plate is not being used as a plate, but as a saucer; to know that a photograph of a human is not a real human; to know that she doesn’t need to water the plastic fern; to know to stay away from the rare book collection and not smoke your stash or rat you out, etc.

If an object appears to us with all possible variances of it alongside the self-same-ness of it, we should want that for Claire…to some degree. After all, she must know that the refrigerator is dirty, not that it’s a new thingamajig that doesn’t need her attention. And she shouldn’t need to know what a refrigerator looks like after it’s been smashed to smithereens at a monster truck rally either. There must be some threshold of experience that mimics our awareness of differences in objects. Claire might have the capacity right now to know what a refrigerator is—the mere name—but she also needs to know what various components do or at least how to deal with them, what to use to clean them, that she doesn’t need to clean the Coke bottle in there, etc. Perhaps for moveable objects she needs to know their function as dictated by the environment, but for other things she doesn’t need to know much. She doesn’t need to know what an escutcheon is in order to clean it (hence our need for a class of objects called thingamajigs). If she wasted her time finding out what an escutcheon is, that would be inefficient.
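The escutcheon point suggests a concrete design move: Claire needs a catch-all “thingamajig” class with a sensible default behavior, so that identification failures don’t stall her. Here’s a minimal sketch of that fallback, assuming a recognizer that returns a label with a confidence score; the labels, threshold, and cleaning rules are all hypothetical:

```python
# Hypothetical handling table: act on function, not on exhaustive naming.
CLEANING_RULES = {
    "refrigerator": "wipe exterior, check contents for spoilage",
    "plastic fern": "dust only - never water",
    "rare book": "do not touch",
}
DEFAULT_RULE = "dust gently in place"  # good enough for any thingamajig

def handle(label, confidence, threshold=0.8):
    """Low-confidence identifications fall back to 'thingamajig':
    a unified object of unknown kind still gets a sensible default,
    so Claire never wastes time looking up 'escutcheon'."""
    if confidence < threshold:
        label = "thingamajig"
    return CLEANING_RULES.get(label, DEFAULT_RULE)

print(handle("refrigerator", 0.95))  # known object, specific routine
print(handle("escutcheon", 0.30))    # unknown -> thingamajig default
```

The design choice mirrors the phenomenological point: unity (one object, to be dusted in place) comes before naming, and naming only matters when function demands it.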

*I’ve taken Husserl’s object identification—an infinite number of perspectives somehow alongside a unity of this infinity—throughout this post as true. In consulting with my own experience, I wonder if the unity we perceive, while still being a priori in a phenomenological sense, is not quite infinite, but a shadow of the infinite. In other words, perhaps this “infinity” he speaks of is theoretical and not directly experienced, and what we actually experience is openness or possibility, but not quite infinity. Maybe we experience a very large number of possible perspectives, but at some level there’s a vanishing point. Maybe infinity as we actually experience it in our usual naive way is nothing more than: “A lot more than I wanna count.” Not infinity infinity. (And certainly not infinity times infinity.)

How would you create Claire? What stumbling blocks do you foresee? What is an escutcheon? 


Intentionality and Meaning

In the previous post, I put forth the question of whether Husserl’s phenomenology could be of use to AI, weak or strong. This is a genuine question that I put out there to discuss…I have no thesis to support. Just curious to hear what you think.

In writing this post, I realized I’d have to break this down into several segments. From now on, I’ll be using Husserl for the most part, not Heidegger, to explain aspects of phenomenology…although I do like Heidegger’s readiness-to-hand and presence-at-hand distinction. But I prefer the bracketing methodology of Husserl for these purposes. I could see Maurice Merleau-Ponty coming into the picture, especially on the issue of AI embodiment, but I haven’t read him. (Perhaps those of you who have can weigh in. I’d love that.)

I might stray from Husserl too, setting out on my own. In other words, not everything here will be a lesson on Husserl. I don’t want to be encumbered by referring back to his works to verify what I’m saying, because that would make what should be a simple blog post an academic enterprise. I’m not feeling that game right now.

Conditions of experience

Phenomenology allows us to describe experience as it’s actually experienced. In doing so, we look for conditions that make experience possible—the constitution of meaning. These “rules” are not likely to be revelatory in describing what happens inside a biological brain. However, phenomenology could run parallel to neuroscience. After all, in order to know what’s going on in the brain, we must know what brain states correspond to—the so-called “subjective” experience, i.e. 1st person accounts. One might argue that 1st person accounts tend to miss the mark, fall into error, but we can’t suppose that all 1st person accounts err on a grand scale. There must be a back and forth here, perhaps only a preliminary one at the outset. There is no mapping of the brain without knowing what it is we’re mapping.

Why should we care about a philosophy that sounds very much like navel-gazing? Well, this navel-gazing isn’t about the stuff we ordinarily think of as “subjective”: our favorite ice cream, the personal feelings we get when we listen to music…that stuff we generally agree is “a matter of taste.” Husserl’s direction is actually scientific (like, Wissenschaft scientific, “the sciences” scientific) in the sense that we are looking for elements of experience that are essential to it.

For example, those of you familiar with Kant’s Critique of Pure Reason may remember that space is the a priori outer form of experience, and time, the inner form. Causality was explained in this way too; everything we experience will be shaped by the categories because these are necessarily presupposed. (Kant also believed there were inexperience-able things “out there”—noumena—which cause phenomena. Let’s leave this aside.) Husserl goes further than Kant by setting forth a philosophy that seeks to ground the content of experiences individually, on a case by case basis. We’ll see how this works in later posts. Let’s just say for now that Husserl’s phenomenology is a lot more detailed and specific.

The very fact that phenomenology seeks out “rules” makes me wonder if it could apply to AI in some capacity, especially in areas that have to do with perception and learning. It might actually be preferable to bracket the “natural world”: “objective” reality, Kantian “things in themselves.” In a way, we’re looking at our own experience as if it were virtual reality. Like a computer.

However, phenomenologically speaking, we live in an environment that is not closed, which seems to imply that computers just aren’t like us. It seems that AI would have to progress significantly to allow for open-ended possibilities if we want to achieve those hard-to-accomplish tasks that for us seem basic. Does that which allows for creativity and learning in us preclude algorithmic AI? Maybe, maybe not. I’m not well-informed in this area, but it seems at the very least we’d have to know what makes our experience what it is in order to answer the question. Do we really take in new information just as it comes to us, spontaneously, or do we have to synthesize that information onto pre-existing charts? I suspect the latter, and I suspect if we could “crack the code” that allows us to understand our own learning methods, we’d be better able to do the same for AI (even if only in weak AI, or for certain specific goals).

In my last post I told you I’d explain how phenomenology operates by exploring Husserl’s intentionality. Let’s do phenomenology.


Husserl’s Intentionality is at the heart of his phenomenology. Intentionality is our directed-ness toward things, and it’s basically this: Consciousness is always consciousness about or of something. Pause here for a moment. Really stop and give this consideration. Much of phenomenology is reflection on experience. If you don’t do it, if you read articles on phenomenology and look for ways to summarize the logic, to relate to it only on the level of mere verbal cohesiveness, you’re missing a crucial aspect of it. The process is intuitive. You analyze the veracity of such statements as “consciousness is always consciousness about or of something” via intuition, reflection on your own experience.

Try not to think about anything. You might think you’ve experienced something like this once: a dreamless sleep, a coma perhaps. But were you conscious? No. So right now do this: really try not to perceive anything, not to be aware of anything. You can close your eyes, close the windows, block out the sound, but time goes by. What happens? Well, if you’re like me, perhaps even more happens in your consciousness now that the senses are closed off. Ideas, daydreams, random thoughts…these are included as content, “about-ness.”

Those of you who meditate may raise objections, and these will be well taken. I, for my part, have never found myself to be conscious while being conscious of nothing, absolutely nothing.

It is the nature of our experience to be directed towards things or about things. (What I’m loosely calling “things” are not just objects of sense perception, but can include thoughts, ideas, memories, etc.) Intentionality is always there. In other words, it plays a pervasive role in every kind of experience: perceiving, judging, remembering, dreaming, screwing up, etc.

Imagine an omniscient camera (or recorder of some sort) that captures the infinity of experiences, all sense data, equally, without any directed-ness toward things, without signifying any particular experience. We are not even a time-limited “subjective” version of such a camera. We can speak of this omniscient experience just as we can speak of a square circle, but we can’t really picture it. That’s because, in an a-logical—non-logical—way, it is nonsense. Through intuition we know that in such a world, there would be no objects. No objects, no intentionality. No intentionality, no objects.*

You might’ve guessed by now that intentionality is broader than what we mean when we say, “I intend to fix this,” but includes such statements and meanings. Plus, intentionality is not attention, necessarily, but includes attention.

What intentionality does is acknowledge that there is always a foreground and background to experience. The background is a vague summation of the world. This world may not be the world of science, may not include the world ‘in itself’ (or it may, phenomenologically, but let’s not get too complicated here). Let’s say for now that, at a minimum, it’s a world that’s available for us, and therefore it coheres in a loose sense—it must. This background is what Husserl calls the “horizon.” It can be thought of as a potential experience, past or future, which has not yet shown itself or is not now in view. The horizon is also infinite (more on this later.)

Intentionality is mostly passive as we go about our everyday lives, and on philosophical-phenomenological reflection we can “see” it operating, to some extent.

We quickly disregard what isn’t relevant to us at the moment while simultaneously knowing that those things that are currently irrelevant or out-of-focus—on the horizon—are possible experiences that could come into the foreground. Those background possibilities constitute our foreground experiences. We know what’s behind us in a loose sense. We have expectations about what’s behind us and those inform our foreground experiences.

I repeat, these foreground experiences are not necessarily “paying attention.” More often than not, we’re not trying to focus.

We grasp content in its context, leaping ahead to the most likely meaning or its totality, its unity, often unaware of other possible meanings or interpretations of the content, although further investigation may warrant a change. This is all done in a flash due to the intentional nature of our experience. The horizon, the background, is operating at the same time that we make the leap. The meanings of words/objects are constituted in time and situation, and this constitution is holistic, yet adaptable and subject to constraints.

Furthermore, the object or content of the experience is the way we look at it. Here’s a good example found in this article:

Consider the plight of poor Oedipus Rex. Oedipus despised the man he killed on the road from Delphi although he did not despise his own father; he desired to marry the Queen although he did not desire to marry his mother; and he loathed the murderer of King Laius before he came to loathe himself. But of course the man he killed was his father, the Queen was his mother, and he himself was the King’s murderer. How shall we describe the intentionality of such acts? Oedipus’ desire, for example, seems to have been directed toward Queen Jocasta, but not toward his mother. But Queen Jocasta and Oedipus’ mother were the very same person…Oedipus’ desire was therefore not simply “for” Jocasta: it was for Jocasta as conceived in a particular way. And the same sort of thing is true, not only of Oedipus’ mental states, but of everyone else’s as well…The intentionality of an act depends not just on which object the act represents but on a certain conception of the object represented.

The intentional conception of an X is not just an imposition of our minds on “facts” and therefore subject to error. (Remember, intentionality is always there, and it doesn’t always err. Error is just a clear way of showing the difference between fact and intention.) The example above demonstrates how meaning is constituted, but also how new conceptions can arise from new evidence. The meaning of Oedipus Rex would be entirely lost on us if we did not understand Oedipus’ intentions and the context which guided those intentions.

*Here I’m combining “object” and “content” for the sake of avoiding pedantry. We’ve established we’re not talking about noumena, so I hope you’ll excuse my sloppy language.


Meaning Constitution

Let’s look at our intentionality, our guiding mental behavior, linguistically.

Consider the sentence: The pig is in the pen.

I would be incredulous if you interpreted this sentence to mean, “There is a pig that is inside a writing instrument.” (Unless you happened to look down at the picture first, and you probably did because images tend to command attention. And there’s another topic for discussion…but anyway. Pretend you didn’t.)

The truth about the world, the background—that pigs don’t fit in writing instruments—informs your foreground interpretation. Yet you did not (I hope) have to analyze the sentence and determine all possible meanings of the word “pen” in order to arrive at your interpretation. You probably didn’t even think of writing instruments.

Consider the sentence: The pig is in the pen. Then imagine someone pointing to this while saying the sentence:


The pig is in the pen?

You might laugh and say, “Well, the pig is on the pen, or maybe the pig’s relationship to the pen is something about which we don’t wish to speculate.” Whatever the case may be, the sentence now has a different meaning constitution. You might wonder…why would the speaker say, “The pig is in the pen?” Does this person speak English? Is this person having a prepositional brain fart?

And the best question: Would you have considered “pen” in this case as signifying “an enclosure for animals”? Probably not in this situation.

Or maybe the speaker of the sentence is a moderately funny, punny person who has this whole theory about truth and language and you two have discussed this pig in the pen example on many occasions.* In this case, you might grasp both meanings of “pen” simultaneously to get the joke. You might only get the joke because you know this person makes this sort of joke on a regular basis.

As you can see, the holistic interpretation is adaptable and situational; even as it “runs ahead of itself,” it is subject to all sorts of constraints. In other words, intentionality is not just some willy-nilly imagining of the world, some sort of act of creation from nothing.
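The pig-in-the-pen business can even be caricatured in code. Here’s a toy sketch of context-constrained sense selection, where background cues weight the candidate senses of “pen” before any exhaustive enumeration; the cue sets and scoring are entirely invented, and real word-sense disambiguation is vastly more involved:

```python
# Invented background cues: words that make each sense of "pen" plausible.
CUES = {
    "enclosure": {"pig", "farm", "animal", "gate"},
    "writing instrument": {"desk", "ink", "paper", "pocket"},
}

def interpret_pen(context):
    """Score each sense against the background context and leap to
    the most plausible one - the horizon constrains the leap."""
    scores = {sense: len(words & context) for sense, words in CUES.items()}
    return max(scores, key=scores.get)

print(interpret_pen({"pig", "farm"}))    # -> 'enclosure'
print(interpret_pen({"desk", "paper"}))  # -> 'writing instrument'
```

Notice what the toy gets wrong, and what that omission illustrates: it has no way to hold both senses at once, which is exactly what the punny-friend case requires. The “leap” isn’t just a lookup weighted by keywords; it’s informed by the whole situation, including who is speaking.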

Also, this “leaping ahead” applies to all experience of objects, not just language interpretation. In my next post, I’ll go into further detail on this topic. Be on the lookout for eidos…

Ha ha. (Okay, not funny. But you’ll “see” what I mean later.)

*This is my husband’s example, which he used in a different context in his unpublished book on language and generosity (Donald Davidson’s “charity”).



Phenomenology: Cotton Candy or Ripe Fruit for Artificial Intelligence?

Phenomenology is the study and description of experience as it’s experienced, without the preconceived notions of what lies behind the experience. “Preconceived notions” can be commonsensical or scientific. For more on Husserl’s method of arriving at a phenomenological POV, see this.

Artificial intelligence is, according to Wikipedia, the intelligence exhibited by machines or software. It is also the study of how to create such machines.

In my previous posts, on Husserl’s phenomenological method and on Heidegger (Part II here and Part III here), I discussed phenomenology from a purely philosophical perspective as a means of solving or doing away with the infamous mind-body problem. The truth is, I barely scratched the surface of what phenomenology does. It’s sort of a joke in philosophy…Husserl has a propensity to rehash method, but when will he do phenomenology? I suspect this constant upheaval of methodology could have something to do with why phenomenology is largely ignored for practical purposes. Now I want to ask whether phenomenology—even if rejected on philosophical grounds as a sort of masked solipsism—can prove fruitful for AI research.

Those of you who are well-versed in artificial intelligence may not see the possible connection, and I’m not sure there is one, but the question has been sitting in the back of my mind for some time. I’ve read a lot of posts concerning the stickier issues in AI: consciousness, self-awareness, mind-uploading, the possibility of AI achieving singularity, AI ethics, etc. I don’t hear as much about the more mundane matters, like how we might create a machine that cleans the house—that really cleans the house—or takes care of the elderly. These mundane robots are what I’m curious about. From what I understand, there’s a lot of work to be done. We have self-driving cars and facial recognition, Siri and Roombas. It’s a great start, but there’s a lot more to be discovered. I’d like to have a robot that will take care of me when I’m older, something that I can have confidence in, something that will allow me to stay in my home. I don’t care if it’s “conscious.”

It seems to me there remain problems in AI that involve interacting with the world, and these are things that seem simple for us. Phenomenology explores these issues, especially the seemingly simple things. And taking up phenomenology for AI research doesn't require that we buy into Heideggerian ontological upheaval or dismiss science as fundamentally wrong. We can simply borrow the techniques, or maybe even stick to Husserl's program of bracketing the natural world.

I’ve often wondered how much implicit phenomenology is happening in certain areas of AI, specifically in perception, object recognition, and embodiment. I finally got up the energy to look into it, briefly.

In my Google searches, I came across Hubert Dreyfus' criticisms of AI, which started back in the sixties when AI researchers focused on symbol manipulation (GOFAI, "good old-fashioned AI") and made some overly optimistic claims about AI capabilities. Dreyfus was an expert in Heidegger and modern European philosophy. The AI community didn't respond well to his criticism (some researchers even refused to have lunch with him, according to Wiki), probably because he answered AI optimism with overly pessimistic claims of his own. Nevertheless, he pointed out problems in AI that turned out to be real, problems that phenomenology allowed him to foresee. And according to that same Wiki article, Dreyfus gets little credit for highlighting phenomenological issues that were later addressed and resolved in a piecemeal way. The article points to the lack of a common language and understanding between AI researchers and phenomenology experts such as Dreyfus:

“Edward Feigenbaum complained, ‘What does he [Dreyfus] offer us? Phenomenology! That ball of fluff. That cotton candy!'”

Well, the "cotton candy" stresses the importance of the so-called unconscious skills that make up our human intelligence. These skills rely on a holistic understanding of context, embodiment, a background of 'common sense' about the world, relevance, and intentionality, to name a few.

An analogy of the problem:

Why is it that some people are able to replicate your best friend’s face so well that you can barely distinguish their work from a photograph?


He’s your BFFE.

They know how to distinguish between what they think they see and what they “really see.” They learn how to be good empiricists, at least in one domain. (BTW, realism in this sense is not phenomenology…this is just a metaphor.)

And yes, there are guidelines for the artist. For instance, objects in the background are grayer, bluer, and less distinct than objects in the foreground, which are more vibrant, warmer, and have greater contrast.
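Interestingly, a guideline this explicit is exactly the kind of rule that can be written down as code. Here's a toy sketch of atmospheric perspective: blend an object's color toward a bluish gray as its distance grows. (The haze color and blend factor here are made-up values of mine, not anything taken from art theory.)

```python
# A toy illustration of encoding one explicit painting rule:
# distant objects shift toward a grayer, bluer, less distinct color.

def atmospheric_color(rgb, depth, haze=(140, 150, 170)):
    """Blend a surface color toward a bluish-gray haze as depth (0..1) grows."""
    t = max(0.0, min(1.0, depth)) * 0.8  # cap the blend so objects never vanish entirely
    return tuple(round(c * (1 - t) + h * t) for c, h in zip(rgb, haze))

# A warm red rock up close stays vivid...
print(atmospheric_color((200, 80, 60), depth=0.0))   # (200, 80, 60)
# ...and drifts toward the haze far away.
print(atmospheric_color((200, 80, 60), depth=1.0))   # (152, 136, 148)
```

The point is that this particular rule survives being made explicit. The question running through this post is whether the rest of what the artist knows does too.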


My photograph from Wasson Peak…notice the mountain in the background.

They also know all that stuff you would’ve learned in middle school about perspective and vanishing points on the horizon:


The artist who can replicate your best friend’s face realistically, photographically, undoubtedly has even greater knowledge. True, many artists have what we call talent, which is to say, an innate knack. But what is talent? I think of it as an ability to quickly and easily acquire a kind of knowledge. Those of us who lack the knack can sometimes achieve the same results, but it requires a lot more labor for us. A lot of what we achieve is done through painstaking conscious effort, whereas the talented person often relies on knowledge that’s more or less subconscious. It’s true that talent needs instruction and practice, but if you’ve ever witnessed a talented person in action, you’ll see that there really is such a thing as talent.

Are there rules for creating art? I suspect I could take classes all my life and never produce anything like the portrait above. We might conclude there are no strict rules, only loose guidelines. Not to mention that realism is not necessarily worthy of being called art. A camera is not an artist. What’s interesting about the portrait is not just that it’s realistic.

Right now it seems to me we are at the point in AI in which we have cameras, but not artists.

Are there rules for human experience? And if there are, can we discover them? The problem is, we are all too talented. If there are rules that govern what we do, they are buried deep. We are like those awful college professors—geniuses in their fields—who can't teach worth a damn. They don't know how they do what they do. They just do.

It seems natural to attack the problem through our biology, through scientific understanding. But from what I hear, that method could take a long time. There’s the problem of the sheer amount of information that somehow exists in that squishy stuff between our ears. And what about embodiment? Is perception integral to learning? I don’t know.

It seems to me that a descriptive philosophy of experience might be useful in understanding how AI could evolve. We could uncover some rules (or perhaps the lack thereof) on a non-scientific basis—in other words, via philosophical reflection. The idea here is that some progress might be made outside of, or alongside, a full neuro-biological modeling.

I don’t pretend to know what is involved in the human-to-computer/robot translation of human experience. All I know is that we don’t know our own experience all that well. Does this knowledge seem like it might be a start? Or at least a possible means of creating robots that are useful…now-ish? For instance, Roombas that don’t suck at sucking? (I’m aiming low here. I don’t want to get into theories of consciousness or general intelligence. One step at a time.)

Next post: Intentionality. This will hopefully give you a clearer idea of what phenomenology does.