Phenomenology: Cotton Candy or Ripe Fruit for Artificial Intelligence?

Phenomenology is the study and description of experience as it’s experienced, without preconceived notions of what lies behind the experience. Those “preconceived notions” can be commonsensical or scientific. For more on Husserl’s method of arriving at the phenomenological point of view, see my earlier post on the subject.

Artificial intelligence is, according to Wikipedia, the intelligence exhibited by machines or software. It is also the study of how to create such machines.


In my previous posts, on Husserl’s phenomenological method and on Heidegger (Part II here and Part III here), I discussed phenomenology from a purely philosophical perspective, as a means of solving or doing away with the infamous mind-body problem. The truth is, I barely scratched the surface of what phenomenology does. It’s something of a running joke in philosophy: Husserl had a propensity for rehashing his method, but when would he ever get around to doing phenomenology? I suspect this constant upheaval of methodology has something to do with why phenomenology is largely ignored for practical purposes. Now I want to ask whether phenomenology—even if rejected on philosophical grounds as a sort of masked solipsism—can prove fruitful for AI research.

Those of you who are well-versed in artificial intelligence may not see the possible connection, and I’m not sure there is one, but the question has been sitting in the back of my mind for some time. I’ve read a lot of posts concerning the stickier issues in AI: consciousness, self-awareness, mind-uploading, the possibility of AI achieving singularity, AI ethics, etc. I don’t hear as much about the more mundane matters, like how we might create a machine that cleans the house—that really cleans the house—or takes care of the elderly. These mundane robots are what I’m curious about. From what I understand, there’s a lot of work to be done. We have self-driving cars and facial recognition, Siri and Roombas. It’s a great start, but there’s a lot more to be discovered. I’d like to have a robot that will take care of me when I’m older, something that I can have confidence in, something that will allow me to stay in my home. I don’t care if it’s “conscious.”

It seems to me that many of the remaining problems in AI involve interacting with the world in ways that seem simple to us. Phenomenology explores precisely these issues, especially the seemingly simple things. And taking up phenomenology for AI research doesn’t require that we buy into Heideggerian ontological upheaval or dismiss science as fundamentally wrong. We can simply borrow the techniques, or maybe even stick to Husserl’s program of bracketing the natural world.

I’ve often wondered how much implicit phenomenology is happening in certain areas of AI, specifically in perception, object recognition, and embodiment. I finally got up the energy to look into it, briefly.

In my Google searches, I came across Hubert Dreyfus’ criticisms of AI, which started back in the sixties when AI researchers focused on symbol manipulation (GOFAI, “good old-fashioned AI”) and made some overly optimistic claims about AI’s capabilities. Dreyfus was an expert on Heidegger and modern European philosophy. The AI community didn’t respond well to his criticism (some researchers even refused to have lunch with him, according to Wikipedia), probably because he reacted to AI optimism with overly pessimistic claims. Nevertheless, he pointed out problems in AI that turned out to be real, problems that phenomenology allowed him to foresee. And according to that same Wikipedia article, Dreyfus gets little credit for highlighting phenomenological issues that were later addressed and resolved in a piecemeal way. The article points to the lack of a common language and understanding between AI researchers and phenomenology experts such as Dreyfus:

“Edward Feigenbaum complained, ‘What does he [Dreyfus] offer us? Phenomenology! That ball of fluff. That cotton candy!’”

Well, the “cotton candy” stresses the importance of the so-called unconscious skills we possess as part of human intelligence. These skills rely on, among other things, a holistic understanding of context, embodiment, a background of “common sense” about the world, relevance, and intentionality.


An analogy for the problem:

Why is it that some people are able to replicate your best friend’s face so well that you can barely distinguish their work from a photograph?

[Image: a photorealistic portrait. Caption: “He’s your BFFE.”]

They know how to distinguish between what they think they see and what they “really see.” They learn how to be good empiricists, at least in one domain. (BTW, realism in this sense is not phenomenology…this is just a metaphor.)

And yes, there are guidelines for the artist. For instance, objects in the background are grayer, bluer, and less distinct than objects in the foreground, which are more vibrant, warmer, and have greater contrast.

[Image. Caption: “My photograph from Wasson Peak…notice the mountain in the background.”]
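Guidelines like this one are interesting precisely because they can be written down. As a toy illustration (the function name, the haze color, and the depth values below are all my own assumptions, not any real graphics API), the “grayer, bluer, less distinct” rule could be stated explicitly enough for a machine to follow:

```python
# A toy version of the painter's rule: the farther away something is,
# the more its color washes out toward a pale blue-gray haze.

def atmospheric_shift(rgb, depth, haze=(170, 190, 210)):
    """Blend an RGB color toward a haze color as depth goes from 0 (near) to 1 (far)."""
    return tuple(round(c * (1 - depth) + h * depth) for c, h in zip(rgb, haze))

print(atmospheric_shift((180, 60, 40), 0.0))  # a warm red rock up close: (180, 60, 40)
print(atmospheric_shift((180, 60, 40), 0.8))  # the same rock in the distance: (172, 164, 176)
```

The catch, of course, is that this particular guideline happens to survive being made explicit; most of what the portrait artist knows apparently doesn’t.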

They also know all that stuff you would’ve learned in middle school about perspective and vanishing points on the horizon:

[Image: an illustration of perspective and vanishing points.]
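Those perspective rules really are rules: the math behind vanishing points fits in a few lines. Here’s a minimal sketch, assuming an idealized pinhole camera (the names and numbers are mine, purely for illustration):

```python
# Toy pinhole projection: a 3D point (x, y, z) lands on the image plane
# at (f*x/z, f*y/z). Parallel lines converge as z grows; that limit is
# the vanishing point.

def project(x, y, z, f=1.0):
    """Project a 3D point onto a 2D image plane a focal length f away."""
    return (f * x / z, f * y / z)

# Two rails of a track, one unit apart, receding into the distance:
for z in (1.0, 2.0, 4.0, 8.0, 100.0):
    print(z, project(-0.5, 0.0, z), project(0.5, 0.0, z))  # both rails squeeze toward (0.0, 0.0)
```

These are the rules that write down cleanly; the rest of the artist’s knowledge is another matter.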

The artist who can replicate your best friend’s face realistically, photographically, undoubtedly has even greater knowledge. True, many artists have what we call talent, which is to say, an innate knack. But what is talent? I think of it as an ability to quickly and easily acquire a kind of knowledge. Those of us who lack the knack can sometimes achieve the same results, but it requires a lot more labor for us. A lot of what we achieve is done through painstaking conscious effort, whereas the talented person often relies on knowledge that’s more or less subconscious. It’s true that talent needs instruction and practice, but if you’ve ever witnessed a talented person in action, you’ll see that there really is such a thing as talent.

Are there rules for creating art? I suspect I could take classes all my life and never produce anything like the portrait above. We might conclude there are no strict rules, only loose guidelines. Not to mention that realism alone isn’t necessarily worthy of being called art. A camera is not an artist. What’s interesting about the portrait is not just that it’s realistic.

Right now, it seems to me, we’re at a point in AI at which we have cameras, but not artists.

Are there rules for human experience? And if there are, can we discover them? The problem is, we are all too talented. If there are rules that govern what we do, they are buried deep. We are like those awful college professors—geniuses in their fields—who can’t teach worth a damn. They don’t know how they do what they do. They just do.

It seems natural to attack the problem through our biology, through scientific understanding. But from what I hear, that method could take a long time. There’s the problem of the sheer amount of information that somehow exists in that squishy stuff between our ears. And what about embodiment? Is perception integral to learning? I don’t know.

It seems to me that a descriptive philosophy of experience could be useful in understanding how AI could evolve. We could uncover some rules (or perhaps their absence) on a non-scientific basis…in other words, via philosophical reflection. The idea is that some progress might be made outside of, or alongside, full neurobiological modeling.

I don’t pretend to know what’s involved in translating human experience into computer or robot terms. All I know is that we don’t know our own experience all that well. Does gaining that knowledge seem like it might be a start? Or at least a possible means of creating robots that are useful…now-ish? For instance, Roombas that don’t suck at sucking? (I’m aiming low here. I don’t want to get into theories of consciousness or general intelligence. One step at a time.)


Next post: Intentionality. This will hopefully give you a clearer idea of what phenomenology does.
