Phenomenology: Cotton Candy or Ripe Fruit for Artificial Intelligence?

Phenomenology is the study and description of experience as it’s experienced, without the preconceived notions of what lies behind the experience. Those preconceived notions can be commonsensical or scientific. For more on Husserl’s method of arriving at a phenomenological POV, see my earlier post on the subject.

Artificial intelligence is, according to Wikipedia, the intelligence exhibited by machines or software. The term also names the field that studies how to create such machines.


In my previous posts, on Husserl’s phenomenological method and on Heidegger (Part II here and Part III here), I discussed phenomenology from a purely philosophical perspective, as a means of solving or doing away with the infamous mind-body problem. The truth is, I barely scratched the surface of what phenomenology does. It’s sort of a running joke in philosophy…Husserl had a propensity to rehash his method, but when would he ever get around to doing phenomenology? I suspect this constant upheaval of methodology has something to do with why phenomenology is largely ignored for practical purposes. Now I want to ask whether phenomenology—even if rejected on philosophical grounds as a sort of masked solipsism—can prove fruitful for AI research.

Those of you who are well-versed in artificial intelligence may not see the possible connection, and I’m not sure there is one, but the question has been sitting in the back of my mind for some time. I’ve read a lot of posts concerning the stickier issues in AI: consciousness, self-awareness, mind-uploading, the possibility of AI achieving singularity, AI ethics, etc. I don’t hear as much about the more mundane matters, like how we might create a machine that cleans the house—that really cleans the house—or takes care of the elderly. These mundane robots are what I’m curious about. From what I understand, there’s a lot of work to be done. We have self-driving cars and facial recognition, Siri and Roombas. It’s a great start, but there’s a lot more to be discovered. I’d like to have a robot that will take care of me when I’m older, something that I can have confidence in, something that will allow me to stay in my home. I don’t care if it’s “conscious.”

It seems to me there remain problems in AI that involve interacting with the world, things that seem simple for us. Phenomenology explores exactly these issues, especially the seemingly simple things. And taking up phenomenology for AI research doesn’t require that we buy into Heideggerian ontological upheaval or dismiss science as fundamentally wrong. We can simply borrow the techniques, or maybe even stick to Husserl’s program of bracketing the natural world.

I’ve often wondered how much implicit phenomenology is happening in certain areas of AI, specifically in perception, object recognition, and embodiment. I finally got up the energy to look into it, briefly.

In my Google searches, I came across Hubert Dreyfus’ criticisms of AI, which began back in the sixties, when AI researchers focused on symbol manipulation (GOFAI, “good old-fashioned AI”) and made some overly optimistic claims about AI’s capabilities. Dreyfus was an expert on Heidegger and modern European philosophy. The AI community didn’t respond well to his criticism (some researchers even refused to have lunch with him, according to Wikipedia), probably because he answered AI optimism with overly pessimistic claims of his own. Nevertheless, he pointed out problems in AI that turned out to be real, problems that phenomenology allowed him to foresee. And according to that same Wikipedia article, Dreyfus gets little credit for highlighting phenomenological issues that were later addressed and resolved in a piecemeal way. The article points to the lack of a common language and understanding between AI researchers and phenomenology experts such as Dreyfus:

“Edward Feigenbaum complained, ‘What does he [Dreyfus] offer us? Phenomenology! That ball of fluff. That cotton candy!'”

Well, the “cotton candy” stresses the importance of the so-called unconscious skills we possess as part of human intelligence. These skills rely on, among other things, a holistic understanding of context, embodiment, a background of ‘common sense’ about the world, relevance, and intentionality.


An analogy for the problem:

Why is it that some people are able to replicate your best friend’s face so well that you can barely distinguish their work from a photograph?

[Image: a hand-drawn portrait, nearly indistinguishable from a photograph]

He’s your BFFE.

They know how to distinguish between what they think they see and what they “really see.” They learn how to be good empiricists, at least in one domain. (BTW, realism in this sense is not phenomenology…this is just a metaphor.)

And yes, there are guidelines for the artist. For instance, objects in the background are grayer, bluer, and less distinct than objects in the foreground, which are more vibrant, warmer, and higher in contrast.

My photograph from Wasson Peak…notice the mountain in the background.
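
That background-haze guideline (painters call it aerial perspective) is concrete enough to state as a formula: blend an object’s color toward a bluish gray as its distance grows. A minimal Python sketch, with a made-up haze color and falloff rate, nothing canonical:

```python
import math

HAZE = (0.6, 0.7, 0.8)  # a bluish gray; purely illustrative value

def aerial_perspective(color, distance, falloff=0.02):
    """Blend an RGB color (components in 0..1) toward the haze with distance."""
    t = 1.0 - math.exp(-falloff * distance)  # 0 up close, approaches 1 far away
    return tuple((1 - t) * c + t * h for c, h in zip(color, HAZE))

print(aerial_perspective((0.9, 0.2, 0.1), distance=1))    # still warm and vibrant
print(aerial_perspective((0.9, 0.2, 0.1), distance=200))  # grayer, bluer, flatter
```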

They also know all that stuff you would’ve learned in middle school about perspective and vanishing points on the horizon:

[Image: a perspective drawing, with parallel lines converging to a vanishing point on the horizon]
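
That rule has a one-line mathematical core: a pinhole camera divides by depth, which is why parallel lines converge toward a vanishing point. A toy sketch (the focal length f is an arbitrary stand-in):

```python
def project(x, y, z, f=1.0):
    """Pinhole projection: a point at depth z lands at (f*x/z, f*y/z)."""
    return f * x / z, f * y / z

# Two parallel rails, one unit left and right of center, receding into depth:
for z in (1, 10, 100, 1000):
    print(project(-1.0, -0.5, z), project(1.0, -0.5, z))
# Both rails squeeze toward (0, 0): the vanishing point on the horizon.
```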

The artist who can replicate your best friend’s face realistically, photographically, undoubtedly has even greater knowledge. True, many artists have what we call talent, which is to say, an innate knack. But what is talent? I think of it as an ability to quickly and easily acquire a kind of knowledge. Those of us who lack the knack can sometimes achieve the same results, but it requires a lot more labor for us. A lot of what we achieve is done through painstaking conscious effort, whereas the talented person often relies on knowledge that’s more or less subconscious. It’s true that talent needs instruction and practice, but if you’ve ever witnessed a talented person in action, you’ll see that there really is such a thing as talent.

Are there rules for creating art? I suspect I could take classes all my life and never produce anything like the portrait above. We might conclude there are no strict rules, only loose guidelines. Not to mention that realism is not necessarily worthy of being called art. A camera is not an artist. What’s interesting about the portrait is not just that it’s realistic.

Right now, it seems to me, we are at the point in AI where we have cameras, but not artists.

Are there rules for human experience? And if there are, can we discover them? The problem is, we are all too talented. If there are rules that govern what we do, they are buried deep. We are like those awful college professors—geniuses in their fields—who can’t teach worth a damn. They don’t know how they do what they do. They just do.

It seems natural to attack the problem through our biology, through scientific understanding. But from what I hear, that method could take a long time. There’s the problem of the sheer amount of information that somehow exists in that squishy stuff between our ears. And what about embodiment? Is perception integral to learning? I don’t know.

It seems to me that a descriptive philosophy of experience could be useful in understanding how AI could evolve. We could uncover some rules (or maybe the lack thereof) on a non-scientific basis…in other words, via philosophical reflection. The idea here is that perhaps some progress could be made outside of, or alongside, a full neurobiological modeling of the mind.

I don’t pretend to know what is involved in the human-to-computer/robot translation of human experience. All I know is that we don’t know our own experience all that well. Does this knowledge seem like it might be a start? Or at least a possible means of creating robots that are useful…now-ish? For instance, Roombas that don’t suck at sucking? (I’m aiming low here. I don’t want to get into theories of consciousness or general intelligence. One step at a time.)


Next post: Intentionality. This will hopefully give you a clearer idea of what phenomenology does.

93 thoughts on “Phenomenology: Cotton Candy or Ripe Fruit for Artificial Intelligence?”

  1. Happy new year! It is late, and after spending two days translating some old text and several hours preparing it for publication, reading this one will have to wait until tomorrow 🙂 It is nice to see you back in business.

    • Thanks! And Happy New Year to you too. Yes, it’s been a long time since I’ve posted. I decided to stop worrying about the quality so much and just get it out there. I figure the corrections can be made once people put me straight. 🙂

      • I’ve got the same thing going… if I treat posts as “publications” I go crazy trying to make them right, but if I consider this an informal conversational venue, then a post is just the beginning of a conversation.

  2. I wonder if the term Artificial Intelligence isn’t really a misnomer, in fact an oxymoron? From the little I know, the approach seems to be more about mimicry, the replication of an only partially autonomous, pre-conditioned functionality, than about intelligence per se, which surely is something dynamic, performed within or as awareness, a process that is more than simply procedural learning. Intelligence embraces things like instinct and intuition, it seems to me; as such it at times derives its life from feelings and a recursive biological feedback.

    When we don’t understand what conscious phenomenal objects are, beyond being able to describe them, and can’t even agree whether consciousness exists at all, then how the hell are we supposed to replicate it? And if intelligence isn’t consciousness in action, then it’s just action, just functionality, and that isn’t intelligence, is it? By the way, I’ve always thought the idea of machine consciousness is nonsensical – quite literally – as consciousness without feeling (sentience) is not consciousness. See: Penrose’s fictional ‘Ultronic’ Tin Man/computer.

    • I wasn’t sure about the definition of AI either. I tend to see it very broadly. For instance, Siri is AI. I would bet this is something people debate about. Is Siri “intelligent”? Meh. I guess that depends on how we define intelligence, one of those sticky topics. If you tell Siri to “fuck off,” you’ll get a vaguely appropriate response. Maybe a funny one, if you’re lucky. But that’s all been programmed. It’s not as if Siri is “thinking,” certainly not the way we do.

      As far as consciousness goes, I have no idea whether/how we are able to replicate it other than the traditional methods. 🙂

      I think in AI, “intelligence” is just intelligent behavior…at least for now. I could be wrong. As far as the hard questions of consciousness go, I’m ducking out. I know everyone loves those topics, but I feel too much like I’d be shooting in the dark if I gave any opinion about such matters. In truth, I don’t have an opinion.

      I really just want cooler robots that perform functions. I’m not sure I’d want robots that are sentient. That’s what dogs are for. 🙂

      • I hope Geordie doesn’t read that last comment of yours Tina, or you may be needing a Roomba to gather up what remains of your wardrobe contents.

        You seem to be saying that Artificial Intelligence (AI) is largely no more than a simulated form of Appropriate Behaviour (AB); if so, I agree.

        I see from your link that Dreyfus thought human intelligence and expertise depend primarily on unconscious instincts, which was my point earlier.

        You’d better go and apologise to Geordie now I think Tina – off you go. 😉

        • Yes, Dreyfus thought even AB was impossible, but he was proven wrong to some degree, at least when it came to the things we have now (computers that beat humans at chess, etc.). But he was right about the rudimentary things and the way we know. I’ll get into that more in the next post.

          Oh Geordie doesn’t mind being a sentient being. He’d probably get jealous if my Roomba started having feelings. (I think he prefers being an only child.)

          Speaking of Roombas and dogs, I figured out a practical use for the Roomba. If I strap a little plastic mouse to it, it becomes a source of amusement for Geordie. But only mild amusement. He follows it around for a minute or two, then decides it would be safer to stay in the other room.

          • “Oh Geordie doesn’t mind being a sentient being.” – Of course he doesn’t, though what I meant was that he might rather object to be being likened to a robot: “I’m not sure I’d want robots that are sentient. That’s what dogs are for.” Crossed wires? Me or you?

          • I would not call the Roomba intelligent, unless you would call paramecia intelligent. When a paramecium hits an obstacle, it swims a little bit backwards, turns a little bit, and then tries another direction. That way, it can explore an area and find other small organisms it can eat. I think they are comparable in their behaviour to such cleaning machines, although these ciliates are actually more sophisticated. Like Geordie, I would become bored after some time watching or chasing such a machine, but I cannot easily get tired of looking through a microscope watching ciliates.
            Actually these ciliates are astonishing and fun to watch. It is amazing how much behaviour can be packed into a single cell there. See for example https://www.youtube.com/watch?v=eeuONqXD_OU, with paramecium and euplotes.
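
            (For what it’s worth, the bump-and-turn strategy described above fits in a few lines of code. A toy sketch, assuming a hypothetical robot interface with bumped/reverse/turn/forward methods; certainly not any real vacuum’s firmware:)

```python
import random

def bump_and_turn(robot, steps=1000):
    """Paramecium-style exploration: go straight until you hit something,
    then back up, turn a random amount, and try again."""
    for _ in range(steps):
        if robot.bumped():                       # hypothetical bumper-sensor call
            robot.reverse(distance=0.05)
            robot.turn(random.uniform(60, 180))  # degrees; made-up range
        robot.forward(distance=0.1)
```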

      • There are different kinds of AI, and there’s not much use complaining over the term… it’s historical and we’re stuck with it.

        If “natural” intelligence is humans (which for purposes of this definition we’ll assume are, in fact, intelligent despite appearances), then “artificial” intelligence is machines. The original definition of AI assumed “intelligence” was human-like self-awareness, not systems like Siri.

        Back when I was studying this stuff, they used the terms “hard AI” to mean the above (“intelligence”) and “soft AI” to mean what are essentially search algorithms, like Siri (which searches its database for an appropriate answer to your question — it’s not terribly different from Google with a voice-activated front end).
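
        To make “search algorithm” concrete, here’s a cartoon of that kind of soft AI in a few lines of Python. Purely illustrative (certainly not Apple’s code): every answer is hand-programmed, and the “intelligence” is just keyword overlap.

```python
import re

RESPONSES = {  # hand-programmed canned answers; nothing here is learned
    ("weather", "rain", "forecast"): "Here's the forecast I found.",
    ("play", "music", "song"): "Now playing something you might like.",
    ("joke", "funny"): "Why did the robot cross the road?",
}

def soft_ai_reply(utterance):
    """Return the canned answer whose keywords best overlap the query."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    best = max(RESPONSES, key=lambda keys: len(words & set(keys)))
    return RESPONSES[best] if words & set(best) else "Sorry, I didn't get that."

print(soft_ai_reply("What's the weather forecast for tomorrow?"))
```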

        I think they use different terms these days, but those are distinctly different fields. The “Holy Grail” of AI research is replicating a human-like self-aware consciousness; in many regards it is pure research, an attempt to understand our own minds, rather than anything with an obviously practical purpose (in fact, creating genuine new minds raises all sorts of sticky ethical questions).

        Point is, when talking about “AI” the first thing is to consider what kind of AI is involved.

        The Roombas will get better and better (the suction problem may be due to their small size). All sorts of soft AI will get better and better. Most of that, we know the science. Hard AI is the challenge.

        • I agree with you about not seeing much purpose in reaching the “Holy Grail”. First of all, where would the funding be? Where’s the commercial aspect? Maybe someone will convince Trump that we need such a thing and it’ll take his place as president so he can sit back and comb his hair.

          I think as far as soft AI goes, that’s where I see progress coming soon. I hope.

          • “I agree with you about not seeing much purpose in reaching the ‘Holy Grail’.”

            Oh, I think there’s a very important purpose (more than one), but it lies in the pure research area. Basic science in understanding our world — in this case, our minds.

            But practical uses in society? Yeah, no. 😀

            The practical uses will come from soft AI (the term I’ve liked is “expert systems”).

            “I think as far as soft AI goes, that’s where I see progress coming soon.”

            We’re already well on the way, there!

            • Ah! She creeps me out. Couldn’t they have given her a fake-smiley face like a real receptionist? What would happen if someone started cursing her out? I could see that happening, and then the guy behind says, “Wait a minute, I have kids here, you can’t use that kind of language…” etc. But let’s say that guy has a lazy eye which makes the AI receptionist think he’s looking at her. Let’s put her to a real life test. 🙂

              Okay, I know, I’m cruel. But it would be fun.

  3. I have no idea about phenomenology, but I like your notion of robots that really know how to clean. Currently there are automated vacuum cleaners that patrol the house, following some kind of algorithm that ensures they cover as much of it as efficiently as possible. But no human would ever clean in such a way.

    A human-like intelligent machine that approached the task in the same kind of way as a human might take shortcuts and miss bits. Maybe it would sweep the dirt under the bed and hope no one noticed. That might be the most efficient use of its time.

    So humans are exceptionally good at lots of tasks, but we make all kinds of mistakes. Even the most diligent of us look for ways to cheat at housework. Those who don’t are diagnosed with OCD, and are inefficient in different ways. Current machines may be better than humans at certain tasks, but we have to decide what we want from our machines if we are to design them properly.

    • Very true. My Roomba seems to be made to navigate without knocking over too many things, scooping up the fringes on the rug, etc., but it doesn’t EVER get that spot I want it to get. I end up following it around and kicking it to make it go where I want it to. The only real use for it that I can see—as it is now—is as a vacuum for the disabled. You can buy a remote control for it and make it go where you want it to. On the other hand, it really doesn’t pick up very well. The suction needs to be increased. (You’d think this would be a simple thing, but maybe it interferes with the avoid-the-carpet-fringes function?)

      On mistakes, that’s coming up soon. Phenomenology explores that area quite a lot. What is it to make a mistake? If we can figure that out, and the way we make them, we might get closer to understanding how we think and experience the world.

  4. Totally agree about wanting a robot maid that actually works. I don’t think we’ll need a conscious entity to get that. Just as in the case of the self-driving cars, when we get it, I doubt anyone will be tempted to conclude that there is another conscious being in there.

    But in this specific case, I think there’s another roadblock: mechanical ability. Having a machine sophisticated enough to navigate the house and clean without breaking things is one thing, but having one with the mechanical ability, the fine motor control, to do so is another. That may well represent a bigger obstacle to having a robot take care of you in old age than the intelligence part. Humans aren’t unique just because of our intelligence, but also for our dexterity. Only the other primates rival us there, and robots still have a long way to go before they have anything like it.

    Regarding using phenomenology to help with a non-conscious intelligent device, I’m not sure. It feels like a stronger case could be made for using it, along with human and animal psychology, to help us in building a conscious system. But some of this may well be due to my ignorance of phenomenology itself.

    • Well, I’m unsure about phenomenology’s role as well. I think as far as fine motor control goes, that might depend on a mechanistic improvement, but also a perceptual one. If the “gears are in place” for the robot, but it still can’t manoeuvre properly due to its lack of knowledge about the world and how to move in it, that could be where phenomenology might help. When I get into intentionality this might be made clearer. I don’t know what could be programmed into a robot to make it learn its environment the way we do, or whether that kind of thing is possible in AI.

      • I tend to think that if an AI can navigate the roads, recognize bike riders, move around messy road construction sites, etc., then it can probably navigate the home.

        But the self-driving car may inspire overconfidence. The mechanics of a car are well established. They’ve been in wide use for over a century. The fine motor capabilities of a robot that can move delicate items off of a dresser so it can dust, wash a dog, or give an infirm person a sponge bath? Even if it recognizes everything that needs doing, I think we have some way to go before it has the fine motor capabilities to do it. One company that’s been working in this area, Boston Dynamics, appears to be making progress, but it recently lost a military contract because its gas-powered animal-like robots were too loud.

        On the other hand, more intelligent Roombas, robot lawn mowers, and similar devices strike me as things we’ll see fairly soon. Add enough of these intelligent appliances into the mix, and the ability of someone with diminished physical abilities to live by themselves should be a lot better than it is today.

        Looking forward to the intentionality post!

        • There are surgical “robots” capable of performing surgery, so the motor control isn’t the main issue. (I say “robots” because these machines are fully controlled by the surgeon during the operation.) Gear systems allow a lot of precision, and we can even build systems that sense pressure.

          The problem is how complex the real world is. It’s possible there’s a huge gap of sorts between what is possible with soft AI (such as Siri and Roomba) and what does require human-like intelligence. As you say, the driving universe is actually fairly simple once you get down to it. The number of (“legal”) actions possible just in the room I’m sitting in likely far exceeds those possible during my entire drive to work.

          (I’ve seen videos of those experimental military robot “pack mules” walking through rough terrain and… they’re kind of scary in a way. Flashes of Terminator! Just mount some machine guns on them! 😀 )

          • From what I understand, the domain of surgical robots is still pretty narrow. They require careful positioning by medical personnel prior to use. But I’ll admit it’s not an area I’m familiar with in depth.

            Robots can have fine motor control, for a specific domain. But from what I’ve seen, if you try to design one to have that control in a variety of domains (which I’d think we’d want our hypothetical maid to have), the apparatus quickly escalates in size and complexity. I’m sure it will all eventually be worked out, but I think it will require advances in mechanical engineering. If not, we’d already have the exoskeletons that show up in so many sci-fi movies.

            On self-driving cars vs. human intelligence: I don’t think a robot maid would need human intelligence. It wouldn’t need to hold conversations, track social connections, or compose art. But it would need something like human mobility and dexterity. Unless, as I mention above, we split its functions among several different types of machines and appliances.

            Did you see the Boston Dynamics Christmas video? It had a woman in a sleigh being pulled by a team of their robots. A lot of people were creeped out by it.

              • It may well be that the optimal solution doesn’t involve a humanoid machine, but it sure would be nice to have a machine that transferred the clothes from the hamper to the washing machine, later from the washing machine to the dryer, then hung, folded, or ironed the clothes as necessary.

                • Folding the laundry! Yes. That would be awesome. I’d be happy with a robot that did just that. The rest of the laundry process really isn’t so bad. (Besides, I never iron. Ever. If an article of clothing needs to be ironed, I won’t buy it.) 🙂

                  On the point of robots doing things differently, I think that would make a lot of sense. There’s usually a more efficient method. Like this (I learned about this one from a video someone posted on FB a long time ago, and that was in English, but now I can’t find it. Sorry.):

                  And here’s a robot that folds laundry…more painful to watch than the Roomba at the beginning of the video that misses quite a lot of what it’s supposed to vacuum:

                    • True story. These different ways may not be so annoying when you consider that you didn’t have to do the laundry…who cares if it takes the machine 3 hours? It wasn’t your 3 hours. (Although, to be honest, when I use my Roomba I can’t help but follow it around to make sure it’s working. Which really defeats the purpose.)

                    • Yeah, it is noisy, but no more so than most vacuums. I have an Electrolux “ultra silencer” and it’s about the same as that. (My Electrolux isn’t all that quiet.) My husband can’t stand the Roomba or the other vacuum (and Geordie hates all vacuums, but I don’t think he’s afraid of them really. I think he just doesn’t like the sound. However, he’ll put up with any obnoxious sound if he thinks it’s a toy. So if I turn the Roomba into an object of interest by strapping on a toy mouse, he’ll follow it around and try to grab the mouse…but only, as with all forms of playing, if I’m there watching him).

                      The Roomba runs whenever you want it to. I think it’s ideal for people who work and don’t have pets. You can program it to run while you’re away, and it is supposed to find its own dock to recharge. (I’ve yet to really test that. I have serious doubts.) I think it would work really well if you have:

                      a) only tile or wood/laminate floors

                      b) a smaller space, such as an apartment…although you can dedicate it to one space by using virtual walls and by simply closing doors.

                      c) not too many obstructions.

                      Supposedly it won’t fall down stairs. I don’t have stairs, so I can’t tell you about that.

                      I don’t notice it getting underfoot. That could be due to the layout of my house. I have an open floor plan…very open. So basically a living room, a dining room, an entry, and a kitchen all in one big space.

                      I decided to buy it after all because I just liked the novelty of it. I consider it an in-between cleanings cleaning machine.

                      It does do a “spot clean” which means it vacuums for a minute or so in a spiral shape. I’ve used that feature a lot. Mostly for entertainment. I don’t know why, but it’s fun to watch.

                      My main problem with the Roomba is that it doesn’t do carpets very well and it can be messy to empty and clean it. The instructions say you not only have to empty the bin each time, but also clean the brushes. (They give you a nifty gizmo that helps with the brushes, but it’s still annoying.) On top of that, you’re supposed to clean the wheels and sensors once a week or so. This is sort of time-consuming because of the design. I had to get my husband to take the front wheel off because it seemed really stuck. Geordie’s downy hairs get caught up in the wheels and they’re hard to get out. With my other vacuum, there’s virtually nothing to be done except throw out the bag.

                      The one thing I like about the Roomba is that it seems to get places I wouldn’t, which makes me happy. It always comes back with a full bin, even after I’ve just vacuumed. (Which makes me sound bad, I know. And I used to have my own cleaning business. So that’s bad…although, you know what they say about housekeepers’ houses.)

                    • Wow, that thing sounds like more trouble than it’s worth. I’ve thought about buying one before, but it was with the fantasy of taking it out of the box, charging it up, saying “go forth and clean” and then forgetting it exists except for occasionally seeing it scamper across the floor. That’s what the commercials imply. They somehow forget to mention the noise, weekly maintenance, or functional limitations.

                      My current vacuum cleaner, an ancient Eureka, is loud enough to wake the dead. My old dog hated it and barked at it the entire time I used it before I finally realized it was kinder to have her outside when vacuuming. But it does a decent enough job with the carpet. My only real beef is that the electrical cord is constantly in the way.

                    • Yeah, and considering how much they cost, it’s not really worth it unless you’re just a super anal type who likes to vacuum between vacuums. You can get a really excellent vacuum for a lot less money (although, then you have to do it yourself.) On the other hand, if you program the Roomba to run every day while you’re at work, you might see results eventually. I just don’t think it would work if you have a lot of furniture, a big space, or mostly carpet. (It does a fine job with pet hair on tile, but not carpet).

                    • Hate to admit it, but the only time I really vacuum is just before company comes over. But my house is mostly carpet and I have lots of obstructions (furniture, stacks of books, and other junk), so I think I’ll hold off until they get a little cheaper and maybe a little more intelligent.

                    • Ah, then your current vacuum is probably pretty good for you. I remember the one my mother had. I forget the brand, but it was ancient. It was heavy as hell, loud as hell, but it lasted all my life and maybe even longer. I don’t know when she bought the thing, but I have early childhood memories of it. It was odd using that same vacuum to clean her house before it was sold. That damned thing still did an excellent job on the carpet!

            • “I’m sure it will all eventually be worked out, but I think it will require advances in mechanical engineering.”

              You are, perhaps, unaware of state of the art mechanical engineering. We’re really, really good at it. You’d be amazed. (I’ve done some programming for motor-driving systems, and what you can do with modern stepper motors is mind-blowing.)

              “If not, we’d already have the exoskeletons that show up in so many sci-fi movies.”

              You are, perhaps, unaware of what’s being done with prosthetic limbs and with some military systems. Exoskeletons look neat in movies, but all those cool-looking external moving parts don’t work out so great in reality.

              The problem, as I alluded to already, is control systems. Surgeons work the surgical robots (although they can be programmed to do specific steps), and human nerve impulses (or mental commands in some cases) control prosthetic limbs.

              We’re just now getting to the point of moving a box through a rule-based environment of other moving boxes and various obstacles. A big part of that is training; these cars learn through conditioned experience. Even a Roomba learns the shape of its environment.
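
              (A cartoon of “learns the shape of its environment”: an occupancy grid, where the robot marks each cell it drives through as free and each cell it bumps into as solid. A minimal sketch, not iRobot’s actual mapping:)

```python
FREE, WALL, UNKNOWN = 0, 1, -1

def update_map(floor, cell, bumped):
    """Record what the robot just learned about one grid cell."""
    floor[cell] = WALL if bumped else FREE

floor = {}                                 # the learned map starts empty
update_map(floor, (3, 4), bumped=False)    # drove through: open floor
update_map(floor, (3, 5), bumped=True)     # bumper fired: something solid
print(floor.get((9, 9), UNKNOWN))          # unexplored cells stay unknown
```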

              Consider what’s required to train a (really good) professional human maid…

              “[A robot maid] wouldn’t need to hold conversations, track social connections, or compose art.”

              But it would need to solve some complex problems involving knowledge of human behavior in general and “their” human(s) in particular. Think about the value of a really good maid and all that implicit knowledge and experience that comes along with that.

              The world is hugely complex, and I think Tina’s point, at least in part, is that many with an interest, either professional or passing, in AI may not appreciate all that’s involved in the phenomena of experience. (And, further, that they may benefit from the philosophical work done to date.)

              • I’ll admit I’m not up on the latest in mechanical engineering. The most advanced stuff I’ve seen is NASA’s Robonaut, which is impressive, but seems more limited the more I read about it.

                On exoskeletons, I can’t see where they’d need that much intelligence. Just the ability to detect and respond to pressure from the rider’s limbs. I used equipment 20 years ago with those kinds of sensors and that kind of responsiveness, but it was bulky and single-purposed. As far as I can tell, the missing ingredient is the raw ability to physically provide that functionality. If I’m wrong on this (and that’s certainly possible), I’d be grateful for links or terms I could google.

                “But it would need to solve some complex problems involving knowledge of human behavior in general and “their” human(s) in particular.”

                That’s very similar to what people used to say about self-driving cars. They would “never” be able to navigate city streets, deal with pedestrians, pets, bike riders, etc., yet unless Google is being totally dishonest about their results, they’re handling this stuff.

                “The world is hugely complex, and I think Tina’s point, at least in part, is that many with an interest, either professional or passing, in AI may not appreciate all that’s involved in the phenomena of experience.”

                I don’t necessarily disagree. I’ve said myself that many people making pronouncements about AI could stand to learn more about the human mind, from psychology, neuroscience, and philosophy of mind.

                • “…but seems more limited the more I read about it.”

                  I’m not sure we’re talking about the same thing… when you say we lack the “fine motor capabilities” are you talking about the control systems or the motor capabilities?

                  If you mean the former, we’re saying the same thing. All I’m saying is that the motors — the mechanics — aren’t a problem at all. Designing intelligent control systems is a whole other ballgame.

                  “On exoskeletons, I can’t see where they’d need that much intelligence.”

                  Sure, but I wasn’t suggesting they did. You’d said (about maid robots), “I think it will require advances in mechanical engineering. If not, we’d already have the exoskeletons that show up in so many sci-fi movies.”

                  I emphasized “in mechanical engineering” because that’s what made me think we’re talking just about the motors. Mechanical engineering isn’t why we don’t have sci-fi exoskeletons (if you’re thinking Matt Damon in Elysium).

                  It’s because, while they look cool in movies, they’re not practical — think about the battery necessary for a powered wheelchair. Think about the weight. An exoskeleton is comparable.

                  For a person who can walk, it’s a lot to schlep around (and expensive). For a person who can’t walk, once we solve the two-leg balance issues, we might see ambulatory exoskeletons replacing wheelchairs. There are small versions now for limb support.

                  “That’s very similar to what people used to say about self driving cars.”

                  That’s a fair point. I still think the driving universe — especially in terms of implicit knowledge — is far less complex than the maid universe.

                  Remember that Google cars are being trained. They would have undergone a great deal of training in closed courses before being allowed on the streets. (And, I would be surprised if there wasn’t considerable virtual modeling done before that.)

                  This is a long process, still in progress. Yet the phase space of driving is fairly simple. It consists of pathways and objects, some moving, some not. The goal is fairly simple: proceed down this path; don’t hit stuff.

                  A great deal of the effort is in correctly recognizing objects. Given an accurate model of physical space, the actual driving is fairly trivial. Airplane auto-pilots have done it for a long time.

                  Likewise, recognizing objects in the physical world of the maid will be a big part of the challenge and it’s not as simple as just the objects themselves. There is the need to identify clean objects versus dirty ones.

                  And it won’t be enough to identify an object as being in a general area of phase space (such as “pedestrian” which has quite different properties from “bicycle” or “truck”). The object resolution needs to be much more fine-grained and conditions of objects need to be identified.
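
                  (In data-structure terms, the difference might look something like this; a purely illustrative sketch of what the two systems would have to represent, not anyone’s actual schema:)

```python
from dataclasses import dataclass

@dataclass
class RoadObject:
    """Coarse categories mostly suffice for driving."""
    category: str        # "pedestrian" | "bicycle" | "truck"

@dataclass
class HouseholdObject:
    """The maid needs far finer resolution, including condition."""
    category: str        # "cup", "sock", "heirloom vase", ...
    condition: str       # "clean" | "dirty" | "fragile" | ...
    owner: str           # whose stuff is this?
    may_be_moved: bool   # knocking this one over is not an option
```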

                  I’m not saying it’s uncrackable. Just that I think it’s a much harder problem than driving.

                    “It’s because, while they look cool in movies, they’re not practical — think about the battery necessary for a powered wheelchair. Think about the weight.”

                    That’s pretty much my point, Wyrd. You seem to be saying that if we just take the human out, the problems go away. I think that needs justification, but let’s instead say we have a remote-controlled humanoid machine. I can see definite practical uses for such a machine (remote-controlled infantry, remote-controlled search and rescue, etc.). We have lots of remote-controlled robots, but none with the dexterity of a human. Why would that be if we already have the mechanical technology to make it happen?

                    I’m definitely not exceedingly knowledgeable in mechanical engineering, and I’m totally open to learning that we do already have those capabilities (it would be cool). Again, if you have examples or links, I’d be grateful.

                      “You seem to be saying that if we just take the human out, the problems go away.”

                      [boggle] Once again you’ve interpreted my words through some distorted filter of your own. I never said, nor implied, anything of the sort.

                      Go back and re-read my first comment. I made two points:

                      One. If you — as I got the impression you did — really think mechanical engineering is the limitation, that is simply not correct. The necessary mechanical systems have existed a long time. This is not a matter of opinion.

                      Two. I suggested it was possible the “maid universe” is orders of magnitude more complex than the “driving universe.” This is my opinion and absolutely subject to debate.

                      (Further, it seems possibly aligned with Tina’s ideas expressed in this post, so I think it’s a worthwhile point of discussion. Maybe Tina can even apply some ideas of phenomenology to the domains of driving and maiding.)

                      “Again, if you have examples or links, I’d be grateful.”

                      Most of my mechanical engineering background is many years in the past, but visit the web site of any electric motor manufacturer (especially micro- and stepper-motors). Or look into what Disney has done with animatronics, or what Hollywood has done with mechanical special effects, or what’s being done with prosthetic limbs.

                      Motors, and mechanical engineering in general, are old and well-explored territory!

        • Very true about the Roomba-like devices. There are a lot of things out there that are so close to working out. I got my mother a pill reminder that was lockable and would beep until you turned the machine over to take the pills out. It didn’t ensure that she’d take them (and she didn’t), but for those who just need the reminder, it wasn’t too bad.

          The lawn mower thing should already be here! Seriously, where is it? Most lawns are easier to navigate than a room in a house. Or imagine even a remote-controlled lawn mower.

          The self-driving car may be in the far future, but I agree about it inspiring overconfidence. There’s also the fact that some people like to drive.

  5. I’ve never really explored phenomenology, so it’s hard to make a coherent contribution regarding its value to AI. Going by a very general definition of phenomenology, it almost sounds like it’s at least part of what AI studies, so it does seem like it would have some value.

    If nothing else, in perhaps distinguishing between human experience and machine experience (if, in fact, machines can experience anything at all). The question might be how to turn phenomenological ideas into computer code (or other machinery).

    Certainly the goal of some AI is to replicate the human experience of experiencing. To the extent AI deals with what goes on in the mind (regardless of external objects), that seems phenomenological to me.

    I’ve often thought scientists in all disciplines need to spend more time exploring outside their domain (which can be a challenge; keeping up with their own domain can take all their time). I’ve read articles by scientists and wondered if they knew about developments in some other area I’d read about that seemed useful.

    You’d think they would, but… do they?

    p.s. Happy 2016!

    • For phenomenology, I had in mind the way we navigate through the world, interpret situations, the way we understand speech, etc. You’ll see in my next post.

      I wonder about the possibility of turning our experience into computer code. Siri is, as you said, like a Google search. But do we have an internal “Google search” or is something different going on? These are the sort of questions I want to bring up.

      On scientists studying phenomenology…yeah, they probably have a lot of other things to deal with. And a few sentences into Being and Time would send them running for sure.

      In my searches for AI and phenomenology, it was mostly phenomenologists interested in the subject. Writing papers for themselves, it seems. I read a few and they were mostly too obscure for me.

      • “I wonder about the possibility of turning our experience into computer code.”

        Likewise. (As I think you know, I’m not convinced it’s possible. An actual physical “neural” network may be required.)

        “On scientists studying phenomenology…yeah, they probably have a lot of other things to deal with.”

        It may also be that philosophy doesn’t appear, to them, to offer concrete approaches, let alone hints at how to write the computer code.

        And there is a disdain some scientific types have for philosophy. A common complaint is that, after 2000 years, it doesn’t seem to have produced any actual answers whereas science, in that time, has provided so many.

        I think they’re wrong and that philosophy has value, but that is a rather different discussion.

    • “Certainly the goal of some AI is to replicate the human experience of experiencing.”

      Yet it’s experience without feeling, and what kind of experience is that? At the risk of sounding terribly sexist, I think there’s a lot of very male thinking going on with all this AI stuff, and guess what, the last thing men think about is feelings. Apologies for the gross generalisation.

      • “Yet it’s experience without feeling, and what kind of experience is that?”

        Well, a goal of hard AI research is to determine whether that is the case or whether consciousness is a computable process. IF the latter is true, then feelings are just calculations, and they can happen in metal just as easily as they can in meat.

        I don’t think so… but I might be wrong.

        (FWIW, I wrote a rather lengthy series about my feelings on AI — in part so I wouldn’t have to discuss it in comments so much anymore 🙂 — and why I don’t think consciousness is a computable process. The last post, Information Processing, summarizes the series and has a handy list of the posts.)

      • Well, there’s the phenomenology of emotion too, including the phenomenology of sex (Merleau-Ponty, being French, might have some interesting things to say about that). Translating this to AI behavior might be difficult, but who knows…maybe someday…that’s what you guys should be afraid of. 😉

  6. I think I am going to make several comments here since there are several questions raised.
    To begin, a comment on the question of what intelligence is. I have put my ideas on that here: https://creativisticphilosophy.wordpress.com/2015/12/12/a-definition-of-intelligence/.
    Basically, I think intelligence is the ability to generate new knowledge. In that sense, a system like Siri is not intelligent, because all the knowledge contained in it is programmed into it by people. It does not generate new knowledge by interacting with its environment. One can view intelligence in this sense and creativity as the same thing.

    • I guess I was just going along with what others are calling “AI” or, more specifically “soft AI”.

      I wouldn’t call Siri “creative”, so if that’s what’s necessary in order for a thing to be deemed intelligent, I would think we have a long way to go (although I admit to having virtually no knowledge of what’s going on in AI).

      • In my view, classical AI is an example of what Lakatos (as far as I remember) called a degenerating research programme. It does not work, but people are invested in it; they have bet their careers on it. Instead of using science to disprove their opponents and critics, they try to suppress them. Refusing to sit at the same table with Dreyfus is a symptom of that.
        It did not work, in my view, because they tried to find the intelligent algorithm, the laws of thinking by which intelligence works. However, such fixed laws do not exist. Cognition is changing and reprogramming itself all the time, and this process of change is what makes intelligence. Back in the 1950s, they started with a wrong assumption. What comes out of this are things like these language recognition systems. They “understand” your question about the weather and things like that and come up with a more or less useful reply, but they have to be hand-programmed. Such systems contain different analytical spaces (chunks of knowledge about certain aspects of the world), like the weather, the map of your city, restaurants and shops, etc., but they cannot extend themselves. They apply knowledge (like any computer program does) but do not generate new knowledge. Calling this intelligence is hype. Hype from those researchers who have to sell their research to the people funding them, and hype from the marketing people whose job it is to be hyping.
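
        (A toy illustration of such hand-built “analytical spaces”: each domain the system “knows” is a block somebody typed in, and nothing in the code ever adds a new one. A sketch with invented handlers, not any real assistant’s architecture:)

```python
# Each "analytical space" is a hand-written chunk of domain knowledge.
def weather(query):  return "Tomorrow: sunny, around 20 degrees."
def city_map(query): return "Turn left at the next corner."

ANALYTICAL_SPACES = {"weather": weather, "map": city_map}  # fixed at ship time

def answer(domain, query):
    handler = ANALYTICAL_SPACES.get(domain)
    return handler(query) if handler else "No knowledge of that domain."
    # Note what's missing: no code path ever adds a key to ANALYTICAL_SPACES.
    # The system applies knowledge; it never generates new knowledge.
```
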
        This is the reason I left the area and became a programmer in the finance sector instead. Back then, in the 1980s, I found the stuff being done in AI quite ridiculous, and what I see now is only a little more impressive, because computers have become more powerful. But most researchers in the area still have not understood that the real job would be to create a self-developing system. That would be a system for which there is no general theory, i.e. one cannot describe in general how it works (although you can understand each single instance). This is something familiar from the humanities, psychology, and social and cultural studies, but it is alien to people doing hard science.
        However, this is an outsider’s view. I am an outsider. People in the AI community would probably dismiss my views as nonsense.
        I think we are soon going to see self-driving cars on the streets. However, this seems to be a task that does not require too much intelligence. Navigating a cluttered environment in which others are moving around as well is something many animals have been doing for a long time; fish and insects do it. So I think this is feasible. I am hesitant to call it intelligence, but I see it coming.
        I am not so sure we have a very long way to go to get real artificial intelligence, in the sense of self-developing systems that can generate new knowledge. As I have been trying to point out in other articles, I think a core system that can start to develop this way might be rather small, so it could be within reach (one argument for that is here: https://creativisticphilosophy.wordpress.com/2015/12/30/an-estimate-of-the-complexety-of-human-innate-knowledge/). I don’t know if we should try. There is no way to control such a system’s development. And if such a system develops consciousness of some kind (whatever that means), trying to control it might also be a form of slavery, so there are thorny issues here.

        • “Calling this intelligence is hype. Hype from those researchers who have to sell their research to the people funding them, and hype from the marketing people whose job it is to be hyping.”

          I can see your point. I was thinking it was merely a matter of semantics, but calling Siri “intelligent” could lead people to the wrong idea.

          “It did not work, in my view, because they tried to find the intelligent algorithm, the laws of thinking by which intelligence works. However, such fixed laws do not exist.”

          What if there were fixed laws—fixed not in a very specific or culture-dependent way, but more broadly, generally—that depended on certain things like perception, or maybe even embodiment? Could it be that creative thinking relies on having a background “working” knowledge or “hypothesis” of how the world works? Could it be that learning is achieved by both a) some foundational rules about perception and flexible thinking and b) moving in the world, interacting with it, as children and babies do?

          I’ll check out your post now….I probably should have done that first.

    • The creative aspect of our intelligence is one of those things that could be looked at phenomenologically, but I haven’t read anything about it. It would be an interesting subject. I hope to at least touch upon this issue in upcoming posts. I think it’s very important too.

      • Creativity, as I understand it, changes the cognitive system, so it could also change the way we experience things. As far as I understand Husserl, he tried to describe experience without looking at the experiencing human being (I have read about this as a “proscription against doing anthropology”). So he was against the current of “philosophical anthropology” that existed in the 1920s. In that current of thought, the human being was considered to be “world open” in the sense that cognition, and hence experience, does not have a fixed structure. I have recently written a few thoughts in this direction; see https://creativisticphilosophy.wordpress.com/2015/12/11/a-note-on-analytic-philosophy-and-phenomenology/. If cognition is developing, experience will change. The way a stone age hunter-gatherer experiences something might be totally different from what we would experience. Bring him to a modern city and put him into a street cafe: you will relax there and enjoy your coffee, while that poor guy will suffer a culture shock and almost die. So this refraining from looking at the perceiving human and how “it works” seems problematic to me. If the mind is historic/biographical and develops by integrating new information from experience, then you cannot consistently refrain in this way, and the original Husserlian form of phenomenology might not work in the general case. Classical AI has the same problem (as explained in that article). If you integrate the creative or historic aspect, you can do cognitive psychology and phenomenology, and they would complement each other (as two ways of looking at the same thing, two sides of the same coin), but it cannot be static.

        • “The way a stone age hunter gatherer experiences something might be totally different from what we would experience.”

          I think Husserl would disagree with this statement to some degree. It could not be “totally” different. Husserl discusses some really fundamental elements of experience: how we come to “know” an object as such, how time factors into this knowing, however loose. The stone age experience would have to have these elements. On the other hand, sure, you take this primitive man and put him in a Starbucks, and he surely won’t interpret things the same way we would. He’d have to figure out how cups work, etc. He might freak out if the barista made a cappuccino. But he wouldn’t try to walk through a wall. He might hit the glass, if it were especially clean, however. Geordie did that once, but never again. Hell, my husband did that once. 🙂

          The point is, Husserl does believe in a priori-ness…this is going to need a lot of clarification. What is “a priori” for Husserl is not “a priori” in the classical rationalist sense of the word. Apriority comes out of experience, emerging alongside it, fundamentally. There’s no Platonic world of ideas for Husserl. Hopefully I’ll make this clearer in my next post(s). It’s confusing language, I know.

          • Think of the difference between listening to somebody speaking English and listening to somebody speaking another language, say Chinese. The experience is totally different if you know the language. What we know, the language and the culture, makes a big difference in how the experience turns out.
            “How time factors into this…” There seem to be large possible differences between cultures here, although most cultures today may be similar in this respect. It is interesting to read about the Pirahã language in this connection, which seems to have reverted to an earlier stage of development; see https://denkblasen.wordpress.com/2015/09/12/notes-on-language-and-the-semiotic-revolution/.
            I think one can say there is something like a relative “a priori” that forms the background of interpretation and experience at a given moment, but this background is itself developing, both in the single human being (i.e. it is biographical) and in a culture (i.e. it is historic).

            • It is true that language makes experience different. Even within the same language, the person who can express the nuances of a fine meal is more than likely experiencing that meal differently than someone who has nothing more to say than, “Dude, this is good.” It’s an open question whether the experience precedes the language or the language precedes the experience. I suspect the two play off of each other, just as culture and genetic inclinations play off of each other.

              I’m okay with a “relative” a priori, for the most part. But I do think our shared experience(s) point to an apriority that’s evidenced by our ability to converse with each other at all.

              • To an extent, our concepts and language shape the experience and “generate” it. However, the experienced reality should contain some differences into which these concepts can “dock”. We must be able to distinguish something in order to make a distinction. However, this distinction might come from the context, so that two otherwise identical objects are experienced differently depending on the context.
                I don’t think that the ability to converse is evidence for an a priori. I think that language develops step by step. In this development, it is possible to use signs and “mean” something even if those signs did not previously exist in the language or sign system. Using them in a context introduces them into the language, and this extends the language. There is always the possibility of misunderstandings in this process. In such a way, a simpler language can be extended step by step. Historically, this process could have started from an “empty language”. I have written about this here: https://creativisticphilosophy.wordpress.com/2015/09/09/creativity-and-language/

                • I read your post and agree…in fact, I want to discuss language more, specifically our ability to understand it as opposed to a computer’s “understanding” via formal systems.

                  The “a priori” I’m speaking of here is not the classical one, nor is it involved in thinking of the brain or genes or as a code or formal system. Normally, we’re taught it means “before experience” or “not derived from experience”…if we go by these definitions, then phenomenology would have nothing to do with “a priori” since its nature as a description of experience excludes it. But classically, “experience” is meant in a very narrow sense…think of “sense data” or “sense impressions” which act upon us and which we then interpret (a la Kant). This is too narrow for phenomenology. So “a priori” in the phenomenological sense is different. I’ll get more into this later. I hope I haven’t made things too confusing by introducing that phrase…speaking of the changing nature of language. 🙂

  7. Your article reminds me of a specific example I think of every time somebody assures me that the singularity is just around the corner.

    Get on YouTube and find the most impressive robot you can. Note how it moves. Note how it plays soccer or swims or does whatever. Then compare the efficiency and coordination of its movements with those of a housefly.

    It seems there’s a lot of “obvious stuff” our AI engineers still need to figure out.

    • Totally agree. I’m amazed by what Geordie can conceptualize given the context and my reaction to things. Sometimes when we play rough, we get to the point where we’re growling at each other and I get a little afraid of his growling. He gets really into it. But he does know that we’re playing…it’s amazing. He’s never even accidentally bitten me. This amazes me to no end. One time his teeth just barely made contact with my skin, and he backed off from playing instantly and I couldn’t get him back into it. It was almost like he was upset with himself. I couldn’t do what he does…the way he avoids hurting me so nimbly, almost effortlessly.

      (Funny aside: One time “Daddy” and I were having a play wrestling match that got pretty heated. I shouted out a few times and Geordie started to throw up. He was really upset because he didn’t know we were playing…he’d never seen us do this before. Once we reassured him we were cool, he was back to his old happy self.)

      I don’t think the singularity will come in my lifetime. I’m not even sure it’s possible. But I do think robots made for specific purposes will get better and better. For healthcare purposes, I can see a lot of commercial interest. And given how expensive healthcare is, especially when you consider things like the price of nursing homes/assisted living (and the fact that Medicare won’t cover it, not unless you’ve depleted your own resources), I can really see robots fulfilling a need. If they come about in my lifetime, I imagine these robots will still require supervision, but they could take the burden off of caregivers by allowing them to leave their homes to run errands, etc. In fact, I saw something on PBS last night about a cute little robot that can be used as a monitor for your home. The news didn’t get very detailed in describing the robot, unfortunately. I may have to look that up. It was some conference in Vegas…that’s all I have to go by.

      Anyways, the point I was trying to make earlier was that nursing homes cost a fortune. My uncle stayed in one in KY for a short while, and that was 9K per month! And that was the “cheap” one! My mom’s assisted living was only 3K/month, but the memory care was 5K. At those prices, if a robot could keep someone in their home, even with aid from a caregiver, there’s the possibility that the robots could be commercially feasible even at a high upfront cost, especially considering that people prefer to stay at home. But they (the robots) really do have to be reliable.

      • That’s interesting. It’s worth thinking about, in the American context, just how much technology could be driven by the desire to avoid the medical system. I’d mostly thought about it in political terms (it’s the biggest single reason I live in Korea, for example), but avoiding doctors and nurses could also drive a boom in robotic services.

        And yeah, dogs are on a completely different level of brain power than any computer. Although the automatic romance novel generator has made me think computers really understand us. http://www.plot-generator.org.uk/create.php?type=1

        • So much fun!

          “Gordon Picky is a marshmallow tummied, bunny eyed and virtuous retired from the island of Lesbos. His life is going nowhere until he meets Chelsea Clinton, a pickle skinned, constantly sunburned woman with a passion for bacon.

          Gordon takes an instant disliking to Chelsea and the Heideggarian and sensitive ways she learnt during her years in Ukraine.

          However, when a nark tries to hawk a loogie in the face Gordon, Chelsea springs to the rescue. Gordon begins to notices that Chelsea is actually rather super cool at heart.

          But, the pressures of Chelsea’s job as a nurse practitioner leave her blind to Gordon’s affections and Gordon takes up vinyl records to try an distract herself.

          Finally, when soporific logician, Anastasia Buckley, threatens to come between them, Chelsea has to act fast. But will they ever find the tender love that they deserve?

          Praise for Gordon Picky’s Diary

          “I fell in love with the wicked awesome Chelsea Clinton. Last night I dreamed that she was in my teapot.”
          – The Daily Tale
          “About as enjoyable as being slapped with a dead fish, but Gordon Picky’s Diary does deliver a strong social lesson.”
          – Enid Kibbler
          “I love the bit where a nark tries to hawk a loogie in the face Gordon – nearly fell off my seat.”
          – Hit the Spoof
          “I could do better.”
          – Zob Gloop

  8. Pingback: Intentionality and Meaning | Diotima's Ladder

  9. Pingback: Kurt Godel and the Limits of AI | Ben Garrido's Author Page

  10. Pingback: Eidos and AI: What is a Thingamajig? | Diotima's Ladder

  11. Pingback: The Natural Attitude | Diotima's Ladder

  12. Pingback: Warnings over an imminent AI singularity distract from the real threats. – AI & Society
