Eidos and AI: What is a Thingamajig?

To understand this post, you might have to read part I and part II on phenomenology and artificial intelligence.

The question I’m asking is not: Can computers think? Or: Can AI have consciousness? But: Can meaning “run ahead” for AI the way it does for us? Can we program intentionality, the “about-ness” or “directed-ness” toward things, as well as the horizon that makes things/objects possible? And, most of all, does it matter what processes are involved in arriving at the correct response or behavior?

For the last question, I don’t know. I see efficiency in the way we experience, but a specific kind of efficiency. Our efficiency is not in grasping everything equally and homing in on the correct answer or response. When we make mistakes, it’s often not just computational error. Error sometimes comes from grasping meaning and relevance in context, grasping it in a plausible and maybe reasonable way, but not necessarily in the technically or scientifically correct way. Can a lookup device be designed to act as we do (in a timely manner)?

I’m starting to break free from the well-known philosophers here. If you hope to learn about phenomenology as it appears in the history of thought, in a technically precise way, you might not want to read this. I’d recommend my other post on Husserl as a starting point (which has been checked by the in-house philosopher).

Things might get messy, but hopefully not messy in a pedantic, overly-hyphenated-German-philosophy way.

Also, I’d promised some of you I’d bring up things in this post that I’m not actually going to bring up now. I realized I’d crammed too much into one post and this is not the platform for long discourses. Speaking of brevity…


The Ashtray Example

[Image: a round ashtray with four notches]

What do you see above?

This isn’t a trick question. It’s an ashtray. Or you might say it’s a representation of an ashtray, being an image on a blog post. In any case, let’s pretend it’s a physical ashtray sitting before you, one in which you can put out a cigarette if you so desire.

The act of calling an ashtray an ashtray may not seem particularly amazing, but consider this: the ashtray has an infinite number of perspectives. You are looking at it right now from one perspective, and you will never in fact see with your eyes or feel with your fingers the entire ashtray in all of its possible states. (We won’t talk about the smells or tastes…) You could spin the thing around and around all your life, but hopefully you won’t—in this case one glance gives you all you need: ashtray. More importantly, one unified object.

First of all, let me make clear that we are not talking about a priori ideas in the usual way. There is no ashtray-form sitting in your mind and some ashtray-ish-stuff ‘out there’ pushing the impression buttons of your senses, which then get interpreted by the mind. We’re still doing phenomenology and we’re still confining ourselves to experience as it’s experienced. There’s no mind vs. objective mind-independent stuff in our investigation. There’s only experience.

Plus, we needn’t compare various ashtrays and wonder how it is that from this multitude of ashtrays, each one of which is not exactly identical to any other, we are able to label them all the same: ashtray. We’re not looking at what every single possible ashtray has in common. We’re not talking about ashtray-ness. We’re talking about one particular ashtray. This ashtray. (Okay, strictly speaking, the hypothetical physical one before you.) How is it that this ashtray, despite its infinity of perspectives, is perceived as one unified self-same object?

Take another example, an object you’ve never seen before. Let’s imagine it’s a solid plastic wad. You have no idea what its function is, but you still experience that nameless plastic as a unified object.

You could argue that we err in leaping to this unity, that we shouldn’t say we actually experience a unified object, but instead particular moments of the ashtray. When we see a particular moment of a particular ashtray in a particular way, we theorize about the rest of the ashtray. The unity of the ashtray is nothing but leaping to conclusions, a story we tell ourselves to get by, a quick synthesis, perhaps subconscious. We impose unity. Since the unity itself is never something we actually see (with our eyes), it’s “just” a theory. Like gravity. Like causality. Like necessity. Completely invisible and possibly not really there. We had a sense impression yesterday that the sun rose, and the same impression the day before that, but who’s to say the sun will rise tomorrow? (If you start having apocalyptic nightmares, you can blame David Hume). In other words, there is no visible or perceivable necessary connection between events/impressions. We see event A, then event B. That’s it. Like constellations in the night sky, it’s we who connect the dots and make up stories about them.

Kant comes in here to say something like: “Wait. The sun’s rising is not just a theory! Necessity, causality, synthesis of the manifold of experience, etc. are indeed ‘in our heads,’ but they cannot be taken off like a pair of sunglasses. We couldn’t experience anything at all without these a priori conditions.”

I’d argue that neither has hit upon experience as it’s experienced. Hume errs in supposing that experience is equal to or derived from sense perception. Kant errs in making this same presupposition, but he adds that knowledge is derived from both experience (sense perception) and the a priori conditions which make experience possible. Kant nobly tried to bridge the rationalist-empiricist divide, but maybe a bridge wasn’t needed. Perhaps experience itself needed to be re-examined. It seems we’ve made ‘experience’ too narrow.

Here is where you must decide for yourself by ‘looking at’ your own experience.

The ashtray’s unity comes first in most ordinary experience, and this entails assuming properties about the object that are not strictly visible with the eyes in the moment (I use the word “assume,” but this is not meant to be taken as an active thought process or a matter of logic…it’s grasped immediately, intuited wordlessly.) The object appears to us all at once in its past and possible states, maybe only in a vague way, but it’s all there in that moment. We don’t experience these disjointed perceptions—a certain temperature + a certain color + a certain shape + a certain weight, etc.—and then add on unification, except when we theorize about experience in analysis. But in that case, when we theorize, we experience a theory, not the disjointed perceptions that we suppose we’ve experienced, at least not directly and “in the order in which they were received” (to quote telephone answering services, which may be a faulty analogy, but I couldn’t resist.)

In other words, when we theorize about experience by analyzing it, we change the experience from a naive ordinary one to a conceptual one. I repeat, this sort of theorizing is also within experience as a certain kind of experience, and therefore it’s possible to study phenomenologically too…but that’s a complex matter that I don’t want to get into. That’s advanced phenomenology, and we’re in phenomenology 101. Here we’ll stay with this: the “adding up” of sense data doesn’t quite fit the bill as an explanation of the ordinary, original experience.

Much of what we experience as we experience it isn’t given as sense data. 


Eidos

Husserl uses the term “eidos”—literally “seen,” but here we’ll go with: shape, form—in a way that’s similar to what I’ve called “leaping ahead” in previous posts. His term is way better than mine for technical reasons, but I thought “leaping ahead” might make more sense in earlier contexts, as a means of preparation and to avoid scary words.

So, eidos = form, like Platonic ideas. However, Husserl does not use eidos in a fully Platonic sense; he does not (and cannot) posit a world of forms separate from the world we experience, but rather, eidos is constrained by its particular manifestations. I think of Aristotle here, but I hesitate to make that comparison…so take that with a unified self-same grain of salt.

With eidos Husserl seeks to do a different kind of analysis, one which he thought would uncover the basic elements of phenomena.

The Eidetic Reduction is described in the IEP, which I’ll quote here:

The eidetic reduction involves not just describing the idiosyncratic features of how things appear to one, as might occur in introspective psychology, but focusing on the essential characteristics of the appearances and their structural relationships and correlations with one another. Husserl calls insights into essential features of kinds of things “eidetic intuitions”. Such eidetic intuitions, or intuitions into essence, are the result of a process Husserl calls ‘eidetic’ or ‘free’ variation in imagination. It involves focusing on a kind of object, such as a triangle, and systematically varying features of that object, reflecting at each step on whether the object being reflected upon remains, in spite of its altered feature(s), an instance of the kind under consideration. Each time the object does survive imaginative feature alteration that feature is revealed as inessential, while each feature the removal of which results in the object intuitively ceasing to instantiate the kind (such as addition of a fourth side to a triangle) is revealed as a necessary feature of that kind. Husserl maintained that this procedure can incrementally reveal elements of the essence of a kind of thing, the ideal case being one in which intuition of the full essence of a kind occurs. The eidetic reduction complements the phenomenological reduction insofar as it is directed specifically at the task of analyzing essential features of conscious experience and intentionality.

In other words, in the eidetic reduction, we seek to determine whether the “actual thing” (not thing in itself, remember) qualifies as an instance of the eidos we assign it. What we seek is whether or not the particular instance meets the essential qualifications of, say, a triangle, or a building. The eidetic reduction is a process in phenomenology which is indeed descriptive, but on the more theoretical side, being analysis. So what, then, makes this sort of analysis truer to experience as it’s experienced? I don’t have the answer. The use of the term “eidos” seems fine, but then to go on and try to create a science out of it seems to be a stretch. All I can say is my inner Plato lover is completely biased in favor of such an exploration, but I’ll admit that few have taken this “science of essences” stuff seriously. Perhaps this is the particular juncture at which people turn away from Husserl. It’s not quite Plato reincarnate, but it’s close enough.
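
Since the quoted procedure is essentially an algorithm, here’s a minimal sketch of free variation in Python. Everything in it is my own invention for illustration: the membership test, the feature set, and the variations are stand-ins, not anything Husserl specified.

```python
# Toy sketch of eidetic ("free") variation: vary each feature of a
# candidate object and ask whether it still counts as an instance of
# the kind. Features that don't survive variation are "essential."

def is_triangle(obj):
    """Membership test for the kind 'triangle' (a stand-in intuition)."""
    return obj.get("sides") == 3 and obj.get("closed", False)

def essential_features(obj, is_kind, variations):
    """Return the features whose alteration destroys kind-membership."""
    essential = set()
    for feature, alternatives in variations.items():
        for alt in alternatives:
            varied = dict(obj, **{feature: alt})
            if not is_kind(varied):  # the kind didn't survive the variation
                essential.add(feature)
                break
    return essential

triangle = {"sides": 3, "closed": True, "color": "red", "size": 5}
variations = {
    "sides": [4, 5],        # e.g., adding a fourth side
    "closed": [False],
    "color": ["blue", "green"],
    "size": [1, 100],
}
print(essential_features(triangle, is_triangle, variations))
# -> {'sides', 'closed'}; color and size are revealed as inessential
```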


 

Science of Essences: why we should resurrect Husserl

It seems to me that eidetic intuition applies everywhere in ordinary experiences, including those cases in which we experience something novel. Taking this as given, we might then use analysis to find out more about essences, a science of essence for a specific purpose. We might find out general things about essences; for instance, there’s an infinite number of them, given that each particular is unified in eidetic intuition. The plastic wad is a unity by virtue of being one thingamajig, and there can be an infinite number of such thingamajigs (that’s my technical term). Then there are named unified objects that we classify either according to likeness or some other classification system. Trees, bushes, flowers, vegetables, etc. might have a different classification system than plate, chair, ashtray or 3.14, -5, 1/2 or justice, truth, God. Plus, objects that were designed for one purpose can be used for other purposes, and often are (those of you who’ve taken a sip from a beer bottle-turned-impromptu ashtray know this all too well.) The difference may not be so much in the material, but in the function. Function is an important part of the way we classify things. Other times the classification will depend on the material. There are so many ways of adjusting our lenses here to suit our purposes.

For soft AI, perhaps a “science of essences” could be applied in a particular environment in which we can predict and control the objects within that environment according to essence classification alongside image identification (which already exists to some degree.*) The assumption of eidetic intuition is not to be taken lightly in philosophy, but in AI, it seems to make sense of the problems AI research has faced by explaining that there’s this bizarre unity of the manifold in our experience. It’s a tangible problem, regardless of how it arises in the human brain or whether it arises there or whether it has something to do with self-awareness or consciousness. The mere fact of this “transcendence within immanence” might be enough to outline a strategy to be taken in replication.
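
To make that less abstract, here’s a rough sketch of what “essence classification alongside image identification” might look like in code. The labels, the features, and the whole idea of an essence table are hypothetical, just to show the division of labor: the image recognizer proposes a label, the essence check disposes.

```python
# Hypothetical sketch: an image identifier proposes a label, and a small
# "essence" table for a controlled environment checks whether the object
# actually qualifies as an instance of that kind.

ESSENCES = {
    # label: features an instance must have to count as that kind
    "ashtray": {"concave", "heat_resistant"},
    "plate":   {"flat", "rigid"},
}

def identify(observed_features, proposed_label):
    """Accept the recognizer's label only if the essential features hold;
    otherwise fall back to the catch-all kind: thingamajig."""
    required = ESSENCES.get(proposed_label, set())
    if required and required <= observed_features:
        return proposed_label
    return "thingamajig"  # still a unified object, function unknown

print(identify({"concave", "heat_resistant", "glass"}, "ashtray"))  # ashtray
print(identify({"flat", "soft"}, "plate"))                          # thingamajig
```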

A quick Google search showed me that algorithms for object identification have come a long way. The difficulty lies in speed of object recognition. I’d guessed that there must be some sort of way to eliminate unlikely possibilities, to cut corners, but apparently that process is not as good as random sampling. Weird.
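
By random sampling I mean something like the sketch below: draw candidate windows at random and keep the best-scoring one, rather than pruning the search space cleverly. The scoring function here is a placeholder standing in for a trained classifier, not a real detector.

```python
# Sketch of random sampling for object proposals. The score() function
# is a stub standing in for a real classifier's confidence.
import random

def score(window):
    """Placeholder: a trained model would return a confidence in [0, 1]."""
    return random.random()

def best_window(image_w, image_h, n_samples=500):
    """Sample n_samples random windows and return the best-scoring one."""
    best, best_score = None, -1.0
    for _ in range(n_samples):
        w = random.randint(10, image_w)
        h = random.randint(10, image_h)
        x = random.randint(0, image_w - w)
        y = random.randint(0, image_h - h)
        s = score((x, y, w, h))
        if s > best_score:
            best, best_score = (x, y, w, h), s
    return best, best_score

print(best_window(640, 480))
```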


Claire, the Robotic Maid

Let’s get concrete. Let’s create a maid robot and name her Claire. This way we start small: the confines of a house. We don’t have an infinite horizon—otherwise known as the entire universe—on top of an infinite number of perspectives of each individual object. That’s just too hard.

Also let’s assume either: a) we don’t need an infinite number of perspectives to have Claire identify a self-same object* or b) we can figure out how to replicate an infinite number of perspectives in a unified way, which sounds impossible, but maybe it isn’t.

And let’s assume the robot mechanisms work fine. Maybe she’ll be better than human in terms of mechanics. Now it’s a matter of getting her to see objects as we see them—to know when that plate is not being used as a plate, but as a saucer; to know that a photograph of a human is not a real human; to know that she doesn’t need to water the plastic fern; to know to stay away from the rare book collection and not smoke your stash or rat you out, etc.

If an object appears to us with all possible variances of it alongside the self-same-ness of it, we should want that for Claire…to some degree. After all, she must know that the refrigerator is dirty, not that it’s a new thingamajig that doesn’t need her attention. And she shouldn’t need to know what a refrigerator looks like after it’s been smashed to smithereens at a monster truck rally either. There must be some threshold of experience that mimics our awareness of differences in objects. Claire might have the capacity right now to know what a refrigerator is—the mere name—but she also needs to know what various components do or at least how to deal with them, what to use to clean them, that she doesn’t need to clean the Coke bottle in there, etc. Perhaps for moveable objects she needs to know their function as dictated by the environment, but for other things she doesn’t need to know much. She doesn’t need to know what an escutcheon is in order to clean it (hence our need for a class of objects called thingamajigs). If she wasted her time finding out what an escutcheon is, that would be inefficient.
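
Here’s a toy sketch of the sort of policy I have in mind for Claire, with the thingamajig class as the catch-all. Every entry in it (the objects, the actions, the whole idea of a house knowledge base) is invented for illustration.

```python
# Toy sketch of Claire's cleaning policy: known kinds get handled by
# their function; everything else is a thingamajig and gets a generic
# treatment. All entries are invented for illustration.

HOUSE_KB = {
    "refrigerator": "check interior for dirt; leave the Coke bottle alone",
    "plastic_fern": "dust it; never water it",
    "rare_book":    "do not touch",
}

def handle(obj_label):
    action = HOUSE_KB.get(obj_label)
    if action is None:
        # An escutcheon lands here: Claire needn't learn its name or
        # purpose, only that it's a fixed surface to be wiped.
        return "thingamajig: wipe gently and move on"
    return action

print(handle("plastic_fern"))  # dust it; never water it
print(handle("escutcheon"))    # thingamajig: wipe gently and move on
```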

*I’ve taken Husserl’s object identification—an infinite number of perspectives somehow alongside a unity of this infinity—throughout this post as true. In consulting with my own experience, I wonder if the unity we perceive, while still being a priori in a phenomenological sense, is not quite infinite, but a shadow of the infinite. In other words, perhaps this “infinity” he speaks of is theoretical and not directly experienced, and what we actually experience is openness or possibility, but not quite infinity. Maybe we experience a very large number of possible perspectives, but at some level there’s a vanishing point. Maybe infinity as we actually experience it in our usual naive way is nothing more than: “A lot more than I wanna count.” Not infinity infinity. (And certainly not infinity times infinity.)

How would you create Claire? What stumbling blocks do you foresee? What is an escutcheon? 

 

73 thoughts on “Eidos and AI: What is a Thingamajig?”

  1. Excellent post Tina! Object recognition is definitely a difficult aspect of AI processing, and I can see now what you meant about this stuff being useful.

    Interestingly, I didn’t recognize the ashtray in the picture as an ashtray before you named it. I grew up around ashtrays (both of my parents smoked), but ashtrays haven’t been a part of my daily life for a long time, so when I saw the image, it just looked like a circle thingee, but it clicked immediately after you identified it.

    I’m reminded about all the times I’ve failed to identify objects or people in radically different circumstances, lighting, etc. Therein, I think, lies our primary clue. The human ability to identify things is far from infallible. In other words, this is probabilistic reasoning, that is, quick and dirty reasoning that makes a quick guess using lots of hacks and shortcuts. Our success rate probably depends on how much experience we’ve had with the object or similar objects.

    When we recognize an object, we immediately and intuitively map it to a category, a pattern we already know. Indeed, identifying a single object is itself a type of categorization, because it maps disparate sensory data to the category of that one thing, which then might get mapped to the broader category of similar things. We are pattern matching engines, and we map new objects into existing patterns as much as possible.

    When we encounter a new type of object (or experience), it can often confuse us. Our first attempt is often to map it to the closest known category of objects. It’s why two people with different life experiences can come away from an identical experience with radically different perspectives. All observation is tangled up with the patterns we already have stored. In other words, all observation is theory laden, and as you note, we only have theories about objective reality.

    So, constructing Claire, it seems to me, involves finding the right probabilistic algorithms, which we’re still learning about. I think Claire will be a lot easier to construct if we don’t require her to be infallible, if we allow that sometimes she is going to make mistakes. Indeed, without that allowance, too much processing power might be required. (I don’t know if I would say infinite, but an astoundingly huge amount.)
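
    (A toy sketch of what I mean, with made-up prototypes and a crude similarity measure: the matcher returns its best guess, flags it when confidence is low, and accepts that it will sometimes be wrong.)

    ```python
    # Toy sketch: probabilistic recognition that accepts fallibility.
    # The prototypes and similarity measure are invented for illustration.

    PROTOTYPES = {
        "ashtray": [0.9, 0.1, 0.7],  # made-up feature vectors
        "plate":   [0.8, 0.0, 0.1],
        "bowl":    [0.7, 0.2, 0.9],
    }

    def similarity(a, b):
        """Crude similarity: 1 minus the mean absolute difference."""
        return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

    def recognize(features, threshold=0.8):
        label, conf = max(
            ((k, similarity(features, v)) for k, v in PROTOTYPES.items()),
            key=lambda kv: kv[1],
        )
        # Below the threshold we guess anyway and flag the uncertainty,
        # rather than demanding infallibility.
        return label if conf >= threshold else label + "?"

    print(recognize([0.85, 0.1, 0.6]))  # ashtray
    print(recognize([0.3, 0.5, 0.5]))   # bowl? (best guess, flagged)
    ```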

    The question is to what extent we will have to train her. (Although once the first instance is trained, we should be able to duplicate her knowledge base to other units.) Hopefully she wouldn’t need the decade or two a human needs before they can do the job.

    I have no idea what an escutcheon is. Just cheated and googled it, but none of the results (coat of arms, decoration around a keyhole, back of a ship with the name on it, etc) made sense for something in a fridge. Although maybe I have a pattern of it in my brain without that label and would recognize it if I saw it 🙂


    • “Interestingly, I didn’t recognize the ashtray in the picture as an ashtray before you named it.”

      Now that’s hilarious. Here I thought I had the thing in the bag. 🙂 Well, if it makes you feel better, my husband has a hard time with facial recognition. When we watch movies, he gets utterly confused because “everyone looks alike” and “why do these guys all have the same amount of facial hair, as if they never shave but never grow a beard?” Combine that with my excellent facial recognition, but poor memory of names, and you have a lot of talking over the movie. “He’s not the same guy you’re thinking of…you know, the other one…the one in the shop with the girl.”

      “In other words, this is probabilistic reasoning, that is, quick and dirty reasoning that makes a quick guess using lots of hacks and shortcuts. Our success rate probably depends on how much experience we’ve had with the object or similar objects.”

      The probabilistic reasoning is indeed quick and dirty, but the astounding thing is we’re more often right than wrong (otherwise we’d have a pretty different sort of world, quite unimaginable). Our success rate might depend on how much experience we’ve had, a slow process of being babies and figuring out how our fingers work and so on, but I wonder if Claire could have the advantage of pre-sorted a priori categories? For instance, the idea of something that isn’t symmetrical in a 360 view, then a subcategory of things like chairs, people, couches, etc. And then the opposite…etc. It would be a massive chart, really massive, but it could be faster maybe?
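
      Something like this nested chart is what I’m picturing, sketched in code (the split on symmetry and all the members are arbitrary illustrations):

      ```python
      # Toy sketch of pre-sorted a priori categories: a chart Claire could
      # walk to narrow the candidates before fine-grained recognition.

      CATEGORY_CHART = {
          "symmetric_360": {
              "holds_things": ["plate", "bowl", "ashtray"],
              "decorative":   ["vase", "globe"],
          },
          "asymmetric": {
              "sat_upon": ["chair", "couch"],
              "animate":  ["person", "cat"],
          },
      }

      def candidates(symmetric, subkind):
          """Walk the chart to a shortlist instead of scanning everything."""
          branch = CATEGORY_CHART["symmetric_360" if symmetric else "asymmetric"]
          return branch.get(subkind, [])

      print(candidates(True, "holds_things"))  # ['plate', 'bowl', 'ashtray']
      ```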

      I confess, I had a lot more on Claire, but then it all got deleted somehow. I got discouraged and just hit “publish” to get the damned draft out of my face. I meant to give some examples of eidetic reductions to give her as rules, but instead I just let it go.

      “I think Claire will be a lot easier to construct if we don’t require her to be infallible, if we allow that sometimes she is going to make mistakes. Indeed, without that allowance, too much processing power might be required. (I don’t know if I would say infinite, but an astoundingly huge amount.)”

      I’d agree with that. I think another shortcut could be a situation-location specific teaching process, sort of like the way Roomba supposedly “learns” the room. (It doesn’t really…I can’t say it does in any way that works.) Our Claire could be supplied with quite a lot of information that’s specific to the user, and maybe that would help. I wouldn’t expect most people to be very clear in their instructions, so that would have to be taken into account. But maybe simple things, like, “This is a refrigerator. This part dispenses water. This dispenses ice.” Or, “Don’t clean this area.”

      “I have no idea what an escutcheon is.”

      A great example of mapping to existing categories. In my forever-deleted post, the escutcheon example wasn’t located in the paragraph about refrigerators. Your assumption made sense, but escutcheons are little plates that go over three-hole sinks:

      http://www.amazon.com/PEP1-2-10-Inch-Kitchen-Escutcheon-Brushed/dp/B00HWL5MHG/ref=sr_1_2?s=kitchen-bath&ie=UTF8&qid=1459734837&sr=1-2

      I learned this word recently because we had our bathroom remodeled. Who knew that little thingie wasn’t stuck on the faucet? I’d always assumed they were all one piece.


      • I’m not always great with names myself, although I can usually recognize faces. The exception is when someone changes their hair, grows a beard, etc, or is in just a radically different context than I’m used to seeing them. I remember one time seeing one of my teachers at the beach and not recognizing her because I had never seen her anywhere but dressed as a teacher in a classroom.

        ” but I wonder if Claire could have the advantage of pre-sorted a priori categories?”

        I think she would. But I also think humans have a lot of them ourselves, although it’s really more like predispositions to learn certain things rather than the sharp definitions Claire might start with. For instance, babies look at faces more than anything else (unless they’re autistic) and seem to have an easier time learning them than many other things. Psychological testing has shown that there are certain types of things humans learn easier and other things that take a lot more work, all likely due to pre-wiring we come out of the womb with. And just about all primates seem to have an inborn aversion to slithering things, even if they’ve never seen a snake before.

        Too bad on the lost Claire examples. I would have liked to have read them, but I totally know where you’re coming from. Back when my shoulder was hurting, I lost a long post after days of short painful writing sprints, and was too discouraged to rewrite it. (Although I did eventually use some of the ideas in a later post.)

        Interesting on the escutcheon. I sometimes wonder what the names of certain things are, but finding it is difficult when all you can search on is “that thing that sits in that spot…”. I’ve taken pictures before and done a Google graphic search, but it doesn’t always work.


        • On a priori:

          I think we definitely have predispositions, some relatively universal—hehe—and some specific to the person, but also there’s the other sort of a priori…like Kantian categories. Causality, synthesis, etc. I think we have those too, but that’s a long discussion. Whether or not we have them tends to bring babies into the discussion, and I don’t like babies. They don’t count. They’re like slithery things that shouldn’t exist. 🙂

          “And just about all primates seem to have an inborn aversion to slithering things, even if they’ve never seen a snake before.”

          Oddly, snakes don’t terrify me in a visceral way, but spiders do. Probably close enough, since both can be dangerous. The weird thing is that tarantulas don’t scare me so much. Even though they’re bigger…yeah, weird.


          • I took your mention of a priori to mean innate knowledge. I think we do have innate knowledge of causality. (Strikingly, despite their intelligence, crows do not.) I’d have to bone up on my Kantian for the categories or synthesis.

            Yeah, I like cute babies, but it’s probably just as well I’ve never had to care for one. They slobber, poop, throw up, and generally are money pits. But they are interesting to study for innate human nature.

            One of the things about human adults is that, even if we originally had an innate aversion to something, it doesn’t mean that we didn’t unlearn it at some point. And, of course, not everyone is the same. We all have innate differences. It’s why conscientious people can honestly disagree on moral conundrums. I personally dislike snakes, but I can’t say whether it’s innate or learned from growing up with everyone constantly telling me to beware of snakes whenever I played in tall grass or around dead trees.


            • “I took your mention of a priori to mean innate knowledge. I think we do have innate knowledge of causality. (Strikingly, despite their intelligence, crows do not.) I’d have to bone up on my Kantian for the categories or synthesis.”

              For me, yeah…a priori would be innate knowledge. That conjures up a sense of “born with” however. In phenomenology we might not say that…that might be going too far outside the realm we’ve set out to investigate. Not to say there’s nothing like a priori knowledge (what else could eidos be, then?) but the whole meaning of the term changes or shifts to emphasize “not sensory” rather than “born with.” (That’s my non-expert opinion anyways.)

              Babies are interesting to study in that way, to get at what is innate human nature in a scientific manner.

              I imagine people have to unlearn a lot of innate aversions, especially when you consider some of the amazingly terrifying things people do in their jobs. (Those people who wash windows on skyscrapers…I hope they’re making more money than I would expect.) I think a lot of our typical fears make sense: fear of heights, fear of spiders, snakes, enclosed spaces, etc. They all pose risks that are real in some circumstance. And yet we have people who don’t seem to be afraid of things they should be afraid of. Rattlesnake bites around here usually involve a few intoxicated guys and a bet.


              • On eidos, as another non-expert, I concur 🙂 When reading your post, I took eidos as something that could be innate or arrived at from experience, either directly or developed through reasoning. I know the Platonic sense is that forms exist timelessly and that we are born with knowledge of them. I’m skeptical of much of that notion, but I agree that it’s not necessary to address it for this discussion.

                The fact that we override so many of our innate instincts is what finally pushed me into concluding that an objective morality doesn’t exist. People too often go against their own instincts in the name of doing something moral. Sometimes in a good way, such as overriding our natural inclination toward xenophobia, but sometimes in horrible ways, such as a parent killing their child to protect her honor.

                What’s interesting about this is that we override an instinct, such as fear of heights, in service of other instincts, such as making a living or providing for our family. Even in the case of someone pestering a snake on a dare, they’re overriding their fear (which admittedly may be dulled by alcohol) in service of their instinctual need for social status.

                Our actions arise from a collection of instincts, many of which contradict others. It’s always a battle for which one dominates at any one time. It’s the result of evolution’s haphazard manner of introducing and culling attributes.


                • What sort of morality do you believe in? Just curious. I don’t know if I’d say an objective morality doesn’t exist, except that for the most part it doesn’t seem to. I think there might be a core of morality, but much of it is relative to various circumstances.


                  • Descriptively, I think morality arises from a combination of instincts and social conditioning. (I like Jonathan Haidt’s work in this area.) Our instincts do put limits on its relativism, but historically those limits have been shown to be broader than just about anyone is comfortable with. For example, in a hunter-gatherer culture, it can be considered moral to strangle your parent when they can no longer keep up with the tribe, an act that would be considered heinous in most sedentary cultures.

                    Normatively, I don’t see any of the major philosophical frameworks as authoritative. Most of the time, it seems to me, they are used to rationalize results we already emotionally prefer. Maybe the question is, when should we override our emotional preferences? My own personal answer often comes down to minimizing suffering and maximizing the potential for happiness (which I suppose is consequentialism), but I can’t claim to follow that consistently.

                    All of which is to say that I’m not rigorously logical in my morality, and I’m skeptical of anyone who claims to be.


                    • “For example, in a hunter-gatherer culture, it can be considered moral to strangle your parent when they can no longer keep up with the tribe, an act that would be considered heinous in most sedentary cultures.”

                      Hm. Perhaps that’s euthanasia in combination with utilitarianism? Maybe a kinder death than being left behind. If you think about the ways we keep people alive, I wonder how heinous our sedentary culture would appear to them.

                      “Our instincts do put limits on its relativism, but historically those limits have been shown to be broader than just about anyone is comfortable with.”

                      I know what you mean. Religious human sacrifice seems less rational than the strangling of lagging parents, at least to me. 🙂


                    • Religious human sacrifice presumably seemed perfectly necessary to those who carried it out. Aztec creation myths told how the world was created by the sacrifice of the gods, and continued human sacrifice was necessary to sustain it. The druids sacrificed those convicted of stealing and other crimes to please the gods, thinking that the gods would then spare them from diseases, death in battle, and other dangers.


                    • Oh, I think the hunter-gatherer strangling parent thing is definitely euthanasia, which in our society is a controversial thing (although just for humans, not for animals). I doubt most people who did the strangling were happy about it, but they overrode their emotions as a necessary evil.

                      The human sacrifice thing is perhaps the most damaging data point when you’re looking for a universal morality, because once upon a time it was widespread in human cultures. It tended to disappear within a few centuries of a society getting writing, maybe because written records allowed those societies to figure out that the results of harvests and wars have no correlation with sacrifices. But regardless, given human psychology, it seems doubtful most people liked it. Again, it was likely seen as a necessary evil, and so people overrode their emotions.

                      It’s this overriding of deeply felt emotions that made me realize that an objective morality was not to be found, at least not from human instinct alone. Studying human nature can give us a range of what humans might be happy or unhappy about, but the range of what humans might force themselves to do is terrifyingly much larger.


                    • Good point. And the fact remains that animal sacrifices weren’t much impeded for a long time. Still, human sacrifice did tend to disappear as societies became literate, so not sure what the connection, or common causal factors, might have been.


                    • The Romans stopped the Celts from sacrificing humans because they found it disgusting. And it took a lot to disgust a Roman 🙂
                      In a way, we still carry out human sacrifices in some parts of the world – Texas for example. Again, only criminals are killed. Here the intention is different, but the common thread is the idea that the authorities have the right to take life.


                    • If I recall correctly, the Celts burned their sacrifices alive in wicker statues. Although, as you note, the Romans weren’t exactly squeamish, crucifying criminals and rebels, and sometimes setting them on fire on the cross. But, at least in the empire phase, the Romans did it as a penalty (at least ostensibly) rather than an offering.

                      On Texas, I don’t agree with the death penalty (though I’m in the minority in my country), but I don’t think I’d characterize it as human sacrifice. The motivations for the act are too different.


                    • Hm…wicker statues? That’s pretty wild. I imagine very painful and terrifying too.

                      No, the death penalty isn’t human sacrifice…totally different motivation, as you say.

                      I once got to listen to two of my friends debate the death penalty, and the one from Texas kept repeating himself, “They deserve it!” It was the one argument that made sense to me, and he stuck by his guns, so to speak. The other arguments he could’ve made, “It gets violent criminals out of society…” etc., wouldn’t have been as powerful.

                      It’s one of those things I waffle about. In principle, I think there are some cases in which people deserve it, but those are really rare. And then there’s the possibility that the judicial system let someone slip through the cracks, and that’s intolerable. I’d rather not have the death penalty all in all.


                    • I think Gandalf’s rebuke to Frodo in LOTR (book version) summarizes my issue with the death penalty: “Many that live deserve death. And some that die deserve life. Can you give it to them? Then do not be too eager to deal out death in judgement.”


                    • “It’s this overriding of deeply felt emotions that made me realize that an objective morality was not to be found, at least not from human instinct alone.”

                      Maybe not from human instinct alone, but I tend to think there must be some sort of…um…not willy nilly morality…for lack of a better term…that might be derived from instinct, but not entirely dependent on it in an immediate sense. (The instinct to not let our parents die a long and painful death overrides the instinct to not strangle our parents in the moment.) Maybe. 🙂

                      Perhaps this not willy nilly morality can’t be found in a widespread real world example, but maybe there’s a moral core that’s quite real, yet we must take into account the situation?

                      I just think it would be hard to take morality seriously in a world where right and wrong are mere culturally-induced biases and nothing more.


                    • I totally understand the sentiment. I do take morality seriously, but can’t see any way out of relativism. But just because I’m a descriptive relativist doesn’t mean I’m a normative one. I think we should be respectful toward other cultures, but within limits. I just can’t see any way to justify those limits except perhaps by international consensus (or raw power, might makes right, etc).

                      I’d be very interested in any convincing demonstration of objective morality.


                    • I don’t know that there is a very convincing demonstration of objective morality, at least not in any specific way in which we really lay down the law. I think the most we can do is say that “might makes right” is an intolerable view. And as you say, there’s a big difference between being a descriptive relativist and a normative one, but those don’t like to be in the same room. 🙂


                    • For better or worse, might makes right is the most common way these things have been solved historically. Even a consensus view worked out through debate and discussion essentially becomes the might of the majority imposing its values on the minority. Even if someone deeply and honestly feels that people should have the right to, say, engage in necro-cannibalism, the rest of society is going to impose different values on them.


                    • Necro-cannibalism! By the dog, Thrasymachus, you’ve gone too far!

                      Sorry…had to go there. Your other comment on Lord of the Rings plus the “might makes right” argument fit so nicely with the Republic.

                      Well, for what it’s worth, I’m not sure I’d want a consensus view of morality. Which also ties in nicely with the Republic. 🙂


                    • Ah, the Republic, a reminder that people have been trying to figure this stuff out for a long time. It’s been decades since I read Plato (as, I’m sorry to say, a mostly indifferent undergraduate). I seem to recall Socrates tearing apart everyone else’s definition of justice but can’t recall if he himself ever settled on one.

                      I agree on the consensus thing. For instance, there’s a broad international consensus that blasphemy laws are valid. (Obviously there’s no consensus on exactly what blasphemy is.)

                      Here’s the rub. The west can ignore that consensus because it currently has the might to do so.


                      “I seem to recall Socrates tearing apart everyone else’s definition of justice but can’t recall if he himself ever settled on one.”

                      It’s an interesting question because the whole of the Republic has to be taken into account, and that’s a crazy big task. Socrates is not necessarily the one to give the answer, being a character. So that must be taken into account too.

                      There’s also the whole analogy of individual soul to entire society to consider, and whether that analogy flies.

                      Supposing it does, virtue would be the harmony of the soul and justice would be the harmony of society. Harmony is not a simple peace (the city of pigs which Socrates prefers makes that clear…he may want the simple life, but Plato recognizes that we don’t—it’s not in our nature—we want luxury. If we want luxury, we have to make war. Now we have this expansion into something problematic, but more realistic). Harmony is various factions (the greatest possible unity of the greatest possible diversity, perhaps) moderated by reason. And the people who get to be in charge don’t want the job, but they are IN FACT the ones who are most capable of bringing about justice. The Philosopher king idea is presented as an ideal, not what Plato would actually want in the real world. (That’s my interpretation, but I could back it up.)

                      So in short, justice is not presented in definition form (ironic, since Socrates is always forcing others to define things). Socrates ends up making emotional appeals and starts telling stories (muthos, myths)…the placement of these at the end is important for the whole work. The sense you get from that aspect of the Republic is that there is no definition, or if there is, Plato’s not just gonna hand it to you on a silver platter—hence all the stuff about education and paradox, which leads you to take the entire Republic as a sort of puzzle.

                      You also get the idea that everything in life is a cycle of degeneration, so any justice that comes about would only be an approximation, and would only last a short while before the forces of nature tear it down.

                      What Socrates does say about justice: It’s sought for its own sake; those who do good are happy. Not a definition, but the most we get from Socrates. The ring of Gyges (the ring of invisibility) is a challenge brought up by Plato’s brother, Glaucon, who poses it to Socrates as a sort of thought experiment and makes it clear he’s not on Thrasymachus’ side, but wants to play devil’s advocate. He says: what if we play this thing out to the extreme and let the unjust man have ultimate power (ring of Gyges-like power)? In other words, the unjust man can do whatever he wants without getting caught, without suffering any external consequences like going to jail or being banished or executed, etc. Basically, he always gets away with it. He becomes a tyrant and his appetites continue to grow, but they won’t ever be checked. He always gets what he wants.

                      When Glaucon brings up this thought experiment, the Republic as most people remember it is born. Then you have this whole complicated system with “waves of paradox,” women given equal education—big ancient Greek gasp here—and communism, children not ever knowing their parents (it takes a village to raise a child, after all) and “noble lies” about people being “sprung from the earth” as gold, silver, bronze (basically, some are smart, some are good fighters, we’re not all really equal in everything so that’s why we need equal education and upbringing, to determine who’s born with it.) Not to mention censorship, which college students love to rag on. After all this, the simple city of pigs gets forgotten, because it takes up only a few pages and it’s boring. The Republic, on the other hand, is rich with detail and reflects a lot more than politics and justice as we think of it. Justice is tied to individual virtue, which is tied to epistemology and ontology, not just ethics and law. It’s a glorious reflection on everything.


                  • Tina, this seems relevant to your discussion. Facebook has a new feature designed to describe a photo to a blind person. I would think it would have to deal with many of the issues we’ve been discussing.
                    https://www.washingtonpost.com/national/health-science/facebook-programs-computers-to-describe-photos-for-the-blind/2016/04/05/07a490b6-fae4-11e5-813a-90ab563f0dde_story.html
                    Note the discussion about embarrassing mistakes.


                    • Oh no! Gorillas!

                      Thanks for the article. I wonder how they determine whether a photo is a “selfie”? That seems like it’d be awfully hard. Of course, there are obvious selfies in which part of an arm can be seen at a particular angle, but now they have selfie sticks for people who just can’t stop. And then there’s the pucker-y lip selfie face that certain women of a certain age do…I wonder if algorithms will catch that? 😉

                      The fact that they’re excluding certain details in the description is intriguing. How do they determine which details are important and which are not? For instance, in the pizza example, perhaps the ingredients are arranged a certain way and that’s the whole point? Or what if someone found Jesus in a tortilla? How would that idea come across if the algorithm excludes such details and merely says: “This is a photo of a tortilla”? The point would be entirely lost, unless the photo were described in words somehow…but usually the photos are meant to speak for themselves, especially on FB.

                      On the other hand, it’s a start and I’m sure blind folks will appreciate it.


                    • Good questions. Unfortunately, FB isn’t likely to tell us how they’re doing it, since I’m sure they want to keep any strategic advantage they can over the competition. I suspect it’s far from perfect, but as you noted, for the blind, it’s far better than depending on people to remember to put a description there for them. (Lamentably, while some do, most don’t.)


  2. LOL about the escutcheon. Not the definition I thought of! To me it has heraldic associations. But I didn’t recognize the ashtray either, until I looked at it closely and asked myself “What is this?” I did not have an intuitive, instant knowledge of it.
    What I thought of when reading the post was, what about experience versus the memory of the experience? I wonder if they are in fact quite different. Especially as I get older, I realize that the memories I lay down are selective and full of holes. I recently wrote down a conversation so that I could refer to it as a memory aid, but then later an associative cue made me remember even more of it. How can I examine an experience if I can’t fully remember it?


    • That’s an excellent question! I’ve been wondering the same thing myself. I think with memory, we have to be very wary. On the other hand, there are certain elements of experience that seem pervasive, regardless of whether you can recall what you ate for supper. I think it’s a bit of a sticky matter though, because in most cases in philosophical thinking you have people consulting their intuition on such matters, and mistakes have certainly been made. So it’s a bit of a problem for sure.

      So weird about the ashtray thing. I guess I picked a terrible photo! 🙂


  3. Well, absolutely fascinating reading (as well as entertaining.) I’m not sure I have much to add, except that facial recognition is quite a challenge for me at times. I have a particular problem to throw into the mix, which is meeting people you know out of context. For instance, I once met someone I know from yoga class at a physics lecture. I knew that I knew her, but could not place her, until she told me who she was. Now, every time I see her at yoga, I remember that we met at a physics lecture, so this has become one of my handy look-up facts about that person. So location and context is one of those properties that we associate with an object.

    We basically build mental models of everything that we encounter, so our experience is never naive. I think we are always trying to match new objects to an object we already have a model for. A book I read by Steven Pinker described how we do the same thing with language, which is how infants learn to speak so quickly, and why they make mistakes with irregular verbs, etc.

    Finding an ashtray (or even an escutcheon) in your fridge might be a similar problem to meeting someone in an unfamiliar setting. What to do with it? Eat it? Drink it? Clean it? Without context, we are probably just going to leave it there and hope it doesn’t get past its use-by date.


    • “So location and context is one of those properties that we associate with an object.”

      Very true. The relational nature of our experience is complex and pervasive. It makes a big mess of things when you consider AI as a lookup table. How do you describe all of those relationships, including the errors that make sense and are later rectified through the very mechanism that allowed them to surface?

      The escutcheon thing is turning out to be a good example. If you look up the word in the dictionary, you might find no definition that matches what I’m thinking of. My dictionary says it’s a shield, or a decorative metal plate covering a door handle or light switch. No mention of the sink. I thought light switch covers were called “light switch covers”…and it never occurred to me to wonder about the door handle covers. Like the sink hole cover, I thought it was attached to the door handle, all one piece.

      Language is a nice reflection of the way our minds work. It’s absolutely fascinating to me to think about what happens when you learn a foreign language, how it takes so much effort to go back to that state of acceptance and holism in learning (which we presumably have as children.) If you’ve never learned a second language before, you think you can just go through all these grammatical rules and phonetic rules (English is a disaster here, BTW, but French is surprisingly not so bad), word-correlations, etc., only to find that you’re gonna have to unlearn a lot of those things. The times they are a changing, however, with language teaching. There’s a lot more emphasis on speaking and using complete sentences rather than this sort of atomistic learning that used to take place. Learn a song by heart, and there you’ve learned a great deal of grammar and vocab without really deconstructing it and making it a cerebral affair. Doing this gives you context too, which is crucial. You’ve probably heard on many occasions that it’s best to immerse yourself in the language you’re learning. That makes a certain amount of sense, because there you get the context for certain phrases, you get to hear them over and over and see where they apply, on almost a subconscious level. Almost. We can’t go back to childhood, unfortunately. 🙂

      Not that learning grammar is inessential, especially with adults, but if you’re the type who’s a stickler for rules and memorization, you’ll get frustrated very quickly. Word-for-word translations often end up just flat out wrong or awkward. In France, I couldn’t get myself to stop saying, “Une amie de moi” (A friend of mine.) You just don’t say that in French, apparently…you say “Mon amie,” which is counterintuitive after you’ve been drilled on the differences between masculine and feminine. Feminine would usually be “Ma”— except when there’s a vowel following it. And try saying “Ma amie” and it sounds kind of dumb, so that’s a fine rule as far as I’m concerned. But then how do you get across the idea of a female friend in speech? If you say “Mon amie” out loud, it sounds exactly like “Mon ami” which is masculine! So I tried to bypass the problem by reverting back to English, “A friend of mine.” That’s a big no no.

      Then there’s all those stinking silent endings which change everything. You learn that everything in French is about context. All my worries about how to get meaning across are somehow resolved by other means in the language, but don’t ask me how exactly. It just comes about, as if by magic. (And then I unlearn those endings and forget how to spell just about everything since I know I can get by if I just say the “a” sound.)

      But anyways, back to the point. So language learning in adults is this continuous back and forth between rules (consciously-learned) and context, which usually confounds the hell out of us. When all this mapping to existing charts is two-fold, and conscious, it’s so slow and time-consuming.

      After a while you get that positive reinforcement of, say, a bus ticket to the appropriate destination, and you’re not sure how it all came to pass, but you’ve got the bus ticket in your hands and no one said “Quoi?” so you’re skipping away as if you’ve just negotiated a peace treaty. The funny thing about this is no one will correct minor mistakes in your grammar, or slight mispronunciations, so you’ll go on forever making the same mistakes unless you get close to someone who’s not afraid to correct you. Of course, there are moments when you say the wrong thing in such an obvious way that you’ll know tout de suite (the time I casually said “vachement” in Quebec and a woman looked at me like I’d told her to “fuck off,” but kind of forgave me and still didn’t correct me), but usually you don’t get feedback. I still don’t know why “vachement” was so horrible in that context. I’d said it on numerous occasions with my host mom, and so I figured it was an innocuous word that meant “really” or “very.” I’d heard it everywhere in France. I don’t think I’ll use that word again outside the context of close friendship.

      And compare all this to the way we deal with things in our environment. We make use of things outside their intended purpose, we make art, we make music, we make mistakes, we learn things that are utterly wrong and we wouldn’t know until we used the utterly wrong thing in a different context, and yet the utterly wrong thing we’d learned made a lot of sense if you look at it in a certain way.

      So yeah, you’ve really hit the nail on the head in my opinion. The context and associations we make with that are a big huge deal.


  4. “Can meaning ‘run ahead’ for AI the way it does for us?”

    Depends on how we define “meaning” I think. In this context, it seems mainly to mean “accurate object recognition” — along with some sense of context for recognized objects (ashtrays don’t belong in the fridge — a good maid robot would put it back where it belonged).

    ((Although a really good maid robot might recognize why it was in the fridge in the first place and that maybe it belonged there for some reason.))

    “What do you see above?”

    A much cleaner ashtray than I normally see! 🙂

    I would be curious to know if your readers who didn’t recognize it were reading your post on mobile devices (so the photo was really small). Or, perhaps, had zero experience with cigarette smoking (or smokers) and/or are young enough to not have even seen much of it in movies.

    The four notches seem like a dead giveaway, so I’m curious about the phenomenology going on there. At the least, it offers an interesting insight into the difficulty of recognizing objects — how dependent it is on experience.

    And perhaps context. Would those who didn’t recognize the picture recognize the same ashtray in a larger photograph where it sat on a table among other common sitting-on-table objects? How about if a smoker was in the photo?

    (There was a time when one found ashtrays in just about every restaurant and, certainly, every bar. And often on the desks of people at work. It actually is possible that an ashtray is no longer a common object.)

    ” (those of you who’ve taken a sip from a beer bottle-turned-impromptu ashtray know this all too well.)”

    One reason to always pour your beer into a glass!

    “What is an escutcheon?”

    A type of shellfish. XD

    (All seriousness aside, the term, which means “shield,” refers to a variety of things that are shield-like. My favorite is its use in referring to the shape of the distribution of pubic hair. I first ran into it as a reference to the front panel of a stereo system. It pops up in fantasy SF, too, usually in reference to actual shields or coats of arms.)

    “Well, if it makes you feel better, my husband has a hard time with facial recognition.”

    Ha! I used to be the same way. It was really confusing watching movies, because I’d lose track of who was who. Turns out it can be an acquired skill through practice and focusing. (That said, the typical modern hero does seem cast from a certain mold. Studly, kind of a dick, lots of guts, not much brain, three-day beard.)

    Oddly, what helped train me was modern comics and graphic novels. Video games are similar. The makers use small visual clues in very meaningful ways, so you learn to really focus on what’s being shown. Often comics can seem confusing until you realize the artist did put the necessary information there — you just have to train yourself to see it.

    So, what’s instinctive for some is a conscious act for me, and maybe there’s something that can be unpacked here about recognition.

    I do facial recognition somewhat like algorithms do, in a kind of step-by-step, point-by-point fashion: shape of the eyes, mouth, chin, nose, etc. People who do it instinctively don’t seem aware of any real process going on; they look, they see, they recognize.

    Kind of like I did with the ashtray, but others didn’t. They were in the position I am with faces.

    I define sanity, in part, as how accurately your mental model matches the real thing. We create that model over time on the existing mental substrate of our innate senses of extent and time, along with our hardwiring.

    We constantly test that model against the real thing, updating it to make it more and more accurate (assuming we are sane).

    To me recognition is the process of taking an immediate experience and trying to find a previous experience that best matches. But previous experience is extremely holistic and holographic.

    There is a region in your mind where ashtrays reside. In that region is every ashtray you’ve ever seen, in person or in picture. The ones you’ve seen in person, you’ve likely seen from many angles. There is even the misshapen lump of clay you made in first grade art class.

    We don’t fully understand how this information is stored, but it seems to be stored in a way that allows new experience to find a pattern match — a “best fit” for a new ashtray in the ashtray region.

    Each new ashtray you experience updates that region and improves your ability to recognize ashtrays.

    If one doesn’t access a region in a long time, new experience no longer matches so easily, so while a person might once have recognized ashtrays instantly, now that ability has faded.

    Neural networks, because they loosely mimic the brain’s structure, show a lot of promise for this sort of pattern matching. They’re implemented with algorithms, but they are not what we’d think of as algorithms.

    It’s more like having a really weirdly shaped object that rattles around in your mind until it finds a best fit. Or fails to find a good fit, leaving you puzzled. Or the fit it found wasn’t the right fit (just close), leaving you confused.

    We can describe (or model) that process with algorithms, but that doesn’t mean it is one. One analogy might be the way water seeks the lowest point. Or how undisturbed soap bubbles or water drops are always spherical. (The technical term is “least free energy” — natural systems tend to settle into the lowest energy state available to them.)

    So we seem left with two approaches: An algorithmic one that unpacks experience step-by-step and point-by-point, or a holistic one that matches entire patterns against previous experience. We humans seem to do both, although I’d suggest the “natural” method is the holistic one. That’s the one we seem to come equipped with. The other is acquired through learning.
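
    (If it helps to make the two approaches concrete, here’s a minimal sketch in Python. The checklist features and the “remembered” vectors are all invented for illustration; a real system would learn them rather than have them hard-coded.)

    ```python
    import numpy as np

    # Approach 1: algorithmic. Unpack the experience step by step,
    # checking one discrete feature at a time (features invented here).
    def stepwise_recognize(obj):
        if not obj["round"]:
            return "not an ashtray"
        if not obj["has_basin"]:
            return "not an ashtray"
        if obj["notches"] < 1:
            return "not an ashtray"
        return "ashtray"

    # Approach 2: holistic. Compare the whole pattern at once against
    # every remembered example and take the best fit (the "ashtray region").
    remembered = {
        "ashtray": np.array([0.9, 0.8, 0.7]),  # toy feature vectors
        "bowl":    np.array([0.9, 0.9, 0.0]),
        "lens":    np.array([0.8, 0.1, 0.0]),
    }

    def holistic_recognize(percept):
        # "Best fit" means the smallest distance to a remembered region
        return min(remembered, key=lambda k: np.linalg.norm(percept - remembered[k]))

    print(stepwise_recognize({"round": True, "has_basin": True, "notches": 4}))
    print(holistic_recognize(np.array([0.85, 0.75, 0.6])))  # -> ashtray
    ```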

    Our (soft) AI systems are becoming quite good at putting a label on objects they see (especially in restricted domains). Operating in a world filled with recognized objects is a new level.

    A human maid (born as a baby) starts off knowing nothing other than the human defaults we’re all born with. Over time, this maid learns skills from general (walking, talking) to specific (how to handle fine china). On the actual job, the maid learns skills particular to that environment (they like their ashtrays chilled).

    Presumably, with a machine, you can make copies of the maid that’s fully trained up through general tasks. The copies are trained maids ready to learn the specific tasks of a given environment.

    This need not be algorithmic, although it may be modeled by algorithms. Or it may be that our mental processes transcend what algorithms can do (at least digital ones), either in practical terms or in principle.

    Dang! I seem to have written a whole post here! Sorry. I’ll stop now. 🙂

    I would be very interested in what you came up with for the maid. Any chance you can recall them? (I know how much it sucks to try to recover something I’d written and then lost somehow. I find it so discouraging I usually have to stop for the day and come back later.)


    • “Depends on how we define “meaning” I think. In this context, it seems mainly to mean “accurate object recognition” — along with some sense of context for recognized objects…”

      Pretty much. Although I’d include meaningful error as well. For instance, if Claire makes an error by talking to your photograph or portrait, that’s a really crummy error (no pun intended). But if she errs in a way that makes sense, I’d count that as a big leap ahead. Like, suppose she mixes a dark-colored shirt in with white clothes in the wash because she “knows” that usually it’s not a big deal to mix darks and whites, but in this case it’s a shirt that needs to be pre-washed and she “didn’t have time” to read the label. That’d be kind of awesome in a way (except your clothes would be ruined, but if you have a robotic maid, I’d say it’s a first-world problem…) It’s a reasonable mistake based on the experience of having exactly one load of laundry and not wanting to get nit-picky and separate the colors, which is just a waste of time and usually doesn’t matter. Usually.

      I don’t mean to imply that Claire needs to make human errors, but errors that come from being efficient, or maybe from something else I haven’t thought of yet. 🙂 We wouldn’t want error of any sort on a grand scale, of course, but this sort of error would signal a groundbreaking complexity we’d achieved.

      On the ashtray, I’m glad someone recognized it right away! I was beginning to wonder if it really was a photo of an ashtray. 😉

      “Each new ashtray you experience updates that region and improves your ability to recognize ashtrays.”

      This is sort of the bizarre thing about object recognition…when we experience one ashtray, we do, I think, consider its function (in a thoughtless way, perhaps), but maybe we’ve not yet extracted function from material or shape or whatnot (we haven’t done an eidetic reduction). After several experiences we become more familiar, and this allows us to see other very different-looking ashtrays as ashtrays based on function. But the function isn’t necessarily visible with the eyes, strictly speaking. Consider art installations of toilets. You wouldn’t want to make a function mistake there… Or some arty ashtray that doesn’t resemble your usual ashtray. I’m thinking that all this diversity of one kind of thing leads you to the idea of function as the common denominator. Seeing with our eyes leads to seeing with our minds. And you have a sort of feedback loop between pattern and particular. Which brings me to your point…

      “So we seem left with two approaches: An algorithmic one that unpacks experience step-by-step and point-by-point, or a holistic one that matches entire patterns against previous experience. We humans seem to do both, although I’d suggest the “natural” method is the holistic one. That’s the one we seem to come equipped with. The other is acquired through learning.”

      That sounds about right to me, although I don’t want to get into what we’ve come equipped with (you might’ve read my thing about babies and not liking them). Maybe we’re born with it. Maybe it’s Maybelline. Maybe it doesn’t matter, because we reasonable adults have both the chicken and the egg all at once. The two options you’ve pointed out sound spot on. Maybe the point-by-point “looking at” comes in when the holistic or natural “seeing” fails? That would seem most efficient. And maybe your point about facial recognition and learning to “look at” particular details fits in here?

      I agree it’s not clear that our mental processes can be translated into algorithms. It might be that our processes do transcend algorithms, but maybe they can be loosely translated to do some good? Or maybe it’s an all or nothing sort of deal? I have no idea.

      “I would be very interested in what you came up with for the maid. Any chance you can recall them?”

      You know, I can’t. Something about thresholds of experience mimicking ours, but I’m really fried at this point and Geordie’s pissed at me for not playing with him. I’ll see if it comes back to me.

      Feel free to write posts in the comments! Always welcome.


      • “Although I’d include meaningful error as well.”

        Yes, that’s a very good point. There is something very important to be said about the value of serendipity. Introducing some degree of randomness (error) in constrained behavior can have good outcomes.

        There is a technique of literally evolving an algorithm by first characterizing a given system’s attributes and behaviors as a set of variable parameters. Then an algorithm is designed to accomplish the desired task in that system, but only in a very rough, crude fashion. Perhaps even in an unsuccessful fashion with regard to difficult tasks.

        But the algorithm is designed so that its “perceptions” and resulting behaviors are parameterized by the criteria we defined. Change a parameter and we change how the algorithm perceives or reacts to data.

        Two or more algorithms are set to the task, each with different (randomly selected) parameters. Whichever algorithm performs best “wins” and the cycle is repeated, but this time the competitors are based on the previous winner, each with slight variations. This evolutionary cycle is repeated until a very successful algorithm results or it becomes clear things aren’t working out.

        The process is repeated many times with different starting algorithms to ensure that successful results aren’t “local maxima” — that is, results that are merely the best possible given their starting parameters, when better results are possible with different starting parameters.

        So, in this sense, randomness (or error) actually allows a construct to explore new possibilities not anticipated by the original programming. It’s a big part of the evolutionary principle.

        The algorithm process does require (human) oversight. Certainly in the definition of parameters, but also in judging the outcomes and selecting the winner. The process is (sort of) a case of Intelligent Design! 🙂
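
        (For the curious, here’s a bare-bones sketch of that evolutionary cycle. The parameters, the mutation size, and the scoring function are all stand-ins; in the real cases described above, a human judge or an actual task supplies the fitness.)

        ```python
        import random

        def fitness(params):
            # Stand-in for judging the outcome: how close the parameterized
            # behavior gets to a made-up target (a human judge could score instead)
            target = [0.3, 0.7, 0.5]
            return -sum((p - t) ** 2 for p, t in zip(params, target))

        def mutate(params, scale=0.1):
            # Slight random variations on the previous winner
            return [p + random.gauss(0, scale) for p in params]

        def evolve(generations=200, competitors=4):
            winner = [random.random() for _ in range(3)]  # random starting parameters
            for _ in range(generations):
                pool = [winner] + [mutate(winner) for _ in range(competitors - 1)]
                winner = max(pool, key=fitness)  # whichever performs best "wins"
            return winner

        # Repeat from several random starts so a "local maximum" isn't
        # mistaken for the best possible result
        best = max((evolve() for _ in range(5)), key=fitness)
        print([round(p, 2) for p in best])
        ```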

        Allowing Claire to make “mistakes” requires oversight (“Good Claire!” … “Bad Claire!”) and limits on how much of a mistake it can make (cleaning the windows by smashing them isn’t acceptable).

        “I was beginning to wonder if it really was a photo of an ashtray.”

        The four cigarette notches were an immediate signifier to me, but like I said, it’s possible ashtrays are becoming obsolete objects. (I was cleaning the garage and found a plastic holder/container for 5.25″ floppy disks. The genuinely floppy floppy disks. I wondered how many young people today would recognize what it was.)

        “This is sort of the bizarre thing about object recognition…when we experience one ashtray, we do, I think, consider its function…”

        Yeah, exactly. I might use the word “class” but perhaps we mean the same thing. The canonical example I use is when we got those mimeographed sheets with drawings of tall skinny things (phone poles, trees, stop lights) and we were supposed to circle the trees (only). That’s a classification exercise.

        Think about how many different types of dogs there are, but we recognize them all as “dogs.” And all other smaller four-legged furry mammals as “not-dogs.”

        Recognizing a new type of tree or dog (correctly classifying it) depends on our ability to have an essential sense of tree-ness or dog-ness. That’s the region I was talking about; we build mental regions for tree-ness and dog-ness and ashtray-ness.

        Those regions consist of everything we know or think about those things — their function, their appearances, their variations, their failure modes. (That last allows us to recognize a broken cup as a cup despite lacking nearly all cup-ness properties anymore.)

        The richer the region, the more accurately it will match new experiences.

        “After several experiences we become more familiar, and this allows us to see other very different-looking ashtrays as ashtrays based on function.”

        When you say “function”… would you recognize an ashtray that wasn’t functioning as an ashtray at the time? (Or do you mean something more abstract by “function”?)

        For example, in your photo, there is no overt clue to the object’s function (which might explain how some didn’t parse it). The notches are a major clue, but they require existing knowledge to interpret.

        (Aside: The Disney parks, in their ubiquitous little shops, sell many brandings of what most humans would identify as a “shot glass” — particularly in that they clearly come in single- and double-shot sizes. Every Disney employee, without exception, will tell you that they are toothpick holders. XD )

        ((And, in point of fact, they work just fine that way. I use one of the three-dozen or so I own for that very purpose. I use the others in ways that would no doubt disappoint Disney employees.))

        “Consider art installations of toilets.”

        Another good example. 🙂

        “I’m thinking that all this diversity of one kind of thing leads you to the idea of function as the common denominator.”

        FWIW, “function” seems restrictive to me, although I can’t come up with anything better than “class” (which is pretty vague). It may depend on how we define “function.”

        I wonder if appearances plus context aren’t terribly important. The context tells us the art object shouldn’t be pissed in.

        “Seeing with our eyes leads to seeing with our minds.”

        I think we’re saying the same thing. Our minds supply the context and those regions of previous experience for matching against. Is function something we can imagine for an object? (An accidental granite chunk that looks like an ashtray… I’d be inclined to say, “Holy shit, look at the ‘ashtray’!” People see objects in rocks and drift wood all the time. I made an ashtray from two phone bells bolted back to back.)

        “Maybe it doesn’t matter, because we reasonable adults have both the chicken and the egg all at once.”

        Yeah, it’s very difficult to separate nature from nurture. We do come with an innate “operating system” but once the rational mind becomes a significant part of the picture, I agree, I tend to consider it in toto.

        “Maybe the point-by-point ‘looking at’ comes in when the holistic or natural ‘seeing’ fails? That would seem most efficient. And maybe your point about facial recognition and learning to “look at” particular details fits in here?”

        Yes, exactly.

        I think there is a key question: Are the two on a spectrum or are they distinct? Is the holistic match just the algorithmic match done so fast it “feels” holistic? Or is one truly a least free energy analog system while the other is a deliberate process?

        What confounds the analysis is that the holistic method can be modeled with an algorithmic process, and if that process was fast enough it would be fairly indistinguishable from an analog system. That suggests a spectrum.

        (In a way, the whole hard AI question turns on whether a digital model can be indistinguishable enough from our physical analog models (brains) to replicate consciousness.)

        My gut sense is that the holistic method is a distinct analog system. Our brains functioning at their natural level.

        For example, facial recognition. Some people have a knack for it — they have a well-tuned holistic analog system. Those of us who don’t can compensate using our intellect. (Sometimes practice improves the analog system. Much of athletics involves just that.)

        “It might be that our processes do transcend algorithms, but maybe they can be loosely translated to do some good? Or maybe it’s an all or nothing sort of deal?”

        They can and are. Computers are now better chess players than chess masters, and one recently beat a Go master at Go (a harder game). They’re even driving cars now. (I understand the only reason they don’t handle all aspects of flying passenger jets (including takeoff and landing) is that people would freak out if they knew. Pilots have been known to fall asleep and fly right past their destination.)


        • Wow, you’ve brought up a lot of interesting points.

          Intelligent design algorithms sound like a really ripe region. Where are they used now? How long does this evolutionary cycle take? (I imagine it depends on which tasks are assigned, so probably a dumb question.)

          “The four cigarette notches were an immediate signifier to me, but like I said, it’s possible ashtrays are becoming obsolete objects. (I was cleaning the garage and found a plastic holder/container for 5.25″ floppy disks. The genuinely floppy floppy disks. I wondered how many young people today would recognize what it was.)”

          I would think ashtrays are still commonly known, especially considering movies and such. If you don’t have access to shows like House of Cards, or don’t watch TV much, you might not recognize them immediately. On the other hand, maybe you don’t notice the ashtray in the movie…that’s always a possibility.

          Floppy disks might be another issue. I’d recognize them because I’ve seen them before. But I’m not sure about those kids (hehe, I get to say this now) who’ve never used or seen them.

          “I might use the word “class” but perhaps we mean the same thing.”

          Well, I think there are many ways of classifying something, and function is one of them…a big one, but still a type of classification amongst others. With AI there might have to be several kinds of classification. Function might be the hardest, and it strikes me as incredibly important…that’s probably why I’m using “function” in a way that’s synonymous with “class,” but I didn’t mean to.

          “For example, in your photo, there is no overt clue to the object’s function (which might explain how some didn’t parse it). The notches are a major clue, but they require existing knowledge to interpret.”

          Very true. I’d made the mistake of thinking it was perfectly obvious what the object was, and I wasn’t thinking about function in choosing that particular example. If it had a cigarette in it, that would’ve made the function clearer, but the point about self-same-ness might’ve been a bit cloudy. I could’ve used a chair instead, and that would’ve been a lot better. The only reason I chose the ashtray was because that was the object with which I’d been taught about Husserl. It just stuck, I guess.

          You mentioned “failure modes” and that’s something that could be a bit of a hurdle. I’m very interested in that topic. How does a broken object get recognized? At what point does it no longer get classified as the same thing? Wax, for instance…If we melt a candle and leave the wick in (or that metal piece that holds the wick), then we still recognize the candle as a candle, only it’s been used up. But take that metal piece out, melt the remaining wax and roll it into a different shape…voila, no longer a candle. Now it’s a thingamajig, or something else. So there are thresholds for thing-ness, and this is so freaking complex. Not everything gets destroyed so easily. The monster truck rally with the fridge is one example of how we might be able to find the pieces of the fridge and call it a fridge, only smashed to bits. Some of us might not recognize it as a fridge, but others might see those parts and know where they belong.

          I can imagine this sort of exploration taking a very long time, but it’s a necessary one. Could be fascinating too.

          Taking Claire as our goal, we’d have to define when failure modes—I love that phrase—matter, and when they don’t.

          You’ve reminded me of some of the things I touched upon in my deleted draft. There’s no way I said anything terribly interesting there, but I’m glad you brought this up. This is where thresholds of experience come in. Intentionality defines those thresholds to a certain degree, and each one of us is unique. But there must be universal human thresholds as well. We need special instruments to see certain objects in space, but people who are not blind can see stars, generally. And then there are things we CAN see if we want to or if our attention is drawn to those things, but if we were to be drawn into that level of detail at all times, we’d go nuts. So we make mistakes sometimes based on our need to not go nuts looking at details. Yet many of those details rectify the error, and all we have to do is be made aware of the error.

          And to think, this “failure mode” exploration would be only one kind of threshold study. There’d have to be a whole slew of others.

          More on what I mean by context…This example might not be great, but it’s on my mind: I just took Geordie for a walk and noticed that some kid took to writing dumb stuff on the sidewalk with chalk (“I hate Jack.” Plus other stuff that was pretty much illegible. Real poetry.) Anyways, I glanced up at the driveway and noticed more writing. I didn’t pay close attention to it because Geordie was yanking me down the street, but I did happen to notice that a sentence started with “Phoenix is…” It was spelled correctly, and the handwriting was definitely very young, the same handwriting as “I hate Jack and blah blah blah” and the same colored chalk—pink. All of this led me to think, “Wow, cool name. Some little girl on the block is named Phoenix.” Then I thought, “How do I know that’s a name?” Then I thought, “Because if that sentence referred to the city of Phoenix, it wouldn’t be spelled correctly. I have a hard time spelling that one, and I’m way older, a writer, pretty good at spelling, and live in Tucson, a city in which Phoenix is often referenced. Could it be some precocious kid who can spell “Phoenix” and also writes “I hate Jack blah blah blah”? I think not.” If it had been written in different handwriting, I might have looked closer or at least withheld judgement.

          That’s an example of what I mean by context. NOW…I could get my lazy butt out there and double check. I could go back to that house a minute away and read the rest of the sentence. And suppose it says, “Phoenix is a large city in Arizona”? I doubt that so much that I think I’ll stay put. The weird thing is, Phoenix is not a common name, so you’d think that would make my nearly instantaneous interpretation a bit harder, but it didn’t. On the other hand, what if it had said, “Saguaro is…” Would I have said, “Wow! What a cool name.” Not likely. I’d think, “Wow! Some kid knows how to spell that! Or mommy helped…”

          “Is function something we can imagine for an object?”

          Definitely. That’s what makes this whole matter so difficult. But if you were in a mine and found the ashtray-shaped rock, you wouldn’t think: “I’ve found an ashtray!” You’d think, “I found a rock that looks like an ashtray. I think I’ll take it home and use it as such. That’ll be a cool conversation piece.” Context makes an ashtray-shaped rock not yet an ashtray. But your friend comes over and says, “Cool ashtray!”

          “I think there is a key question: Are the two on a spectrum or are they distinct? Is the holistic match just the algorithmic match done so fast it “feels” holistic? Or is one truly a least free energy analog system while the other is a deliberate process?”

          Very good question. That one feels like a question outside of phenomenology. On the other hand, I can see a continuum here too, but it would be only in theory (but possibly a phenomenologically-constituted theory?)…if the horizon is infinite, and if we can be drawn into things at many levels, why not say it’s a continuum? And yet reflection on experience on a broad scale tells us these are two distinct modes. One is easy and breezy, the other is analytical.

          We once discussed the subconscious and dreaming…the way the “size” of the sub- or what some call the un-conscious could be “seen” in outline by bits of it surfacing, even while the majority of it resides in the unknown. I think this aspect of our experience could enter into the picture here. We wake up one night and remember some bizarre dream about some random thing, like, say, the symbol of an element on the periodic table, or the back of a particular remote control missing, the component that holds the batteries in. Then we remember that we saw that same remote control earlier in the week, and we’d replaced the back on that remote while watching another episode of House. We wouldn’t have remembered such a detail had it not been for the dream. At the time we didn’t think much of it…maybe while replacing the back, we were thinking of potato chips and telling someone that “ANA” stands for “antinuclear antibodies.” So at some level, we conclude, we took in that information about the remote control without caring much about it. Maybe the remote control battery backing entered the intentional stream for such a short duration that it made little impact, then got recycled into a dream (as in the remote control example), or it entered the stream so long ago we don’t remember it.

          (As you know, I had that dream about the element “selenium,” although when I woke up I didn’t think “Se” or “selenium” existed…I assumed I’d made it up. Then I actually googled it and was surprised to find that it was a real element, and felt a bit disconcerted by that.) So perhaps these processes are on a sort of continuum?

          “My gut sense is that the holistic method is a distinct analog system. Our brains functioning at their natural level.”

          That was my gut sense too. I have to admit, I do still think it’s true that these two levels of experience are distinct at some level and for the most part. Plus, the idea that we take in everything on a very microscopic scale and very quickly add up all these bits of data seems improbable. I can see two levels on a continuum, but not so much if one level is made up entirely of the bits of data that can’t be intended without a microscope or other instrument beyond our natural capabilities. So back to the chalk writing example, I might have noticed the rest of the sentence subconsciously, and maybe tonight I’ll dream about it and find it says, “Phoenix is a city in Arizona.” Then I’ll get up and walk down the street and double check my dream. But I won’t be able to dream of how many hairs are on Geordie’s body (at least, not with accuracy, I hope…that’d be freaky). I can’t believe that would enter into my awareness at any level. (Remember, we’re still talking phenomenologically…who knows what science will have to say about it. Although if scientists do discover that we take in such detailed microscopic information and process it all at lightning fast speed, I’m likely to be incredulous. I’ll need a great deal of evidence to support that.)

          Great questions. Lots to ponder.


          • “Intelligent design algorithms sound like a really ripe region. Where are they used now? How long does this evolutionary cycle take? (I imagine it depends on which tasks are assigned, so probably a dumb question.)”

            The answer is a long-ish sidebar, so I’ll write another comment so we can focus on phenomenology here. (Also, I quoted a number of bits to which I could only say “I agree!” or “Exactly!” I removed them for space, so I’ll say here: “I agree!” and “Exactly!”)

            “I would think ashtrays are still commonly known, especially considering movies and such.”

            Their presence in movies and TV has declined over the last couple of decades. Movies even come with an end-credit disclaimer explaining that they felt the evil cigarette smoking was so critical to the plot (or historical necessity) that they just had to include it, but that it mustn’t be read as any kind of positive take on smoking. 🙂

            I do agree that creating an ashtray recognition region in your mind probably doesn’t happen very well when the training input is just two-dimensional images of ashtrays. (Literally mere shadows of light on the cave wall! 🙂 ) Maybe if you’d seen a documentary on ashtrays or something…

            “I think there are many ways of classifying something, and function is one of them…”

            Yes, that’s what I was getting at! Function is just one way we recognize objects. I suppose the idea is that we can see something, implicitly ask “What’s it for?” or “What does it do?” and, in recognizing the answer, recognize the object.

            In a sense, as I’ve said, the four notches — which are a functional part of the ashtray — were a “dead giveaway” so it would seem that a recognition of function led to a positive recognition of the object. On the one hand, the notches are just visual features, but knowing what they’re for makes identification certain.

            From an AI perspective, recognizing the function of an object (rather than its appearances) seems extremely difficult. It’s very possible that the function of an object is a looked up property only accessible once the object is recognized (by its appearance). Once the system knows what it is, it knows what it’s for (or what it does).

            Or a really good AI system might go something like: Hmmm… What’s that? It’s round (but many things are). It’s transparent, it might be made of glass (is it a round window?). No panes, not mounted in a wall, too small; not a window. Not shaped like a lens; not a lens. It has an inside (but no lid; is it a bowl?). It’s very flat and shallow for a bowl; likely not one. A plate or dish of some kind? Rather thick for a dish and the rim is huge. What are those notches for? Spoons? Some missing part? Does the (missing) top latch into those? They’re rounded notches; check size parameters, what’s that size, what fits there? Pens fit; a place to put a pen? … Cigarettes fit; a place to put a cigarette? A-Ha! Multiple notches (for multiple cigarettes) fits; shallow bowl (for ashes) fits; (non-flammable; classy) glass fits; round fits.

            AI says: “Ashtray?”

            But just consider all the facts and analysis of facts necessary to support that chain of logic.

            Versus a “cloud” of many ashtray images forming a matchable region of visual space, which isn’t easy, but is mostly a matter of processing power.
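
            (A toy sketch of the two routes, with every object name and property invented: the fast path matches the whole appearance and then simply looks up the function; the slow path is a cartoon of that chain of elimination.)

            ```python
            # Function as a looked-up property, accessible once the object is
            # recognized by its appearance (all names and entries invented).
            KNOWN_OBJECTS = {
                "ashtray": {"function": "holds cigarette ash",
                            "looks": {"round", "shallow", "notched"}},
                "bowl":    {"function": "holds food",
                            "looks": {"round", "deep"}},
            }

            def recognize_by_appearance(features):
                # The fast path: match the whole look against known objects
                for name, entry in KNOWN_OBJECTS.items():
                    if entry["looks"] <= features:  # all expected traits present
                        return f"{name}: {entry['function']}"
                return None

            def deduce_by_elimination(features):
                # The slow path: step through the clues one at a time,
                # a cartoon of the "Hmmm... what's that?" chain above.
                if "round" not in features:
                    return "unknown"
                if "notched" in features and "shallow" in features:
                    return "ashtray?"  # cigarettes fit the notches...
                return "unknown"

            percept = {"round", "shallow", "notched", "glass"}
            print(recognize_by_appearance(percept) or deduce_by_elimination(percept))
            ```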

            “Wax, for instance…If we melt a candle and leave the wick in (or that metal piece that holds the wick), then we still recognize the candle as a candle…”

            Good example! The knowledge we bring to the table means so much. Knowing the properties of wax lets one see any lump of wax as (at least potentially) a candle. (I’ve made candles from non-candle lumps of wax! 🙂 ) Recognizing the wick or that metal bit identifies the wax as a once-candle, and that requires knowing what those bits are! (As opposed to, say, losing a piercing during an aggressive waxing. Ouch!)

            ” Some of us might not recognize it as a fridge, but others might see those parts and know where they belong.”

            Exactly. (As an aside, physicists at CERN try to identify new particles by looking at the debris they get by smashing known particles together. It’s been compared to determining the design of an unknown watch by looking at the pieces left over after smashing it violently against a brick wall. The point is that this requires serious knowledge about the phenomenology of smashing things.)

            “Taking Claire as our goal, we’d have to define when failure modes—I love that phrase—matter, and when they don’t.”

            It’s a great concept. Engineers are all about failure modes! We want to design systems with as few of them as possible! As a usage note: a failure mode is the ‘how’ of a failure, not the result. A failure mode of glass is to shatter on sufficient impact. A failure mode of a lightbulb is to burn out, but it also has a failure mode of shattering on impact. Another failure mode is an ‘open’ in the internal circuit resulting in a dead bulb.

            “This is where thresholds of experience come in.”

            Very much so. “The more you know…” XD

            “So we make mistakes sometimes based on our need to not go nuts looking at details.”

            And they’re not even always really mistakes, per se. The average over the right time interval of a noisy process can be a better representation of its values and has the benefit of being noise-free. As you say, not going nuts over details that don’t matter.

            “I did happen to notice that a sentence started with “Phoenix is…” It was spelled correctly, and the handwriting was definitely very young,…”

            I’m smiling at how you don’t believe an Arizona school child would have been forced to learn the spelling of the state’s capital city. Has education really gotten that bad? 😀

            (I will admit I recall the spelling by thinking of the word as pronounced “Foe-ee-nix”. And these days, someone named Phoenix seems fairly normal. This could clearly go either way!)

            Here’s another scenario for you: She (going off the pink chalk) moved here recently from Phoenix, which her folks hated and always complained about, so she picked up their bad attitude about the city.

            What a pity we don’t know the rest of the sentence! We may never know the truth!! XD

            “That’s an example of what I mean by context.”

            An excellent one! We both parsed it slightly differently!

            “But if you were in a mine and found the ashtray-shaped rock, you wouldn’t think: ‘I’ve found an ashtray!’ You’d think, ‘I found a rock that looks like an ashtray…'”

            This is why the ashtray might not be the best example. In this case, those two thoughts are essentially the same for me. An “ashtray” is anything well-suited to being an ashtray. I do not require the object have been intentionally manufactured for that purpose. (You touched on beer bottles as ashtrays.)

            That my friend would recognize the rock as an ashtray in the right context, to me, says a lot about the inherent ashtray properties of the rock. (Interesting. There is an intentional aspect, an appearances aspect, and a functional aspect. They all seem to matter. This sounds important!)

            That said, I wouldn’t think the accidental rock was an intentional ashtray (two-outta-three 🙂 ).

            “That one feels like a question outside of phenomenology.”

            (In response to my positing two modes: instinctive and intentional.) I know what you mean, but my recognition of ashtrays doesn’t feel like a process (unless that process is far below my perceptive horizon, and that would be outside phenomenology, obviously). OTOH, facial recognition of actors is clearly a step-by-step process I do consciously. Exactly at the phenomenological level (if I’m using that correctly) they seem different. (It would require looking under the hood to see them as the same.)

            “Plus, the idea that we take in everything on a very microscopic scale and very quickly add up all these bits of data seems improbable.”

            Depends on what you mean by microscopic scale. I agree we don’t count dog hairs, and unless we get close, we can’t even see individual hairs. (Literally. The visual system cannot resolve such tiny objects.)

            But something like a face (or an ashtray, for that matter) has a lot of small-scale detail we don’t (usually) think about or quantify consciously. But in the same way a key fits a lock “all at once” a perception can find an “all at once” match holistically.

            An analogy might be a large landscape of hills and valleys. A perception is a small piece that fits some region of that landscape. A process approach picks one hill on the piece, tries to find a match, and then sees if the rest matches bit by bit (moving on if it doesn’t). A holistic approach (not possible with digital computers) in a sense spreads the piece over the landscape allowing it to “fit” into places where it matches. Close-but-mis-fits cause a “Huh?” moment that makes us think about what we’re seeing.

            “Although if scientists do discover that we take in such detailed microscopic information and process it all at lightning fast speed, I’m likely to be incredulous.”

            Likewise! I’m not suggesting microscopic information is part of this. It’s very close to the difference between a physical “analog” process and a calculated “digital” process.

            Here’s another metaphor: A network of open irrigation troughs spread throughout a field. The field has irregular large-scale rises and dips (as fields do), so the trough walls must be tall in the low areas but they can be short in the high areas. (Because when the troughs are all filled, the surface of the water is perfectly flat and doesn’t follow the land. Low areas will be deep; high areas will be shallow. If we flooded the land to make a lake, the water would find the same flat level, and there would be deep and shallow areas.)

            A digital process takes measurements at points and calculates a height variation map — essentially an inverse topographical map of the land. The resolution of the map depends on the distance between measurements.

            An analog process floods the land (or the troughs) and — presto — the flat surface pops out due to the natural behavior of the water. There’s no measuring or calculating as we usually interpret those actions. The water in the troughs is the inverse map.

            The digital process can take measurements at closer and closer places until it approaches the same resolution as the analog process (which effectively has atomic resolution), but the calculation burden becomes enormous and much of the calculation turns out to be unnecessary.

            (Those who believe in hard AI believe the analog action of the water is actually a calculated digital process (perhaps massively parallel, as if each molecule of water calculated for itself). Its resolution is so small that it appears as an analog process.)
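
            (The trough metaphor even translates into a few lines of code, using a made-up strip of land: the digital version samples the terrain at points and computes the inverse map, and the finer the sampling, the more calculating there is to do. The water, of course, does none of this.)

            ```python
            import numpy as np

            def terrain(x):
                # Made-up rises and dips of the field
                return 2.0 + np.sin(x / 3.0) + 0.5 * np.cos(x)

            WATER_LEVEL = 4.0

            def digital_inverse_map(spacing):
                # Take measurements at points; depth = water level minus land height
                xs = np.arange(0.0, 30.0, spacing)
                return xs, np.maximum(WATER_LEVEL - terrain(xs), 0.0)

            # Coarse sampling: cheap, low resolution
            xs, depths = digital_inverse_map(spacing=5.0)
            print(len(xs), depths.round(2))

            # Fine sampling: approaches the "analog" resolution at much greater cost
            xs, depths = digital_inverse_map(spacing=0.01)
            print(len(xs), "measurements this time")
            ```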

            As far as our own recognition, as you point out, these seem distinctly different and anything that unifies them on a spectrum is almost certainly beyond our phenomenological horizon.

            So we’ll treat them as distinct. XD


            • Ditto on the “I agrees”…

              “From an AI perspective, recognizing the function of an object (rather than its appearances) seems extremely difficult. It’s very possible that the function of an object is a looked up property only accessible once the object is recognized (by its appearance). Once the system knows what it is, it knows what it’s for (or what it does).”

              That’s a good point. A kind of annoying one, but I see what you mean. Any ideas on how we could flip this object recognition system inside out at certain crucial moments, maybe? Say, when the object isn’t identifiable “in the cloud”? Because what you’re saying makes a lot of sense, but I hope it isn’t the end game. It would be nice if the function recognition works faster when objects can first be named, but naming isn’t the ONLY way function can be accessed. Maybe the failures of our cloud lookup systems could be bypassed by having a separate function-determining mode? Maybe the material is taken into account, along with shape and context, etc. (with the cloud lookup still happening, but “talking to” the other mode, and the other mode making guesses)? And then we have a slowing down, but maybe whatever is picked up in that processing can get recycled and remembered?

              I of course have no idea what I’m talking about or whether this is possible or desirable.

              “I’m smiling at how you don’t believe an Arizona school child would have been forced to learn the spelling of the state’s capital city. Has education really gotten that bad?😀”

              It has, I’m afraid. And there’s no checking my theory…it actually rained last night and is raining intermittently today, plus a neighbor came over to walk Geordie (he’s become her motivation to exercise, and I wouldn’t want to get in the way of that). There’s the possibility that the chalk didn’t wash away, but I have to admit that my laziness trumps my desire for knowledge. I know. 😦

              “Depends on what you mean by microscopic scale. I agree we don’t count dog hairs, and unless we get close, we can’t even see individual hairs. (Literally. The visual system cannot resolve such tiny objects.)

              But something like a face (or an ashtray, for that matter) has a lot of small-scale detail we don’t (usually) think about or quantify consciously. But in the same way a key fits a lock “all at once” a perception can find an “all at once” match holistically.”

              If that small-scale detail is accessible, as in, if we pay close enough attention, I’d count that as not microscopic. But if we have to use special instruments to get at it, I’d call it microscopic.

              “An analog process floods the land (or the troughs) and — presto — the flat surface pops out due to the natural behavior of the water. There’s no measuring or calculating as we usually interpret those actions. The water in the troughs is the inverse map.”

              Eureka! (Sorry. I just had to say that.)

              Yeah, the metaphor sounds like it works. I think we’re on the same phenomenological horizon. 🙂

              Of course, hard AIers might be right, but I’d say the burden of proof is on them.


              • “Any ideas on how we could flip this object recognition system inside out at certain crucial moments, maybe?”

                In my earlier comment I tried to write a ‘stream of consciousness’ of an AI trying to figure out what the (ashtray) object was functionally. As you saw, it requires a huge “database” of background knowledge.

                Even humans can be challenged to identify the function of an unknown object out of context. As we saw, several didn’t identify the ashtray.

                Let’s imagine Claire operates in a universe where smoking is so rare that she’s never seen an ashtray. In her current job, the humans (despite the visible disgust of their friends and neighbors) do smoke and have ashtrays.

                In situation ‘A’ Claire sees two ashtrays on the coffee table. They are identical (looking like your picture) and spotlessly clean. No cigarettes are in evidence; there is no cigarette smell (Acme Cyber-Mades™ have an excellent sense of smell, of course).

                In situation ‘B’ one of them is filled with ashes and there is an open pack of cigarettes on the table.

                What chain of logic would be required for Claire to deduce the function (and hence identity) of the mystery objects? Phenomenologically speaking, naturally! 🙂

                What if you were Claire and had never seen an ashtray?

                Is it necessary to have basic background knowledge of cigarettes and their use? (It seems so, although situation ‘B’ has some obvious dots that can be connected — Claire might deduce the cigarettes were incense or sacrifices.)

                I really do think we’re talking some pretty serious AI here. We’re talking about inductive, or even abductive, reasoning. Computers (so far) are best at deductive reasoning.

                “I of course have no idea what I’m talking about or whether this is possible or desirable.”

                A computer capable of abductive reasoning is the dream of many AI researchers! So, yes, desirable. 🙂

                Possible… we’re working on it. Neural networks show promise.

                “If that small-scale detail is accessible, as in, if we pay close enough attention, I’d count that as not microscopic.”

                Agreed. And I’m not talking about anything below our ability to perceive. As I understand the rules of phenomenology, everything must be perceived (or be capable of being perceived) consciously.

                So facial recognition, for example, doesn’t depend on the number of eyebrow hairs (which change anyway), but does depend on small differences in shape and geometry. It’s like we form a mold in our minds that the face fits. We recognize the mold with the best fit to the face we’re seeing.

                “Eureka! (Sorry. I just had to say that.)”

                ROFL! But did you go running down the street naked? 😀

                “Of course, hard AIers might be right, but I’d say the burden of proof is on them.”

                Absolutely! And most of the evidence suggests it’s a heavy burden.


                • “I really do think we’re talking some pretty serious AI here. We’re talking about inductive, or even abductive, reasoning. Computers (so far) are best at deductive reasoning.”

                  Definitely! Now I wonder if that deductive reasoning could be turned into something messy and likely to cause some error which can then be fine-tuned. So for instance, situation A: Claire sees two thingamajigs that’re clean. No need to recognize what they are or what their function is. She only needs to know what material they’re made of and that they’re clean. Suppose the table on which they’re placed needs to be cleaned…she’d need to know how to move them without breaking them. Weight can tell her a great deal, but material is pretty important. So she uses her “thingamajig” lookup device for materials. Her “search” is narrowed to certain criteria, not “WTF is this? How does it work?” She “sees” these are glass or plastic or whatever the case may be. She moves them to a location where they won’t get damaged. (A huge feat for a robot, I know, but let’s assume she’s got this aspect down.) Or she simply lifts them one by one, although this method is not a good cleaning strategy unless there aren’t that many objects on the table. I think for a robot it makes sense to just remove all objects of a certain size and weight from the table first and place them elsewhere, then clean.

                  In situation B, maybe Claire can google or lookup the surrounding objects. If not (since we’re supposing no one smokes and she can’t look it up) she’ll revert back to the “material” criterion and decide what to do with it. Ashes = dirty, although we’ll now have to distinguish between cigarette ashes and mom’s ashes. Yikes.

                  Okay, so in this case, we see we need to program Claire so that she doesn’t open things unless we tell her to. Human remains aren’t kept in open containers, so the ashtray would get cleaned since it’s an open container with something that seems like dirt, and not in the fridge, and not food. She’ll need to know the difference between dirty dirt and dirt dirt. No emptying plants into the trash. If there’s a plant in it, don’t clean it.

                  Or how about this: Suppose in the nascent stage of introducing Claire to the house, we have her remember all thingamajigs that she thinks could be dirty, and she takes a photo of each one but doesn’t mess with it. At the end of the day we review her photos and “tell her” which things are dirty and which things are not to be messed with. From then on, when she encounters an ashtray, she knows how to proceed with it.

                  Deducing the ashtray is not possible as far as I can tell, not without knowledge of function. (Which requires knowledge of smoking.) But suppose Claire can do her cloud lookup, she might find older images (Humphrey Bogart comes to mind) and might be able to collect information in situation B that way. Still, that’s a guess based on the proximity of surrounding objects. Suppose instead of a cigarette, there’s a quarter sitting there? Then she’s kind of out of luck. She does a lookup of images of quarters next to other objects and comes to the conclusion that she’s looking at a “piggy bank” and places the quarter in the ashtray. Or maybe not. Maybe she sees the quarter is next to an object that doesn’t have other quarters, pennies, dimes, etc in it, and so leaves the ashtray alone? That would be ideal. She recognizes that the thingamajig isn’t something she can identify, that it looks dirty, that the quarter doesn’t help her identify it, and so she takes a photo for us to review.
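
                  (That strategy sketches out pretty neatly; every name below is hypothetical. If Claire can’t confidently identify a thingamajig, she leaves it alone, photographs it, and queues it for the end-of-day review, and the reviewed label sticks from then on.)

                  ```python
                  learned_labels = {}  # filled in by the end-of-day human review
                  review_queue = []    # photos of unidentified thingamajigs

                  def handle_object(obj_id, appearance, cloud_lookup):
                      if obj_id in learned_labels:
                          return "proceed as taught: " + learned_labels[obj_id]
                      guess = cloud_lookup(appearance)  # lookup by appearance
                      if guess is not None:
                          return "proceed as a " + guess
                      review_queue.append((obj_id, appearance))  # photograph; don't mess with it
                      return "leave it alone; flag for review"

                  def end_of_day_review(answers):
                      # The humans tell her which things are dirty and which are off-limits
                      learned_labels.update(answers)
                      review_queue.clear()

                  no_match = lambda appearance: None  # a cloud lookup that comes up empty
                  print(handle_object("table-thing-1", {"round", "glass", "ashes"}, no_match))
                  end_of_day_review({"table-thing-1": "ashtray (empty and wash)"})
                  print(handle_object("table-thing-1", {"round", "glass", "ashes"}, no_match))
                  ```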

                  This is fun. 🙂


                  • “I wonder if that deductive reasoning could be turned into something messy…”

                    There is “fuzzy logic,” which starts to get away from simple true/false logic. That’s a little messy, but it doesn’t really introduce errors. Introducing errors is easy enough (just inject some noise); tuning the noise so it’s useful is the trick.
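
                    (A toy taste of both, with made-up numbers: a fuzzy degree of “dirtiness” instead of true/false, plus a tunable dash of noise on the decision. Injecting the noise is the easy part; choosing its scale so it helps is the tuning trick.)

                    ```python
                    import random

                    def dirtiness(ash_coverage):
                        # Fuzzy membership: not clean/dirty, but a degree between 0 and 1
                        return min(max((ash_coverage - 0.1) / 0.6, 0.0), 1.0)

                    def should_clean(ash_coverage, noise_scale=0.1):
                        # Inject a little noise into the decision; the scale is the tuning knob
                        score = dirtiness(ash_coverage) + random.gauss(0, noise_scale)
                        return score > 0.5

                    random.seed(1)
                    print(dirtiness(0.4))  # 0.5, right on the borderline
                    print([should_clean(0.4) for _ in range(5)])  # noise breaks the tie differently
                    ```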

                    “So for instance, situation A: Claire sees two thingamajigs that’re clean. No need to recognize what they are or what their function is.”

                    Yep. From her point of view, they could be knickknacks or art objects or some sort of weird person souvenir. As with any objects “she” might be inclined to dust them off and would have no problem moving them to allow cleaning the table. “She” does the same thing with the crap on the fireplace mantle.

                    “Ashes = dirty, although we’ll now have to distinguish between cigarette ashes and mom’s ashes. Yikes.”

                    Yep. The real world is filled with exceptions. That’s exactly what makes it so complicated. As you say, a closed container would be an important clue. But there are ashtrays with covers, so we’re back to exceptions.

                    “From then on, when she encounters an ashtray, she knows how to proceed with it.”

                    Yep, exactly. And exactly how a human maid might learn.

                    “But suppose Claire can do her cloud lookup, she might find older images (Humphrey Bogart comes to mind) and might be able to collect information in situation B that way.”

                    Just so, but think of the intellectual processing that implies. A more reasonable solution would involve accessing a massive database of objects (manufacturers could contribute images of their products). Even if that exact model isn’t in the database, there would likely be ones similar.

                    (Something to consider is that even well-educated experienced humans can struggle to identify the function of an unknown object. Does a baseball bat, out of context, signify its purpose? Many objects offer few clues.)

                    I agree a good strategy is doing nothing when identification is uncertain and asking for human review at the next opportunity. (And then she can interlink that new knowledge to all the other Claires.)


                    • Ah cool idea about linking to other Claires. Then we have a way to take out some human error as well. Human review could be filled with all kinds of nonsense that Claire wouldn’t understand, and so now we’d have human backups to human backups, like Wikipedia sort of.

                      “That’s not a baseball bat. That’s a pinch hitter! Shh…don’t tell the other Claires.” 🙂


    • I forgot to mention, I want a gym shirt that says “least free energy.” Which I of course won’t wear to the gym, or if I do, I’ll be moving very slowly on the treadmill and mostly expending energy by yelling out answers to Wheel of Fortune, because it doesn’t count unless everyone else hears.


  5. Wonderful discussion. I had never thought about the importance of philosophy in terms of developing AI. Interesting to see that you are getting back to essences. Per Aristotle, that does segue into an objective morality of sorts: e.g., the essence of a bird is that it can fly; if the wing is damaged, this is ‘objectively’ bad for the bird, as birds ‘ought’ to be able to fly. Oops: accepting that birds have an essence enables us to transcend the fact/value dichotomy.


    • “I had never thought about the importance of philosophy in terms of developing AI.”

      I don’t think many do. You rarely hear about it outside the realm of AI ethics, and phenomenology is often used to counter the idea that we can create conscious artificial intelligence. I’ve just had this feeling for a long while that there’s a connection between the two that could be used, but it’s hard to make that connection when you only know about one side of the equation. So now I’m just sort of grappling with thoughts in the hopes that someone else will see what I’m getting at, and hoping that I’m getting at something.

      Aristotelian ethics has always struck me as incredibly prescient. There’s that tie to health, which you mention, and it’s something we don’t ignore in our actions (whether we buy into essences or not). Health is one of those wiggly concepts, but not nearly as hard to accept as essences, and so the former’s easier to talk about. We can mostly buy into the idea that being healthy is a good thing…that it’s the common denominator. Even Sam Harris believes that. Then we should—you would think—ask, “What is health?” And you arrive at Aristotle again in some form or another, especially given that he’s fairly relativistic…the good for me is different from the good for you, though there is a sweet spot for each of us. He’d call it virtue. We’d probably feel more comfortable with “healthy.”


  6. Hey Tina,

    Thanks for continuing this phenomenology series. It’s super interesting and I’d never really looked into it before I started reading your blog.

    Your ashtray example got me thinking about unities and I have an example I’d like to run by you. Let’s assume you are a miner and you are taking your jackhammer to a granite block. One of the chunks that falls off is a disc that’s concave on one side. Maybe it even has little divots around the edges that would fit your cigarettes neatly. How likely would you be to perceive it as an ashtray? I almost certainly would not perceive it as such, at least not unless I really looked at it. What’s going on, then?

    I kept thinking it’s a matter of artifact intentionality. When I see an ashtray I’m able to identify the unity because I know that somebody, somewhere intended that chunk of glass to be an ashtray. In other words, I’m applying my theory of mind to the ashtray on the assumption that the ashtray is an artifact of somebody else’s intentionality.

    This seems to apply to a lot of natural things. Where one person sees a mountain with two peaks, another person might see two mountains (mindless things) in close proximity but very few people are confused as to whether two cars (relics of some engineers’ intention) scraping each other are secretly one thing. Where you might perceive a grain of sand (no mind to have a theory of) as the unity, I might perceive the beach to be the unity but we are certainly going to agree that a whale (which has a mind) is a unity.

    This is a convoluted way of asking if we might be better off attacking our AI eidos problem from the standpoint of a theory of mind. Perhaps mind and relics of mind-intentionality form one basis of the phenomenological system.


    • Great questions. Yes, the way we come across an object and whether or not it appears to us as something manmade for a specific purpose, or just as some piece of rock that happens to be shaped like something manmade, are definitely two very different things. Of course, this doesn’t really pose a problem since we have separate categories for the two sorts of thing. It’s not likely I’d find a piece of rock in a mine and suddenly get confused about whether or not I’d come across an ashtray, but if this same piece of rock were in someone’s home with a cigarette in it, I might have a different view of the object. That’s where context plays a key role, as well as function.

      The grain of sand qua unified object and the beach qua unified object both depend on which you're attending to at the moment. Either can be "seen" (eidetically), and so we have layers upon layers of unities.

      This analysis can get extreme. We find ourselves looking for things like atoms, and these are unified objects. Then we can choose to delve deeper. We keep going, finding turtles all the way down. Husserl says that the phenomenal world is infinite—the horizon is infinite—and yet we have this strange way of perceiving transcendence within immanence at each level, and the intentional structure of our interactions with phenomena plays a key role here. Imagine if we had to add up each grain of sand on the beach in order to arrive at “beach.”

      With AI, we’d definitely want to have careful distinctions between objects that look alike. Much of this can be discovered in their function, which may be discovered through the context, but there might have to be more involved.


      • Wow, it seems like no matter how much phenomenology tries, it’s still tied up in ontology and epistemology. I also wonder if it might not be considered a subject for theorists of mind to investigate.

        I’m confusing myself … 😛


        • Well, epistemology for sure. It never broke away from that…but as for ontology, it depends on which phenomenologist you’re dealing with. And philosophers who talk about phenomenology often make things even more confusing, especially after reading both Husserl and Heidegger and then jumbling up the terminology of both philosophers.

          It’s pretty easy to get confused. It’s a confusing subject. I’m probably not helping either. 🙂


  7. “Intelligent design algorithms sound like a really ripe region. Where are they used now? How long does this evolutionary cycle take? (I imagine it depends on which tasks are assigned, so probably a dumb question.)”

    (There are no dumb questions when you don’t know the answer.)

    They’re mainly used in research at this point. I’m not aware of any commercial use (although that doesn’t mean they haven’t made their way there). It’s a challenge to parameterize a problem such that genuinely different approaches are possible (at least without constructing different algorithms).

    The time it takes for an interesting (and useful) result depends, as you suspect, on the problem domain. Computers operate very quickly, so lots of iterations are possible, but to the extent human review is involved, that can slow down development.

    So there are a lot of variables. It can be hours or days, or in some cases even longer. If human review is done once a day, and it takes hundreds of attempts to evolve a useful result, we’re talking a good part of a year.

    What might be interesting is the phenomenology involved. Kind of back to our airplane conversation between a computer scientist and a philosopher.

    In one real-life case, the task was learning to play a simple video game. The phenomena are what’s happening on the screen; the actions (behaviors) are the possible control inputs of the game (called “gestures”); the goal is winning the game.

    Initially, there is no strategy, and the algorithm plays the game randomly. But some random play gives better results, so the process explores things in that direction.

    Ultimately it becomes a perfect player. By exploring possibilities it finds the place among them where a “perfect player” lives. Interestingly, no human programmer ever wrote a single line of game strategy code. No human ever tried to find the “rules of perfect play.”

    The nature of the stored rules it finds may even be hard to describe in any effective way. They can be copied as a whole to a new system, giving the new system the same ability, but the actual data might be so diffuse, holistic, and holographic that it would be impossible to identify any single rule.

    Ultimately what constitutes these “rules” is a set of related numbers, each number the strength (“weight”) of a single parameter. Think of them as coordinates into a high-dimensional space (one dimension for each parameter).

    All those coordinates mark a point in that space where the perfect player lives. How do you find a single rule among all those coordinates? All the other rules are also encoded in the same numbers.
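
    (If it helps to see the shape of that process, here’s a minimal sketch in Python, with everything about the real system replaced by toy stand-ins: the “player” is just a vector of weights, and we keep whatever random tweak scores better. No line of strategy code is ever written by hand.)

    import random

    def score(weights):
        # Stand-in for "play the game and report how well it went."
        # Toy version: closer to some unknown target vector = better.
        target = [0.3, -1.2, 0.7, 2.0]
        return -sum((w - t) ** 2 for w, t in zip(weights, target))

    weights = [random.uniform(-2, 2) for _ in range(4)]  # random initial "play"

    for _ in range(10_000):
        trial = [w + random.gauss(0, 0.05) for w in weights]  # tiny random tweak
        if score(trial) > score(weights):
            weights = trial  # explore in the direction that worked

    print(weights)  # the trained "player" is just these coordinates;
                    # no single number is a rule you can point to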

    As a concrete example, something I’ve been dabbling with lately is a simulation of a one-player blackjack game. I want to see for myself the effect of various card-playing and betting strategies.

    So I wrote a basic game framework that “knows” how to play blackjack. There’s a “dealer” and a “player,” each with a “card hand” (dealt from a “card deck”). The phenomena are the cards in each hand and the player’s bet; these are the “sense data” of the simulation. The framework can handle doubling down and splitting pairs.
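
    (If it helps to picture the framework, here’s a toy skeleton of the sort of thing involved; the names and details are mine, not the actual code.)

    import random

    RANKS = [2, 3, 4, 5, 6, 7, 8, 9, 10, 10, 10, 10, 11]  # J/Q/K count 10, ace 11

    def new_deck(n_decks=1):
        deck = RANKS * 4 * n_decks
        random.shuffle(deck)
        return deck

    def hand_value(hand):
        # Count aces as 11, downgrading them to 1 while the hand would bust.
        total, aces = sum(hand), hand.count(11)
        while total > 21 and aces:
            total -= 10
            aces -= 1
        return total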

    Dealer play is regimented per usual casino rules (dealers could easily be robots), but parameters set hard and soft stay limits. (Casinos always stay on a hard 17, but vary on soft 17; generally downtown Vegas dealers hit on soft 17.)

    Betting strategy is hard-coded into the player (using the 1-2-3-5 system), but can be varied if I want to try something else. (There is a flag parameter that reduces all bets to 1 if I want to examine raw win-lose rate.)
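
    (For concreteness, a sketch of the 1-2-3-5 progression as I understand it: step the bet up through 1, 2, 3, 5 units on consecutive wins, and reset to 1 unit on any loss or after completing the run. Treat the details as my assumption, not a spec of the actual player.)

    PROGRESSION = [1, 2, 3, 5]

    class ProgressionBettor:
        # Hypothetical bettor following a 1-2-3-5 positive progression.
        def __init__(self):
            self.step = 0

        def next_bet(self):
            return PROGRESSION[self.step]

        def record(self, won):
            if won and self.step < len(PROGRESSION) - 1:
                self.step += 1  # climb the sequence on a win
            else:
                self.step = 0   # reset on a loss or after a completed run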

    But the real meat is the several tables of numbers that control the player’s strategy. These use the phenomena of the dealer’s face card and the player’s hand to determine a response: hit, stay, double-down, split-pair. The framework “works” the player until he stays or busts.

    Then the dealer plays and pays off the bets.

    The whole point is that I can play many thousands of hands (in seconds) to see what happens. (The current strategies result in about an 80% “increase” (ha!) in the player’s bank. Play long enough and the bank always drops to zero. The house always wins. But short term gains are possible. The trick, as always, is stopping when you’re ahead.)

    Next I can vary some of the numbers to see if I get better results. (I’ve only just finished the simulation. Now that it’s generating results in final form, I need to go over them to ensure the framework works correctly.)

    In this case, one actually can pick the rules out of the data (because the rules were the starting point from which the tables were made). Here’s one of the tables (rules for soft hit or stay):

    SoftHitStay = [        # 1 = stay, 2 = hit, 3 = double-down
      # player count:  2  3  4  5  6  7  8  9
      [2,2,2,2,2,1,1,1],   # dealer shows ace
      [2,2,2,2,3,1,1,1],   # dealer shows 2
      [2,2,2,2,3,3,1,1],   # dealer shows 3
      [3,3,3,3,3,3,1,1],   # dealer shows 4
      [3,3,3,3,3,3,1,1],   # dealer shows 5
      [3,3,3,3,3,3,1,1],   # dealer shows 6
      [2,2,2,2,2,1,1,1],   # dealer shows 7
      [2,2,2,2,2,1,1,1],   # dealer shows 8
      [2,2,2,2,2,2,1,1],   # dealer shows 9
      [2,2,2,2,2,2,1,1],   # dealer shows 10
    ]

    One number encodes the rule: “If the dealer shows a 2, and we have an ace plus a count of 4, then hit.” (That’s the 2 in the dealer-2 row under the count-4 column.)

    Another encodes: “If the dealer shows a 6 and we have an ace plus a count of 5, then double-down.” (That’s the 3 in the dealer-6 row under the count-5 column.)

    The row is the dealer’s face card (ace through 10), the column is the player’s count (2 through 9), and the numbers mean: Stay (1), Hit (2), Double-Down (3). There is another similar table for hard hands (no ace) and a third table that encodes pair splits.
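
    (A sketch of how a framework might consult that table; the function and its name are hypothetical, but the indexing matches the layout above.)

    ACTIONS = {1: "stay", 2: "hit", 3: "double-down"}

    def soft_action(dealer_card, player_count):
        # dealer_card: 1 (ace) through 10; player_count: the non-ace count, 2-9
        row = dealer_card - 1    # rows run ace, 2, 3, ... 10
        col = player_count - 2   # columns run 2 through 9
        return ACTIONS[SoftHitStay[row][col]]

    # soft_action(2, 4) -> "hit"; soft_action(6, 5) -> "double-down"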

    Just thought you might like seeing a real-life example. This isn’t a genetic algorithm, per se, although I can vary its parameters manually. I offer it more as a simple example of a simple-minded simulation with phenomena and rules.


    • All this and I don’t really know the rules of blackjack. Well, I do in a very loose way…I’m better at telling others when to take the money and run, but they never listen. I guess forty bucks isn’t good enough…better than nothing!

      I’m missing a lot here without knowledge of programming, but I think I get the idea. So it sounds like the “perfect player” is a very complicated set of interdependent rules that our little human minds wouldn’t be able to grasp, even though we “created” it?

      And so if this is the case for a simple(r) blackjack situation, how do we expect to find relevant phenomenological rules for something as complicated as Claire?


      • “I’m missing a lot here without knowledge of programming, but I think I get the idea.”

        It does make it more of a challenge to talk about (which is why I’ve been including so much detail).

        “So it sounds like the “perfect player” is a very complicated set of interdependent rules that our little human minds wouldn’t be able to grasp, even though we “created” it?”

        Well… it depends. 🙂

        For blackjack, in this case the rules were the starting point. One can memorize them and use them at the blackjack table.

        (The summer I lived in Vegas, it was one of our sources of entertainment. A pot of $40 would last at least 40 hands and, on average, 80. Casinos serve free drinks while you play, and we’d pick a table near the lounge act and get music while we drank and played. Even if we lost the whole $40 (rare), it was just the cost of the evening.)

        That said, the rules were created in the first place by running lots and lots of blackjack games with different rules to see which gave the best result. This is less like genetic evolution and more like making different keys to find the one that best fits the lock. In theory a human could do it alone, but it would take years.

        (A genetic approach would start with completely random rules and play them against each other to determine winners.)
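
        (Schematically, the keys-and-lock approach looks like this; simulate_bank is a placeholder for actually playing out thousands of hands with a candidate table.)

        import random

        def random_table():
            # One candidate "key": a full 10x8 table of stay/hit/double codes.
            return [[random.choice([1, 2, 3]) for _ in range(8)] for _ in range(10)]

        def simulate_bank(table, hands=10_000):
            # Placeholder: the real framework would play `hands` rounds of
            # blackjack using `table` and return the player's final bank.
            return random.random()  # stub so the sketch runs

        # Try many keys, keep the one that best fits the lock.
        best = max((random_table() for _ in range(1_000)), key=simulate_bank)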

        Bottom line is that those tables fully encode the rules (however they were derived) and can be understood by humans.

        But not so, I think, with the perfect video game player. “Rules” in the sense used in blackjack (e.g., “always split aces”) are essentially mathematical and easy to codify. The “rules” that make a perfect video game player are much more subtle. Most real-life tasks would be in this domain.

        “And so if this is the case for a simple(r) blackjack situation, how do we expect to find relevant phenomenological rules for something as complicated as Claire?”

        Exactly. And I’m afraid the more we talk about this, the less application it seems to have. Phenomenology seems to be the end point of consciousness.

        If our phenomenology is conditioned by our consciousness, then it would seem to require that consciousness (or at least its machinery) in order to exist at all.


        • I misunderstood you. I thought the optimal blackjack player could not be understood by us, but simply did its thing on its own. Now the whole thing makes a lot more sense and seems more useful. 🙂

          “The “rules” that make a perfect video game player are much more subtle. Most real life tasks would be in this domain.”

          I wondered if the eidetic reduction could be a head start—we supply some rules—in combination with a set of specified goals (house cleaning)…and maybe we wouldn’t try to create the perfect housekeeper? Does that change things?


    • “Are we getting hung up on function?” Maybe. Function isn’t a necessary part of phenomenology. I don’t even remember it being discussed…that was something I thought was important because I saw it as a way for AI to recognize something novel (like an ashtray that doesn’t look like any other ashtray). But that’s all on me. 🙂 We can leave it aside.

      “What value do mere appearances have?” I’m not sure I get the question. What did you have in mind?


      • Appearances (which are apparent) versus function (which often isn’t). It seems to me that understanding function requires a vast amount of background knowledge, and even humans struggle to do it accurately.


  8. I mentioned above that a computer program recently beat a Go master. That’s a significant milestone, as Go is considered utterly intractable for lookup-based algorithms. The possible game space is much larger than for chess (which is large on its own). The way they did it is interesting and seems to apply to this discussion.

    Here’s a section I copied out of a good article about the AlphaGo algorithm. It ties in with our discussion about how an algorithm can learn a process for which there are no coded “rules,” just a network of many, many tuned parameters.

    Here’s the section [emphasis mine]:

    To begin, AlphaGo took 150,000 games played by good human players and used an artificial neural network to find patterns in those games. In particular, it learned to predict with high probability what move a human player would take in any given position. AlphaGo’s designers then improved the neural network by repeatedly playing it against earlier versions of itself, adjusting the network so it gradually improved its chance of winning.

    How does this neural network — known as the policy network — learn to predict good moves?

    Broadly speaking, a neural network is a very complicated mathematical model, with millions of parameters that can be adjusted to change the model’s behavior. When I say the network “learned,” what I mean is that the computer kept making tiny adjustments to the parameters in the model, trying to find a way to make corresponding tiny improvements in its play. In the first stage of learning, the network tried to increase the probability of making the same move as the human players. In the second stage, it tried to increase the probability of winning a game in self-play. This sounds like a crazy strategy — repeatedly making tiny tweaks to some enormously complicated function — but if you do this for long enough, with enough computing power, the network gets pretty good. And here’s the strange thing: It gets good for reasons no one really understands, since the improvements are a consequence of billions of tiny adjustments made automatically.

    After these two training stages, the policy network could play a decent game of Go, at the same level as a human amateur. But it was still a long way from professional quality. In a sense, it was a way of playing Go without searching through future lines of play and estimating the value of the resulting board positions. To improve beyond the amateur level, AlphaGo needed a way of estimating the value of those positions.

    To get over this hurdle, the developers’ core idea was for AlphaGo to play the policy network against itself, to get an estimate of how likely a given board position was to be a winning one. That probability of a win provided a rough valuation of the position. (In practice, AlphaGo used a slightly more complex variation of this idea.) Then, AlphaGo combined this approach to valuation with a search through many possible lines of play, biasing its search toward lines of play the policy network thought were likely. It then picked the move that forced the highest effective board valuation.

    So AlphaGo uses a huge parameter space, and at any given point in a game it uses the pattern of the game at that moment as a kind of coordinate into that space. Think of it as a large landscape — all possible board positions in all possible games are points in this landscape.

    What AlphaGo was trained to do is know the best direction to go from any given point. So, for any given position in any given game, AlphaGo knows the best way to play that position simply because it knows the “landscape.”

    What’s crucial is that AlphaGo doesn’t know why — there are no rules it can reference, just the landscape created via all those played games (not unlike a human player would have, but sharper and more distinct). All it knows is that “from here” it’s best to “go that away” (so to speak).
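
    (A toy version of that “landscape” idea, nothing like the real AlphaGo internals: the player has no rules, only a learned scoring function, and from any position it simply moves toward the best-scoring neighbor.)

    import random

    def learned_score(position):
        # Stand-in for the trained network: a black box mapping a
        # position to "how promising this looks." Toy: peak at origin.
        return -sum(x * x for x in position)

    def legal_moves(position):
        x, y = position
        return [(x + dx, y + dy)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

    def best_move(position):
        # "From here, it's best to go that away": no reasons, just landscape.
        return max(legal_moves(position), key=learned_score)

    pos = (random.randint(-5, 5), random.randint(-5, 5))
    for _ in range(10):
        pos = best_move(pos)
    print(pos)  # wanders to the best region without a single explicit rule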

    What trained AlphaGo is exposure to 150,000 well-played games — which seems like a phenomenological thing. So perhaps in this sense, phenomenology can help in unpacking human experience in ways that allow application to training AI.


    • The landscape analogy makes a lot of sense. The use of well-played games creates a rule, in a way: we make the decision not to create an AlphaGo that plays a really bad game. The amazing thing is not giving AlphaGo the rules of the game. That seems so counter-intuitive.

      I’m confused about something. In the third stage, does AlphaGo play against itself again? What is the “policy network”?

      “What trained AlphaGo is exposure to tens of thousands of well-played games — which seems like a phenomenological thing.”

      It does seem phenomenological, especially the “well-played” part. Experience is notoriously difficult to understand and easy to do. We don’t know what “well-played” is for us yet. This is where phenomenology could, maybe, come in.

      Thanks for the article and explanation. I’ll have something to ponder for a while…


      • The policy network is the neural net + what it’s learned about playing Go so far. That is, the “policies” for playing Go. (What I’m curious about is how they merge what the two “players” each learn.)


