44 thoughts on “AI Ethics”

  1. Having difficulty posting on the original site, so here is my comment:

    Excellent post. Thorough and well-written. Imagine walking past a scarecrow and seeing a young man thrashing the scarecrow with a whip. You see the anger and viciousness in the man as he dismembers the scarecrow limb by limb with increasing satisfaction. The scarecrow is neither visibly human nor conscious, but we still find the man’s behavior disturbing. While this is not an argument for granting rights to scarecrows, it does indicate that it would not be socially acceptable to see anything being treated badly if it looks and behaves even remotely human.


    • Excellent example! I kind of wish I could go back to the article and add it in (with your permission and credit, of course).

      I’ve noticed others have had problems with commenting on the LL…I’ll have to let Steven know.

      Thanks for reading it!


  2. An excellent essay! I love the point that this will not be a reasoned conclusion, but an intuitive one. It reminds me of an experiment someone performed a couple of years ago. They made small robots look like cute animals. They then brought in test subjects. The subjects knew that the cute robots were robots. They were allowed to play with them for a while. Then they were instructed to physically destroy the robots. Most of the subjects outright refused to do so. In other words, it doesn’t take much to trigger our intuition of another sentience.

    Speaking more broadly, what tempts us to think there might be another sentience there? I think that the more a machine behaves like a living thing, the more we’ll be tempted. Does it try to survive? Does it look for food (or energy)? In other words, is it a survival machine, like us? On the other hand, the less it acts like a survival machine, and the more it acts like a navigation, analytical, transportation, construction, or other type of machine, the less we’ll be inclined to view it as any kind of fellow being.

    Good to hear from you. I was just thinking about you this week, noticing that I hadn’t seen you online in a while. The spinal tap being normal is good news, isn’t it? Or does it still leave things uncertain, particularly regarding MS?


    • Thanks so much! I know so little about AI and I was curious to hear what you thought of our article. I was nervous about moving into this unknown territory, and your opinion matters a lot to me.

      I vaguely remember hearing about that experiment. I have such anthropomorphic tendencies I’d probably stuff one of those robots in my purse to save it from the evil scientists!

      I was thinking about you too. How’s the shoulder doing? Is it getting better?

      As far as I’m concerned, I don’t have MS. The neuro said there’s still a remote possibility that it’s in the early stages, but I haven’t actually spoken to him yet. (More waiting…I get to see him on May 8th, my birthday. Happy birthday to me!) However, I did speak to my PCP and his first reaction was, “Great! You don’t have MS.” I think the neurologist’s reaction was careful…MS technically can’t be ruled out until something else has been confirmed. But when two MRIs and a spinal tap come back normal, the possibility that I have MS is extremely remote.

      The PCP referred me to the Mayo Clinic. I still don’t have an appointment, and I’ve heard it can take months to get in. I’ve also heard that they might repeat all those tests. I’m kind of at a loss here; I really don’t know what to do or if I even want to go. I’m still considering my options. I’m particularly eager to hear what the neurologist thinks. If I have to get any more invasive tests for MS, I’m really gonna drill him to get him to convince me that it’s necessary.

      I keep asking my PCP if it could be psychological, and he keeps saying it’s possible, but my symptoms are “atypical.” He reminded me that my anti-seizure med is also for anxiety, so if that’s the problem, I’m already doing something about it. I’ll repeat this questioning with the neuro and see what he thinks.

      The news that I don’t have MS is great, but this electric shock thing is driving me nuts. I hope it’ll just go away on its own before I go to Mayo. It’s not showing signs of letting up, though. I have to admit, I haven’t been online because I’m a bit down in the dumps lately.

      On the other hand, I have Geordie to cheer me up. Just a few minutes ago we hunted down a fly together. I usually just point out where it is and watch him run around like crazy (he has an amazing ability to stand up on his hind legs on the edge of the bed and jump, then land back on his hind legs on the bed), but this time I got up off my lazy duff and killed it, and he ate it.


      • The truth is, hardly anyone really *knows* much about AI. Most of those in AI research are involved in trying to find solutions for problems like how to recognize voice commands in a noisy room, or still recognizing a cup even though someone has laid a hand on it, etc. But more generally, we don’t understand the brain well enough for anyone to be a real expert in artificial general intelligence. It means that anyone using rigorous logic (as you guys did) can add to the conversation.

        The shoulder is doing okay. I’ve been mostly resting it for the last couple of months. But it’s about time for me to try exercising it again (very lightly). It still hurts to reach behind in any way, and I fear that shot they gave me might have actually aggravated part of it. I’m hoping the exercises help with that. I’m thinking I’ll give the exercises about 4-6 weeks before I go back to the doctor again.

        Well, I’m glad to hear that MS is mostly ruled out. At least you can put fears of living with that aside. The electric shock thing sounds frustrating. I don’t know what to tell you on whether or not to go to the Mayo. I’m somewhat disillusioned with medicine right now. If you didn’t have out-of-pocket expenses, I’d say go, just in case they can find something to help. But with those expenses, I think I’d want some kind of assessment of how likely anything is to be found.

        Glad you found Geordie. How’s he doing? Fully recovered from the snake bite?


        • Sorry to hear that your shoulder’s still causing you problems. Hopefully the exercises will help. Are you thinking of surgery if they don’t?

          I guess we’re both in the same boat with medicine. I think I’ll probably end up going to Mayo for my husband’s sake so we can say we did everything we could, but even he acknowledges that it could be for nothing. If they make me do another spinal tap, I’ll probably have a panic attack out of fear of having a panic attack.

          Geordie’s doing great. He’s fully recovered! Back to his old self. I can’t believe how quickly he recovered. The only change I see in him is that he sticks closely by me ever since. He used to go into the other room when we watched TV because the noise bothers him, but now he sits under my chair. He still wants to chase lizards, but when I give him a little yank on the leash, he’s not as persistent in fighting me.

          Another weird thing…he caught a fly and was in the process of killing it. I didn’t know what he was messing with, so I said, “Geordie, stop, what’ve you got?” (in a scared tone), and he stared at this fly buzzing around, then looked back up at me, waiting for my approval. He actually waited for me to cross the room and bend over to inspect it! When I said, “Oh okay. You can have it,” he ate it. His fly-killing instinct is pretty strong, so I was surprised that he waited for my approval. Perhaps it was only because he knew the fly wasn’t gonna get away from him. Who knows.


          • They can probably give you something to reduce the probability of a panic attack. I was once offered Valium prior to going into the MRI machine, in case I had any history of claustrophobia, which I didn’t.

            That’s interesting with Geordie. I once came home to find my Geordi pawing a moth on the floor, observing its writhing, pawing it again, observing, and so on. Taking pity on the moth, I stomped it dead. Geordi looked up at me with a “you took my toy away” type of expression. I suppose she would have eaten it eventually.


            • Oh, good point about the valium. I’ll definitely ask for that if they make me do another spinal tap. I never got offered anything for the MRI. (I wouldn’t have needed it. I have no problems with MRIs. I actually fell asleep in there when I did the one without the injection, believe it or not.)

              Perhaps my Geordie was playing with the fly too. Who knows how long he’d been torturing it before I realized what was going on.

              I’m a little worried about scorpions. He’d certainly go after one, and that wouldn’t be good. I haven’t seen one in the house for a long time, but they generally come in during the summertime. And of course, Geordie will find it first.

              So with your shoulder, do you think it’s going to require surgery? Or is it hard to say at this point?


              • My experience is that dogs usually bark like mad when they find anything like a scorpion. I think if Geordie finds one, you’ll know about it.

                On the shoulder, I think it’s still too early to tell. To some extent, it might come down to what I’m willing to live with. I may have to decide how important it is to be able to do certain things with my shoulder. If surgery guaranteed success, it might be a different story, but I’ve seen too many people get other surgeries with mixed results. We’ll see what happens if the exercises don’t come through.


                • I hope he does bark. I don’t know what he’ll do. These scorpions that get in the house are bark scorpions, often small. They look like insects, so I hope he doesn’t feel like chasing one. They don’t move very much, so hopefully that will prevent him from becoming interested.

                  I know what you mean about the surgeries giving mixed results. I don’t blame you for being cautious going into that, especially if you think you might be able to live with it. Good luck with that! I hope it improves and you don’t have to suffer with it.


  3. Congratulations to you and Steve on this excellently conceived article, Tina. It would be interesting to hear your thoughts on whether or not the conferring of so-called ‘human rights’ is purely a matter of logic and reason when it comes to practical, rather than legal, applications.

    For example, many of us would find it difficult to, say, violently abuse an AI animal even when we consciously accepted that it was indeed an AI – there’s a natural reticence, an instinct if you will, that the human nervous system asserts and which runs in parallel to that conscious acceptance, thus attenuating our logically-granted freedoms (to abuse). Even if you were asked to take one of your most treasured precious stones, and then force it into one of Geordie’s poops (I’ll clean it up afterwards), I suspect you would feel uncomfortable doing so because you love the stone as it is, not for its sentience or lack of it.

    Anyway, good news on the spinal tap it would seem?


    • I’d stick a stone in some of Geordie’s poop if there were some reason for it. (I hope you mean after he’s crapped it out, right?) I wouldn’t like breaking up the poop like that, though…Geordie’s poop is pretty valuable. That’s why I follow him around picking it up all the time. It must be so infuriating to him after he’s so carefully placed it on top of a coyote’s poop, but I think he understands how precious it is to me.


      • What if the ‘reason’ were simply to test your response, Tina? Maybe you know that you would be indifferent? What do you think? Then again, it seems that Geordie’s poop may be more precious to you than your most treasured precious stone, so perhaps the test is unfit. I felt the same way about Nellie’s poop; she was my baby, and as we all know, parents do so love their babies’ poop.

        P.S. ‘Poop’ sounds very un-English to my ear, ‘crap’ less so, and ‘shit’ the most fitting term, although I was reluctant to use it here.


        • Yeah, Geordie’s poop trumps my most precious stone! Actually, Geordie doesn’t “poop,” he “makes a poopie.” Does that sound un-English? Well, he’s very special. The rest of us are stuck in the realm of “taking a shit.” (Although I tend to leave them, preferring to take Geordie’s poopies instead.)


    • Just realized I got caught up in being funny and forgot to answer your last question. Yes, good news. It’s almost certain that I don’t have MS. The bad news is we still don’t know what it is, and the Mayo Clinic sort of scares me. Hopefully there will be a better option…I’ll find out when I go to the neurologist’s for my next appointment. I’m also annoyed that he won’t just pick up the phone and tell me what’s going on.

      What’s it like where you are? Do doctors make you drive across town to tell you five minutes of information? My primary care doctor will call me, but never ever ever will my neurologist talk to me over the phone. I once called the office on a weekend to beg for some kind of pill to make it stop, and he happened to be on call. That was the one time I’ve talked to him on the phone. I felt like I was talking to God.

      I wish I lived in Cornwall and had Doc Martin living nearby. He may seem like a jerk, but I like that he runs around town saving lives.


      • Same problem here; they like to tease you with test results, and if you ask the receptionist over the ‘phone what the doctor’s notes say, they do so love to toy with you: “Well, I can’t really say, but the doctor must see you; how soon can you get here?” The God Syndrome certainly applies here in England too with those that we call ‘consultants’ and ‘registrars’ (one below the former), and many feel they must sustain a sort of mystique. I’m just reading Erving Goffman’s book ‘The Presentation of Self in Everyday Life’ and he covers this ground in there, though it’s all a bit dated.

        I used to live in North Cornwall; it’s a sensational place, or much of it is anyway. And if the doctor’s away, there’s always a local witch who’ll do for you: http://www.museumofwitchcraft.com/


        • Ahhhh! I’m so jealous. If I ever get a chance to go to Cornwall, I’ll have to give you a shout to get some insider’s tips. That’s top on my list of places to go.

          At this point, I’ll take the witch. I assume she’ll answer her phone?

          Should I just go ahead and make a cloutie? Not sure it’ll work in the desert…or if I want to wait that long for it to deteriorate. Or can I just spray it with a garden hose from time to time? Or does that not count? What if Geordie tears it apart, will that count as natural?


  4. I love Doc Martin too!! And for the same reasons as you.
    More good news without resolution… still I think it’s good news. And great to hear of Geordie’s doings.
    I agree with SelfAware’s remarks on intuition. The basis for our moral feelings is not reason but intuition born of the fact that we are social animals. We are biased toward treating agents who appear to be self-aware as anthropomorphic agents, hence worthy of ethical recognition. And I think that’s a good thing.


    • Doc Martin is one of my favorite shows. I want to go to Cornwall so badly…it looks so beautiful. (Although I hear Port Isaac is pretty touristy now. Still, there’s the rest of it…)

      I agree it’s a good thing too. Suppose we weren’t the way we are and we spent a lot of time looking each other up and down to try to figure out if there’s consciousness in there. Jeez that sounds tiring.

      On the other hand, I’ve been letting my husband watch all his Nazi movies and documentaries lately. I can’t watch historical footage of concentration camp victims. That’s a foregone conclusion for me, and I make him promise me we won’t watch that. However, a little snippet appeared in one, and as I was leaving the room, I thought of my article and wondered: where was that human bias then? How did that happen?

      I definitely used my intuition as the measuring stick in that article. But some people…who knows what’s going on there.


      • Right, sometimes culture and social intuition (“follow the leader”) are strong enough to overcome that basic recognition of other human beings. Especially if we perceive them as “not of our tribe.”
        I’m with you–I can’t watch that kind of thing. After a long time I watched “Schindler’s List” and was glad I did, but it was difficult. I’m going to Amsterdam this summer but have no plans to see Anne Frank’s house. It’s just too sad.
        One of my favorite things about Doc Martin is the intro where they show the scenes of Port Isaac. Cornwall’s definitely on my list, but I can hardly think of a place in the UK or Ireland that isn’t 🙂


        • Schindler’s List was okay for me. I can watch movies about it, but something about the historical footage overtakes me. And I never cry at movies. Actually, there was this one time that sticks in my mind. I saw some horrendous footage and I had this long existential despair kind of cry. I don’t regret that, but now that I’ve seen it, I really don’t want to relive that over and over. So now I ask my husband not to pick anything with historical footage of concentration camps. (We watch a TON of WWII stuff. He’s really into it. I don’t know how he can stomach it, being Jewish.)

          You’re going to Amsterdam? Fun! Have you been there before?

          I did see Anne Frank’s house in Amsterdam. It was of course very touristy, so that took some of the sadness out of it. Actually, I didn’t leave with a good feeling, and that was mostly due to some tourists being obnoxious (probably stoned). I don’t know if I’d recommend going. I think it’s worth it in one sense—I mean, how often do you get to see something like that? But in another sense, I found it kind of disappointing. I was prepared to leave sad, but instead I was both mad and sad. Mostly mad. Still, don’t let that deter you if you change your mind. It’s an important place.

          Yeah, I love that beginning of Doc Martin. Also, I love seeing the views out of ordinary people’s houses. It cracks me up. Here’s some guy who’s supposed to be poor or working class with this million dollar view out his window. I saw some places in Gaspe, Quebec like that…just little run down houses right next to the water. Nobody seemed to care about that view. A lot of them had lived there all their lives, so the water was just there, nothing special for them. It’s one of those places I would move to in a heartbeat if it weren’t for the brutal winters.

          Totally agree about visiting all those places. I’d love to go to Scotland too.

          Have fun in Amsterdam! I love the bikes all over the place. Another fond memory—there’s a chess board painted on the ground somewhere (I can’t remember where) and I saw two little kids playing chess with these giant chess pieces.

          Watch out for the brownies. Just saying.


          • Your husband is a history buff! It’s a bug, like being a fan. But a good one to have.
            This will be our first trip to Amsterdam. I don’t expect to see much of my hubby since he will be cycling around the Zuider Zee. I will be attending a miniature book convention and meeting some Dutch friends who are Ciarán Hinds fans 🙂
            I am always attracted to a water view, and envy those who can enjoy one every day. Especially if it is a lake. Those coastal towns have to endure some severe storms. And I always wonder about Port Isaac–don’t they have any cloudy days? LOL.


            • I’m so excited for you! Your trip sounds like so much fun. Please be sure to take plenty of photos for us!

              So funny about the Port Isaac thing…I had exactly the same thought. I wondered how much money was spent in all those grey times between filming. Think of how tedious it must be to wait for the sun to come out!

              I have more of an ocean attraction…especially if it’s cliffs and poetic moodiness and all that stuff. Although, truth be told, I’m really a sun lover too, so I have a bit of internal conflict here.


  5. Hello Tina,

    Glad to see that you posted an article!

    This article is an interesting exploration of a very fascinating and important topic.

    I think the notion that which beings or things we grant rights to might be more a matter of what we are prepared to live with, rather than of whether a particular set of beings meets an objective conception of sentience, is quite powerful. This case is made abundantly clear when we consider the rights of severely mentally disabled human beings who seem to lack self-awareness. On a robust conception of sentience that requires self-consciousness, and possibly the capacity for autonomy, these people may not meet the mark, and yet the idea of saying that they have no rights, or only the rights of a pig or dog, seems absurd. In this particular case, while we may suspect that these people are not sentient, the idea of stripping other human beings of the same rights as others is so horrifying that we cannot live with the choice of doing so. There is a conflict here between granting all humans equal rights based on their belonging to our biological species, and granting these rights based on sentience, or a capacity for autonomy. It seems plausible to think that this conflict is an instance of a more general conflict between specifically religious conceptions of humanity’s dignity, for example the idea that we are made in the image of God, versus more secular conceptions based on sentience, sapience, or rationality.


    • Thanks so much for reading the article!

      Great points. I agree about that conflict between religious conceptions and secular conceptions. I’ve noticed in AI discussions, people tend to focus on the capabilities of AI such as consciousness, self-awareness, etc., and so I wanted to bring the discussion out of that sphere to a certain extent, while still acknowledging its importance. The religious conception of what ought to be treated ethically is sometimes too narrow (what? My Geordie Bear’s not going to heaven?) and also relies on a supposed knowledge of the soul. (Personally, I have no problem with soul talk, but I wouldn’t put it in a philosophical paper as an assumption, especially not in this context.) The secular conceptions obviously don’t want to get into talking about souls, but when it comes to AI, there’s still a lot of talk of AI “having” sentience, etc. Turing’s test touches on this difficulty, which is really the problem of other minds. His test does this wonderful thing—it bypasses the problem, while at the same time exposing it. In exposing it in the context of AI, he opened up this wonderful philosophical conversation in a new territory.

      I still feel like our article lacks some nuance in the biological/artificial distinction, but I hoped that the refocus on what counts for us—what knowledge we actually need to possess in order to bestow rights—could be brought to the fore. I think it does come down to a strange mix of intuition, empirical observation (not in the strict scientific sense), and assumptions.

      Actually, I can see Heidegger coming into the mix here in his treatment of the problem of other minds. I didn’t want to go that far. Maybe that will be for another day…come to think of it, I bet it’s already been done. I was surprised to find quite a few articles on AI (though not necessarily on ethics) which looked at various issues through Husserl and Kant.

      Your point about mentally disabled humans! Ugh, I wish I’d included that example.


  6. I’m a little late to the party. I’ve been putting this off until I had time to sit down and really get into it. You’re probably gonna wanna go get a cup of coffee…

    “Commander Data, an extremely advanced cyborg,…”

    One of the things that was important about Cmdr Data was that he was unique (or so they thought) and could not be reproduced. Since he is a machine, though, if large armies of Data machines were possible, that would change the equation significantly.

    It occurs to me that one of the reasons we revere humans so is their accumulated experience (or in the case of new humans, their potential to accumulate experience). There is also the “creator” and “genius” factors. Humans, uniquely in the animal kingdom, create new things based on their imagination. Some humans, the geniuses, make extraordinary strides or contributions in human progress.

    There are, as far as we know, no animal Newtons or Einsteins or Noethers. Or Mozarts or Da Vincis.

    It may be that much of what we revere is that potential for progress and art. The rest of us are just the chaff required to produce the occasional wheat kernel. The millions of sperm cells that don’t fertilize the egg.

    “1. AI that looks and behaves exactly like a dog vs. a natural dog. Which one most deserves to be treated ethically?”

    Ethics covers a lot of territory (and culture — some cultures consider dogs food). My initial reaction is that dogs deserve some level of ethical treatment as higher living beings (not as high as us, but much higher than, say, insects). An AI “that looks and behaves exactly like a dog” at the very least could be valuable property, so at a minimum some sort of property ethics would apply.

    Take this to another level. What if such AI dogs reached a level of cheap manufacture and could be easily obtained? There are currently toys that — very crudely — look and act like animals, but they are, property-wise, of little value.

    Of course, on a planet with over seven billion people, one might argue that humans are mass-produced and of little value. But our saving grace is those contributors.

    A key question may be: Will AI be a contributor or merely a tool? Cmdr Data was clearly a contributor!

    “2. AI that behaves exactly like us vs. a natural dog. Which one most deserves to be treated ethically?”

    Well, I love nearly all dogs more than nearly all people, so… XD

    I’m not sure this changes the equation much for me. It depends on the value of the AI. Same answer for #3. (Incidentally, I reject the “most deserves” aspect. I don’t think the cases are sufficiently comparable, and I don’t see it as a zero-sum situation.)

    “4. AI that looks and behaves like us vs. us. Which one most deserves to be treated ethically?”

    Depends on the AI. If we’re talking about 20,000 units of identical humanoid AI (at low, low prices with overnight delivery), then I’m not sure I’m going to worry too much about their “humanity” any more than I worry about my laptop.

    And it depends on the extent to which we deem them capable — or not — of original contribution, of original thinking (for some value of “thinking”). Will they be partners (which dogs are) or tools (which laptops are)?

    “…the answer would depend on how the AI was created, what it’s made of.”

    To my mind, what’s more important is what it’s capable of. Partners — contributors — are important and valuable. Hammers and laptops much less so.

    “If one argues that AI composition is irrelevant then we risk the paradox of treating AI dogs as natural dogs.”

    Because?

    What makes a “natural” being more ethically important than a “mechanical” one? This almost invokes the idea of a soul (and there’s no way I believe Dennett means that).

    What really is the significant difference (minus a soul) between a being made of carbon-oxygen-hydrogen and some other elements versus a being made of silicon-iron-copper and some other elements?

    Humans have a long history of demonstrating their (at least occasional) value. Dogs, one might argue, also have a history of proving their value (and, frankly, unit for unit, have probably done a better job of it).

    AI doesn’t even exist, yet, and we don’t know to what extent it ever really will.

    “What underpins this principle of likeness is not something reasoned; reasoning comes after the fact and presents itself as if it were there all along.”

    Yes, but reasoning can be demonstrated to be faulty or correct, and enlightened people not afraid to change their mind can make course corrections. This is the foundation of science, which proceeds despite scientists.

    “In the case of dogs and other creatures, we don’t reason about their abilities and then assume consciousness.”

    The people who study them do! 😛

    But the real point here is the general view, and people already treat their cars and stuffed animals and animated toys (let alone pets) as having some parity. That’s obviously going to be even more so as machines become ever more life-like. (Check out Japanese sex robots some time.)

    “Humans are no stranger to fear, intolerance, oppression and even slavery.”

    That’s a really important point. Looking human-like can actually act against machines on at least a couple of levels.

    There is the “uncanny valley” syndrome that can trigger highly negative reactions. Animators found that making their human or animal creations too lifelike (but not lifelike enough) turned many viewers off.

    There is also the human ability to use perceived differences to treat other groups as less than human. Slavery can be economically attractive, and it still exists in various forms (particularly sexual) even today.

    And there’s always the Terminator or Matrix nightmare… that machines become so powerful and independent they decide to enslave us! o_O


    • Hey, sorry I won’t have time to respond to all of this, but a quick response…

      “What makes a “natural” being more ethically important than a “mechanical” one? This almost invokes the idea of a soul”

      The point I was making there was that we are more likely to believe that a natural creature has the ability to feel and experience things like pain and pleasure. With a created creature, we are more likely to doubt that the inner experience is even there. I think we’d require and expect more of AI than we do with natural creatures before we’d be willing to treat them ethically. I don’t mean to say that we need to know whether or not we or they have souls, but just whether our inner experiences are likely to be the same. Logically, there’s no access to that inner experience, so the two—artificial and natural—are equal. But that bias towards natural creatures is real and I think it makes a certain amount of sense. So the significant difference is that behavior is not all that matters. We have assumptions about what causes that behavior (I grant that these are mere assumptions) and those can’t be easily dismissed.

      The “uncanny valley” syndrome is interesting. I think it speaks to that bias toward things natural. It’s as if we’re okay with anthropomorphizing when we know we’re doing it, when it’s explicit. But when presented with something very similar but not similar enough, we’re rubbing up against that natural/artificial divide too hard. We want our robots to look like robots, then we can pretend they’re humans…with the awareness that they’re not. But once they look too much like us, it just creeps us out. It would have been an interesting point to explore in our paper.


      • “I think we’d require and expect more of AI than we do with natural creatures before we’d be willing to treat them ethically.”

        A good point; I think you’re probably right. AI will have to prove itself.

        “So the significant difference is that behavior is not all that matters.”

        I’d go along with that in so far as the biases people have. I think that’s something we need to compensate for when seriously discussing the ethics of AI. Specifically we will need to focus on behavior and discount biological-mechanical distinctions, given that AI proves itself either through analysis or practice.

        We keep coming to the same roadblock, don’t we. We don’t know if mechanical minds are even possible, and we don’t yet have a theory of mind. (If I — and many others — are right that mechanical minds are a non-starter, then the whole discussion is moot!)

        “But once they look too much like us, it just creeps us out.”

        We often seem to jealously guard the “human territory.” I wonder if the Uncanny Valley syndrome will decrease as humans get more used to simulated humans. The phenomenon was noted some time ago when animation wasn’t perfect, and I wonder if recent animations have gotten real people more used to seeing simulated people.

        Or maybe animation has gotten so good we’re past the Valley. (All I know is that I’m waiting for the first new movie starring Humphrey Bogart and Lauren Bacall! Let’s throw in David Niven and Peter Sellers, too, just for fun.)


        • Yeah without that theory of mind, it’s gonna be hard to determine such matters. I’m not sure we’ll ever agree about these hard philosophical questions, which is why I wanted to just bypass those and talk about what we already do, how we already react and intuit.

          I wonder too about that Uncanny Valley syndrome. Who knows what we’ll end up with! For my part, I’d feel better with a dog-like robot with human intelligence (and doggie sweetness) rather than a human-like robot with all our human vices. It’d be nice if the dog-robot had a cute little pot belly. 🙂


          • Heh! Robot pets would be a lot less susceptible to illness (and snakebite)… but might that make us care less for them? People currently get attached to mechanical objects, but most probably don’t fret as much when their objects are in the repair shop (other than, perhaps, monetarily) as they do when their pets are in the hospital. I know I spent many anxious moments regarding Sam, but had few concerns about my car in the shop.

            It’s interesting how some authors have anticipated even these conversations. I was reminded here somehow of Isaac Asimov’s The Caves of Steel (1954) and The Naked Sun (1957). Both feature R. Daneel Olivaw — the “R” stands for Robot. He’s a fully humanoid robot who can pass for human. He comes to Earth seeking Elijah Baley, a detective he hopes will help him solve a murder case.

            The context is a distant future where Spacers have left Earth and developed their own societies. Vast amounts of available space on various worlds allow them to live at extreme distances from each other. High technology — including fully lifelike humanoid robots — provide for their needs, and they’ve developed major (disabling) phobias regarding contact with other humans.

            Earthlings have retreated into vast underground cities (“caves of steel”), live in almost hive-like close conditions, and have major (disabling) phobias about open spaces. And humanoid robots. By law and urgent custom, such robots are prohibited.

            Which makes Daneel’s mission dangerous and which makes Elijah terrified of leaving Earth to help him.

            They’re centerpieces in Asimov’s famous “Robot” short stories and novels, from whence comes his “Three Laws of Robotics.”

            The tie-in here is that Data’s “positronic brain” is a direct reference to Asimov’s robots — who had “positronic” brains.


              • I’ve said this before, I think, but one really can’t call oneself a true science fiction fan without reading Asimov, Clarke, and Heinlein. (And a few individual works, such as Dune, so you’ve at least gotten a start. 🙂 )

                Not to say there aren’t other giants in the field, but the Holy Triumvirate had a lot to do with defining SF. It’s almost surprising how many bases they touched first.

                It’s like how one can’t be a true mystery fan unless one has read Hammett and Chandler and Christie and A.C. Doyle. (Probably also Dorothy Sayers and Rex Stout, as well.)

                But you totally MUST read Asimov! XD

