5 thoughts on “HOMO IN MACHINA: ETHICS OF THE AI SOUL – A Response”

  1. I was looking through the original collaboration and I think you guys did a good job defining how the empathy side of ethics would apply to AI. Human beings are incredibly empathetic. We can and do empathize with pet rocks and the like. What stimulates us to empathize like this is of course a very interesting question, because we are also really good at defeating empathy – ask anyone who has ever participated in a war or witnessed capital punishment.

    This brings me to wondering what the point of ethics is in the first place. My answer is survival. Societies or individuals who behave in ethical ways are more likely to survive than societies or individuals who don’t. This, it seems to me, makes an objective ethics actually possible. Good is that which allows people and institutions to endure; bad is that which doesn’t.

    As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.


    • “As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.”

      I guess it sort of depends on what the AI is. I didn’t even come close to that question (which is way beyond my scope). I wonder if that non-empathy side that currently exists towards natural humans and creatures may have some role in the Uncanny Valley that Wyrd brought up:

      http://en.wikipedia.org/wiki/Uncanny_valley

      We’re okay with robots when they’re clearly robots. But when they become too similar to us, yet not similar enough, we are repulsed. Could these same impulses be behind racism and other forms of discrimination? Who knows?

