5 thoughts on “HOMO IN MACHINA: ETHICS OF THE AI SOUL – A Response”
I was looking through the original collaboration and I think you guys did a good job defining how the empathy side of ethics would apply to AI. Human beings are incredibly empathetic. We can and do empathize with pet rocks and the like. What stimulates us to empathize like this is of course a very interesting question, because we are also really good at defeating empathy – ask anyone who has ever participated in a war or witnessed an execution.
This brings me to wondering what the point of ethics is in the first place. My answer is survival. Societies or individuals who behave in ethical ways are more likely to survive than societies or individuals who don’t. This, it seems to me, makes an objective ethics actually possible. Good is that which allows people and institutions to endure; bad is that which doesn’t.
As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.
“As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.”
I guess it sort of depends on what the AI is. I didn’t even come close to that question (which is way beyond my scope). I wonder if that non-empathy side that currently exists toward natural humans and creatures may have some role in the Uncanny Valley that Wyrd brought up:
http://en.wikipedia.org/wiki/Uncanny_valley
We’re okay with robots when they’re clearly robots. But when they become too similar to us, but not similar enough, we are repulsed. Could these same impulses be behind racism and other forms of discrimination? Who knows?
Hi Tina, it looks like your original article http://theleatherlibraryblog.com/homo-in-machina-ethics-of-the-ai-soul/ is gone. What happened to http://theleatherlibraryblog.com/? Do you still have that article in some form?
I’m not sure what happened there. Maybe Steven just got tired of blogging? Well, here’s the article on the IEET site:
https://ieet.org/index.php/IEET2/more/20150427umbrello
Thanks for pointing that out.
Thank you.