HOMO IN MACHINA: ETHICS OF THE AI SOUL – A Response

Many thanks to Nannus for giving us this thoughtful reply. Please check out his response!

Creativistic Philosophy

[Image: Vintage Franz Zajizek astronomical clock machinery, Vienna]

This article began as a comment on http://theleatherlibraryblog.com/homo-in-machina-ethics-of-the-ai-soul/ (by Steven Umbrello and Tina Forsee). The comment grew so long that it could not be posted there, so I decided to publish it as an article on my own blog and leave a comment with a link instead. It should therefore be read in the context of the article it responds to.

Why is there ethics at all? Why is there nothing wrong with taking a hammer and beating a stone, but everything wrong with taking a hammer and beating a human being? The reason is simple: the human being would suffer; the stone would not. The subjective experience of the human being is what makes ethics necessary.

We assume that animals, at least the more sophisticated ones, also have such a subjective side, so we would extend some of the ethical principles to them as well. There…



2 thoughts on “HOMO IN MACHINA: ETHICS OF THE AI SOUL – A Response”

  1. I was looking through the original collaboration and I think you guys did a good job defining how the empathy side of ethics would apply to AI. Human beings are incredibly empathetic. We can and do empathize with pet rocks and the like. What stimulates us to empathize like this is of course a very interesting question, because we are also really good at defeating empathy – ask anyone who has ever participated in a war or witnessed an execution.

    This brings me to wondering what the point of ethics is in the first place. My answer is survival. Societies or individuals who behave in ethical ways are more likely to survive than societies or individuals who don’t. This, it seems to me, makes an objective ethics actually possible. Good is that which allows people and institutions to endure; bad is that which doesn’t.

    As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.


    • “As such, I wonder if the non-empathy side of ethics isn’t going to play at least as important a role in AI ethics as discussions of rights or how cute the robot is.”

      I guess it sort of depends on what the AI is. I didn’t even come close to that question (which is way beyond my scope). I wonder if that non-empathy side, which currently exists towards natural humans and creatures, may play some role in the Uncanny Valley that Wyrd brought up:

      http://en.wikipedia.org/wiki/Uncanny_valley

      We’re okay with robots when they’re clearly robots. But when they become very similar to us, yet not quite similar enough, we are repulsed. Could these same impulses be behind racism and other forms of discrimination? Who knows?

