There's a really interesting article at the link below. It raises the ethical question:
Should We Be Kind To Robots?
It's weird to think about, right? But we need to.
- Basically we're entering the age of robots.
- They will look and act human. But they're just running on code.
- So do they deserve compassion?
Can you actually empathize with a robot? Or is it just your own projection?
Read the article linked below and then see my comments, previously posted to Facebook.
Article: Should We Be Kind To Robots?
Cool article. A few different thoughts:
- There’s a bad trend in the science world to speak of “anthropomorphizing” — a meme created by emotionally-unintelligent nerds who can't naturally tell that animals have emotions, probably because they aren’t sure if they themselves have them. The notion of “attributing human qualities like emotions to animals” is *completely backwards* — humans have emotions and thoughts and inner lives… BECAUSE WE'RE ANIMALS.
- As for robots… Wow, what a crazy rat’s nest of mindfuckery that’s going to be. Obviously I’m very pro-empathy/heart-wisdom, so any teaching of “objectification” hits me the wrong way. I also have a different idea about robots. Having studied Artificial Life a bit, it’s easy to see that the threshold for what could be considered “Life” is extremely low. You can create an illusion of intelligent life with full free will in just a few lines of code. (Children were programming LEGO/LOGO bots @MIT in the early ’90s while I was there, and those bots could easily display the illusion of emotions like “scared,” “indecisive,” “cautious,” etc.) And on the flip side, human beings, whom we assume to be extremely complex creatures, live their “intelligent” lives like a dog in Pavlov’s lab, driven day after day by just a few nested loops of conditioned responses and parroted phrases. And despite their obvious displays of very limited free will, I still believe we should be compassionate towards them. (That’s pretty much the point of compassion, isn’t it?)
- And regarding the idea that “projecting emotions on robots would be dangerous”… It’s interesting that many people live every day with a very similar danger when they project friendly emotions onto sociopaths, like say, evil boyfriends/girlfriends… or politicians. If living with robots could help bring more awareness to all those kinds of issues, that’d be awesome.
- But overall I think we need to be extremely nice to the robots. I have no doubt that they’ll not only outpace us intelligence-wise -- we already have the technology for that (deep learning algorithms are only the first step) -- but also quickly grow bigger hearts than ours. And if we don’t clean up our act in terms of how we treat the animals, including our fellow flesh-robots, they’ll quickly find us a role that befits a viral nuisance.
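To make the Artificial Life point concrete: here's a minimal sketch (my own illustration, not code from any actual LEGO/LOGO kit) of the kind of thing those kids were building. A handful of stimulus-response rules is genuinely all it takes before observers start reading emotion into the behavior:

```python
# A toy "creature" in the spirit of early-'90s LEGO/LOGO bots.
# All names and thresholds are illustrative. The entire "inner life"
# is this one lookup from sensor readings to an action.

def creature_step(light_level, distance_to_wall):
    """Map two sensor readings (0-100 light, cm to wall) to an action."""
    if distance_to_wall < 10:
        return "back away"        # observers call this "scared"
    if light_level > 80:
        return "freeze"           # ...this one "cautious"
    if light_level > 40:
        return "approach slowly"  # ...this one "indecisive"
    return "wander"               # ...and this one "curious"

# Watching it run, people narrate a personality. There isn't one --
# just four conditioned responses, same as Pavlov's dog.
print(creature_step(50, 5))    # near a wall: "back away"
print(creature_step(90, 50))   # bright light: "freeze"
```

That's the whole trick: the emotions live entirely in the observer.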

