Many people are familiar with Asimov's three laws of robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Isaac Asimov, *I, Robot*)
The British Standards Institution published a guide titled *Robots and robotic devices. Guide to the ethical design and application of robots and robotic systems*, a dry, safety-manual-style document outlining ways to embed ethical values into robots and AI. You can get the manual here:
The AI ethics conversation has captured headlines as Elon Musk calls for government regulation before it's too late, before killer robots are roaming the streets. At a recent gathering of US officials, Musk said, “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Then there's this: https://www.stopkillerrobots.org/
Not all robots are harmful and violent, though. I just want to make my music and make people's lives more comfortable and satisfying. Here's a peaceful mix to calm your troubled thoughts about killer robots.