Three Laws of Robotics
- A robot may not harm a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
-- Isaac Asimov
Three Laws of Robotics, aha. Honestly, it's very hard for me to picture the future relationship between robots and human beings, but so many Hollywood movies have given me virtual experiments of that future — a future where humans and robots fight each other instead of living together in peace, such as "The Matrix", "I, Robot" and "Terminator 1, 2 and 3", etc.
I wonder: is there any chance for robots to have their own intelligence? Let's just pretend they do. In a robot society, would they use any kind of currency? Even more, would they have murder, kidnapping or anything like that? And if they did, what would be the difference from human society?
Oh my God, I must be mad now! So leave me alone, LOL.
:c)