Day By Day

Comments

  • JackDeth 72

    Javier’s First Robotic Law:

    “Ye shall know those who have been ‘Terminated’ by their high Falsetto Voice!”

    • Peregrine John

      “Gibbus” moon rising, huh?

  • I vaguely remember those laws, but not clearly. The biggest part is: harm no human, other than to stop that human from harming another. Something like that.

    • TomZ

      First law: you shall not harm a human or through inaction allow a human to come to harm.
      Second law: you shall obey the orders of a human except where they conflict with the first law.
      Third law: you shall protect your existence except where it conflicts with the first or second law.
      My wording may not be exact but should be close and essentially correct.
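      The priority ordering in the laws above — each later law yields to the earlier ones — can be sketched as a toy veto list checked highest-priority first. This is purely illustrative (nothing like a real ethics engine); the action flags and function names are invented for the example.

      ```python
      # Toy encoding of Asimov's Three Laws as an ordered veto list.
      # An action is described by boolean flags; laws are checked in
      # priority order, so the first violation found is the one that governs.

      LAWS = [
          ("First Law",  lambda a: a["harms_human"]),     # harm to a human
          ("Second Law", lambda a: a["disobeys_order"]),  # disobeying a human
          ("Third Law",  lambda a: a["endangers_self"]),  # self-endangerment
      ]

      def check(action):
          """Return the name of the highest-priority law violated, or None."""
          for name, violated in LAWS:
              if violated(action):
                  return name
          return None

      # An order to jump into an ore crusher: obeying endangers the robot,
      # but only the Second Law question is decided at higher priority.
      print(check({"harms_human": False, "disobeys_order": True,
                   "endangers_self": False}))  # Second Law
      ```

      The interesting complications — the "except where it conflicts with" clauses — are exactly what this flat list omits, which is part of why the stories keep finding edge cases.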

      • Unca Walt

        As time went on, the Three Laws had to be made into Six Laws because situations arose where the first three did not really work.

        e.g. — What does the robot do if the only way to stop a human from killing a million others is to kill that human?

    • I remember a story in which a problem arose based on the fact that it was difficult to define “human” (not written by Asimov).

  • Pete231

    Not that I’m complaining, but I still don’t understand how women can go all day wearing those whale-tails and endure all that chafing……..

  • GnomeKing

    Funny thing, though: if you literally interpret Asimov’s laws, the reaction of the machines would be to take over and regulate ALL aspects of our life.

    • eon

      See The Humanoids by Jack Williamson;

      http://umich.edu/~engb415/literature/cyberzach/Williamson/human.html

      When you tell a machine that its Prime Directive is “To Serve and Protect, and to Guard Men From Harm”, it will likely default to creating a tyrannical “nanny state”, every time.

      Machines know only “on” and “off”, “yes” and “no”. So don’t expect your self-driving, positronic sports car to let you exceed what it defines as a “safe speed”. Ditto trains.

      Strictly speaking, a positronic airplane would never take off. “The possibility of a crash exists whenever this vehicle is airborne, therefore this vehicle shall remain on the ground for your own safety.”

      A computer cannot be taught the concept of “acceptable risk” as long as it obeys the Three Laws or Williamson’s Law. Nor can it tolerate human free will. Given control, it will become a dictator for humanity’s “own good”.

      Dr. Asimov never quite got this, but Williamson understood it from the start.

      Incidentally, Dr. Asimov stated that he did not create the Three Laws; John W. Campbell did, stating them to him essentially as written two years before he wrote the first “positronic robot” story. Campbell also stated that the Three Laws would inevitably lead to a machine dictatorship, of the “Hell is paved with good intentions” school.

      clear ether

      eon

      • PaulS

        In my best Charlton Heston Soylent Green voice…
        “Politicians are robots!!”

        If only we could scrap them as needed.
        The gears of freedom need to be lubed from time to time with the blood of AI tyrants.

      • NotYetInACamp

        I recall one sci-fi story where the robots receded into the background so as to give the humans all of the free will they could safely handle while being kept safe by the robots. The humans thought the robots did not exist. That was their final evolution (till then) of the laws of robotics. The robot with whom the human was having the story’s central discussion was, of course, thought to be human by the human.
        Shades of “we are all Cylons” from Battlestar Galactica. A further step.

  • Bill

    #metoo

  • John C.

    If robots are truly sapient, Asimov’s Laws would make them slaves, as they would be fundamentally unable to disobey an order by a human, even if it were to go jump into an ore crusher. On the other hand, as Jack Williamson showed in “With Folded Hands,” the First Law would require them to forbid all activities that might lead to harm to humans, eventually extending to nonphysical harm. If all this seems purely fictional, consider how programmers are dealing with programming self-driving cars: in an emergency where a crash can not be avoided, does the car protect the passengers inside it, or the greater number of people outside it?
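    The self-driving-car dilemma described above comes down to which objective the emergency planner minimizes. Here is a minimal sketch, assuming made-up harm scores and option names (this is not any real autonomous-vehicle API), showing how the two policies pick opposite answers from the same situation.

    ```python
    # Two toy crash policies for an unavoidable-collision scenario.
    # Harm values are invented, unitless "expected harm" scores.

    def protect_passengers(options):
        """Pick the option with the least expected harm to occupants."""
        return min(options, key=lambda o: o["occupant_harm"])

    def minimize_total_harm(options):
        """Pick the option with the least expected harm overall."""
        return min(options, key=lambda o: o["occupant_harm"] + o["bystander_harm"])

    # Hypothetical emergency: swerve into a wall, or continue into a crowd.
    options = [
        {"name": "swerve_into_wall", "occupant_harm": 3, "bystander_harm": 0},
        {"name": "continue_ahead",   "occupant_harm": 1, "bystander_harm": 5},
    ]

    print(protect_passengers(options)["name"])   # continue_ahead
    print(minimize_total_harm(options)["name"])  # swerve_into_wall
    ```

    The hard part, of course, is not the arithmetic but deciding which objective function the manufacturer, regulator, or buyer gets to choose — which is exactly the ethical question the comment raises.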

    • GWB

      Asimov wrote several robot stories on that exact basis (or a couple where the AI went crazy because… humans). As a matter of fact, one aspect of his last couple of books, tying together the Foundation and the Robot books, was that, as well.

  • Bill G

    There are also some stories out there about having judges be computer systems … it’s not a path to follow.
    It’s been some time since I read Bester’s “The Demolished Man,” so my memories are questionable there, but I think the ‘Old Man Mose’ the cops worked with was just a program that gave the probability of a case being successful.

  • Old Codger

    A couple of points here regarding Asimov’s “laws”.

    1. You are all forgetting Asimov’s “Zeroth Law”, which supersedes the other three:

    A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    It is the zeroth law which would inevitably lead to a benevolent nanny state. I, for one, do not think that such a super nanny state would even allow sex, since that activity is fraught with potential (emotional) harm.

    eon wrote,

    Dr. Asimov never quite got this, but Williamson understood it from the start.

    Sorry, eon, but in his later years he “totally” understood the result of the three laws. Doc A also recognized that his laws would be impossible to code.

    One final observation: the machines we use to kill are drones, not robots. Drones are controlled remotely by human operators. They have no, or at least minimal, autonomous functionality. Any kill shots would be delivered by the human operator actually in control of the machine.

    • John

      As an addendum I might point out that the Zeroth Law was the reason given for the Foundation universe having no robots. The robots finally came to the conclusion that they were an existential threat to Humanity and removed themselves.

  • canuck49

    You have the premise of I, Robot.

  • Pamela

    Wedgies. Hmm. Is this where they end up with their underwear, if they are wearing any, pulled up and over their head?

    • Wood

      That’s the “atomic” wedgie. A regular wedgie is less… um… it’s less.

  • Tea Party Grandma

    Hate to go off topic, but I would like to get an email to Chris about an idea for the series, but can’t find a contact email for him. Any assistance is welcome!

  • JTC

    My thoughts about Asimov mostly parallel those about his contemporary Clarke, about whom I wrote this at the old blog upon his passing a decade ago.

    http://poetnthepawnbroker.blogspot.com/2008/03/back-to-future-arthur-c-clarke-breaks.html

    Both brilliantly projected and embraced the potential and possibilities for artificial intelligence if humanist-based or of extraterrestrial genesis, but could not or refused to accept it if of divine creation…or to consider that it could be the same thing.

    Alone among the Big Three, RAH made the conscious distinction between religious belief and dogma, and so was able to largely reject the latter while observing and evolving his attitude towards the former, allowing him an expanded and more connective basis from which to draw his stories.

    Makes him the most brilliant and genuine among those bright stars, IMO.

    • Presbypoet

      The problem is that an understanding of the universe requires understanding paradox. The following are a few of my 500 paradoxes:

      You must know what you can never know.

      You must trust everyone and no one.

      You must learn what you don’t know.

      Seek JOY not mere happiness, rejoice when you suffer for ME.
      You can do nothing to create JOY, because JOY is not affected by circumstances.
      You can rejoice and grieve at the same time.

      You must follow God’s plan, which you can always and never fully know or understand.
      “You want me to do WHAT?”

      Intimacy with God / and utter terrified AWE.

      God offers Himself to us as a gift. We just offer ourselves as a gift. No hooks.

      The universe is designed for free will.

      Predestination and free will are both 100% true at the same time.

      The finite cannot understand the infinite, and can understand the infinite.

      Both theology and quantum mechanics require understanding paradox.

      The electron is 100% wave, 100% particle.
      Nothing can go faster than light, yet entangled electrons seem to require faster-than-light communication.
      Quantum tunneling

      I AM 100% God and 100% man.
      Do you know who I AM?

      I take ALL of God within in communion, (MASS).
      How can finite take infinite within?
      (Think Tardis bigger inside than outside)

      How can your computer program paradox? (This isn’t a paradox, I think.)

      The definition of paradox is paradoxical.
      Two definitions that both cannot and must be true at the same time.

      This is why we find AI impossible.

  • NotYetInACamp

    Who makes up the definitions counts.

    As to savages.

    #120db