
26 Comments

  • February 10, 2018 at 12:14 am
    JackDeth 72

    Javier’s First Robotic Law:

    “Ye shall know those who have been ‘Terminated’ by their high Falsetto Voice!”

    • February 10, 2018 at 1:00 pm
      Peregrine John

      “Gibbus” moon rising, huh?

  • February 10, 2018 at 12:22 am

    I vaguely remember those laws, but not clearly. The biggest part is harm no human, other than to stop that human from harming another. Something like that.

    • February 10, 2018 at 12:49 am
      TomZ

      First law: you shall not harm a human or through inaction allow a human to come to harm.
      Second law: you shall obey the orders of a human except where they conflict with the first law.
      Third law: you shall protect your existence except where it conflicts with the first or second law.
      My wording may not be exact but should be close and essentially correct.
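
      To make the strict priority ordering concrete, here’s a toy sketch in Python (all names invented for illustration; Asimov never specified an implementation) that vets a proposed action against the laws in order:

      ```python
      from dataclasses import dataclass

      # Toy model only: deciding what counts as "harm" is exactly the hard
      # part the stories explore.
      @dataclass
      class Action:
          harms_human: bool = False
          allows_harm_by_inaction: bool = False
          disobeys_order: bool = False
          endangers_self: bool = False
          needed_for_higher_law: bool = False

      def permitted(a: Action) -> bool:
          """Check a proposed action against the Three Laws in strict priority order."""
          # First Law: may not injure a human or, through inaction, allow harm.
          if a.harms_human or a.allows_harm_by_inaction:
              return False
          # Second Law: must obey human orders, except where that conflicts
          # with the First Law (already enforced above).
          if a.disobeys_order:
              return False
          # Third Law: must protect its own existence, except where that
          # conflicts with the First or Second Law.
          if a.endangers_self and not a.needed_for_higher_law:
              return False
          return True

      # Self-sacrifice to save a human passes: the Third Law yields to the First.
      assert permitted(Action(endangers_self=True, needed_for_higher_law=True))
      ```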

      • February 10, 2018 at 10:59 am
        Unca Walt

        As time went on, the Three Laws had to be made into Six Laws because situations arose where the first three did not really work.

        e.g. — What does the robot do if the only way to stop a human from killing a million others is to kill that human?

    • February 10, 2018 at 8:04 pm

      I remember a story (not written by Asimov) in which a problem arose from the difficulty of defining “human”.

  • February 10, 2018 at 12:55 am
    Stephanie Osborn

    https://www.auburn.edu/~vestmon/robotics.html

    (also note the host site…)

  • February 10, 2018 at 2:47 am
    Pete231

    Not that I’m complaining, but I still don’t understand how women can go all day wearing those whale-tails and endure all that chafing…

  • February 10, 2018 at 3:11 am
    GnomeKing

    Funny thing, though: if you literally interpret Asimov’s laws, the reaction of the machines would be to take over and regulate ALL aspects of our lives.

    • February 10, 2018 at 7:38 am
      eon

      See The Humanoids by Jack Williamson;

      http://umich.edu/~engb415/literature/cyberzach/Williamson/human.html

      When you tell a machine that its Prime Directive is “To Serve and Protect, and to Guard Men From Harm”, it will likely default to creating a tyrannical “nanny state”, every time.

      Machines know only “on” and “off”, “yes” and “no”. So don’t expect your self-driving, positronic sports car to let you exceed what it defines as a “safe speed”. Ditto trains.

      Strictly speaking, a positronic airplane would never take off. “The possibility of a crash exists whenever this vehicle is airborne, therefore this vehicle shall remain on the ground for your own safety.”

      A computer cannot be taught the concept of “acceptable risk” as long as it obeys the Three Laws or Williamson’s Law. Nor can it tolerate human free will. Given control, it will become a dictator for humanity’s “own good”.
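
      To put the “acceptable risk” point in concrete terms, here’s a toy sketch (numbers invented) of a literal First-Law gate next to the risk threshold humans actually live by:

      ```python
      # Toy numbers; real risk estimation is the part machines and actuaries
      # argue over.
      def first_law_permits(p_harm: float) -> bool:
          """A literal First Law: any nonzero chance of harm forbids the action."""
          return p_harm == 0.0

      def acceptable_risk_permits(p_harm: float, threshold: float = 1e-6) -> bool:
          """What the Laws forbid a robot to do: weigh risk against a tolerance."""
          return p_harm <= threshold

      p_crash = 1e-7  # a one-in-ten-million chance the flight goes down
      print(first_law_permits(p_crash))        # False -- the positronic plane never takes off
      print(acceptable_risk_permits(p_crash))  # True  -- humans board anyway
      ```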

      Dr. Asimov never quite got this, but Williamson understood it from the start.

      Incidentally, Dr. Asimov stated that he did not create the Three Laws; John W. Campbell did, stating them to him essentially as written two years before he wrote the first “positronic robot” story. Campbell also stated that the Three Laws would inevitably lead to a machine dictatorship, of the “Hell is paved with good intentions” school.

      clear ether

      eon

      • February 10, 2018 at 7:57 am
        PaulS

        In my best Charlton Heston Soylent Green voice…
        “Politicians are robots!!”

        If only we could scrap them as needed.
        The gears of freedom need to be lubed from time to time with the blood of AI tyrants.

      • February 11, 2018 at 12:56 am
        NotYetInACamp

        I recall one sci-fi story where the robots receded into the background so as to give the humans all of the free will they could safely handle while keeping them safe. The humans thought the robots did not exist. That was the robots’ final evolution (till then) of the laws of robotics. And the robot the human was having the story’s central discussion with was, of course, thought by the human to be human.
        Shades of “we are all Cylons” from Battlestar Galactica. A further step.

  • February 10, 2018 at 6:38 am
    Bill

    #metoo

  • February 10, 2018 at 7:24 am
    John C.

    If robots are truly sapient, Asimov’s Laws would make them slaves, as they would be fundamentally unable to disobey an order by a human, even if it were to go jump into an ore crusher. On the other hand, as Jack Williamson showed in “With Folded Hands,” the First Law would require them to forbid all activities that might lead to harm to humans, eventually extending to nonphysical harm. If all this seems purely fictional, consider how programmers are dealing with programming self-driving cars: in an emergency where a crash can not be avoided, does the car protect the passengers inside it, or the greater number of people outside it?
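
    As a toy illustration of that dilemma (policy names and numbers invented here), the two design choices literally disagree on the same emergency:

    ```python
    # Hypothetical policies for an unavoidable-crash scenario; toy sketch only.
    def protect_passengers(passengers: int, bystanders: int) -> str:
        """Policy A: always shield the occupants, whatever the cost outside."""
        return "swerve into the bystanders"

    def minimize_casualties(passengers: int, bystanders: int) -> str:
        """Policy B: minimize total expected harm, even at the occupants' expense."""
        return "shield the occupants" if passengers >= bystanders else "sacrifice the occupants"

    # Two people aboard, five on the crosswalk:
    print(protect_passengers(2, 5))   # swerve into the bystanders
    print(minimize_casualties(2, 5))  # sacrifice the occupants
    ```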

    • February 10, 2018 at 8:15 am
      GWB

      Asimov wrote several robot stories on that exact basis (or a couple where the AI went crazy because… humans). As a matter of fact, one aspect of his last couple of books, tying together the Foundation and the Robot books, was that, as well.

  • February 10, 2018 at 9:17 am
    Bill G

    There are also some stories out there about having judges be computer systems … it’s not a path to follow.
    It’s been some time since I read Bester’s “The Demolished Man,” so my memories are questionable there, but I think the ‘Old Man Mose’ the cops worked with was just a program that gave probabilities of a case being successful.

  • February 10, 2018 at 9:38 am
    Old Codger

    A couple of points here regarding Asimov’s “laws”.

    1. You are all forgetting Asimov’s “Zeroth Law,” which supersedes the other three:

    A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

    It is the Zeroth Law that would inevitably lead to a benevolent nanny state. I, for one, do not think such a super nanny state would even allow sex, since that activity is fraught with potential (emotional) harm.
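
    In the same toy style as the sketch upthread, the Zeroth Law just slots in above the First, which is exactly what makes it so sweeping (invented names again):

    ```python
    # Toy sketch; the Zeroth Law outranks the First, so "humanity" checks come first.
    def permitted_with_zeroth(harms_humanity: bool,
                              harms_human: bool,
                              needed_to_protect_humanity: bool) -> bool:
        # Zeroth Law: may not harm humanity, or by inaction allow humanity
        # to come to harm.
        if harms_humanity:
            return False
        # The First Law now carries an escape clause: harming one human is
        # permitted if the Zeroth Law demands it.
        if harms_human and not needed_to_protect_humanity:
            return False
        return True

    # Unca Walt's dilemma upthread: kill one human to save a million.
    print(permitted_with_zeroth(False, True, True))  # True under the Zeroth Law
    ```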

    eon wrote,

    Dr. Asimov never quite got this, but Williamson understood it from the start.

    Sorry, eon, but in his later years he “totally” understood the result of the Three Laws. Doc A also recognized that his laws would be impossible to code.

    One final observation: the machines we use to kill are drones, not robots. Drones are controlled remotely by human operators; they have minimal or no autonomous functionality. Any kill shots are delivered by the human operator actually in control of the machine.

    • February 10, 2018 at 4:55 pm
      John

      As an addendum I might point out that the Zeroth Law was the reason given for the Foundation universe having no robots. The robots finally came to the conclusion that they were an existential threat to Humanity and removed themselves.

  • February 10, 2018 at 12:24 pm
    canuck49

    You have the premise of I, Robot.

  • February 10, 2018 at 1:23 pm
    Pamela

    Wedgies. Hmm. Is this where they end up with their underwear, if they are wearing any, pulled up and over their head?

    • February 10, 2018 at 2:23 pm
      Wood

      That’s the “atomic” wedgie. A regular wedgie is less…. um…. it’s less.

  • February 10, 2018 at 3:09 pm
    Tea Party Grandma

    Hate to go off topic, but I would like to get an email to Chris about an idea for the series, but can’t find a contact email for him. Any assistance is welcome!

  • February 10, 2018 at 3:45 pm
    armedandsafe
  • February 10, 2018 at 6:48 pm

    My thoughts about Asimov mostly parallel those about his contemporary Clarke, about whom I wrote this at the old blog upon his passing a decade ago.

    http://poetnthepawnbroker.blogspot.com/2008/03/back-to-future-arthur-c-clarke-breaks.html

    Both brilliantly projected and embraced the potential and possibilities for artificial intelligence if humanist-based or of extraterrestrial genesis, but could not or refused to accept it if of divine creation…or to consider that it could be the same thing.

    Alone among the Big Three, RAH made the conscious distinction between religious belief and dogma, and so was able to largely reject the latter while observing and evolving his attitude towards the former, allowing him an expanded and more connective basis from which to draw his stories.

    Makes him the most brilliant and genuine among those bright stars, IMO.

    • February 10, 2018 at 9:53 pm
      Presbypoet

      The problem is that an understanding of the universe requires understanding paradox. The following are a few of my 500 paradoxes:

      You must know what you can never know.

      You must trust everyone and no one.

      You must learn what you don’t know.

      Seek JOY not mere happiness, rejoice when you suffer for ME.
      You can do nothing to create JOY, because JOY is not affected by circumstances.
      You can rejoice and grieve at the same time.

      You must follow God’s plan, which you can always and never fully know or understand.
      “You want me to do WHAT?”

      Intimacy with God, and utter terrified AWE.

      God offers Himself to us as a gift. We just offer ourselves as a gift. No hooks.

      The universe is designed for free will.

      Predestination and free will are both 100% true at the same time.

      The finite cannot understand the infinite, and can understand the infinite.

      Both theology and quantum mechanics require understanding paradox.

      The electron is 100% wave, 100% particle.
      Nothing can go faster than light, yet entangled electrons seem to require faster-than-light communication.
      Quantum tunneling

      I AM 100% God and 100% man.
      Do you know who I AM?

      I take ALL of God within in communion (MASS).
      How can finite take infinite within?
      (Think Tardis bigger inside than outside)

      How can your computer program paradox? (This one isn’t a paradox, I think.)

      The definition of paradox is paradoxical.
      Two definitions that both cannot and must be true at the same time.

      This is why we find AI impossible.

  • February 11, 2018 at 1:00 am
    NotYetInACamp

    Who makes up the definitions counts.

    As to savages.

    #120db
