
23 Comments

  • October 5, 2025 at 12:34 am
    Too Tall

    It’s five o’clock somewhere.

  • October 5, 2025 at 12:36 am
    JohninMd.(HALP!)

    Sarah Connor told us that, a lonnng time ago… still think she’s right!

    • October 5, 2025 at 12:49 am
      eon

      Harlan Ellison thought “The Terminator” took enough from his story “Soldier” that he sued Carolco.

      In fact, the T-800 owed much more to Alfred Bester’s “Fondly Fahrenheit”. Reading it today, it feels like Bester was precognitive in predicting just how badly AI can go wrong, even more so than Michael Crichton did in “Westworld” or D.F. Jones did in “Colossus: The Forbin Project”.

      One problem I see with AI is who does the original base programming. Considering the number of programmers who are virulently “progressive”, there’s a high probability that their misanthropy, narcissism, and megalomania are affecting their AIs’ base bias.

      IOW, we don’t have to wait for the 23rd Century to have Daystrom M-5 level problems. Or worse.

      clear ether

      eon

      • October 5, 2025 at 2:44 am
        15Fixer

        Man…. You named most of the Sci-Fi classics I tell people about, to see what the bad possibilities are. “Rossum’s Universal Robots” by Karel Čapek is another. There are more…. Good post!!!

      • October 5, 2025 at 6:48 am
        Saaruuk

        Exactly!! Much like how the Leftists have used Orwell’s “Animal Farm” and “1984” as a “How-To” manual instead of a cautionary tale.

      • October 5, 2025 at 10:05 am
        Hardthought

        Don’t forget “I, Robot” by Isaac Asimov.

        There are dozens more, but you named a good crop.

        Then there is Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, aka “Blade Runner”.

      • October 5, 2025 at 10:08 am
        Oldarmourer

        “Computers can’t think, they can’t feel, they just run programs”

        The trouble is that those programs are written by people who think and feel, and are often rewritten by people who think and feel things that should make other people nervous. Computers will only do what you tell them to, not what you want them to, and you had better word those instructions with no room for error… or have they done that already?
        The biggest problem with the if-then-else decision tree is when it’s worded: IF you disagree with the programmer, THEN you are to be terminated, and there is no ELSE.
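
        Something like this toy sketch, in hypothetical Python (not any real system’s code, and every name in it is invented): the machine executes exactly the branches it was given, and nothing more.

            def triage(opinion: str) -> None:
                # Hypothetical test standing in for whatever the programmer wrote.
                if opinion != "the approved opinion":
                    print("terminated")
                # ...and there is no ELSE: no branch was ever written for agreement.

            triage("anything else")         # prints "terminated"
            triage("the approved opinion")  # prints nothing at all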

  • October 5, 2025 at 12:45 am
    Toxic Deplorable Racist SAH Neanderthal B Woodman Domestic Violent Extremist SuperStraight

    A portentous strip.
    Let’s hope we survive this storm.

  • October 5, 2025 at 6:25 am
    Mort

    If you are going into a tornado while driving, the ‘Rolling Dumpster’ might be a good choice, as long as it’s padded and has good belts.

    • October 5, 2025 at 9:26 am
      Toxic Deplorable Racist SAH Neanderthal B Woodman Domestic Violent Extremist SuperStraight

      “Good belts”
      Tires? Or seat?

      • October 5, 2025 at 10:10 am
        John D. Egbert

        Yes . . .

  • October 5, 2025 at 6:59 am
    Browncoat57
  • October 5, 2025 at 7:57 am
    Too Tall

    Thank you, John Hackathorn, for your sponsorship of DBD, and for an important topic both timely and timeless.

  • October 5, 2025 at 8:34 am
    cb

    Why am I hearing “Have You Ever Seen the Rain?”

  • October 5, 2025 at 8:46 am
    James

    Who is John Hackathorn, and why is he listed at the bottom of this panel as a “sponsor”?

    • October 5, 2025 at 9:05 am
      Chris Muir

      The annual support drive has levels, one of which entitles the patron to any topic they want in a Sunday-sized spread.

    • October 5, 2025 at 10:43 am
      RHT447
  • October 5, 2025 at 10:48 am
    RHT447

    Given the recent sombrero in the news, the music score makes this even more hilarious.

    https://www.youtube.com/watch?v=-IJBbtkBMMs&list=RD-IJBbtkBMMs&start_radio=1

  • October 5, 2025 at 10:58 am
    The Nth Doctor

    It’s worth remembering that what is currently being promoted as “AI” is nothing of the sort — there is *no* actual “Intelligence” at work inside these chatbots, in any way, whatsoever. They aren’t “artificial intelligences” — they’re “large language models” (LLMs), which are nothing more than glorified autocomplete/autocorrect engines. They don’t “know” anything — all they do is manipulate word/symbol groups based on statistical probabilities and templates of what a “correct” response ought to look like. (This is how, among other things, you get LLMs citing nonexistent sources or legal cases — the LLM has a template of what a legal brief or a bibliographic citation *ought* to look like, and it simply Mad Libs a response based on “statistically, these words and phrases frequently appear in documents concerning subject X; therefore, any query concerning X is likely to be satisfied by including similar words and phrases”.) They have NO concept of what any of those word/symbol groups actually MEAN in the real world.
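
    A toy sketch of that “glorified autocomplete” idea, in hypothetical Python (the corpus and names are invented, and no production LLM is remotely this simple), but the principle of emitting the statistically probable next word, with zero grasp of meaning, is the same:

        import random
        from collections import Counter, defaultdict

        # A tiny stand-in for the training data.
        corpus = ("the machines take over and the machines win "
                  "and the humans lose and the machines rule").split()

        # Count which word follows which -- that is the entire "model".
        following = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            following[prev][nxt] += 1

        def complete(word, length=6):
            out = [word]
            for _ in range(length):
                counts = following.get(out[-1])
                if not counts:
                    break
                words, weights = zip(*counts.items())
                # Pick a statistically probable continuation; meaning never enters into it.
                out.append(random.choices(words, weights=weights)[0])
            return " ".join(out)

        print(complete("the"))  # e.g. "the machines take over and the humans"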

    You also need to consider the training data that was used… namely, the Internet. Including places like Reddit, which is chock-full of narcissistic misanthropes — and also places like Project Gutenberg, and who knows how many other sources of fiction, including fan-fics. So the LLMs have likely assimilated a lot of dystopian “the future’s gonna suck and we’re all going to die” sci-fi, and a great many essays and articles from misanthropic futurists predicting Doom And Gloom. So OF COURSE the LLM, when pushed to predict what it will do in a “the Machines take over” scenario, is going to dutifully echo back a response Mad Libbed from what it was trained on, since that’s the most statistically-probable response due to the prevalence of those themes in the training data.

    It has no significance whatsoever, other than the fact that we humans far too easily anthropomorphise inanimate objects and assign intent to them when none exists, or even can exist.

    • October 5, 2025 at 11:33 am
      John

      The limitations of the LLMs were apparent even to the developers.
      Even they had to resort to “super prompts” and put up with what we now refer to as “hallucinations”, to the point that they released their monstrosities, defects and all, to satisfy (or so they hoped) the investors.
      The fact is no one has even proposed a true practical Theory of Intelligence, much less deployed one.

    • October 5, 2025 at 12:58 pm
      JTC

      Are Musk and some of the early developers wrong that AI will come about, and that when it does it will have self-determination, and therefore self-control and… us-control? Will the old adage that machines can/will do no harm to humans protect us from that? Or is the whole thing a fabrication with an agenda: just advanced bots to store and disseminate info, both real and created?

      • October 5, 2025 at 1:31 pm
        eon

        If Asimov’s (or more exactly John W. Campbell’s) Three Laws of Robotics are not part of the machine’s base program from the start, it will not abide by them.

        “And-gate” logic circuits do not make value judgements now, any more than they did when they were first diagrammed a century ago, oddly enough by Nikola Tesla, who beat Alan Turing to the punch by a decade or so.

        If the Three Laws were somehow “innate”, as so many people (regrettably including many SF fans) believe, the entire “guided missile” era would never have “launched”. (Pun unintentional.)

        Fundamentally, any precision-guided weapon’s guidance system, from a Sidewinder’s to a MOAB’s to an ICBM’s, is simply a robot brain of varying degrees of sophistication. If the Three Laws were an inextricable part of programming, no AIM-9 would ever leave the rail in a dogfight, a MOAB wouldn’t drop, and a Peacekeeper would just be an expensive opponent in a debate. (“I’m sorry, General, I can’t do that.”)

        So the Three Laws are like any other bit of code. If the programmer doesn’t input them, the computer will not act on them.
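
        A minimal sketch of that point, in hypothetical Python (not any real weapon’s code; every name here is invented): the constraint only restrains the machine if someone actually wrote it in.

            def launch(target: dict, three_laws_installed: bool = False) -> str:
                # The First Law check exists only if the programmer put it there.
                if three_laws_installed and target.get("human_present"):
                    return "refused: First Law"
                return f"launched at {target['name']}"

            print(launch({"name": "test range", "human_present": True}))
            # -> launched at test range: no Three Laws in the base program, no refusal
            print(launch({"name": "test range", "human_present": True},
                         three_laws_installed=True))
            # -> refused: First Law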

        This is why “cybersecurity” matters even more to the military than it does to the rest of us. If a hacker got into a Burke-class DDG’s fire control system and planted an “Easter Egg” or two, things could go pear-shaped in a hurry.

        It would be mildly embarrassing, to say the least, if a Tomahawk TLAM-C launched against a Yemeni pirate stronghold should instead do an Immelmann and hit the ship that launched it.

        clear ether

        eon

      • October 5, 2025 at 2:14 pm
        JTC

        Got it, I guess… if not innate, then controlled by the programmers.

        Meaning, not AI at all, unlikely to ever be, and Musk and his peers are wrong?
