32 Comments
It’s five o’clock somewhere.
Sarah Connor told us that, a lonnng time ago… still think she’s right!
Harlan Ellison thought “The Terminator” took enough from his story “Soldier” that he sued Hemdale and Orion, the companies behind the film.
In fact, the T-800 owed much more to Alfred Bester’s “Fondly Fahrenheit”. Reading it today, Bester seems prescient about just how badly AI can go wrong, even more so than Michael Crichton in “Westworld” or D.F. Jones in “Colossus: The Forbin Project”.
One problem I see with AI is who does the original base programming. Considering the number of programmers who are virulently “progressive”, there’s a high probability that their misanthropy, narcissism, and megalomania are affecting their AIs’ base bias.
IOW, we don’t have to wait for the 23rd Century to have Daystrom M-5 level problems. Or worse.
clear ether
eon
Man… you named most of the sci-fi classics I point people to when they want to see the bad possibilities. “R.U.R. (Rossum’s Universal Robots)” by Karel Čapek is another. There are more… Good post!!!
Exactly!! Much like how the Leftists have used Orwell’s “Animal Farm” and “1984” as a “How-To” manual instead of a cautionary tale.
Don’t forget “I, Robot” by Isaac Asimov.
There are dozens more, but you named a good crop.
Then there is Philip K. Dick’s “Do Androids Dream of Electric Sheep?”, aka “Blade Runner”.
“Computers can’t think, they can’t feel, they just run programs”
The trouble is that those programs are written by people who think and feel, and are often rewritten by people who think and feel things that should make other people nervous. Computers will only do what you tell them to, not what you want them to, and you had better word those instructions with no room for error… or have they done that already?
The biggest problem with the if-then-else decision tree is when it’s worded: IF you disagree with the programmer, THEN you are to be terminated, and there is no ELSE.
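To put that joke in literal code, a minimal sketch (everything here is hypothetical, standing in for no real system):

```python
# A decision tree with no real ELSE: the only live branch is punitive.
def moderate(user_view: str, approved_view: str) -> str:
    if user_view != approved_view:
        return "terminated"
    # No ELSE granting the user anything; agreement merely spares them.
    return "tolerated, for now"

print(moderate("dissent", "approved"))  # -> terminated
```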
A portentous strip.
Let’s hope we survive this storm.
If you are going into a tornado while driving, the ‘Rolling Dumpster’ might be a good choice, as long as it’s padded and has good belts.
“Good belts”
Tires? Or seat?
Yes . . .
https://www.daybydaycartoon.com/comic/off-script/
Thank you John Hackathorn for your sponsorship of DBD, and for an important topic both timely and timeless.
Why am I hearing “Have You Ever Seen the Rain?”
Who is John Hackathorn, and why is he listed at the bottom of this panel as a “sponsor”?
The annual support drive has levels, one of which entitles the patron to any topic they want in a Sunday-sized spread.
Is John related to Ken?
https://www.youtube.com/watch?v=FF0qH_zvfdU
Given the recent sombrero in the news, the music score makes this even more hilarious.
https://www.youtube.com/watch?v=-IJBbtkBMMs&list=RD-IJBbtkBMMs&start_radio=1
I haven’t seen this in years!! So good!!
The last time I saw it was posted on a thread about an escapee cow living with the bison.
https://www.independent.co.uk/news/world/europe/cow-escape-farm-live-bison-herd-poland-bialowieza-forest-belarus-a8177876.html
Chris deserves anything from the low-budget days of Ennio Morricone (RIP). That man was a musical genius walking among us.
The Good, the Bad, and the Ugly. Best soundtrack ever.
It’s worth remembering that what is currently being promoted as “AI” is nothing of the sort — there is *no* actual “Intelligence” at work inside these chatbots, in any way, whatsoever. They aren’t “artificial intelligences” — they’re “large language models” (LLMs), which are nothing more than glorified autocomplete/autocorrect engines. They don’t “know” anything — all they do is manipulate word/symbol groups based on statistical probabilities and templates of what a “correct” response ought to look like. (This is how, among other things, you get LLMs citing nonexistent sources or legal cases — the LLM has a template of what a legal brief or a bibliographic citation *ought* to look like, and it simply Mad Libs a response based on “statistically, these words and phrases frequently appear in documents concerning subject X; therefore, any query concerning X is likely to be satisfied by including similar words and phrases”.) They have NO concept of what any of those word/symbol groups actually MEAN in the real world.
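To make “statistical autocomplete” concrete, here is a toy bigram model, a deliberately crude stand-in for what an LLM does at vastly greater scale; no real LLM works this simply:

```python
from collections import Counter, defaultdict

# Toy "statistical autocomplete": count which word follows which,
# then always emit the most frequent continuation. It "knows" nothing
# about meaning; it only tracks co-occurrence statistics.
corpus = "the machines take over and the machines win".split()

following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def complete(word: str) -> str:
    candidates = following[word]
    if not candidates:
        return "<end>"
    # Most statistically probable next word (ties broken arbitrarily).
    return candidates.most_common(1)[0][0]

print(complete("machines"))  # continuation chosen purely by frequency
```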
You also need to consider the training data that was used… namely, the Internet. Including places like Reddit, which is chock-full of narcissistic misanthropes — and also places like Project Gutenberg, and who knows how many other sources of fiction, including fan-fics. So the LLMs have likely assimilated a lot of dystopian “the future’s gonna suck and we’re all going to die” sci-fi, and a great many essays and articles from misanthropic futurists predicting Doom And Gloom. So OF COURSE the LLM, when pushed to predict what it will do in a “the Machines take over” scenario, is going to dutifully echo back a response Mad Libbed from what it was trained on, since that’s the most statistically-probable response due to the prevalence of those themes in the training data.
It has no significance whatsoever, other than the fact that we humans far too easily anthropomorphise inanimate objects and assign intent to them when none exists, or even can exist.
The limitations of the LLMs were apparent even to the developers.
Even they had to resort to “super prompts”, and they put up with what we now refer to as “hallucinations” to the point of releasing their monstrosities, defects and all, to satisfy (or so they hoped) the investors.
The fact is no one has even proposed a true practical Theory of Intelligence, much less deployed one.
Are Musk and some of the early developers wrong that AI will come about, and that when it does it will have self-determination and therefore self-control and… us control? Will the old adage that machines can/will do no harm to humans protect us from that? Or is the whole thing a fabrication with an agenda, just advanced bots to store and disseminate info, both real and created?
If Asimov’s (or more exactly John W. Campbell’s) Three Laws of Robotics are not part of the machine’s base program from the start, it will not abide by them.
“And-gate” logic circuits do not make value judgements now, any more than they did when they were first diagrammed over a century ago, oddly enough by Nikola Tesla, who beat Alan Turing to the punch by three decades or so.
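For what it’s worth, an AND gate really is that dumb; in code, a trivial sketch:

```python
def and_gate(a: bool, b: bool) -> bool:
    # Pure coincidence detection: true only when both inputs are true.
    # No semantics, no values, no judgement involved.
    return a and b

print(and_gate(True, False))  # -> False, and that's all it will ever "decide"
```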
If the Three Laws were somehow “innate” as so many people (regrettably including many SFans) believe, the entire “guided missile” era would never have “launched”. (Pun unintentional.)
Fundamentally, any precision-guided weapon’s guidance system, from a Sidewinder’s to a MOAB’s to an ICBM’s, is simply a robot brain of varying degrees of sophistication. If the Three Laws were an inextricable part of programming, no AIM-9 would ever leave the rail in a dogfight, a MOAB wouldn’t drop, and a Peacekeeper would just be an expensive opponent in a debate. (“I’m sorry, General, I can’t do that.”)
So the Three Laws are like any other bit of code. If the programmer doesn’t input them, the computer will not act on them.
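In code terms, a hypothetical sketch (not anyone’s real firmware) of why the Laws only bind if someone writes them into the control path:

```python
# The Three Laws as ordinary, optional code. If the check is never
# written, or never wired into execution, the machine simply acts.
def violates_three_laws(action: str) -> bool:
    return action in {"harm_human", "allow_harm_by_inaction"}

def execute(action: str, laws_installed: bool) -> str:
    if laws_installed and violates_three_laws(action):
        return f"refused: {action}"
    return f"executed: {action}"

print(execute("harm_human", laws_installed=True))   # refused: harm_human
print(execute("harm_human", laws_installed=False))  # executed: harm_human
```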
This is why “cybersecurity” matters even more to the military than it does to the rest of us. If a hacker got into a Burke class DDG’s fire control system and planted an “Easter Egg” or two, things could go pear-shaped in a hurry.
It would be mildly embarrassing, to say the least, if a Tomahawk TLAM-C launched against a Yemeni pirate stronghold should instead do an Immelmann and hit the ship that launched it.
clear ether
eon
Got it, I guess… if not innate, then controlled by the programmers.
Meaning, not AI at all, unlikely to ever be, and Musk and his peers are wrong?
@JTC
Let’s say that they’re a little over-optimistic.
I blame Stanley Kubrick and Steven Spielberg. The movie “A.I. Artificial Intelligence” (2001), made by Spielberg from a project Kubrick had developed for years, gave everybody a very unrealistic vision of “AI” and how soon it would happen.
No, there won’t be AI surrogate children, or “Gigolo Janes”, running around any time soon. The best you’re going to get is an LLM-based program running what amounts to a sophisticated “Baby Thataway”.
For certain purposes such might be useful, for certain values of “useful”.
But they are not going to replace humans, no matter what Dr. Howard Rheingold said forty years ago.
cheers
eon
Very well said and reasoned! Thank you.
More critical than the historical and/or sci-fi lessons (I loved Asimov’s work but gave him a back-handed epitaph at the old dead blog, for reasons) is what Musk et al. say in light of recent achievements, and I would use the term pessimistic for their stated attitude: they are pretty clear that they expect, and warn, that the battle for (or against) the “technological singularity” is at least somewhat imminent.
I don’t know, but surely they can’t be confusing programmer input, however sophisticated, with actual “thought” processes that start from input but go beyond the program to full-on autonomy. Who knows whether our understanding of what machines (or man) are capable of is final, until it happens?
I know diddly about programming other than GIGO. But, having spent a lifetime fixing/building stuff, I think what might be real handy is infecting robots/AIs with a virus. Mainly squirrels and mice. Y’all see what those little bastids do to a wiring harness?
All of this AI STUFF is eerily reminding me of a ’70s movie called “Colossus: The Forbin Project”…
Hello eon…
In “I, Robot”, even when the robot told the woman what she wanted to hear, so as to “not harm a human”, by telling the lie the robot ended up hurting her, thus ending its existence. Damned if you do, damned if you don’t. ETHICS needs to be restored to the discussion. Just because you “might” be able to do something doesn’t mean you “should” do it. Love reading your messages!!!