You're viewing the queer.cool public feed.
  • Apr 5, 2026, 10:05 AM

    >In October, Ford CEO Jim Farley said he
    >expects AI to replace “literally half” of all
    >white-collar positions, Yahoo Finance reported.

    CEOs spouting absolute shit about things they know nothing about seems de rigueur.

    The C-suite REALLY WANTS to lay off as many software developers as they can; they pretty much ALWAYS have. We cost a lot and take a long time to implement 'simple' ideas.

    What they miss is that the complexity is real; they just don't see it because they don't want to.

    My experience of AI is that the code it produces is absolute trash, and without expert guidance it goes very wrong very quickly.

    AI is good at finding bugs, investigating things and bouncing ideas.

    It is NOT good at anything creative.

    And given that it's a huge statistical regression machine ALL THIS MAKES SENSE!

    The average of creative work is at best mediocrity, or dare I say it in software, appallingly bad.

    You can't work around that. You can't magic it away. This is a fundamental limitation.
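    A toy sketch of the 'statistical regression' point above - averaging over a corpus of work lands you in the middle of the distribution, never at its best. The numbers here are made up purely for illustration:

```python
import random

random.seed(42)

# Hypothetical 'quality' scores for 10,000 creative works:
# a few brilliant, a few awful, most middling (Gaussian toy model).
works = [random.gauss(mu=5.0, sigma=2.0) for _ in range(10_000)]

best = max(works)
average = sum(works) / len(works)

# The corpus average sits near the mean of the distribution,
# far below the best individual work.
print(f"best: {best:.1f}, average: {average:.1f}")
```

    None of this proves anything about LLM internals, of course - it just illustrates why 'the average of creative work' is mediocre by construction.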

    💬 1🔄 1⭐ 1

Replies

  • Apr 5, 2026, 10:11 AM

    Also it raises the question - why not management? Why not upper management?

    What exactly is so special about a CEO that it cannot be replicated by an LLM? There is a huge corpus of the kinds of nonsense they come out with, and it's OK if it's wrong sometimes; it'd probably be right more often than most CEOs.

    The same can be said, even more so, about the majority of (though importantly, not all) middle management positions.

    I've been blessed (until recently) to have a fantastic, irreplaceable boss.

    But in previous jobs I've had some truly appalling, nasty, horrible ones - ones for which a chatbot would be a welcome replacement.

    But of course, in all this, we start to notice power centres, and how it is one rule for the rest of us and another for them.

    However, despite that, the nature of reality intervenes - if an LLM can't do something, no amount of firings and hubris will make it do that.

    Many, many companies (+ the economy as a whole) are going to learn lessons the hard way.

    💬 2🔄 0⭐ 0
  • Apr 5, 2026, 10:16 AM

    Of course the AGI fantasies of moronic, NON-TECHNICAL corporate sociopaths like the disgusting Sam Altman or the even more vile, even more stupid excuse for a human being, Elon Musk are total nonsense.

    LLMs are playing clever language and statistical games. They are, in essence, a dumb process dramatically scaled up.

    And, in contrast to my previous view of them, they DO produce VERY useful results that can be a force multiplier.

    But it is essentially applying pattern matching behaviour, at speed.

    So while it applies, say, a mediocre level of skill to investigating a kernel splat, it does so:

    a. with an enormous breadth of statistical 'knowledge' (slurped up from mailing lists)
    b. far faster than a human could do it

    So it produces EXCEPTIONAL results. Breathtaking sometimes.

    That is not the same as a creative activity such as programming, where things are truly novel.

    There its strengths become weaknesses.

    The nature of the approach does not permit 'new' or 'creative' 'thoughts'.

    💬 2🔄 1⭐ 1
  • Apr 5, 2026, 10:19 AM

    And for AGI you truly would require that.

    The real world is full of 'fat' tails (see blog.piekniewski.info/2024/04/ ) - and those simply DO NOT work with LLMs or deep learning approaches.

    They can't. They are antithetical to them.
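    To make the fat-tail point concrete, here's a toy sketch (a Pareto distribution, numbers invented purely for illustration): with a heavy tail, the largest event you observe dwarfs the 'typical' one, so anything centred on typical data keeps getting blindsided.

```python
import random

random.seed(0)

# Toy heavy-tailed 'real world': Pareto with alpha = 1.5
# (finite mean, infinite variance - a genuinely fat tail).
alpha = 1.5
events = [random.paretovariate(alpha) for _ in range(100_000)]

typical = sum(events) / len(events)
biggest = max(events)

# The single largest event is orders of magnitude above the mean;
# a model tuned to 'typical' inputs has never seen anything like it.
print(f"typical event: {typical:.1f}, largest event: {biggest:.1f}")
```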

    So the broader 'replace people' stuff, for creative work, simply cannot work.

    Empower people - yes, perhaps even lay off because 1 person can do more now - yes.

    Replace lower-level white-collar jobs that are, say, simplistic pattern matching - sure, and that IS a moral hazard (do we really want to be making more people unemployed?).

    But replacing the more creative fields like doctors, architects, programmers (yes, I am not impartial on that one), etc.? No, not with the current technology.

    💬 1🔄 0⭐ 0
  • Apr 5, 2026, 10:23 AM

    So I've moved my position from 'total horseshit, they're full of shit' to 'powerful tool, they're full of shit'.

    It is a MAJOR move, but hey - when the facts defeat my view of things, I update to reflect the facts.

    Have done many times, not only in respect to this but... yeah ask anybody who knows me :) [I can be hotheaded yes, but I correct, eventually :>)]

    However the fundamental limits of these things are what they are.

    I am no expert, so perhaps I am missing something that somehow overcomes the fat tails/no dynamics (briefly - it can only infer, not develop models of the world) arguments, but given Piekniewski's blog (who is an expert) I doubt it.

    We are in for some rough chop.

    In the end the technology will settle, we'll probably have open source implementations just as good as proprietary ones (assuming nobody pulls legal strings to prevent that, which is sadly quite possible), and people will accept that the tools are good at X and bad at Y.

    Until then expect more idiocy...

    💬 1🔄 0⭐ 0
  • Apr 5, 2026, 11:49 AM

    @ljs I let Claude generate the code for a couple of hobby projects, and I feel guilty because the code is of decent quality, and it would take me weeks to do something similar (or rather, I'd never start those projects). Now I feel like writing an essay on the moral aspects of vibe coding :))

    But in no way do I let it into my paid job projects. The cost of a fuckup is too high, and I really need to know every detail of that code.

    💬 1🔄 0⭐ 0
  • Apr 5, 2026, 1:33 PM

    @bonkers well we each speak from our own perspectives :) every bit of kernel code it's made has been atrocious.

    And yes, not understanding it in subtle detail, where it can and will get things wrong in non-obvious respects, is another reason this is problematic.

    💬 1🔄 0⭐ 0
  • Apr 5, 2026, 2:15 PM

    @ljs of course. The tasks that I gave it were not too new, and I'd do quite the same, looking for examples and synthesizing the solution from other projects. Whenever it's something less trivial, it sucks big time.

    💬 0🔄 0⭐ 0