You're viewing the front-end.social public feed.

Replies

  • Apr 21, 2026, 9:15 PM

    (I care so much about this issue that I actually did some proofreading!)

  • Apr 21, 2026, 10:41 PM

    @tante I don’t like to be glued to the screen this late, normally. But the quality of your piece is such that I couldn’t stop reading. Thanks (no, really).

    Bookmarked it so I can boost it tomorrow when normal people are awake.

  • Apr 21, 2026, 10:55 PM

    @tante There's definitely a need for an actively _anti-fascist_ approach to tech, and in particular as you point out, software licensing. Restrictive free/open licensing has been eroded over the decades, and the time really is _now_ for an ecosystem attached to an explicitly political license.

    It's no accident that one of the uses of "AI" is to launder copyright violation, and that further erodes open source licensing. "The robot did it" gets you a copy of any licensed code, lawyer-free.

  • Apr 22, 2026, 12:02 AM

    @tante AI is sadly learning from what it is consuming from the news. Because fascism is a growing trend, AI is learning about fascism at a greater speed than just about any other concept at this moment.

    What that means, at least for right now, is that AI is disproportionately learning that fascism is the way society works. Because AI learning systems have no moral compass, nor have they been given a moral center, they spew what they’ve learned, good or evil.

  • Apr 22, 2026, 7:43 AM

    @tante Thank you for this great article. While I mostly agree, we all know that it’s unrealistic to destroy a technology because it’s misused or used by the wrong people if it also brings benefits to enough people. LLMs/machine learning won’t go away, just like the Internet won’t.

    I’m missing some advice for people in tech/software on how to behave and act, in terms of realistic goals, when dealing with AI tech (LLMs/machine learning).

    Massive regulation and democratic control? Dangerous technology can be banned and ostracised.

    Or use it wisely and turn it against the fascist tendencies?

    I think we as an industry have to look at the business models of AI (exploitation, task automation vs. working time, distribution of profits and incomes etc.) and democratise the technology.

  • Apr 22, 2026, 8:26 AM

    @amorgner @tante We as an industry need to be held accountable to begin with. What businesses get away with is unbelievable. Innovation most of the time seems to destroy the planet for capital gains, but it's all good until regulated? That's the wrong mindset, and it needs to stop.

  • Apr 22, 2026, 8:37 AM

    @simondassow @tante I fully agree, but in reality, voluntary self-restriction doesn’t work. Of course the exploitation of humans and the planet is not good at all. But what do you suggest that really works? And isn’t regulation just another description of holding companies accountable?

    We as a society have to define (moral and legal) rules and punish those who break them. Not buying their products also seems to be an effective form of punishment.

  • Apr 22, 2026, 9:38 AM

    @amorgner @tante As far as I can see, our current systems have no mandatory accountability and lack the necessary consequences for breaches of it. It's a conceptual problem which, unless addressed, will never stop the loop of exploitation before regulation. In concrete terms, that means companies can't just be virtual legal entities that, at worst, are shut down (at best with financial downsides) while the damage is done. And that includes boards and shareholders as well, next to customers.

  • Apr 22, 2026, 10:01 AM

    @simondassow @tante Yes, companies are highly undemocratic. And even in politics we see an increasing (and disturbing) trend of doing whatever the lobbyists dictate without feeling guilty or responsible. Even without AI. My conclusion is: vote left, fight back, create alternatives, lead by example. And I keep asking the critics for good advice on how to act. It seems to be a difficult question; I seldom get any answers.

  • Apr 22, 2026, 8:33 AM

    @amorgner @tante

    > LLMs/machine learning won’t go away,
    > just like the Internet won’t.

    The more it is built into infrastructures, the harder it is to get rid of, but there are plenty of technologies that went away, or at least receive so little funding that they are not developed further (and LLMs need to be trained and run). So it is actually possible.

  • Apr 22, 2026, 8:40 AM

    @simulo @tante I hope you’re right. I’m rather pessimistic that AI, even in the broader sense, will disappear entirely, but a tad optimistic that we can make the technology (machine learning etc.) beneficial for most people.

  • Apr 22, 2026, 8:01 AM

    @tante Thanks!

    I've also had this analogy in my head of how fascism appealed to order: those idealized towns with small garden plots and no weeds.

    And now they appeal to fixing security bugs.

    It works because they're throwing so many resources at it that it *does* provide significant utility (as long as you don't ask too many detailed questions about where those resources came from), so that refusing is a real disadvantage that you get penalized for in the world.

    I don't like it.

  • Apr 22, 2026, 8:07 AM

    @tante Yeah, I did see it. In general, though, even adjusted for the disingenuous and exaggerated marketing, they ARE finding real issues - yes, humans could have found all of them too, but they're simply throwing billions at the field ...

    Refusing to fix a (real) security issue "just because" it's been reported based on fascist tech is ... not going to happen.

    And thus the normalization continues. They've found an offer projects can't refuse.

  • Apr 22, 2026, 8:10 AM

    @larsmb Personally, I think using LMs as fuzzers is probably one of the more reasonable uses (it doesn't need a very large model, as has been shown). It _is_ a pattern-matching problem, after all.
    It's just sad that nobody gets the money to do the manual checking and validating unless it comes as an LLM PR.

  • Apr 22, 2026, 10:38 AM

    @larsmb @tante Also? What do you bet that at least one project that says "No AI allowed for any reason!" is gonna get DDOSed by those same AI-running beauzeaux who take credit for finding Firefox bugs?

    It's totally an organised crime racket, from top to bottom. #AI_mafia

  • Apr 22, 2026, 8:08 AM

    @tante I'm trying to work out how I feel about it. While I consider myself an antifascist and a democracy fan, and while I hadn't used AI in any way before fall of 2025, I'm now involved in a project which uses AI as a document recognition tool. Traditional OCR systems can't cleanly parse nested tables of previously unknown format, so we fell back to AI only for text extraction. We don't use it to generate bullshit; we do it to make people's lives easier (you no longer need to mechanically type 100+ products and their codes into the CRM).

    I see why AI is fascist and agree with the points about fascism being embedded in its core. I wish that some time later people find a way to do the same thing we do (text recognition, data extraction and context understanding) using a different technology.

  • Apr 22, 2026, 8:19 AM

    @hipsterelectron @tante which is wrong in many ways, starting with somewhat sensitive data sent to a third party processor. There's no way for that system to run in an air gapped environment and it's a huge pain in the ass

  • Apr 22, 2026, 8:17 AM

    @tante thanks for this very good article.

    "“AI” is a political project" is a phrase that I will keep in mind for my next discussion in which I will (probably alone) argue why "AI" is one of the worst things lately.

  • Apr 22, 2026, 10:05 AM

    @tante Dude, that essay is incredibly insightful. It might be a bit too optimistic to assume that people get the references to Hannah Arendt's definition of fascism, but one should be able to fully understand the point even without that connection.

    Great piece of writing and I've shared it with lots of people who I think need to read it. Let's hope that they do that instead of just asking some slop machine for a summary.

  • Apr 22, 2026, 10:05 AM

    @zappes That was the intent: not to lean too heavily into theory but to provide a more self-contained piece.

  • Apr 22, 2026, 11:02 AM

    @tante Sigh, this has been written better before, though back then the target was computers and the screeds were written on spirit duplicators. Calling computers a fascist tool did not do anything to stop them from coming last time; it did, however, succeed in reducing the number of workers in the new IT-related jobs who unionized. A massive self-own. Can we try not to make the same mistake again?

  • Apr 22, 2026, 12:38 PM

    @tante Truly great piece! I especially like that you mentioned how widespread it's getting in research and how that undermines the foundation of knowledge creation and transmission.

    One other problem with the "AI is THE FUTURE" narrative that I feel is very important is how many people on the left embrace the criti-hype stance, assuming the text extruder is way more powerful/useful than it actually is, which makes it much easier to build this fatalist narrative.
