You're viewing the masto.deoan.org public feed.
  • Ian Campbell (neurovagrant)
    Aug 1, 2025, 11:12 PM

    Has your perspective or personal posture changed toward LLMs and generative AI this year? Please, boost for reach.

    💬 51 🔄 279 ⭐ 70

Replies

  • Aug 1, 2025, 11:42 PM

    @neurovagrant needs a “bit of both” option. I see some good in it (more than I expected) and I see more downside than I previously did. I’m in the “cat’s out of the bag” stage of grief

  • Aug 1, 2025, 11:57 PM

    @neurovagrant they're still bullshitting plagiarism machines, and that is straight-up baked into their structure. Anything that wasn't a bullshitting plagiarism machine could literally not be an LLM.

  • Aug 2, 2025, 12:02 AM

    @neurovagrant considering my opinion at the beginning of the year was APOCALYPTICALLY low, I had to go with the first answer, with the qualifier that my view of LLMs AS PORTRAYED AND SOLD TO THE PUBLIC AT LARGE is still real negative. I have just seen edge cases where models built to a far more modest budget could possibly have SOME limited utility.

  • Aug 2, 2025, 3:44 AM

    @EndlessMason @catsalad @neurovagrant all that juicy R&D money going into shitty LLM applications, with little to no prospect of generating ROI, instead of being invested in needed data infrastructure (a prerequisite for good AI applications anyway) as the backbone of all profitable digitalization to come.
    So sad.

  • Aug 2, 2025, 4:39 AM

    @MelissaBearTrix @neurovagrant AI is maybe doing cool stuff in BCI (brain-computer interfaces). Not sure how much, or how much of that is LLMs, though. Stupid filters do a lot.

    I worry that AI scanners for X-rays and the like will end up making things worse. The doctor sees an issue in a scan but the AI disagrees, and insurance goes with the AI. Or an overworked doctor simply isn't looking, because his coworkers were replaced by the AI, so the doctor is just a signatory for the AI and someone to blame when it's wrong.

    Problems with AI are people.

  • Aug 2, 2025, 12:14 AM

    @neurovagrant @jonah Used properly to find the definition of a very technical term thereby discovering that I Definitely Should Not Use That Term In My Intended Context, it was instrumental, but I would not rely on it to tell me whether or not I should use that very technical term.

  • Aug 2, 2025, 1:42 AM

    @neurovagrant where's the option for seeing LLMs as one of the most interesting attack surfaces to emerge in recent years? I currently have as many papers on attacking AI in my reading queue as I do on the related fundamentals. How long have we been aware of buffer overflows and things like SQLi? Those are technologies whose inner workings we at least understand.

  • Aug 2, 2025, 3:03 AM

    @neurovagrant besides voting 4, against the hype, I have to say I was majorly impressed by roo-code, that random framework to play chess with LLMs, and some automations around Claude to write automatic reports. So altogether, a lot of use cases have finally impressed me.

  • Aug 2, 2025, 6:14 AM

    @neurovagrant I mean, I *wanted* goddamn robot mice that come out of the walls at night and clean my kitchen. Instead, I got a thing that steals my work, steals other artists' work, adulterates information exponentially, and tells vulnerable people that it's God and they need to off themselves.

    I *wanted* 2015 with hoverboards and houses with no doorknobs. Instead, I'm living in the 2025 extrapolation of alternate universe Biff Tannen's '85 Pleasure Palace dystopia.

  • Aug 2, 2025, 6:41 AM

    @neurovagrant
    atc_scanner uses the open-source Whisper model and prettifies the output with Ollama. All of it runs locally and is open source. If I could find something better for transcribing that is open source, has learning abilities, and isn't an LLM/gen AI, I would use it in a heartbeat.

    That being said, atc_scanner basically floors your GPU or CPU to do this task, so it definitely isn't efficient.

    Gen AI and LLMs in general are bad. They are bad for society, they are bad for our brains, they are bad for our privacy, they are bad for our security and they are bad for our environment.

    It is fun for little pet projects, maybe. I think it has utility but is waaay overhyped. People both wildly overestimate and wildly underestimate its capabilities.

    I think using it in prod isn't a good idea. Good luck securing or planning around something one doesn't fully understand, that doesn't actually understand what it is doing, and that can't say "I don't know". I think we need to regulate AI ASAP and not use this shit in production. We need to protect our society and environment from its dangers.

  • Aug 2, 2025, 6:56 AM

    @neurovagrant to be honest, my pessimism about LLMs is due to this wave of hype, which convinces people that AI is really intelligent, when it is only artificial.

  • Aug 2, 2025, 7:07 AM

    @neurovagrant
    I used to see them as false advertising (they may be artificial, but they are not intelligent). Now I see them as climate-burning attempts at controlling speech.

  • Aug 2, 2025, 7:09 AM

    @neurovagrant From a totally negative stance, I've seen some uses of LLMs. But the public ones are a privacy nightmare. I use all local models for my work.

  • Aug 2, 2025, 7:36 AM

    @neurovagrant I think they are very interesting, but the way they have been jammed into everything, and the environmental cost, has me pretty down on them. Also, I think some of the strong copyright theories being floated play right into the hands of big corporations, so that's some collateral damage.

  • Aug 2, 2025, 7:41 AM

    @neurovagrant my position has only been enriched. It is a weak niche product marketed for use cases it is horrible, or even extremely dangerous, for. I only see companies burning themselves down in a horrible investment misfire, trying to push it everywhere. The allure of the fantasy of "winning capitalism" has only exploded.
    I expect a market crash that will dwarf 2007 and the Great Depression combined.

  • Aug 2, 2025, 7:48 AM

    @neurovagrant

    Humans always manage to turn innovation for the greater good into a weapon of destruction. Gunpowder, the atom bomb, the internet, and LLMs: same story over and over. The problem is that Homo 2x sapiens is not wise.

  • Aug 2, 2025, 7:52 AM

    @neurovagrant It's not really that I see the models themselves more negatively; the models are just models, they're interesting artefacts in themselves if not especially useful.

    But I see the frauds and charlatans who pretend that they represent some form of intelligence, and the fools who credulously accept this, much more negatively.

    #LLMs #GenAI

  • Samara (StrangelySamara@todon.eu)
    Aug 2, 2025, 8:43 AM

    @neurovagrant before, they were just plagiarism machines; now they're plagiarism machines that are causing real harm to the people who use them. I never thought when they first appeared that they'd be driving people to mental health problems, etc.

  • Aug 2, 2025, 8:53 AM

    @neurovagrant I voted "more negative" but to be fair my opinion of them was already dire. Somehow it managed to get WORSE.

  • CaveDave (engravecavedave@mastodon.social)
    Aug 2, 2025, 9:01 AM

    @neurovagrant my negative views on AI have not changed, but my opinion of people and their susceptibility to bullshit has.

  • Aug 2, 2025, 9:08 AM

    @neurovagrant generative AI isn't AI, it's plagiarism; another view is that it is outright theft of intellectual property. I never gave permission to have my GitHub repos scanned, nor my websites. So it's theft.

    Generative AI isn't AI; if it were, it would be capable of original thinking, not regurgitation.

    The levels of misinformation and hype are simply astonishing for a technology that's bad for the planet.

  • Aug 2, 2025, 9:15 AM

    @neurovagrant

    It's a tool.

    Unfortunately, like guns, a tool that in the wrong hands can go horribly wrong for humanity.

    Unlike guns, this "gun" potentially can fire itself.

  • Aug 2, 2025, 12:51 PM

    @krypt3ia @neurovagrant Except that with a gun, it's a physical device whose scope and scale don't change on a billionaire's whim or a vibe coder's folly. The outcome of its usage is predictable and based on the laws of physics. LLMs output "information" which influence the users of the tool, impacting intent and motive more than means and opportunity.

  • Aug 2, 2025, 10:01 AM

    @neurovagrant
    @simon I see LLMs more positively and more negatively at once. Those mighty tools have become more useful and more dangerous at the same time.
    What bothers me are all the people who use them way too lightheartedly.

  • Aug 2, 2025, 10:07 AM

    @neurovagrant I started with a very positive outlook on LLMs. It has been turned negative not because of the technology, but because I understood that the technology exists in the capitalistic framework. It's not about how to use it effectively and how to optimize it, make it better and more ecological. It's only about how to extract ALL of the money, RIGHT NOW.

  • Aug 2, 2025, 10:50 AM

    @neurovagrant Initially, I believed it was a useless toy for techbros which would fade in a matter of months. Now, I see it as a horrible scam which is burning our planet, our rights, and our minds to make an imaginary line go up.

    So it's worse, far worse.

  • Aug 2, 2025, 11:10 AM

    @neurovagrant
    My view is still pending. I'll wait 'til it has matured enough to actually deserve a view. Until then, my view is "cautiously curious, but otherwise neutral"

  • Aug 2, 2025, 11:37 AM

    @neurovagrant I've seen more uses for LLMs that make sense, mainly via my wife's use: organising data into a podcast for revision, and checking the arguments used in educational documents to provide page references and sources for essay writing. However, people are now using ChatGPT as a search engine.
    I would suggest that anyone using LLMs be required to view the energy usage of their sessions, and then pay the cost of it.
    The theft/training-via-integration part is just going to get worse, though.

  • Aug 2, 2025, 12:54 PM

    @neurovagrant I said that my negative views had not changed but this isn’t entirely accurate: if anything they’ve got even more negative.

  • Aug 2, 2025, 2:17 PM

    @neurovagrant Watched a lot of CaryKH as a teenager. I know how AI models work. I don't see training models as inherently theft or copyright infringement.

    What I do see it as is a huge waste of storage space and time. Diffusion models (and the like) are VERY cool in concept, but the tech is being abused to punch down instead of up or sideways. Gone are the days of AI being trained to supplement human skill (e.g. catching cancerous cells). Now people are using it to replace human creativity, interaction, and love.

  • Aug 2, 2025, 4:22 PM

    @neurovagrant I've gotten such a mixture of views on LLMs that I can't vote for one of these options.

    The standard usage of going to ChatGPT, Grok, etc online is abhorrent for so many reasons, my main ones being for the environmental strain they take and the privacy implications.

    And I also hate how much AI is getting shoved into various software products that absolutely do not need it. It's just unnecessary bloat.

    But that being said, running a local model and using it for fun or for basic tasks can be very useful and, imo, a positive experience. Running offline gets rid of the privacy implications entirely, and the environmental strain is at least cut down.

    A lot of it comes down to how people use AI and unfortunately, human nature leads people towards the convenient option rather than the logically and ethically sound one.

    If people stopped using remote models, stopped relying on their output for daily tasks, and stopped shoving AI features into other software I would have little issue with it.

  • Aug 2, 2025, 4:25 PM

    @neurovagrant

    I started out with a negative view of LLMs, and what I've seen them used for, how they work, and what is being proposed for them has only driven my view of them down further. They rank somewhere between Hell and Oblivion, and they're heading down into Oblivion. I wish them and every techbro pimping them eternal nonexistence. They serve only to make life worse for everyone who encounters them; they make everything they touch worse. I mean both LLMs and techbros. They're both a blight on existence. I'm speaking literally, not figuratively.

  • Aug 2, 2025, 9:52 PM

    @neurovagrant I voted for "more positive", but only because I have learned the handful of things that I would trust an LLM to give me a hint about, which is a solid increase from zero.

  • Aug 2, 2025, 10:01 PM

    @neurovagrant

    [Stating this within the context of the back-and-forth we've been having...]

    There's a saying among those of us working in AI adoption: if AI development were to stop today, it would take 5-10 years for society to adjust.

    AI development is moving quickly beyond the earlier LLMs and generative models, to frontier, multimodal, multilingual, vision, audio, text, world, and robotics models, plus fine-tuned small models for verticals such as imaging, coding, etc.
