You're viewing the front-end.social public feed.

Replies

  • Mar 27, 2026, 6:31 PM

    @demize they're so good. terry pratchett used to have like 5 levels of nested footnotes spanning 3 pages and i aspire to that level of whimsy

    💬 1 🔄 1 ⭐ 0
  • Mar 29, 2026, 6:25 AM

    @jneen My most recent project was a Python implementation of an aerosol scrubbing model that sits inside a larger R codebase. The R code is not documented to the level we need, so the Python implementation was mainly an excuse to chase down references for everything left undocumented or unattributed in the R application. The original documentation was about 50 pages for the entire application; mine was about 200 pages for just one model (and even with the example plots and source code stripped out, there are still at least 50 pages of design info and technical basis). In this case the documentation was far more valuable than the code, because it cited chapter and verse for where every equation, correlation, and piece of data came from. It helps to have a pedantic technical reviewer who holds you accountable.

    So yeah, I appreciate the work that went into your post and the background detail. :)

    Aside: Is a commercial chatbot even capable of providing references for its work? My understanding is that all the attribution is laundered away when the LLM is constructed; all it can produce is obsequious hearsay...

    💬 0 🔄 0 ⭐ 0
  • Mar 29, 2026, 1:23 PM

    @arclight to your other question, part of the reason there's ambiguity here is that an LLM can *claim* to provide references and to introspect about its output, but that introspection and those references are still just... output

    💬 1 🔄 0 ⭐ 0
  • Mar 29, 2026, 3:53 PM

    @jneen We've seen, from the legal cases where lawyers were caught out citing fabricated precedents and from journal papers with fabricated references, that the system will simply stick tokens together to meet its optimization threshold. So even if attribution hadn't been intentionally bleached away, nothing the chatbot emitted could be trusted unless some trustworthy deterministic (non-LLM) system could verify that the citations exist and assess their relevance. *Everything* is a fabrication.

    What concerns me is not the LLM part of the chatbot - that's just a pile of linear algebra - it's the cobbled-together UI that responds like an obsequious, servile intern: Stepford ELIZA on Prozac. That part of the system is built on 30+ years of dark-pattern research to keep people spending tokens. Right or wrong, as long as users keep spending, the system is operating as designed. The only acceptance test is that the line goes up.

    💬 2 🔄 2 ⭐ 0
  • Mar 29, 2026, 4:05 PM

    @jneen I get extra twitchy about chatbot use because my job is software QA on nuclear safety analysis code (here's a decent technical basis for an earlier related code: osti.gov/biblio/10200672/). We have enough problems with coarse models and missing or uncertain data; we don't need a machine confidently fabricating nonsense. I'm not going in front of a regulator to explain that our answers are bullshit because someone trusted a chatbot to fill in the blanks. Health and safety of the public comes first, then environmental protection, then protection of equipment. It's simply unethical to use these systems in any part of the safety analysis, design, or licensing process. There's too much at stake.

    💬 0 🔄 2 ⭐ 4