Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • o7___o7@awful.systems · 4 points · 5 hours ago

    Better Offline was rough this morning in some places. Props to Ed for keeping his cool with the guests.

  • HedyL@awful.systems · 8 points · 15 hours ago

    I have been thinking about the true cost of running LLMs (of course, Ed Zitron and others have written about this a lot).

    We take it for granted that large parts of the internet are available for free. Sure, a lot of it is plastered with ads, and paywalls are becoming increasingly common, but thanks to economies of scale (and a level of intrinsic motivation/altruism/idealism/vanity), it still used to be viable to provide information online without charging users for every bit of it. Same appears to be true for the tools to discover said information (search engines).

    Compare this to the estimated true cost of running AI chatbots, which (according to the numbers I’m familiar with) may be tens or even hundreds of dollars a month for each user. For this price, users would get unreliable slop, and this slop could only be produced from the (mostly free) information that is already available online while disincentivizing creators from producing more of it (because search engine driven traffic is dying down).

    I think the math is really abysmal here, and it may take some time to realize how bad it really is. We are used to big numbers from tech companies, but we rarely break them down to individual users.
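
    As a back-of-envelope illustration of what breaking it down to individual users can look like (both numbers below are pure assumptions for the sake of the sketch, not anyone’s reported figures):

    ```c
    /* Illustrative only: the spend and user figures are assumptions, not data. */
    #include <stdio.h>

    int main(void) {
        double monthly_inference_cost_usd = 500e6; /* assumed total monthly compute spend */
        double monthly_active_users       = 10e6;  /* assumed active users */
        printf("per-user cost: $%.0f per month\n",
               monthly_inference_cost_usd / monthly_active_users);
        return 0;
    }
    /* prints "per-user cost: $50 per month"; staff, training runs, and free-tier
       users who generate cost but no revenue would only push the figure higher */
    ```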

    Somehow reminds me of the astronomical cost of each bitcoin transaction (especially compared to the tiny cost of processing a single payment through established payment systems).

    • corbin@awful.systems · 4 points · 4 hours ago

      I’ve done some of the numbers here, but don’t stand by them enough to share. I do estimate that products like Cursor or Claude are being sold at roughly an 80-90% discount compared to what’s sustainable, which is roughly in line with what Zitron has been saying, but it’s not precise enough for serious predictions.

      Your last paragraph makes me think. We often idealize blockchains with VMs, e.g. Ethereum, as a global distributed computer, if the computer were an old Raspberry Pi. But it is Byzantine distributed; the (IMO excessive) cost goes towards establishing a useful property. If I pick another old computer with a useful property, like a radiation-hardened chipset comparable to a Gamecube or G3 Mac, then we have a spectrum of computers to think about. One end of the spectrum is fast, one end is cheap, one end is Byzantine, one end is rad-hardened, etc. Even GPUs are part of this; they’re not that fast, but can act in parallel over very wide data. In remarkably stark contrast, the cost of Transformers on GPUs doesn’t actually go towards any useful property! Anything Transformers can do, a cheaper more specialized algorithm could have also done.

    • besselj@lemmy.ca · 4 points · 16 hours ago

      My guess is that vibe-physics involves bruteforcing a problem until you find a solution. That method sorta works, but is wholly inefficient and rarely robust/general enough to be useful.

      • Mii@awful.systems · 8 points · 14 hours ago

        If infinite monkeys with typewriters can compose Shakespeare, then infinite monkeys with slop machines can produce Einstein (but you need to pump in infinite amounts of money first into my CodeMonkeyfy startup, just in case).

      • Architeuthis@awful.systems · 11 points · 15 hours ago

        Nah, he’s just talking to an LLM.

        “I’ll go down this thread with [Chat]GPT or Grok and I’ll start to get to the edge of what’s known in quantum physics and then I’m doing the equivalent of vibe coding, except it’s vibe physics,” Kalanick explained. “And we’re approaching what’s known. And I’m trying to poke and see if there’s breakthroughs to be had. And I’ve gotten pretty damn close to some interesting breakthroughs just doing that.”

        And I don’t think you can brute force physics in general; having to experimentally confirm or disprove every random-ass intermediary hypothesis the brute-force generator comes up with seems like quite the bottleneck.

        • besselj@lemmy.ca · 7 points · 15 hours ago

          For sure. There’s an infinite number of ways to get things wrong in math and physics. Without a fundamental understanding, all they can do is prompt-fondle and roll dice.

  • scruiser@awful.systems · 11 points · 23 hours ago

    So recently (two weeks ago), I noticed Gary Marcus made a lesswrong account to directly engage with the rationalists. I noted it in a previous stubsack thread

    Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology, and quote P(some event) based on numbers pulled out of his ass.

    And sure enough, he has started talking about P(Doom). I hate being right. To be more than fair to him, he is addressing the scenario of Elon Musk or someone similar pulling off something catastrophic by placing too much trust in LLMs shoved into something critical. But he really should know better by now that using their lingo and their crit-hype terminology strengthens them.

    • Architeuthis@awful.systems · 5 points · 15 hours ago

      using their lingo and their crit-hype terminology strengthens them

      We live in a world where the US vice president admits to reading siskind AI fan fiction, so that ship has probably sailed.

  • BigMuffN69@awful.systems · 13 points · 2 hours ago

    Remember last week when that study on AI’s impact on development speed dropped?

    A lot of peeps’ takeaway from this little graphic was “see, the impact of AI on sw development is a net negative!” I think the real takeaway is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science, and that their experimental setup is garbage.

    https://substack.com/home/post/p-168077291

    “First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple.”

    (I am once again shilling Ben Recht’s substack.)

    • YourNetworkIsHaunted@awful.systems · 9 points · 18 hours ago

      While I also fully expect the conclusion to check out, it’s also worth acknowledging that the actual goal for these systems isn’t to supplement skilled developers who can operate effectively without them, it’s to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.

      • BigMuffN69@awful.systems · 3 points · 2 hours ago

        True. They’re not building city-sized data centers and offering people nine-figure salaries for no reason: they’re trying to front-load the cost of paying for labour for the rest of time.

    • TinyTimmyTokyo@awful.systems · 10 points · 23 hours ago

      When you look at METR’s web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.

    • David Gerard@awful.systems (mod) · 9 points · 1 day ago

      oh yeah that was obvious when you see who they are and what they do. also, one of the large opensource projects was the lesswrong site lololol

      i’m surprised it’s as well constructed a study as it is even given that

    • V0ldek@awful.systems · 6 points · 11 hours ago

      It’s extremely annoying everywhere. GitHub’s updates were about AI for so fucking long that I stopped reading them, which means I now miss actually useful stuff until someone informs me of it months later.

      For example, did you know GitHub Actions now has really good free ARM runners? It’s amazing! I love it! Shame GitHub only bothers to tell me about their revolutionary features of “please spam me with useless PRs” and… make a pong game? What? Why would I want this?

    • BlueMonday1984@awful.systems · 14 points · 1 day ago

      Potential hot take: AI is gonna kill open source

      Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I’m not sure it can recover from.

      • ________@awful.systems · 9 points · 1 day ago

        I remember popping into IRC or a mailing list to ask subsystem questions, to learn from the sources themselves how something works (or should work). Depending on the who, what, and where, I definitely had differing experiences, but overall I felt like there was typically a helpful person on the other side. Nowadays I fear the slop will make people a lot less willing to help when they are overwhelmed with AI-generated garbage patches or mails, losing some of the rose-tinted charm of open source.

        • BlueMonday1984@awful.systems · 11 points · 1 day ago

          The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS’ capacity to find and fix bugs.

          Baldur Bjarnason has predicted that FOSS is at risk of being hit by “a vicious cycle leading to collapse”, and security is a major part of his hypothesised cycle:

          1. Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.

          2. Many of these projects either bitrot serious bugs or get taken over by malicious actors, who are highly motivated because they can’t rely on pervasive memory bugs anymore for exploits.

          3. OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.

          4. That decline in users leads to even more maintainers stepping back.

          • gerikson@awful.systems · 11 points · 1 day ago

            yeah but have you considered how much it’s worth that gramma can vibecode a todo app in seconds now???

    • nightsky@awful.systems · 15 points · 1 day ago

      rsyslog goes “AI first”

      what

      Thanks for the “from now on stay away from this forever” warning. Reading that blog post is almost surreal (“how AI is shaping the future of logging”), I have to remind myself it’s a syslog daemon.

      • froztbyte@awful.systems · 10 points · 1 day ago

        I would’ve stan’d syslog-ng but they’ve also been pulling some fuckery with docs again lately that’s making me anxious, so I’m very :|||||

  • nightsky@awful.systems · 27 points · 2 days ago

    I need to rant about yet another SV tech trend which is getting increasingly annoying.

    It’s something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now, which is that they automatically translate content, without ever asking. So English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I’m not talking about the translation feature in web browsers, it’s the websites themselves.)

    Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites that does this now. Entire product pages, with titles and descriptions and everything, are auto-translated, without ever asking me if I want that.

    On a product page I then saw:

    Material: gefühlt

    This was very strange… because that makes no sense at all. “Gefühlt” is a form (the past participle) of the verb “fühlen”, which means “to feel”. It is used in past-tense forms of the verb.

    So, to make sense of this you first have to translate that back to English: the past form of “to feel” is “felt”. And of course “felt” can also mean a kind of fabric (which in German is called “Filz”), so it’s a word with more than one meaning in English. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they’re in.

    And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And Ebay does the same now, and the translated product titles and descriptions are a complete shit show as well.

    And Youtube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It’s unbearable. At least there’s a button to switch back to the original audio, but I keep having to press it. And now Youtube Shorts is doing it too, except that the YT Shorts video player does not seem to have any button to disable it at all!

    Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

    • antifuchs@awful.systems · 5 points · 20 hours ago

      Ooooh, that would explain a similarly weird interaction I had on a ticket-selling website, buying a streaming ticket to a live show for the German retro game discussion podcast Stay Forever: they translated the title of the event as “Bleib für immer am Leben” (“stay alive forever”), so I guess they named it “Stay Forever Live”? No way to know for sure, of course.

    • HedyL@awful.systems · 9 points · 2 days ago

      Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

      This really gets on my nerves too. They probably came up with the idea that they could increase time spent on their platforms and thus revenue by providing more content in their users’ native languages (especially non-English). Simply forcing it on everyone, without giving their users a choice, was probably the cheapest way to implement it. Even if this annoys most of their user base, it makes their investors happy, I guess, at least over the short term. If this bubble has shown us anything, it is that investors hardly care whether a feature is desirable from the users’ point of view or not.

      • fullsquare@awful.systems · 9 points · 2 days ago

        if it’s opt-out, it also keeps use of the shitty ai dubbing high, thus making it an artificial use case. it’s like with gemini counting every google search as a single use of it

    • Soyweiser@awful.systems · 7 points · 2 days ago

      Ah, I’m not the only one, yes, very annoying. I wonder if there isn’t also a setting where they can ask the browser about the user’s preferred language. Like how you can change languages on a Windows install and some installers etc. follow that preferred language.
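
      (There is such a setting, more or less: browsers already send their language preferences in the Accept-Language request header, so a site could honour that instead of guessing. A minimal sketch of reading it, using a made-up header value:)

      ```c
      /* Sketch only: pick the first (typically most-preferred) entry from an
         Accept-Language header instead of guessing the user's language. */
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          char accept_language[] = "de-DE,de;q=0.9,en;q=0.8"; /* example header value */
          char *first = strtok(accept_language, ",");
          if (first) {
              char *weight = strchr(first, ';'); /* strip any ";q=..." weight */
              if (weight) *weight = '\0';
              printf("user prefers: %s\n", first); /* prints "user prefers: de-DE" */
          }
          return 0;
      }
      ```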

    • BlueMonday1984@awful.systems · 8 points · 2 days ago

      Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?

      Considering how many are Trump bros, they probably consider getting consent to be Cuck Shit™ and treat hearing anything but English as sufficient grounds to call ICE.

    • gerikson@awful.systems · 7 points · 2 days ago

      I found out about that too when I arrived at Reddit and it was translated to Swedish automatically.

      • nightsky@awful.systems · 7 points · 2 days ago

        Yes, right, Reddit too! Forgot that one. When I visit there I use alternative Reddit front-ends now which luckily spare me from this.

    • istewart@awful.systems · 6 points · 2 days ago

      An underappreciated 8th-season Star Trek: TNG episode where Data tries to get closer to humanity by creating an innovative new metamaterial out of memories of past emotions

    • fullsquare@awful.systems · 4 points · 2 days ago

      aliexpress has done that since forever, but you can just set the display language once and you’re done. these ai dubs are probably the worst so far, but they can be turned off by the uploader (it’s opt-out) (for now)

  • hrrrngh@awful.systems · 18 points · 2 days ago

    Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611

    Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.

    . . .

    Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.

    taking a wild guess it’s Yudkowsky. “very knowledgeable people” and “many/most experts” are staying on my AI apocalypse bingo sheet.

    even among people critical of AI (who don’t otherwise talk about it that much), the AI apocalypse angle seems really common and it’s frustrating to see it normalized everywhere. though I think I’m more nitpicking than anything because it’s not usually their most important issue, and maybe it’s useful as a wedge issue just to bring attention to other criticisms about AI? I’m not really familiar with Bernie Sanders’ takes on AI or how other politicians talk about this. I don’t know if that makes sense, I’m very tired

    • mountainriver@awful.systems · 9 points · 1 day ago

      Not surprised. Making Hype and Criti-hype the two poles of the public debate has been effective in corralling people who get that there is something wrong with “AI” into Criti-hype. And politicians need to be generalists, so the trap is easy to spring.

      Still, always a pity when people who should know better fall into it.

  • self@awful.systems · 16 points · 2 days ago

    404media posted an article absolutely dunking on the idea of pivoting to AI, as one does:

    media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked

    • blakestacey@awful.systems (OP) · 19 points · 2 days ago

      We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already

      Fucking ghastly equivocation. Not just between “LLMs” and “machine learning”, but between opening a website that has a chatbot icon I never click and actually wasting my time asking questions to the slop machine.

      • BlueMonday1984@awful.systems · 9 points · 2 days ago

        This is pure speculation, but I suspect machine learning as a field is going to tank in funding and get its name dragged through the mud by the popping of the bubble, chiefly due to its (current) near-inability to separate itself from AI as a concept.

      • antifuchs@awful.systems · 7 points · 2 days ago

        It’s distressingly pervasive: autocorrect, speech recognition (not just in voice assistants, in accessibility tools), image correction in mobile cameras, so many things that are on by default and “helpful”.

        • istewart@awful.systems · 10 points · 2 days ago

          Apparently, for some corporate customers, Outlook has automatically turned on AI summaries as a sidebar in the preview pane for inbox messages. No, nobody I’ve talked to finds this at all helpful.

  • BlueMonday1984@awful.systems · 15 points · 2 days ago

    The curl Bug Bounty is getting flooded with slop, and the security team is prepared to do something drastic to stop it. Going by this specific quote, reporters falling for the hype is a major issue:

    As a lot of these reporters seem to genuinely think they help out, apparently blatantly tricked by the marketing of the AI hype-machines, it is not certain that removing the money from the table is going to completely stop the flood. We need to be prepared for that as well. Let’s burn that bridge if we get to it.

    • ________@awful.systems · 11 points · 2 days ago

      Reading through some of the examples at the end of the article, it’s infuriating how these slop reports are opened and, when the patient curl developers try to give them the benefit of the doubt, the reporter replies with “you have a vulnerability and I cannot explain further since I’m not an expert”. Oh, but for sure it’s broken and you are expert enough to know? In one of the examples the reporter kept replying about how a strcpy() could be unsafe, and the curl devs were kindly explaining that yes, in general that function has potential for issues, but their usage was not such a case. The reporter just repeats without paying attention. Insanity.
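
      For anyone following along, here is a hypothetical sketch (not curl’s actual code) of the distinction the devs kept explaining: strcpy() is only a problem when the destination cannot be guaranteed to hold the source.

      ```c
      /* Hypothetical example, not taken from curl: the same function is fine or
         dangerous depending on what is known about the source and destination. */
      #include <stdio.h>
      #include <string.h>

      int main(void) {
          char dst[16];

          /* Fine: the source is a compile-time constant known to fit in dst. */
          strcpy(dst, "hello");

          /* Risky pattern: an attacker-controlled string of unknown length could
             overflow dst, so use a bounded copy that truncates instead. */
          const char *untrusted = "potentially very long attacker-controlled input";
          snprintf(dst, sizeof dst, "%s", untrusted);

          printf("%s\n", dst);
          return 0;
      }
      ```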

      I love working in systems, writing C and assembly, but I’ve grown many gray hairs over the years being yelled at that “C is the worst” or “lol memory bug” or the classic “this thing isn’t working perfectly for me so it must have been written in C and we need to rewrite it entirely in (alpha) language which is for sure better than the collective centuries of expertise in C existing now”. These LLMs sure do amplify these obnoxious voices because now the fancy chatbot says so.

      • BlueMonday1984@awful.systems · 9 points · 1 day ago

        Reading through some of the examples at the end of the article, it’s infuriating how these slop reports are opened and, when the patient curl developers try to give them the benefit of the doubt, the reporter replies with “you have a vulnerability and I cannot explain further since I’m not an expert”

        At that point, I feel the team would be justified in telling these slop-porters to go fuck themselves and closing the report - they’ve made it crystal clear they’re beyond saving.

        (And on a wider note, I suspect the security team is gonna be a lot less willing to give benefit of the doubt going forward, considering the slop-porters are actively punishing them for doing so)

        • ________@awful.systems · 4 points · 1 day ago

          It’s unfortunate that removing the bug bounty payout is probably the best immediate remedy for some filtering, but with curl being everywhere, resume padders are still going to rush to generate slop reports or patches. I hope they are also faster and more direct with communication. Their current patience and politeness is admirable.

    • BlueMonday1984@awful.systems · 9 points · 2 days ago

      Shot-in-the-dark prediction here - the Xbox graphics team probably won’t be filling those positions any time soon.

      As a sidenote, part of me expects more such cases to crop up in the following months, simply because the widespread layoffs and enshittification of the entire tech industry is gonna wipe out everyone who cares about quality.

    • YourNetworkIsHaunted@awful.systems · 12 points · 2 days ago

      I’m not comfortable saying that consciousness and subjectivity can’t in principle be created in a computer, but I think one element of what this whole debate exposes is that we have basically no idea what actions make consciousness happen or how to define and identify that happening. Chatbots have always challenged the Turing test because they showcase how much we tend to project consciousness into anything that vaguely looks like it (interesting parallel to ancient mythologies explaining the whole world through stories about magic people). The current state of the art still fails at basic coherence over shockingly small amounts of time and complexity, and even when it holds together it shows a complete lack of context and comprehension. It’s clear that complete-the-sentence style pattern recognition and reproduction can be done impressively well in a computer and that it can get you farther than I would have thought in language processing, at least imitatively. But it’s equally clear that there’s something more there and just scaling up your pattern-maximizer isn’t going to replicate it.

  • Steve@awful.systems · 12 points · 2 days ago

    My new video about the anti-design of the tech industry where I talk about this little passage from an ACM article that set me off when I found it a few years back.

    In short, before software started eating all the stuff, “design” meant something. It described a process of finding the best way to satisfy a purpose. It was a response to the purpose.

    The tech industry takes computation as being an immutable means and finds purposes it may satisfy. The purpose is a response to the tech.

    p.s. sorry to spam. :)

    vid: https://www.youtube.com/watch?v=ollyMSWSWOY pod: https://pnc.st/s/faster-and-worse/8ffce464/tech-as-anti-design

    threads bsky: https://bsky.app/profile/fasterandworse.com/post/3ltwles4hkk2t masto: https://hci.social/@fasterandworse/114852024025529148

  • gerikson@awful.systems · 6 points · 2 days ago

    Haven’t really kept up with the pseudo-news of VC-funded companies acquiring each other, but it seems Windsurf (previously courted by OpenAI) is now gonna be purchased by the bros behind Devin.