🇨🇦🇩🇪🇨🇳张殿李🇨🇳🇩🇪🇨🇦

  • 46 Posts
  • 251 Comments
Joined 11 months ago
Cake day: November 14th, 2023





  • The assumption I’m making is that the total hardware and energy cost scales linearly with the API pricing.

    This is not a good assumption. NONE of the GPT plans makes any profit, so the pricing isn’t linked to hardware and energy costs; it’s aimed at addicting people to the product so the prices can be raised into profitability at some imagined future point when people can’t live without their shit products.

    Though what we should be thinking about is not just the cost in absolute terms, but in relation to the benefit.

    The benefit is zero, so the cost:benefit ratio is ∞.




  • (like people who get angry at something you say and go to your profile to systematically downvote everything you’ve done, or organized dogpile voting, or …)

    I actually saw a system once for dealing with that which I thought had serious potential. If you wanted to downvote someone, it cost you time. Every time you downvoted, the system would pause you, rendering you unable to use it for a period of time. On your first downvote the pause was measured in milliseconds, but with every downvote you cast in a given time period (by default a day, I think?) the pause increased exponentially. So by your 20th downvote you were being frozen for a minute, and by the time you hit your hundredth you were frozen for a week. (It was, in fact, technically impossible to reach your hundredth as a result.)

    The idea behind this was that the community could downvote you to perdition if you were a jackass (since it would be a minuscule freeze time for each of them), but if you tried to counter that by downvoting everybody who downvoted you, you’d rapidly be frozen out of the community.

    Of course the problem with that was that it rested on the naive supposition that people wouldn’t coordinate downvoting circles; that you wouldn’t be able to arrange brigading and dogpiling. But I still think something interesting could be salvaged from the idea by people smarter than I am. After all, the statistics are all there, and it should be possible to identify voting circles, sock puppet accounts, and the like from statistical behaviour, no?
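    The scheme, as I remember it, fits in a few lines. The exact numbers below (1 ms base pause, doubling per downvote, a one-day reset window) are my own illustrative assumptions, not what the original system used:

```python
import time


class DownvoteThrottle:
    """Sketch of the described scheme: each successive downvote within a
    rolling window multiplies the lockout, so casual downvotes are free
    but sustained downvoting rapidly freezes you out."""

    def __init__(self, base_ms=1.0, factor=2.0, window_s=86_400):
        self.base_ms = base_ms      # pause for the first downvote (assumed 1 ms)
        self.factor = factor        # growth per downvote (assumed doubling)
        self.window_s = window_s    # rolling reset window (assumed one day)
        self.history = []           # timestamps of recent downvotes

    def pause_for_next_downvote(self, now=None):
        """Record a downvote and return the lockout, in seconds."""
        now = time.time() if now is None else now
        # Drop downvotes that have aged out of the window.
        self.history = [t for t in self.history if now - t < self.window_s]
        n = len(self.history)       # downvotes already cast this window
        self.history.append(now)
        # Exponential lockout: base_ms * factor**n, converted to seconds.
        return self.base_ms * (self.factor ** n) / 1000.0
```

    With doubling, the first downvote costs a millisecond and the 20th costs roughly nine minutes; well before the hundredth the lockout exceeds any practical timescale, which is the “technically impossible to reach” property described above.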







  • Sounds like all your problems are with capitalism and not LLMs but you can’t see that.

    Show me an anarchist use of LLMs that respects consent. I’ll wait. Indeed, since there are no such examples and the challenge is thus unfair, I’ll loosen it: just describe such an LLM: one that people would explicitly opt in to, instead of having to keep track of every two-bit, LLM-pumping moron that pops up so they can opt out.

    That’s the foundation of the dismantling of the corpse of human creation, after all: the lack of a consent mechanism. If you can’t conceive of a feasible way to provide said consent, then your system is just the looting of the corpse of human creativity.

    Get some empathy for people in different circumstances than you. You sound like a child.

    And you sound like a techbrodude (read: child) throwing a tantrum at people pointing out the absence of clothing on your emperor.

    We’re never gonna see eye to eye …

    This is true. Because you believe in idiotic bullshit and I don’t.


    Like, have you not written a stupid tedious email to someone you didn’t like, where you couldn’t be bothered to put in more than the 2 seconds it takes to prompt someone or something else to deal with it for you?

    No, I haven’t. I call out bullshit in my job instead of acquiescing to it. I’m not sure when I last wrote an email at work at all, not to mention a stupid, tedious one.

    If there’s a part of your job that can be done by degenerative AI, change how your job works. If your boss won’t let you change the bullshit, change your job. I’ve been doing this since I was 15. It’s not that hard.

    Can you elaborate on how and the mechanisms by which this is happening as you see?

    Here, this may help you grasp it.

    Why do you see it that way?

    Because I looked into how it works and spotted the bit where it needs a huge volume of input data. That input data is going to be indiscriminately vacuumed up, because it’s not feasible to check each piece for permission. (Or do you naively believe that putting a disclaimer on, say, a blog, saying “this material is specifically not permitted to be used as training material for AI projects”, means it won’t be Hoovered up with everything else?)

    And here’s a cool little factoid for you if you don’t believe it’s being vacuumed up indiscriminately: Meta announced a new AI siphonbot, and gave the information needed to block it, two weeks after they started using it. And that counts as comparatively good behaviour: most of the AI bot-crawlers have been found out by sleuthing, not by an announcement. Even AI research teams at universities aren’t doing the basics of ethical conduct: getting consent.
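    To be clear about the mechanism: blocking a crawler via robots.txt only works once its user-agent string is public, which is why announcing the bot two weeks late matters. A sketch, using two crawler names that have been publicly documented (whether any given bot actually honours the file is another question entirely):

```
# robots.txt — opt-out is only possible once the crawler's name is known
# (and only effective if the crawler chooses to honour it)
User-agent: GPTBot
Disallow: /

User-agent: Meta-ExternalAgent
Disallow: /
```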

    Do you not see any circumstances in which it could be useful?

    Yes. It’s very useful for non-creatives to pretend they’re actually creative when they send a machine to stitch together the corpse of human culture in entertaining new shapes rendered from rotting flesh. Personally, though, I can live without masterpieces like “Sonic the hedgehog gives birth to Borat” or whatever idiotic shit these keyboard monkeys think is art.

    Doesn’t mean the tech isn’t useful cause you’ve not seen it used for anything good.

    There is no use sufficiently good to justify the dismemberment and destruction of human culture. Sorry.



  • Is Lemmy even a good platform for discussion to begin with?

    No.

    Anything with simplistic popularity polls attached to literally everything people provide is pretty much automatically going to suck. Even if everybody is voting in good faith you’re just going to get an echo chamber. Once you factor in that a very large number of people don’t vote in good faith (like people who get angry at something you say and go to your profile to systematically downvote everything you’ve done, or organized dogpile voting, or …) you begin to see the real problem lurking behind the obvious one.

    Lemmy was an attempt to replace the festering pile of groupthink that was Reddit with something “On The Fediverse” (rather like “On The Blockchain”, only less morally repugnant). But instead of thinking about where and how Reddit succeeded, where and how it failed, and trying to do better, it just tried to clone Reddit, while letting Reddit’s flaws be magnified by its distributed nature.


  • I don’t agree with that.

    Of course you don’t. You’re one of the non-creatives who thinks that “prompt engineering” makes you a creative, undoubtedly.

    But the first “L” in “LLM” (“Large”) says it all. The very definition of degenerative AI requires the wholesale dismemberment of human culture to work and, indeed, there’s already a problem: the LLM purveyors have hit a brick wall. They’ve run out of material to siphon away from us, and are now stuck finding “new” ways to remix what they’ve already butchered, in the hope that we won’t notice the stench from the increasingly rotten corpse.

    LLMs are not a knife. They are a collection of knives and bone saws purpose-built to dismember culture. You can use those knives and saws to cut your steak at dinner, I guess, but they’d be clumsy and unwieldy and would produce pretty weird slices of meat on your plate. (Meat that has completely fucked-up fingers.) But this is like how you can use guns to just shoot at paper targets: it’s possible, but it’s not the purpose for which the gun was built.

    LLMs and the degenerative AI built from them will never be anything but the death of culture. Enjoy your rotting corpse writing and pictures while it lasts, though!