ipsc shooter, shitposter

  • 1 Post
  • 116 Comments
Joined 2 months ago
Cake day: January 19th, 2025


  • OK, first dumb question, about the block of code that you had below this line:

    Given a query of “write a json schema to represent a comment thread on a social media site like reddit”, it’ll do this bit of reasoning:

    Was this an actual output from an LLM, or a hypothetical example that you wrote up yourself? It’s not quite clear to me. It’s a lot of output, and I don’t want to insult you if you wrote all of that yourself.

    First, each comment has an ID, author info, content, timestamps, votes, and replies. The replies are nested comments, forming a tree structure. So the schema should allow for nested objects within each comment’s replies.

    I ask because I really want to nitpick the hell out of this design decision:

    First, each comment has an ID, author info, content, timestamps, votes, and replies. The replies are nested comments, forming a tree structure. So the schema should allow for nested objects within each comment’s replies.

    Adding the replies as full embedded items is going to absolutely murder performance. A better scheme would be for replies to be a list/array of IDs or URLs, or a URL to an API call that enumerates all the replies, instead of enumerating every reply and embedding it directly. Depending on the implementation, you could also easily end up doing the classic N+1 query that a lot of web applications fall into.
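    To make the nitpick concrete, here’s a rough sketch of the two shapes (the field names and URL are my own hypothetical example, not the LLM’s actual output):

    ```json
    {
      "embedded_tree_variant": {
        "id": "c1",
        "author": "alice",
        "content": "top-level comment",
        "replies": [
          {
            "id": "c2",
            "author": "bob",
            "content": "reply, carrying its own nested replies array",
            "replies": []
          }
        ]
      },
      "by_reference_variant": {
        "id": "c1",
        "author": "alice",
        "content": "top-level comment",
        "reply_ids": ["c2", "c3"],
        "replies_url": "/api/comments/c1/replies"
      }
    }
    ```

    With the second shape, the client or the API can page through replies on demand instead of materializing the entire tree on every fetch.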

    But then again at this point I’m arguing with an LLM which is generating absolutely dogshit code.



  • I think I understand that it says the exchange value of a product is proportionate to the labor time used to create it. Am I getting that correct?

    This is my understanding of things too, so I think you are correct. Although I think “time” is not the only relevant dimension. More broadly, I think the value of a product is linked to the labor that was put into it across many different factors, not just time alone.

    “Is a cookie still worth its labor if it’s burnt? Is a pie worth the labor if it’s a shit pie?”

    Critiques like this are irrelevant because they focus on narrow situations where a single piece of labor did not turn out as desired. Mistakes get made in the process of creating anything, and most normal people don’t fixate on the results that didn’t turn out correctly. The example of the labor to produce a “shit pie” is not even worth considering.

    If anything, you could make a case that for a piece of labor that is very difficult and prone to failure, the value of the product also includes all the failed attempts at creating it correctly, or the difficulty involved.
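    A rough back-of-the-envelope version of that point (my own toy numbers, not anything from actual theory): if a pie takes 1 hour of labor and 1 in 4 attempts comes out burnt, then each good pie effectively embodies 1 hour ÷ 0.75 ≈ 1.33 hours of labor, because the failed attempts still had to be worked through.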

    I am not an economist, I do not read theory, but this is my uninformed take.



  • You seem to think that the way these things work is by just simply pulling up chunks of existing code from a big db

    Sorry, that is not what I think. It’s just that surely there was something in the training data similar enough to your JSON for the prediction to come up with something that looks close. It’s very annoying having to discuss the intricate details of how an LLM works and then get nitpicked on a point I don’t think I was making.


  • For example, just the other day I used it to come up with a SQL table schema based on some sample input JSON data

    How likely is it that this JSON structure and its corresponding database schema are already somewhere in the (immense) training data? Was it novel? New?

    Like I just have continual questions about an LLM doing 3NF a priori on novel data.
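    To spell out what I mean by “doing 3NF”, here’s the kind of transformation in question (a toy example I made up, not my actual JSON or the schema the LLM produced):

    ```sql
    -- Hypothetical input JSON, with the same author object repeated per comment:
    --   { "comments": [
    --       { "id": 1, "body": "hello",
    --         "author": { "id": 7, "name": "alice", "email": "a@example.com" } },
    --       { "id": 2, "body": "hi again",
    --         "author": { "id": 7, "name": "alice", "email": "a@example.com" } } ] }

    -- A 3NF schema splits the repeated author fields into their own table, so
    -- name and email depend only on the author key, not on each comment row:
    CREATE TABLE authors (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT NOT NULL
    );

    CREATE TABLE comments (
        id        INTEGER PRIMARY KEY,
        body      TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES authors (id)
    );
    ```

    The question is whether the model is actually deriving that decomposition, or pattern-matching against the thousands of comment/author schemas it has already seen.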

    Like if we just outright said that LLMs are a better Google, or a better IntelliSense that can fetch you existing code they have seen (and given that the training data is basically the entire Internet, across probably its whole existence as crawled by crawlers and the Internet Archive, that is a boggling amount), instead of dressing it up as coming up with NEW AND ENTIRELY NOVEL code like the hype keeps saying, then I’d be less of a detractor.


  • bro you literally know nothing about this topic. No investigation, no right to speak. You need to stop talking.

    You are toxic, as well as incredibly arrogant. A true example of the Dunning-Kruger effect. If you want to have a tantrum then by all means do so, but don’t pretend that you are on some sort of high ground when you make your pronouncements.

    In every conversation you have had with me, you project opinions onto me that I do not hold (Marxism vs. anarchism, calling me a Luddite, etc.) and construct straw-man arguments that I did not make.

    Do some self-crit.


  • Second, it’s already quite good at doing real world tasks, and saves me a ton of time writing boilerplate when coding.

    So, that’s another thing that I wonder about. All these LLMs are doing is automating boilerplate code, and frankly that’s not really innovative. “Programming via Stack Overflow” has been a joke for nearly two decades now (shudder), and all the LLM is doing is saving you the ALT+TAB between SO and your text editor, no?

    If you’re doing a TODO app in Angular or NextJS, I’m sure you get tons of boilerplate.

    But what about when it comes to novel, original work? How much does that help? I mean, really, how much time do you save, and how useful is it?