• WoodScientist@lemmy.world · 12 hours ago

    So here’s the path that you’re envisioning:

    1. Someone wants to send you a communication of some sort. They draft a series of bullet points or short version.

    2. They have an LLM elaborate it into a long-form email or report.

    3. They send the long-form to you.

    4. You receive it and have an LLM summarize the long-form into a short-form.

    5. You read the short form.

    Do you realize how stupid this whole process is? The LLM in step (2) cannot create new useful information from nothing. It is simply extrapolating and elaborating on the bullet points it was fed, and it is doing so in a lossy manner. Then in step (4), you go through ANOTHER lossy process. The LLM in step (4) is summarizing, and it may well strip out some of the real information the human wrote in step (1) rather than the useless fluff the LLM in step (2) added.
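    To make the double loss concrete, here's a toy sketch of the round trip. llm_expand and llm_summarize are hypothetical stand-ins for steps (2) and (4), not real APIs, and the filler text and bullet points are invented for illustration:

    ```python
    # Toy model of the round trip above. Neither function calls a real
    # model; the point is only that each pass is lossy in a different
    # direction: one adds noise, the other may delete signal.
    import random

    def llm_expand(bullets):
        """Step 2: pad each point with filler (adds words, not information)."""
        filler = "As per our ongoing commitment to alignment on this matter,"
        return " ".join(f"{filler} {b}." for b in bullets)

    def llm_summarize(longform, keep=2):
        """Step 4: compress back to bullets, possibly dropping real content."""
        sentences = [s.strip() for s in longform.split(".") if s.strip()]
        return random.sample(sentences, min(keep, len(sentences)))

    original = ["Q3 shipment delayed", "vendor missed the SLA", "need sign-off by Friday"]
    received = llm_summarize(llm_expand(original))
    print(received)  # two lossy passes later, 'need sign-off by Friday' may be gone
    ```

    Run it a few times: the "summary" keeps different bullets each time, filler and all, because the summarizer has no way to tell the padding from the payload.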

    WHY NOT JUST HAVE THE PERSON DIRECTLY SEND YOU THE BULLET POINTS FROM STEP (1)???!!

    This is idiocy. Pure and simple idiocy. We start with a series of bullet points, and we end with a series of bullet points, and in between it's translated through two separate lossy translation matrices. And we pointlessly burn huge amounts of electricity in the process.

    This is fucking stupid. If no one is actually going to read the long-form communications, the long-form communications SHOULDN’T EXIST.

    • spector@lemmy.ca · 2 hours ago

      Also, neither side necessarily knows the other's filter chain. Generational loss could grow exponentially. And not only loss, but addition by fabrication: each side trading back and forth indeterminate deletions and additions. It's worse than traditional generational loss. It's generational noise, which can resemble signal too.
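      Here's a toy simulation of that compounding. The drop and fabrication probabilities are made-up numbers for illustration; nothing here models a real LLM:

      ```python
      # Toy generational-noise model: each summarize/re-expand pass drops
      # real facts and occasionally fabricates new ones. p_drop and p_fab
      # are invented for illustration.
      import random

      def one_generation(facts, gen, p_drop=0.15, p_fab=0.3):
          """Each real fact survives with probability 1 - p_drop;
          a fabrication sneaks in with probability p_fab."""
          survivors = {f for f in facts if random.random() > p_drop}
          if random.random() < p_fab:
              survivors.add(f"fabricated-in-gen-{gen}")
          return survivors

      original = {f"fact-{i}" for i in range(10)}
      msg = set(original)
      for gen in range(1, 9):
          msg = one_generation(msg, gen)
          real, fake = len(msg & original), len(msg - original)
          print(f"gen {gen}: {real} real facts left, {fake} fabrications riding along")
      ```

      Real content decays roughly geometrically (about 0.85^n under these assumed numbers) while fabrications accumulate, and nothing in the final text marks which is which.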

      So if I receive a long form, how do I know whether the substantial text is worth reading for nuance from an actual human being? I can't tell that apart from generated filler. If a human wrote the long form, then maybe they've elaborated some nuance that deserved the long form.

      On the flip side of the same coin: if I receive a short form, whether generated by me or by them, to what degree can I trust the indeterminate, noisy summary? I just have to trust that the LLM picked out precisely the key points the author wanted to convey, and trust that nuance was not lost, skewed, or fabricated.

      It would be inevitable that the two sides end up in a shooting war, proverbial or otherwise, because their communiqués were playing a fancy game of telephone. Information that was lost or fabricated causes an incident, but neither side knows who shot first, because nobody realized the miscommunication had started several generations earlier.

    • cybersandwich@lemmy.world · 8 hours ago

      That’s not what I am envisioning at all. That would be absurd.

      Ironically, GPT-4o understood my post better than you did :P

      " Overall, your perspective appreciates the real-world applications and benefits of AI while maintaining a critical eye on the surrounding hype and skepticism. You see AI as a transformative tool that, when used appropriately, can enhance both individual and organizational capabilities."