• SpaceNoodle@lemmy.world · 2 months ago

    80% is generous. Half of that is the user simply not realizing that the information is wrong.

    • grrgyle@slrpnk.net · 2 months ago

      This becomes very obvious if you see anything generated for a field you know intimately.

      • toy_boat_toy_boat@lemmy.world · 2 months ago

        i think this is why i’ve never really had a good experience with an LLM - i’m always asking it for more detail about stuff i already know.

        it’s like chatgpt is pinocchio and users are just sitting on his face screaming “lie to me! lie to me!”

      • Couldbealeotard@lemmy.world · 2 months ago

        Oof. I tried to tell a manager why a certain technical thing wouldn’t work, and he pulled out his phone and started reading the Google AI summary: “no, look, you just need to check the network driver and restart the router”. The two devices were electrically incompatible, and there was no IP infrastructure involved.