• 0 Posts
  • 23 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • They’re supposed to be good at transformation tasks: language translation, “create x in the style of y,” replicating a pattern, etc. LLMs are outstandingly good at language transformation tasks.

    Using an LLM as a fact-generating chatbot is actually a misuse. But they were trained on such a large dataset and have such a large number of parameters (175 billion!?) that they perform passably in that role… which is, at its core, filling in a call-and-response pattern in a conversation.

    At a fundamental level, an LLM will never generate factually correct answers 100% of the time. That it generates correct answers more than 50% of the time is actually quite a marvel.
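
    Why 100% factual accuracy is off the table can be sketched with a toy model (this is an illustration, not a real LLM, and the prompt/tokens/logits here are made up): an LLM emits its next token by sampling from a probability distribution, so any token with nonzero probability, including a wrong one, can come out.

    ```python
    import math
    import random

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical next-token scores for "The capital of France is ..."
    tokens = ["Paris", "Lyon", "Marseille"]
    logits = [4.0, 1.0, 0.5]          # the model strongly favors "Paris"
    probs = softmax(logits)

    random.seed(0)                    # fixed seed so the demo is repeatable
    samples = random.choices(tokens, weights=probs, k=1000)
    correct = samples.count("Paris")

    # Most samples are right, but some are not: correctness is probabilistic,
    # never guaranteed, because wrong tokens keep nonzero probability.
    print(correct, 1000 - correct)
    ```

    Sampling tricks (temperature, top-k) reshape this distribution but can’t zero out every wrong continuation without also breaking the model’s ability to generate varied text.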


  • Max moved under braking multiple times, passed off track, ran into Norris not once but twice in the same corner, left the track while keeping his position, pushed Norris off track on the straight after the collision, then weaved and blocked Norris on the racing line at the following corner after it was clear he had a puncture.

    Norris was overly optimistic going into a corner once and gave the position back.

    But yeah, let’s go ahead and treat both of their behaviors as the same… /sarcasm