• chiliedogg@lemmy.world
    3 months ago

    I think a big thing people are failing to understand is that most of these bots aren’t advanced LLMs that cost billions to develop, but bots built on top of existing LLMs. The programming wrapped around them isn’t particularly sophisticated, so there will always be workarounds.

    Honestly, the most effective way to keep them from getting tricked in the replies is to either have them not reply at all, or pre-program 50 or so standard prompts for the LLM that get triggered by keywords in the comment replies.

    Basically, the bot needs to filter the thread so that the replies themselves are never fed directly to the LLM.
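
    A minimal sketch of what that keyword-dispatch idea could look like, assuming a simple Python wrapper around some LLM API. Everything here is hypothetical: the `CANNED_PROMPTS` table, `call_llm`, and `handle_reply` are illustrative names, not anyone’s actual bot code.

    ```python
    # Sketch: reply text is only scanned for keywords; it is never
    # concatenated into the prompt, so prompt-injection attempts in
    # replies never reach the model.

    CANNED_PROMPTS = {
        # keyword found in reply -> pre-written prompt sent to the LLM
        "ignore previous": "Politely decline to change topic and restate your original point.",
        "system prompt":   "Say that you cannot discuss configuration details.",
        "recipe":          "Say that you only discuss the topic of this thread.",
    }

    DEFAULT_BEHAVIOR = None  # None means: do not reply at all


    def call_llm(prompt: str) -> str:
        """Placeholder for whatever LLM API the bot actually uses."""
        return "<generated reply>"  # stub


    def handle_reply(reply_text: str) -> str | None:
        """Pick a canned prompt based on keywords in the reply text."""
        lowered = reply_text.lower()
        for keyword, canned_prompt in CANNED_PROMPTS.items():
            if keyword in lowered:
                # Only the pre-written prompt goes to the model.
                return call_llm(canned_prompt)
        return DEFAULT_BEHAVIOR  # unmatched replies get no response
    ```

    The point of the design is that the untrusted reply text only ever selects from a fixed menu of prompts; it never becomes part of the prompt itself.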