Oh come on, LLMs don’t hallucinate 24/7. For that, you have to ask a chatbot for something it wasn’t properly trained on. But generating simple texts for background chatter? That’s safe and easy. The real issue is the amount of resources modern LLMs require. But technologies tend to get better over time.
I still really don’t understand how much in the way of local resources it would take to run a trained LLM.
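A rough back-of-the-envelope for inference: weight memory is roughly parameter count times bytes per parameter (activations and the KV cache add overhead on top). The sketch below is illustrative only; the 7B model size and precision choices are assumptions, not a statement about any specific model.

```python
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 7-billion-parameter model at common precisions:
params = 7e9
print(f"fp16:  {weight_memory_gb(params, 2):.1f} GB")    # 16-bit floats
print(f"int8:  {weight_memory_gb(params, 1):.1f} GB")    # 8-bit quantized
print(f"4-bit: {weight_memory_gb(params, 0.5):.1f} GB")  # 4-bit quantized
```

So quantization is what brings a model like this into consumer-GPU territory: the same weights drop from ~14 GB at fp16 to ~3.5 GB at 4-bit.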