I noticed there didn’t seem to be a community about large language models, akin to r/localllama. So maybe this will be it.
For the uninitiated, you can easily try a bleeding-edge LLM in your browser here.
If you enjoyed that, some places to get started with local installs and execution are here:
https://github.com/ggerganov/llama.cpp
https://github.com/oobabooga/text-generation-webui
https://github.com/LostRuins/koboldcpp
https://github.com/turboderp/exllama
and for models in general, the renowned TheBloke provides the best and fastest releases.