Today we're starting a new initiative: LM Studio Community Models!
@bartowski, a prolific quantizer (of both GGUF and EXL2), will be helping to curate notable new models on LM Studio's Community Models page: https://huggingface.co/lmstudio-community.
Our goal is to ensure the community has access to GGUF files for new & noteworthy models as soon as possible. Keep an eye on that page for updates.
If you're unfamiliar with GGUF, it's the de-facto standard format for quantized ('compressed') LLM weights. It is the native format of llama.cpp (https://github.com/ggerganov/llama.cpp), an LLM inference runtime written in C/C++, and it is supported in LM Studio.
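As a rough illustration (not part of the announcement above), a GGUF file downloaded from the Community Models page can be loaded with llama.cpp bindings such as llama-cpp-python; the model filename here is hypothetical:

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The model path is a hypothetical example; point it at any GGUF file you downloaded,
# e.g. from https://huggingface.co/lmstudio-community.
from llama_cpp import Llama

llm = Llama(model_path="./models/example-model-Q4_K_M.gguf", n_ctx=2048)

output = llm("Explain the GGUF format in one sentence.", max_tokens=64)
print(output["choices"][0]["text"])
```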
We will also be sharing new models on the LM Studio Discord: https://discord.gg/aPQfnNkxGC