
Stable Diffusion Community (Unofficial, Non-profit)
AI & ML interests
Enhance and upgrade SD-models
Recent Activity
sd-community's activity

1024m authored a paper 12 days ago
Post
2057
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio Space ~
Both of these themes have been updated to fix some of the long-standing inconsistencies that date back to the transition to Gradio v5. Textboxes are no longer bright green, and in-line code is readable now! Both themes are now visually identical across versions.
If your Space is already using one of these themes, you just need to restart the Space to get the latest version. No code changes needed.
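For reference, pointing a Space at one of these themes takes a single argument; a minimal sketch, assuming Gradio's support for passing a Hub theme identifier as a string:

```python
import gradio as gr

# Minimal Space using the Nymbo/Nymbo_Theme repo from the Hub.
# Gradio resolves "user/theme" strings via the Hub; restarting the
# Space picks up theme updates, as the post above notes.
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Markdown("## Themed demo")
    gr.Textbox(label="Input", value="in-line `code` renders normally")

if __name__ == "__main__":
    demo.launch()
```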

1024m authored 3 papers about 2 months ago
Robust and Fine-Grained Detection of AI Generated Texts
Paper • 2504.11952 • Published • 12
Improving Multilingual Capabilities with Cultural and Local Knowledge in Large Language Models While Enhancing Native Performance
Paper • 2504.09753 • Published • 5
Kaleidoscope: In-language Exams for Massively Multilingual Vision Evaluation
Paper • 2504.07072 • Published • 9
Post
4292
There seem to be multiple paid apps shared here that are based on models hosted on HF, but some people sell their wrappers as "products" and promote them here. For a long time, HF was the best and only platform for open-source model work, but with the recent AI website builders anyone can create a product (really crappy ones, btw) and try to sell it with no contribution back to open source. Please don't do this, or at least try fine-tuning the models you use...
Sorry for filling y'all's feeds with this, but you know...
Post
1625
Gemma 3 seems to be really good at human preference. Just waiting for people to see it.
Post
3268
AraClip is now fully integrated with Hugging Face 🤗
AraClip is a specialized CLIP model, created by @pain and optimized for Arabic text-image retrieval tasks 🔥
Try it out:
🤗 model: Arabic-Clip/araclip
🧩 Gradio demo: Arabic-Clip/Araclip-Simplified
website: https://arabic-clip.github.io/Arabic-CLIP/
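Since AraClip follows the CLIP text-image retrieval pattern, here is a generic retrieval sketch; it uses the standard openai/clip-vit-base-patch32 checkpoint and synthetic images purely for illustration, as AraClip's own loading code is documented on its model card:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Generic CLIP-style retrieval: score candidate images against a text query.
# openai/clip-vit-base-patch32 stands in for AraClip here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

images = [Image.new("RGB", (224, 224), c) for c in ("red", "blue")]  # stand-ins
inputs = processor(text=["a red square"], images=images,
                   return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)

# logits_per_text has shape (num_texts, num_images); higher means more similar.
scores = out.logits_per_text.softmax(dim=-1)
print("best match: image", scores.argmax().item())
```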

ehristoforu posted an update 3 months ago
Post
3334
Introducing our first standalone model - FluentlyLM Prinum
Introducing the first standalone model from Project Fluently LM! We worked on it for several months, tried different approaches, and eventually found the optimal one.
General characteristics:
- Model type: causal language model (QwenForCausalLM, LM Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT
Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64) and Mergekit for SLERP and TIES merges (see the sketch after this post).
Evaluation:
12th place on the Open LLM Leaderboard (open-llm-leaderboard/open_llm_leaderboard) (21.02.2025)
Detailed results and comparisons are presented in Pic. 3.
Links:
- Model: fluently-lm/FluentlyLM-Prinum
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
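To make the stated hyperparameters concrete, here is a minimal PEFT LoRA setup with rank=64 and alpha=64; the base checkpoint, target modules, and dropout below are assumptions for illustration, not taken from the team's actual Axolotl/Unsloth configs:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Small Qwen checkpoint as a stand-in; the real base is a 32.5B-class model.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B")

lora = LoraConfig(
    r=64,                # rank, as stated in the post
    lora_alpha=64,       # alpha, as stated in the post
    lora_dropout=0.05,   # assumption, not from the post
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```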

ameerazam08 posted an update 4 months ago
Post
4484
I have just released a new blog post about KV caching and its role in inference speedup.
https://huggingface.co/blog/not-lain/kv-caching/
Some takeaways:
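As a minimal illustration of the mechanism the post covers (not code from the post itself): the prompt's keys and values are computed once, and each subsequent step feeds only the newest token while reusing the cache; gpt2 stands in for any causal LM:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tok("KV caching speeds up", return_tensors="pt").input_ids
with torch.no_grad():
    out = model(ids, use_cache=True)   # full prompt pass fills the cache
    past = out.past_key_values

next_id = out.logits[:, -1].argmax(dim=-1, keepdim=True)
with torch.no_grad():
    # Only the single new token is fed; cached keys/values are reused,
    # so each step avoids re-encoding the whole sequence.
    out = model(next_id, past_key_values=past, use_cache=True)

print(tok.decode(next_id[0]))
```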
Post
1631
R1 is out! And with a lot of other R1-related models...
Post
1765
We now have more than 2000 public AI models using ModelHubMixin 🤗
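For anyone curious what that integration looks like, a minimal sketch of the pattern (the TinyNet class is hypothetical): subclassing PyTorchModelHubMixin from huggingface_hub adds save/load/push methods to any nn.Module:

```python
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(nn.Module, PyTorchModelHubMixin):  # hypothetical toy model
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.fc(x)

model = TinyNet(hidden=16)
model.save_pretrained("tiny-net")             # writes weights + config locally
reloaded = TinyNet.from_pretrained("tiny-net")
# model.push_to_hub("your-username/tiny-net")  # publish to the Hub
```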
Post
4114
Published a new blog post!
In this blog post I walk through the transformer architecture, emphasizing how shapes propagate through each layer.
https://huggingface.co/blog/not-lain/tensor-dims
Some interesting takeaways:
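In the same spirit as the post, here is a toy trace of shape propagation through one multi-head attention layer; the dimensions are illustrative, not taken from the blog:

```python
import torch
import torch.nn as nn

batch, seq, d_model, n_heads = 2, 10, 64, 4
d_head = d_model // n_heads                        # 16

x = torch.randn(batch, seq, d_model)               # (2, 10, 64)
q, k, v = nn.Linear(d_model, 3 * d_model)(x).chunk(3, dim=-1)  # each (2, 10, 64)

# Split heads: (batch, seq, d_model) -> (batch, n_heads, seq, d_head)
q = q.view(batch, seq, n_heads, d_head).transpose(1, 2)
k = k.view(batch, seq, n_heads, d_head).transpose(1, 2)
v = v.view(batch, seq, n_heads, d_head).transpose(1, 2)

attn = (q @ k.transpose(-2, -1)) / d_head ** 0.5   # (2, 4, 10, 10)
out = attn.softmax(dim=-1) @ v                     # (2, 4, 10, 16)
out = out.transpose(1, 2).reshape(batch, seq, d_model)  # back to (2, 10, 64)
print(out.shape)
```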