Stable Diffusion Community (Unofficial, Non-profit)

AI & ML interests

Enhance and upgrade SD-models

sd-community's activity

Nymbo
posted an update 4 days ago
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~

Both of these themes have been updated to fix some long-standing inconsistencies dating back to the transition to Gradio v5. Textboxes are no longer bright green, and inline code is readable now! Both themes are now visually identical across versions.

If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
AtAndDev
posted an update 29 days ago
Llama 4 is out...
AtAndDev
posted an update about 2 months ago
There seem to be multiple paid apps shared here that are based on models hosted on HF, but some people sell their wrappers as "products" and promote them here. For a long time, HF was the best and only platform for open-source model work, but with the recent AI website builders anyone can create a product (really crappy ones, btw) and try to sell it with no contribution to open source. Please don't do this, or at least try fine-tuning the models you use...
Sorry for filling y'all's feeds with this, but you know...
AtAndDev
posted an update about 2 months ago
Gemma 3 seems to be really good at human preference. Just waiting for people to see it.
not-lain
posted an update about 2 months ago
ehristoforu
posted an update 2 months ago
Introducing our first standalone model – FluentlyLM Prinum

This is the first standalone model from Project Fluently LM! We worked on it for several months, tried different approaches, and eventually found the optimal one.

General characteristics:
- Model type: Causal language model (QwenForCausalLM, transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT

Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64) and Mergekit for SLERP and TIES merges.
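For readers unfamiliar with SLERP merging: it interpolates two models' weights along the arc between them rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. A minimal pure-Python sketch of the idea on flat weight vectors (mergekit applies this per tensor; the function and values here are illustrative, not the project's actual code):

```python
import math

def slerp(a: list[float], b: list[float], t: float) -> list[float]:
    """Spherical linear interpolation between two flat weight vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos_omega = max(-1.0, min(1.0, dot / (na * nb)))
    omega = math.acos(cos_omega)
    if omega < 1e-6:  # nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    wa = math.sin((1 - t) * omega) / math.sin(omega)
    wb = math.sin(t * omega) / math.sin(omega)
    return [wa * x + wb * y for x, y in zip(a, b)]

# Halfway between two orthogonal unit vectors stays on the unit sphere,
# whereas a plain average would shrink the norm to ~0.707.
merged = slerp([1.0, 0.0], [0.0, 1.0], 0.5)
```

TIES merging works differently (it trims small parameter deltas and resolves sign conflicts before averaging), but the SLERP step above is the geometric core of the first merge type named in the post.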

Evaluation:
🏆 12th place on the Open LLM Leaderboard (open-llm-leaderboard/open_llm_leaderboard) (21.02.2025)

Detailed results and comparisons are presented in Pic. 3.

Links:
- Model: fluently-lm/FluentlyLM-Prinum
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
AtAndDev
posted an update 3 months ago
@nroggendorff is that you sama?
ameerazam08
posted an update 3 months ago
not-lain
posted an update 3 months ago
AtAndDev
posted an update 3 months ago
everywhere i go i see his face
AtAndDev
posted an update 3 months ago
Deepseek gang on fire fr fr
AtAndDev
posted an update 4 months ago
R1 is out! And with a lot of other R1-related models...
not-lain
posted an update 4 months ago
we now have more than 2000 public AI models using ModelHubMixin 🤗
not-lain
posted an update 4 months ago
1aurent
posted an update 4 months ago
ehristoforu
posted an update 4 months ago
βœ’οΈ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset

❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) using the SFT (supervised fine-tuning) method. This dataset consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.

🤯 Ultraset solves the problem users face when selecting an appropriate dataset for LLM training. It combines the various types of data required to enhance a model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.

🤗 For effective use of the dataset, it is recommended to utilize only the "instruction," "input," and "output" columns and train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLM models.
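As a sketch of how those three columns are typically consumed, here is a minimal Alpaca-style prompt formatter. The section headers follow the common Alpaca template; the exact wording is illustrative and not mandated by the dataset:

```python
def format_alpaca(example: dict) -> str:
    """Render one instruction/input/output record into a single training string."""
    if example.get("input"):
        return (
            "### Instruction:\n" + example["instruction"] + "\n\n"
            "### Input:\n" + example["input"] + "\n\n"
            "### Response:\n" + example["output"]
        )
    # Records with an empty "input" column simply skip that section.
    return (
        "### Instruction:\n" + example["instruction"] + "\n\n"
        "### Response:\n" + example["output"]
    )

sample = {"instruction": "Translate to French.", "input": "Hello", "output": "Bonjour"}
text = format_alpaca(sample)
```

Mapping a function like this over the dataset (e.g. via `datasets.Dataset.map`) produces the plain-text field most SFT trainers expect.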

❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
AtAndDev
posted an update 5 months ago
@s3nh Hey man check your discord! Got some news.