All HF Hub posts

tomaarsen
posted an update about 17 hours ago
🤗 Sentence Transformers is joining Hugging Face! 🤗 This formalizes the existing maintenance structure, as I've personally led the project for the past two years on behalf of Hugging Face! Details:

Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project, with the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged. The project will continue to prioritize transparency, collaboration, and broad accessibility.

Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf

We see an increasing desire from companies to move from large LLM APIs to local models for better control and privacy, and that is reflected in the library's growth: in just the last 30 days, Sentence Transformer models have been downloaded more than 270 million times, second only to transformers.
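
For anyone making that move, here is a minimal sketch of running an embedding model locally with the library (the model choice is just an illustration, not part of the announcement):

from sentence_transformers import SentenceTransformer, util

# Any Sentence Transformers model from the Hub works; this one is just an example.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

embeddings = model.encode([
    "How do I run embeddings locally?",
    "Sentence Transformers runs models on your own hardware.",
])
print(embeddings.shape)            # (2, 384) for this model

# Cosine similarity between the two sentences
print(util.cos_sim(embeddings[0], embeddings[1]))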

I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice ended up being very valuable for the embedding & Information Retrieval community, and I think the choice of granting Hugging Face stewardship will be similarly successful.

I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
paulml
posted an update 3 days ago
Qwen3-VL-4B is incredibly easy to fine-tune!
We've trained the first DSE model built on it, and it's already performing at the same level as Jina v4!

While Jina Embeddings v4 is built on Qwen2.5-VL-3B (which has a non-commercial license), our model is based on Qwen3-VL-4B and released under Apache 2.0, making it fully permissive for commercial use.

Check out our DSE model here:
racineai/QwenAmann-4B-dse
mitkox
posted an update 1 day ago
I see all Chinese labs are turning TL;DR into TL;DRGB

Problem: 1M text tokens == 1M opportunities for your GPU to file worker-comp
Solution: don't feed the model War & Peace; feed it the movie poster.

This is Glyph, Zai's new visual-text compression voodoo:
• 10k words → 3 PNGs ≈ 3k visual tokens
• Compression ratio: 4.3×
• Throughput: 40-60 tok/s, i.e. your context window now finishes before my coffee does
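
A rough back-of-the-envelope sketch of the idea (my own toy code, not Glyph itself; the page size and the 28x28 patch size are assumptions for illustration):

import textwrap
from PIL import Image, ImageDraw

def render_page(text, width=1024, height=1024):
    # Render wrapped text onto a single white page with PIL's default font
    # (no pagination here; a real system renders long docs across multiple pages)
    img = Image.new("RGB", (width, height), "white")
    ImageDraw.Draw(img).multiline_text(
        (10, 10), "\n".join(textwrap.wrap(text, width=140)), fill="black")
    return img

doc = "lorem ipsum dolor sit amet " * 300        # stand-in for a long document
page = render_page(doc)

patch = 28                                       # assumed ViT-style patch size
visual_tokens = (page.width // patch) * (page.height // patch)
text_tokens = len(doc.split())                   # crude proxy for a tokenizer count
print(f"~{text_tokens} words vs ~{visual_tokens} visual tokens for the page")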

So I did the only reasonable thing: asked GLM-4.6 to port Glyph for Qwen3-VL-8B-Thinking.
Translation: I made one model compress a novel into a comic strip, then made another model read the comic strip and still ace QA.
It's basically passing notes in class, except the note is a 1920×1080 meme and the teacher is a transformer.

We've gone from "Attention is All You Need" to "Attention is Too Expensive, Just Use Your Eyes." Remember kids: in 2025 literacy is optional, but JPEG is forever.
branikita
posted an update about 22 hours ago
With the Robonine team, we released an open-source 3D-printed parallel gripper designed for robotics applications, compatible with popular budget servos like the Feetech STS3215 and Waveshare ST3215.

This precision gripper offers parallel jaw movement, real-time monitoring, and positioning accuracy of ±0.1°, making it ideal for both robotics enthusiasts and professionals. Complete build cost: just $69.45–$74.45, with all components available for purchase on Amazon. Direct links are provided in the Bill of Materials on GitHub.

Check out the project on GitHub: https://github.com/roboninecom/3D-Printed-Parallel-Gripper-for-Robotics-Arms

We encourage you to Watch, Fork, and Star the repository to support our open-source initiative and stay updated on future developments. Your feedback is also welcome!
andito
posted an update 1 day ago
Finally, our new paper is out! "FineVision: Open Data Is All You Need"! 🥳
FineVision: Open Data Is All You Need (2510.17269)

If you've ever trained a VLM, you know this problem: nobody shares their data mixtures. It's a black box, which makes replicating SOTA work impossible.
We wanted to change that.

FineVision unifies 200 sources into 24 million samples. With 17.3 million images and 9.5 billion answer tokens, it's the largest open resource of its kind.

In the paper, we share how we built it:
๐Ÿ” finding and cleaning data at scale
๐Ÿงน removing excessive duplicates across sources
๐Ÿค— decontaminating against 66 public benchmarks
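
As a toy illustration of what near-duplicate detection can look like with perceptual hashes (my own sketch, not the paper's actual pipeline):

import imagehash
from PIL import Image

def is_near_duplicate(img_a, img_b, threshold=5):
    # Perceptual hashes of near-duplicate images differ by a small Hamming distance
    return imagehash.phash(img_a) - imagehash.phash(img_b) <= threshold

a = Image.new("RGB", (256, 256), "white")
b = Image.new("RGB", (256, 256), "white")
print(is_near_duplicate(a, b))   # True: identical blank pages hash identically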

My favorite part is Figure 6 (in the video!). It's our visual diversity analysis. It shows that FineVision isn't just bigger; it's more balanced and conceptually richer than other open datasets.
NVIDIA's Eagle 2 paper highlighted just how critical this visual diversity is, and our results confirm it: models trained on FineVision consistently outperform those trained on any other open dataset on 11 benchmarks!

🎉 To celebrate the paper, I'm also releasing a concatenated and shuffled version of the full dataset! 👉 HuggingFaceM4/FineVision_full_shuffled

It's ready to stream, so you can start training your own models right away:

from datasets import load_dataset

# Stream samples on the fly instead of downloading the full dataset first
d = load_dataset("HuggingFaceM4/FineVision_full_shuffled", split="train", streaming=True)
print(next(iter(d)))  # peek at the first sample

A big shoutout to the first authors: Luis Wiedmann and Orr Zohar. They are rockstars!
piercus
posted an update 2 days ago
🚧 Reproducing LBM-Eraser… in progress! [1]

When repurposing a T2I model into a pure I2I model, there's always that orphaned text path: what do we do with it? 🤔

You can reuse it as learnable embeddings in multi-task setups [2], freeze an empty text prompt, or distill or prune the corresponding part.

In LBM, they take a clever route: zeroing [3] and reshaping [4] the text-related cross-attentions into self-attentions.
This gives you fresh weights for I2I computation, nicely integrated into your SD architecture.
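
A toy PyTorch sketch of the general idea (my own illustration, not the LBM code; the dims and the zero-init of the output projection are assumptions, see [3] and [4] for what they actually do):

import torch
import torch.nn as nn

img_dim, txt_dim = 320, 768                       # assumed SD-like dims

# The orphaned text path: key/value projections that expected text embeddings
to_k_text = nn.Linear(txt_dim, img_dim, bias=False)
to_v_text = nn.Linear(txt_dim, img_dim, bias=False)

# "Reshaped" into a self-attention path: fresh projections over image tokens
to_q = nn.Linear(img_dim, img_dim, bias=False)
to_k_img = nn.Linear(img_dim, img_dim, bias=False)
to_v_img = nn.Linear(img_dim, img_dim, bias=False)
to_out = nn.Linear(img_dim, img_dim, bias=False)
nn.init.zeros_(to_out.weight)                     # start as a no-op (assumption)

x = torch.randn(1, 64, img_dim)                   # 64 image tokens
q, k, v = to_q(x), to_k_img(x), to_v_img(x)
attn = torch.softmax(q @ k.transpose(-2, -1) / img_dim ** 0.5, dim=-1)
print(to_out(attn @ v).abs().max())               # zero contribution at init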

📎 References
[1] Our LBM Fork: https://github.com/finegrain-ai/LBM
[2] OmniPaint: OmniPaint: Mastering Object-Oriented Editing via Disentangled Insertion-Removal Inpainting (2503.08677)
[3] LBM Zeroing: https://github.com/gojasper/LBM/blob/cafebc46a9ac16dcc61691d289cc4676b5c75380/examples/training/train_lbm_surface.py#L147-L148
[4] LBM Reshaping: https://github.com/gojasper/LBM/blob/cafebc46a9ac16dcc61691d289cc4676b5c75380/examples/training/train_lbm_surface.py#L100
prithivMLmods
posted an update 2 days ago
codelion
posted an update 3 days ago
🧠 Introducing Ellora Recipe #6: Execution-Aware World Model for Qwen3-4B-Thinking

Teaching LLMs to understand not just what code does, but HOW it executes at runtime!

Inspired by Meta's CWM (Code World Model) research, this LoRA adapter adds execution awareness to Qwen3-4B-Thinking-2507. The model learns to predict variable states, trace program execution step-by-step, and debug code by understanding runtime behavior.

🔍 Key Innovation:
We combine Qwen3's native thinking capabilities with real Python execution traces captured via sys.settrace(). The model is trained using GRPO with a custom reward function that scores execution prediction accuracy.
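
A minimal sketch of the tracing idea with sys.settrace (my own illustration; the recipe's actual tracer and trace format may differ):

import sys

def trace_states(fn, *args):
    # Run fn(*args), recording local variable state at every line event
    states = []

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            states.append((frame.f_lineno, dict(frame.f_locals)))
        return tracer

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, states

def demo(n):
    total = 0
    for i in range(n):
        total += i
    return total

result, trace = trace_states(demo, 3)
for lineno, local_vars in trace:
    print(lineno, local_vars)   # ground-truth variable states per line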

📊 Training Approach:
- Hybrid Magpie-style code generation
- Real execution tracing for ground truth
- Self-supervised learning (no manual annotations!)
- 298 training samples with execution traces
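
And a hypothetical sketch of a reward that scores predicted variable states against the traced ground truth (the recipe's actual GRPO reward shaping may differ), as a companion to the tracer above:

def execution_reward(predicted_states, traced_states):
    # Fraction of traced line states the model predicted exactly;
    # each argument maps a line number to {variable: value}.
    if not traced_states:
        return 0.0
    correct = sum(
        1 for lineno, truth in traced_states.items()
        if predicted_states.get(lineno, {}) == truth
    )
    return correct / len(traced_states)

# Example: 1 of 2 line states fully correct -> reward 0.5
truth = {2: {"total": 0}, 4: {"total": 0, "i": 0}}
pred  = {2: {"total": 0}, 4: {"total": 1, "i": 0}}
print(execution_reward(pred, truth))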

✨ What it does:
- Predicts variable states at each line of code
- Explains execution flow with thinking tags
- Helps debug by understanding runtime behavior
- Works as a "neural debugger"

🎯 Results:
- 20% overall accuracy on execution prediction
- 33.3% mean state accuracy
- Trained on Qwen3-4B-Thinking (262K context, 4B params)

🔗 Links:
Model: codelion/Qwen3-4B-execution-world-model-lora
Dataset: codelion/execution-world-model-dataset
GitHub Recipe: https://github.com/codelion/ellora
Notebook: https://github.com/codelion/ellora/blob/main/Ellora_Recipe_6_Execution_World_Model_Thinking_LoRA.ipynb

Part of the Ellora project - standardized LoRA recipes for enhancing LLM capabilities. All recipes use self-supervised data generation and work with existing infrastructure (PEFT, LoRAX, vLLM).

#LLM #LoRA #CodeGeneration #WorldModel #Qwen #AI #MachineLearning
merve
posted an update 3 days ago
deepseek-ai/DeepSeek-OCR is out! 🔥 my take ⤵️
> pretty insane it can parse and re-render charts in HTML
> it uses CLIP and SAM features concatenated, so better grounding
> very efficient: strong performance per vision token
> covers 100 languages
appvoid
posted an update 3 days ago
today is going to be a great day for small models, are you ready?