AI & ML interests

Inference

merve
posted an update 14 days ago
GPT-4.1-mini level model right on your iPhone 🤯

openbmb/MiniCPM-V-4 is only 4B while surpassing GPT-4.1-mini in vision benchmarks 🔥

allows commercial use as well!
merve
posted an update 16 days ago
we're all sleeping on this OCR model rednote-hilab/dots.ocr 🔥

dots.ocr is a new 3B model with SOTA performance, support for 100 languages & commercial use allowed! 🤯

a single end-to-end model to extract text from images and convert tables, formulas, and more into markdown 📝
try it at MohamedRashad/Dots-OCR
merve
posted an update 16 days ago
massive releases and tons of FLUX.1 Krea LoRAs past week!
here are some of the picks; find more models in the collection 🫡 merve/releases-august-2-6890c14248203522b7d0267f

LLMs 💬
> Tencent dropped tencent/Hunyuan-7B-Instruct
> Qwen released Qwen/Qwen3-Coder-30B-A3B-Instruct, a 30B MoE with 3B active params for coding (OS)

vision/multimodal
> RedNote released rednote-hilab/dots.ocr - 3B OCR model (OS)
> Cohere released CohereLabs/command-a-vision-07-2025 - 112B (dense!) VLM for 6 languages
> StepFun-AI shipped stepfun-ai/step3 - 321B MoE VLM (OS)
> Skywork shipped Skywork/Skywork-UniPic-1.5B - new any-to-any model (image+text → image+text) (OS)
merve
posted an update 21 days ago
past week in open AI was insane 🔥 here are some of our picks, find more here merve/releases-july-25-688768ca47fe3693407e02d1

💬 LLMs & VLMs
> Qwen/Qwen3-235B-A22B-Thinking-2507 had a new update (OS)
> Qwen/Qwen3-Coder-480B-A35B-Instruct is out with 480B total, 35B active params 🤯 (OS)
> AllenAI dropped an update to allenai/olmOCR-7B-0725 📝
> InternLM released internlm/Intern-S1 - 235B Qwen3 MoE + 6B InternViT encoder (OS)
> OmniSVG/OmniSVG is a new SVG generation VLM (OS)

🖼️ image/video/3D generation
> WanAI released the Wan2.2 series - both T2V and I2V 14B models for high-quality video generation (OS) multimodalart/wan-22-688767e313337b434ed55112
> Tencent dropped tencent/HunyuanWorld-1 - an image-to-3D scene generation model
merve
posted an update 24 days ago
🤯 241B VLM with apache-2.0 license internlm/Intern-S1

InternLM released Intern-S1: a multimodal reasoning model based on a 235B MoE Qwen3 and a 6B InternViT 😍

benchmarks look great (👑 best model ✅ best open model)
Wauplin
posted an update 26 days ago
Say hello to hf: a faster, friendlier Hugging Face CLI ✨

We are glad to announce a long-awaited quality-of-life improvement: the Hugging Face CLI has been officially renamed from huggingface-cli to hf!

So... why this change?

Typing huggingface-cli constantly gets old fast. More importantly, the CLI's command structure became messy as new features were added over time (upload, download, cache management, repo management, etc.). Renaming the CLI is a chance to reorganize commands into a clearer, more consistent format.

We decided not to reinvent the wheel and instead follow a well-known CLI pattern: hf <resource> <action>. Isn't hf auth login easier to type and remember?

The full rationale, implementation details, and migration notes are in the blog post: https://huggingface.co/blog/hf-cli
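To make the `hf <resource> <action>` layout concrete, here are a few command sketches, assuming a recent `huggingface_hub` release; the repo id in the upload example is a placeholder, not a real repo.

```shell
# Authenticate (was: huggingface-cli login)
hf auth login

# Download a model repo (was: huggingface-cli download ...)
hf download openbmb/MiniCPM-V-4

# Upload a local folder to a repo you own (placeholder repo id)
hf upload your-username/your-model ./checkpoints
```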

merve
posted an update 28 days ago
so many open LLMs and image LoRAs dropped past week; here are some picks for you 🫡 merve/releases-july-18-687e3fbd2ab9b39c51f9238b

LLMs
> ByteDance released a bunch of translation models called Seed-X-RM (7B) ByteDance-Seed/Seed-X-RM-7B
> NVIDIA released reasoning models, of which the 32B surpasses the giant Qwen3-235B, with cc-by-4.0 license 👏 nvidia/openreasoning-nemotron-687730dae0170059860f1f01
> LG released a new EXAONE model (32B) LGAI-EXAONE/EXAONE-4.0-32B

VLMs/any-to-any
> vidore/colqwen-omni-v0.1 is a new any-to-any retriever (MIT)
> HiDream-ai/HiDream-E1-1 is an image+text in, image+text out model (MIT)

LoRAs
> There's a bunch of LoRAs based on Flux Kontext, gotta check out the collection 🤠
merve
posted an update about 1 month ago
Fine-tune Gemma3n on videos with audio inside, on a Colab A100 🔥
Just dropped the notebook where you can learn how to fine-tune Gemma3n on images+audio+text at the same time!

keep in mind, it's made for educational purposes 🫡 we do LoRA, audio resampling & video downsampling to be able to train in under 40GB of VRAM

stretch modalities and unfreeze layers as you wish! 🙏🏻 merve/smol-vision
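As a rough illustration of the LoRA idea used in the notebook (this is not the notebook's actual code, and the layer sizes are hypothetical): the frozen weight W gets a trainable low-rank update B·A scaled by alpha/r, so only a small fraction of parameters are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for one projection layer: d model dim, r LoRA rank.
d, r, alpha = 64, 8, 16

# Frozen base weight (stays untouched during fine-tuning).
W = rng.normal(size=(d, d))

# Trainable low-rank factors; B starts at zero so training
# begins exactly at the base model's behavior.
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))

def lora_forward(x, W, A, B, alpha, r):
    """Base projection plus the scaled low-rank update (alpha/r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d))

# With B still zero, the adapted layer matches the frozen layer exactly.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

Only A and B (r·2d values here vs d² frozen) would receive gradients, which is why the whole run fits in modest VRAM.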
merve
posted an update about 1 month ago
past week had huuuge releases 💗
here are our picks 🔥 find more models, datasets, demos here merve/releases-july-11-68750452c358c98b0fa663f7

> moonshotai/Kimi-K2-Instruct is the new SOTA LLM with 1T total, 32B active parameters 🤯

> HuggingFaceTB/SmolLM3-3B is the new best LM for its size, offers a thinking mode 💭 as well as the dataset HuggingFaceTB/smoltalk2

> Alibaba-NLP/WebSailor-3B is the new agentic LLM for complex browsing

> Google DeepMind released medical vision LMs with an agentic doctor-patient app google/medgemma-release-680aade845f90bec6a3f60c4

> fal released a LoRA to improve details on face images fal/Realism-Detailer-Kontext-Dev-LoRA
jbilcke-hf
posted an update about 1 month ago
Are you looking to run a robot simulator, maybe run long robot policy training tasks, but you don't have the GPU at home?

Well... you can run MuJoCo inside a Hugging Face space!

All you have to do is to clone this space:
jbilcke-hf/train-robots-with-mujoco

Don't forget to pick an Nvidia GPU for your space, to be able to get some nice OpenGL renders!

Are you new to MuJoCo and/or JupyterLab notebooks?

You can get started with this tutorial (select "Open from URL" then paste the URL to this notebook):
jbilcke-hf/train-robots-with-mujoco

Happy robot hacking! 🦾
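Since Spaces are ordinary git repositories, the demo above can also be pulled locally; a minimal sketch (the duplicate-and-pick-GPU step happens in the web UI, not on the command line):

```shell
# Clone the Space like any git repo to inspect or reuse the notebook
git clone https://huggingface.co/spaces/jbilcke-hf/train-robots-with-mujoco
cd train-robots-with-mujoco

# To run it on Hugging Face hardware instead, use "Duplicate this Space"
# in the UI and select an Nvidia GPU under Settings → Hardware.
```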
merve
posted an update about 1 month ago
GitHub has refused to render notebooks for a long time now 💔

so smol-vision now lives in a Hugging Face model repository 🤗 merve/smol-vision
merve
posted an update about 1 month ago
ByteDance released Tar 1.5B and 7B: image-text in, image-text out models, fully open-source 👏 ByteDance-Seed/tar-6864cf0d9fe59a3b91cc4260

They have an image tokenizer unified with text, and they de-tokenize using either of two models (an LLM or a diffusion model)
The model is actually a full LLM (Qwen2); the tokenizer converts images into tokens 🤯
merve
posted an update about 1 month ago
Huge drops in open AI past week!
Find more models, datasets, demos here merve/releases-july-4-686bcc54ed7c45c341fbf654
Some of our picks 🫡
⏯️ BAAI/MTVCraft is a new Veo3-like text-to-video model, demo is here BAAI/MTVCraft
🧑🏻‍💻 apple/diffucoder-6868139f56672ae046fe04e8 is a new family of diffusion LLMs (7B base and instruct) for coding
🗣️ kyutai/tts-1.6b-en_fr is a new small TTS model for English and French
👀 aharley/alltracker is a new pixel tracking model by Stanford, demo is here aharley/alltracker
📖 racineai/OGC_MEGA_MultiDomain_DocRetrieval is a new large visual document retrieval dataset
merve
posted an update about 2 months ago
SOOOO MANY MODEL RELEASES 😍
Here are some picks from past week 🤗

> ByteDance/XVerse is a new identity-preserving image generation model 🖼️
> google/gemma-3n-E4B-it, an any-to-text model supported by transformers 🤗
> nvidia/llama-nemoretriever-colembed-3b-v1, one of two new state-of-the-art visual document retrievers 📑
> A new version of the Dia TTS model is up nari-labs/Dia-1.6B-0626
> Black Forest Labs releases the Kontext benchmark black-forest-labs/kontext-bench

Find more here merve/releases-june-27-6864e8eb17f7e3a8b444083c
merve
posted an update about 2 months ago
visual reasoning is now in transformers 🔥
https://huggingface.co/THUDM/GLM-4.1V-9B-Thinking was just released and merged into transformers; we gave it a vibe test run 🤠

it's very good, comes with 64k context length and an MIT license 😍
it supports 4k image tokens and any aspect ratio as well!
Notebook: http://colab.research.google.com/drive/1atODIiV57hOZLv16Bjzwd6fwx0yoTorj?usp=sharing
Demo: https://huggingface.co/spaces/THUDM/GLM-4.1V-9B-Thinking-Demo
merve
posted an update about 2 months ago
so many multimodal releases these days 🤠
> ERNIE-4.5-VL: new vision language MoE models by Baidu https://huggingface.co/models?search=ernie-4.5-vl
> new visual document retrievers by NVIDIA (SOTA on ViDoRe!) nvidia/llama-nemoretriever-colembed-3b-v1 nvidia/llama-nemoretriever-colembed-1b-v1
> Ovis-3b: new image-text in, image-text out models by Alibaba ⤵️ https://huggingface.co/spaces/AIDC-AI/Ovis-U1-
merve
posted an update about 2 months ago
Dataset Viewer for PDFs just landed on Hugging Face 📖🤗 you can now preview all the PDFs easier than before!

on top of this, there's a PdfFolder format to load PDF datasets quicker 💨
> to use it, your dataset should follow a directory format like folder/train/doc1.pdf, folder/train/doc2.pdf
> if you want to include bounding boxes, labels etc. you can keep them in a metadata.csv file in the same folder 🤝

read the document dataset docs https://huggingface.co/docs/datasets/main/en/document_dataset
check all the document datasets here https://huggingface.co/datasets?modality=modality:document&sort=trending 📖
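To make the expected layout concrete, here is a stdlib-only sketch that builds the `folder/train/*.pdf` + `metadata.csv` structure described above; the file names, labels, and stub PDF bytes are all hypothetical, and actually loading the result would use the `datasets` library's PdfFolder loader.

```python
import csv
import tempfile
from pathlib import Path

# Build the layout in a temp dir (hypothetical file names and labels).
root = Path(tempfile.mkdtemp()) / "folder"
train = root / "train"
train.mkdir(parents=True)

# Stand-in PDF files; real documents would go here.
for name in ("doc1.pdf", "doc2.pdf"):
    (train / name).write_bytes(b"%PDF-1.4\n%%EOF\n")

# Optional metadata (labels, bounding boxes, ...) sits next to the PDFs,
# keyed by file_name as described in the document dataset docs.
with open(train / "metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "label"])
    writer.writerow(["doc1.pdf", "invoice"])
    writer.writerow(["doc2.pdf", "report"])

# Loading would then go through the PdfFolder loader, e.g.:
#   load_dataset("pdffolder", data_dir=root)  # needs `datasets` installed
print(sorted(p.name for p in train.iterdir()))
# → ['doc1.pdf', 'doc2.pdf', 'metadata.csv']
```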