AI & ML interests

Collection of JS libraries to interact with the Hugging Face Hub

Recent Activity

huggingfacejs's activity

merve
posted an update about 10 hours ago
Xenova
posted an update 1 day ago
NEW: Real-time conversational AI models can now run 100% locally in your browser! 🤯

🔐 Privacy by design (no data leaves your device)
💰 Completely free... forever
📦 Zero installation required, just visit a website
⚡️ Blazingly fast WebGPU-accelerated inference

Try it out: webml-community/conversational-webgpu

For those interested, here's how it works:
- Silero VAD for voice activity detection
- Whisper for speech recognition
- SmolLM2-1.7B for text generation
- Kokoro for text to speech

Powered by Transformers.js and ONNX Runtime Web! 🤗 I hope you like it!
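The four-stage loop above can be sketched as plain orchestration code. The stage functions below are stand-in stubs for the real models (Silero VAD, Whisper, SmolLM2-1.7B, Kokoro), which the demo actually runs in-browser through Transformers.js and ONNX Runtime Web; this is an illustrative sketch, not the app's source:

```python
# Sketch of the speech-to-speech loop: VAD -> ASR -> LLM -> TTS.
# Each stage is a stub standing in for a WebGPU-accelerated model call.

def detect_speech(audio_chunk):      # Silero VAD: is anyone talking?
    return len(audio_chunk) > 0

def transcribe(audio_chunk):         # Whisper: speech -> text
    return "hello there"

def generate_reply(text):            # SmolLM2-1.7B: text -> text
    return f"reply to: {text}"

def synthesize(text):                # Kokoro: text -> audio samples
    return {"text": text, "samples": []}

def converse(audio_chunk):
    # VAD gates the pipeline so the heavier models only run on actual speech.
    if not detect_speech(audio_chunk):
        return None
    return synthesize(generate_reply(transcribe(audio_chunk)))
```

The gating order is the point: VAD is tiny compared to the other three stages, so silent microphone chunks are discarded before any expensive inference runs.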
merve
posted an update 1 day ago
Past week was insanely packed for open-source AI! 😱
Luckily we picked some highlights for you ❤️ lfg!

💬 LLMs/VLMs
> DeepSeek 🐳 released deepseek-ai/DeepSeek-R1-0528, a 685B MoE with 37B active params, only 0.2 and 1.4 points behind o3 on AIME 24/25 🤯 they also released an 8B distilled version based on Qwen3 (OS) deepseek-ai/deepseek-r1-678e1e131c0169c0bc89728d
> Xiaomi released MiMo-7B-RL (LLM for code and math) and MiMo-VL-7B-RL (VLM for visual reasoning, GUI agentic tasks and general use) (OS) 😍 XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212
> NVIDIA released a new reasoning model, nvidia/Nemotron-Research-Reasoning-Qwen-1.5B
> Dataset: MiniMax released https://huggingface.co/MiniMaxAI/SynLogic, 49k new logical reasoning examples across 35 tasks, including cipher solving, sudoku and more!

πŸ–ΌοΈ Image/Video Generation
> tencent released tencent/HunyuanPortrait, a new model for consistent portrait generation with SVD Research license. They also released tencent/HunyuanVideo-Avatar, audio driven avatar generation (OS)
> showlab released showlab/OmniConsistency, consistent stylization model (OS)
> Rapidata/text-2-video-human-preferences-veo3 is a new T2V preference dataset based on videos from Veo3 with 46k examples (OS)

Audio 🗣️
> https://huggingface.co/ResembleAI/Chatterbox is a new 500M text-to-speech model preferred over ElevenLabs (OS) 😍
> PlayHT/PlayDiffusion is a new speech editing model (OS)

Other
> https://huggingface.co/NX-AI/TiReX is a new time series foundation model
> Yandex released a huge (4.79B examples!) video recommendation dataset https://huggingface.co/yandex/yambda

Models and datasets marked (OS) have Apache 2.0 or MIT licenses; find more here merve/releases-30-may-6840097345e0b1e915bff843
merve
posted an update 1 day ago
Yesterday was the day of vision language action models (VLAs)!

> SmolVLA: open-source small VLA for robotics by the Hugging Face LeRobot team 🤖
Blog: https://huggingface.co/blog/smolvla
Model: lerobot/smolvla_base

> Holo-1: 3B & 7B web/computer-use agentic VLAs by H Company 💻
Model family: Hcompany/holo1-683dd1eece7eb077b96d0cbd
Demo: https://huggingface.co/spaces/multimodalart/Holo1
Blog: https://huggingface.co/blog/Hcompany/holo1
super exciting times!!
merve
posted an update 2 days ago
merve
posted an update 3 days ago
merve
posted an update 4 days ago
New GUI model by Salesforce AI & University of Hong Kong: Jedi
tianbaoxiexxx/Jedi xlangai/Jedi-7B-1080p 🤗
Based on Qwen2.5-VL with an Apache 2.0 license

prompt with the screenshot below → select "find more"
merve
posted an update 6 days ago
HOT: MiMo-VL, new 7B vision LMs by Xiaomi surpassing GPT-4o (March), competitive in GUI agentic + reasoning tasks ❤️‍🔥 XiaomiMiMo/mimo-vl-68382ccacc7c2875500cd212

not only that, but also MIT license & usable with transformers 🔥
merve
posted an update 7 days ago
introducing: VLM vibe eval 🪭 visionLMsftw/VLMVibeEval

vision LMs are saturating benchmarks, so we built vibe eval 💬

> compare different models with refreshed in-the-wild examples in different categories 🤠
> submit your favorite model for eval
no numbers -- just vibes!
merve
posted an update 9 days ago
emerging trend: models that can understand image + text and generate image + text

don't miss out ⬇️
> MMaDA: a single 8B diffusion model aligned with CoT (reasoning!) + UniGRPO Gen-Verse/MMaDA
> BAGEL: a 7B MoT model based on Qwen2.5, SigLIP-so-400M and the Flux VAE ByteDance-Seed/BAGEL
both by ByteDance! 😱

I keep track of all any-to-any (any input → any output) models here https://huggingface.co/collections/merve/any-to-any-models-6822042ee8eb7fb5e38f9b62
merve
posted an update 10 days ago
what happened in open-source AI this past week? so many vision LM & omni releases 🔥 merve/releases-23-may-68343cb970bbc359f9b5fb05

multimodal 💬🖼️
> new moondream (VLM) is out: a 4-bit quantized (with QAT) version of moondream-2b, runs in 2.5GB VRAM at 184 tps with only a 0.6% drop in accuracy (OS) 🌚
> ByteDance released BAGEL-7B, an omni model that understands and generates both image + text. they also released Dolphin, a document parsing VLM 🐬 (OS)
> Google DeepMind dropped MedGemma at I/O, a VLM that can interpret medical scans, and Gemma 3n, an omni model with competitive LLM performance
> MMaDA is a new 8B diffusion language model that can generate image and text

LLMs 💬
> Mistral released Devstral, a 24B coding assistant (OS) 👩🏻‍💻
> Fairy R1-32B is a new reasoning model -- a distilled version of DeepSeek-R1-Distill-Qwen-32B (OS)
> NVIDIA released AceReason-Nemotron-14B, a new 14B math and code reasoning model
> sarvam-m is a new Indic LM with hybrid thinking mode, based on Mistral Small (OS)
> samhitika-0.0.1 is a new Sanskrit corpus (BookCorpus translated with Gemma3-27B)

image generation 🎨
> MTVCrafter is a new human motion animation generator
merve
posted an update 14 days ago
Google released MedGemma at I/O '25 👏 google/medgemma-release-680aade845f90bec6a3f60c4

> 4B and 27B instruction fine-tuned vision LMs and a 4B pre-trained vision LM for medicine
> available with transformers from the get-go 🤗

they also released a cool demo for scan reading ➡️ google/rad_explain

use with transformers ⬇️
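For readers who want a starting point, here is a hedged sketch of that transformers usage. The chat-message shape follows the transformers image-text-to-text pipeline convention; google/medgemma-4b-it is the gated 4B instruct checkpoint, and the image URL and prompt are placeholders:

```python
# Hypothetical sketch of querying MedGemma via the transformers pipeline API.

def build_messages(image_url, question):
    """One user turn pairing an image reference with a text question,
    in the chat format the image-text-to-text pipeline expects."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": image_url},
                {"type": "text", "text": question},
            ],
        }
    ]

# With the real (gated) model -- needs `pip install transformers` and
# accepting the MedGemma license on the Hub first:
#
#   from transformers import pipeline
#   pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")
#   messages = build_messages("https://example.com/chest_xray.png",
#                             "Describe the findings in this X-ray.")
#   out = pipe(text=messages, max_new_tokens=200)
#   print(out[0]["generated_text"][-1]["content"])
```

The heavy call is left commented out because the checkpoint is gated and several GB; the message-building helper is the part worth copying.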
merve
posted an update 14 days ago
You can translate this post 🤗💗
merve
posted an update 15 days ago
tis the year of any-to-any/omni models 🤠
ByteDance-Seed/BAGEL-7B-MoT is a 7B native multimodal model that understands and generates both image + text

it outperforms leading VLMs like Qwen2.5-VL 👏 and has an Apache 2.0 license 😱
merve
in huggingfacejs/tasks 16 days ago

image-to-video

#7 opened 16 days ago by multimodalart
merve
posted an update 16 days ago
NVIDIA released a new vision reasoning model for robotics: Cosmos-Reason1-7B 🤖 nvidia/cosmos-reason1-67c9e926206426008f1da1b7

> first reasoning model for robotics
> based on Qwen2.5-VL-7B; use with Hugging Face transformers or vLLM 🤗
> comes with SFT & alignment datasets and a new benchmark 👏
merve
posted an update 18 days ago
It was the week of video generation at @huggingface, on top of many new LLMs, VLMs and more!
Let's have a wrap 🌯 merve/may-16-releases-682aeed23b97eb0fe965345c

LLMs 💬
> Alibaba Qwen released WorldPM-72B, a new World Preference Model trained on 15M preference samples (OS)
> II-Medical-8B is a new 8B LLM for medical reasoning by Intelligent-Internet
> TRAIL is a new dataset by Patronus for trace error reasoning for agents (OS)

Multimodal 🖼️💬
> Salesforce Research released BLIP3o, a new any-to-any model with image-text input and image-text output 💬 it's based on an image encoder, a text decoder and a DiT, and comes in 8B
> They also released pre-training and fine-tuning datasets
> MMMG is a multimodal generation benchmark for image, audio, text (interleaved)

Image Generation ⏯️
> Alibaba Wan-AI released Wan2.1-VACE, a video foundation model for image and text to video, video-to-audio and more tasks; comes in 1.3B and 14B (OS)
> ZuluVision released MoviiGen1.1, a new cinematic video generation model based on Wan 2.1 14B (OS)
> multimodalart released isometric-skeumorphic-3d-bnb, an isometric 3D asset generator (AirBnB-style assets) based on Flux
> LTX-Video-0.9.7-distilled is a new real-time video generation (text and image to video) model by Lightricks
> Hidream_t2i_human_preference is a new text-to-image preference dataset by Rapidata with 195k human responses from 38k annotators

Audio 🗣️
> stabilityai released stable-audio-open-small, a new text-to-audio model
> TEN-framework released ten-vad, a voice activity detection model (OS)