
Vaibhav Srivastav PRO

reach-vb

AI & ML interests

TTS + LM performance prediction

Organizations

Hugging Face, Notebooks-explorers, Whisper fine-tuning sprint, The LLM Course, Whisper Fine-Tuning Event, Kensho, Mozilla Foundation, PolinaOrg, Coqui.ai, Internal Data & Models for Speech Recognition Event, Speech Recognition Community Event Version 2, onnx, Hugging Test Lab, Internal Data, The Team Ten, Huggingface Projects, EuroPython 2022, Whisper Distillation, BigCode, Hugging Face OSS Metrics, Harmonai's Dance Diffusion Community, EuroSciPy 2022, LaLoka Labs, Core ML Projects, meta-private, Blog-explorers, Music Gen Sprint, Hugging Face for Audio, Hugging Face Smol Models Research, Open ASR Leaderboard, test, MusicGen Internal, TTS Eval (OLD), ZeroGPU Explorers, Editing Audio, ggml.ai, LocalLLaMA, gg-hf, Python Italia, Unofficial Mistral Community, Journalists on Hugging Face, Llzama, finding-nemo, diarizers-community, MLX Community, Cartesia, Hugging Face Assignments, IBM Granite, On-device Squad, TTS AGI, Social Post Explorers, Apple CoreNet Models, LM Studio Community, gg-gguf, hsramall, Lina Speech, Dev Mode Explorers, Sweet Dream(Booth)s, private beta for deeplinks, Paris AI Running Club, gg-tt, Kyutai, OuteAI, Hugging Face Discord Community, LLHF, SLLHF, Ratchet Community, Hugging Quants, lbhf, CoreML Scratchpad, blhf, Meta Llama, kmhf, nltpt, nltpt-q, ai4b-hf, Ollama Tools, Spirit LM, qrias, Audio Collabs, Consumer AI Edge Hackathon (Meta, Hugging Face, Pytorch, Scaleway & Unaite), open/ acc, ExecuTorch Community, wut?, DDUF, AI Starter Pack, None yet, Open R1, LiteRT Community (FKA TFLite), MultiLlasa, gg-hf-g, mshf, fluxions-hf, yoso, hf-private-mlx, Bitsandbytes Community, llrehf, HF Trending Deploy, hf-inference, Hugging Face Discussion Board, Cerebras Hugging Face Hackathon

reach-vb's activity

reacted to clem's post with ❤️👀 17 days ago
reacted to lbourdois's post with 🔥❤️ 23 days ago
We introduce FAT5 (Flash Attention T5) ⚡

An implementation of T5 in PyTorch with the UL2 objective, optimized for GPGPU for both training and inference thanks to 13 different optimizations.
The main one is a custom CUDA kernel that extends Flash Attention by @tridao with RPE biases and also supports other positional encodings such as RoPE, ALiBi, or FIRE.
The resulting kernel is 2x faster than an SDPA implementation.
We also use Triton kernels to optimize other parts of the architecture, such as the cross-entropy loss and the RMSNorm layer.
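For intuition, the positional bias the kernel fuses into attention (here ALiBi, one of the supported encodings) is just an additive term on the pre-softmax scores. The sketch below is a plain-numpy illustration of the math, not the fused CUDA implementation:

```python
import numpy as np

def alibi_bias(n, slope):
    # ALiBi penalises attention linearly with query-key distance:
    # bias[i, j] = -slope * (i - j) for j <= i, and -inf above the
    # diagonal to enforce causal masking.
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    bias = (-slope * (i - j)).astype(np.float64)
    bias[j > i] = -np.inf
    return bias

def attention_with_bias(q, k, v, bias):
    # softmax(q @ k^T / sqrt(d) + bias) @ v -- the quantity a
    # Flash-Attention-style kernel computes without ever
    # materialising the full (n, n) score matrix.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d) + bias
    scores = scores - scores.max(axis=-1, keepdims=True)  # stability
    w = np.exp(scores)
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v
```

A T5-style RPE bias would be a learned, bucketed table rather than this linear ramp; the kernel's job in either case is to add the bias inside the fused attention computation.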

The various kernels have been carefully built to be compatible with BF16 and torch.compile to go even faster and achieve efficient pretraining.
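Likewise, the RMSNorm computation that one of the Triton kernels mentioned above fuses is compact. A numpy illustration of the math (not the Triton code):

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # y = x / sqrt(mean(x^2) + eps) * weight, over the last axis.
    # Unlike LayerNorm, there is no mean subtraction and no bias.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return x / rms * weight
```

The appeal of a fused kernel here is doing the reduction, rescale, and weight multiply in one pass over memory instead of three.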

All other optimizations are described in a subsequent blog post 📝 available on @huggingface 🤗: CATIE-AQ/FAT5-report.

This methodology enabled us to efficiently pretrain, as a proof of concept, a FAT5 with 147M parameters in French in a reasonable time (1,461 hours for 419B tokens), with limited resources (1 A100, i.e. a computational budget of ~€1,900) and a low carbon footprint (13.5 kg CO2 eq).

The model's weights are also available on Hugging Face: CATIE-AQ/FAT5-small.
It is not very useful in practice, as it's a PoC and not an instruction-tuned model (that is planned for later).

All the code is available on GitHub if you want to pretrain your own model in your own language or for a specific domain: https://github.com/catie-aq/flashT5 ⭐

Finally, note that this was a joint project with @BorisAlbar at hf.co/CATIE-AQ.
reacted to julien-c's post with 🚀🔥 about 1 month ago
Important notice 🚨

For Inference Providers who have built support for our Billing API (currently Fal, Novita, and HF-Inference, with more coming soon), we've started enabling Pay as you go (= PAYG).

What this means is that you can use those Inference Providers beyond the free included credits, with usage charged to your HF account.

You can see it on this view: any provider that does not have a "Billing disabled" badge is PAYG-compatible.
reacted to PranjaliJoshi's post with ❤️👀 about 1 month ago
๐ŸŒ Have you tried Cosmos world foundation models on Hugging Face? Because more updates are coming! ๐Ÿš€

Cosmos world foundation models (WFMs) are generative pretrained models that produce synthetic data for training AI models for robot or autonomous vehicle development.

๐Ÿ› ๏ธ If you are building generative VLMs or foundation models for physical AI like policy models- there are new updates coming at NVIDIA GTC.

GTC is NVIDIA's biggest annual event (March 17-21); it will have deep dives, training labs, and researcher-led sessions on Cosmos.

Plus, Jensen Huang's keynote! 🎤

๐ŸŽŸ๏ธ 20% off GTC registration โ†’ Use code HUGGINGFACE20
๐Ÿ”— https://www.nvidia.com/gtc/
๐Ÿ“ Happening in person at the San Jose Convention Center and online.
Explore all Cosmos sessions at GTC: https://nvda.ws/41yBkmY

Try the existing Cosmos WFMs:

🔗 Hugging Face models: nvidia/cosmos-6751e884dc10e013a0a0d8e6

🛠️ Post-training scripts: https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/POST_TRAINING.md
reacted to AdinaY's post with 🚀🔥😎 about 1 month ago
Exciting releases from the Chinese community this February 🔥
👉 https://huggingface.co/collections/zh-ai-community/2025-february-67a35aaa68e97812def5b6ef

MLLM:
✨ Ovis2 by Alibaba
AIDC-AI/ovis2-67ab36c7e497429034874464
✨ Step Audio Chat by StepFun AI
stepfun-ai/step-audio-67b33accf45735bb21131b0b

Audio:
✨ Step Audio TTS by StepFunAI
stepfun-ai/Step-Audio-TTS-3B
✨ InspireMusic by Alibaba
FunAudioLLM
✨ Baichuan Audio by BaichuanAI
baichuan-inc/Baichuan-Audio-Instruct

Video:
✨ Wan2.1 by Alibaba_Wan
Wan-AI/Wan2.1-T2V-14B
✨ Stepvideo-T2V by StepFun AI
stepfun-ai/stepvideo-t2v
✨ SkyReels-V1 by Skywork
Skywork/skyreels-v1-67b34676ff65b4ec02d16307
✨ LLaDA-8B by RenminUniversity
GSAI-ML/LLaDA-8B-Instruct

MoE:
✨ Moonlight-16B by MoonshotAI (Kimi)
moonshotai/Moonlight-16B-A3B-Instruct

Reasoning:
✨ TinyR1-32B by Qihoo360
qihoo360/TinyR1-32B-Preview

Dataset:
✨ Chinese DeepSeek-R1-Distill data - 110k
Congliu/Chinese-DeepSeek-R1-Distill-data-110k
replied to lysandre's post about 2 months ago
reacted to lysandre's post with 🚀❤️ about 2 months ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP-2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available by installing from the tag itself, and these tags will continue to receive fixes applied to these models.

Going forward, continue to expect software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements, and bug fixes. Accompanying these software releases, we'll release tags offering brand-new models as fast as possible, to make them immediately accessible to all.
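As a minimal sketch, assuming the tags live on the main huggingface/transformers GitHub repository like regular release tags do, installing from one of them looks like:

```shell
# Install transformers pinned to the SmolVLM-2 model-release tag
pip install "git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2"

# ...or the SigLIP-2 tag
pip install "git+https://github.com/huggingface/transformers@v4.49.0-SigLIP-2"
```

Installing from a tag this way pins you to that snapshot; moving to the next semantic-versioned release is a normal `pip install --upgrade transformers`.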
replied to Keltezaa's post 2 months ago
reacted to AdinaY's post with 🔥🚀 3 months ago
🔥 So many exciting releases coming from the Chinese community this month!
https://huggingface.co/collections/zh-ai-community/2025-january-6786b054f492fb223591269e

LLMs:
✨ Qwen2.5-1M by Alibaba
Qwen/qwen25-1m-679325716327ec07860530ba
✨ InternLM3-8B-Instruct by Shanghai AI Lab
internlm/internlm3-8b-instruct
✨ MiniMax-Text-01 by MiniMax AI
MiniMaxAI/MiniMax-Text-01
✨ RWKV-7 by BlinkDL -- RNN + Transformer 👀
BlinkDL/rwkv-7-world
✨ DeepSeek-R1 by DeepSeek -- THE ONE 🙌
deepseek-ai
✨ Baichuan-M1-14B by Baichuan - Medical 🩺
baichuan-inc/Baichuan-M1-14B-Base
✨ Qwen2.5-Math-PRM by Alibaba - Math 🔢
Qwen/Qwen2.5-Math-PRM-7B

Code:
✨ Trae by ByteDance
https://trae.ai

TTS:
✨ T2A-01-HD by MiniMax AI
https://hailuo.ai/audio
✨ LLaSA by HKUST Audio
HKUSTAudio/Llasa-3B

MLLM:
✨ Kimi k1.5 by Moonshot AI
https://kimi.ai
✨ MiniCPM-o-2_6 by OpenBMB
openbmb/MiniCPM-o-2_6
✨ Sa2VA-4B by ByteDance
ByteDance/Sa2VA-4B
✨ VideoLLaMA 3 by Alibaba DAMO
DAMO-NLP-SG/videollama3-678cdda9281a0e32fe79af15
✨ LLaVA-Mini by Chinese Academy of Sciences
ICTNLP/llava-mini-llama-3.1-8b
✨ Hunyuan-7B by Tencent
tencent/Hunyuan-7B-Instruct
✨ Hunyuan 3D 2.0 by Tencent
tencent/Hunyuan3D-2
✨ MiniMax-VL-01 by MiniMax AI - a non-Transformer-based VLM 👀
MiniMaxAI/MiniMax-VL-01

Agent:
✨ UI-TARS by Bytedance
bytedance-research/UI-TARS-7B-SFT
✨ GLM-PC by Zhipu AI
https://cogagent.aminer.cn

Dataset:
✨ Fineweb-Edu-Chinese by Opencsg
opencsg/Fineweb-Edu-Chinese-V2.1
✨ Multimodal_textbook by Alibaba
DAMO-NLP-SG/multimodal_textbook
✨ MME-Finance by Hithink AI
reacted to julien-c's post with 👍 4 months ago
After some heated discussion 🔥, we clarify our intent re. storage limits on the Hub

TL;DR:
- public storage is free, and (unless blatant abuse) unlimited. We do ask that you consider upgrading to PRO and/or Enterprise Hub if possible
- private storage is paid above a significant free tier (1TB if you have a paid account, 100GB otherwise)
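As a worked illustration of the tiers above (the function name and the idea of a simple per-GB overage are my own assumptions, not actual Hub billing logic):

```python
# Free-tier figures are from the post; everything else is illustrative.
PAID_FREE_TIER_GB = 1_000   # 1 TB of private storage with a paid account
DEFAULT_FREE_TIER_GB = 100  # 100 GB otherwise

def billable_private_gb(used_gb, has_paid_account):
    """Private storage billed beyond the free tier; public storage is free."""
    free = PAID_FREE_TIER_GB if has_paid_account else DEFAULT_FREE_TIER_GB
    return max(0.0, used_gb - free)
```

For example, a paid account using 1.5 TB of private storage would be billed for 500 GB, while a free account using 50 GB would owe nothing.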

docs: https://huggingface.co/docs/hub/storage-limits

We continuously optimize our infrastructure to scale our storage for the coming years of growth in machine learning, to the benefit of the community 🔥

cc: @reach-vb @pierric @victor and the HF team
replied to julien-c's post 4 months ago
reacted to julien-c's post with 🤗 4 months ago