
Vaibhav Srivastav PRO

reach-vb

AI & ML interests

TTS + LM performance prediction

Recent Activity

updated a dataset about 18 hours ago
reach-vb/trending-repos
updated a dataset 1 day ago
reach-vb/transformers-releases
liked a Space 1 day ago
PlayHT/PlayDiffusion

Organizations

Hugging Face, Notebooks-explorers, 🧨Diffusers, Whisper fine-tuning sprint, The LLM Course, Whisper Fine-Tuning Event, Kensho, Mozilla Foundation, PolinaOrg, Speech Recognition Community Event Version 2, Internal Data & Models for Speech Recognition Event, Coqui.ai, onnx, Hugging Test Lab, Internal Data, The Team Ten, Huggingface Projects, EuroPython 2022, Whisper Distillation, BigCode, Harmonai's Dance Diffusion Community, Hugging Face OSS Metrics, EuroSciPy 2022, LaLoka Labs, Core ML Projects, meta-private, Blog-explorers, Music Gen Sprint, Hugging Face for Audio, Hugging Face Smol Models Research, Open ASR Leaderboard, test, MusicGen Internal, TTS Eval (OLD), Editing Audio, ZeroGPU Explorers, ggml.ai, LocalLLaMA, gg-hf, Python Italia, Unofficial Mistral Community, Journalists on Hugging Face, Llzama, finding-nemo, diarizers-community, MLX Community, Cartesia, Hugging Face Assignments, IBM Granite, On-device Squad, TTS AGI, Social Post Explorers, LM Studio Community, Apple CoreNet Models, gg-gguf, hsramall, Lina Speech, Dev Mode Explorers, Sweet Dream(Booth)s, private beta for deeplinks, Paris AI Running Club, gg-tt, OuteAI, Hugging Face Discord Community, LLHF, SLLHF, Ratchet Community, lbhf, Hugging Quants, CoreML Scratchpad, blhf, Meta Llama, AI at Meta, kmhf, nltpt, nltpt-q, H company, ai4b-hf, Ollama Tools, Spirit LM, qrias, Audio Collabs, Consumer AI Edge Hackathon (Meta, Hugging Face, Pytorch, Scaleway & Unaite), open/ acc, ExecuTorch Community, wut?, DDUF, AI Starter Pack, None yet, Open R1, LiteRT Community (FKA TFLite), MultiLlasa, gg-hf-g, mshf, fluxions-hf, yoso, hf-private-mlx, Bitsandbytes Community, llrehf, HF Trending Deploy, hf-inference, Transformers Community, Cerebras Hugging Face Hackathon, Inference Endpoints Images, yolo, kozistr grant org, gg-hf-gm, Model Metadata, Hugging Face MCP Course, yoco-sl, yofo, Changelog, yorgllre, MLX Community – Staging

reach-vb's activity

reacted to jsulz's post with 🔥 14 days ago
Heyo @RichardErkhov, the xet-team at Hugging Face was wondering if you wanted to join the fun and jump over to Xet storage. 🤗

We've been onboarding folks (https://huggingface.co/blog/xet-on-the-hub), we know the backend can scale (Llama 4 and Qwen 3 are on Xet), it's great for working with quants (see xet-team/quantization-dedup), and we're pushing on inviting impactful orgs and users on the Hub. You fit the bill.

We'd love to onboard you, get some feedback, and create some excitement 🎉

The steps are pretty straightforward - join the waitlist at hf.co/join/xet and we'll take care of the rest.

The system is fully backward compatible, so you shouldn't notice a thing. BUT to get the best experience when uploading/downloading, make sure you have hf_xet installed alongside the latest huggingface_hub.
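
For reference, a minimal sketch (not part of the original post) of what that setup looks like; the repo id is a placeholder, and the point is that with hf_xet installed next to a recent huggingface_hub, the usual upload/download calls pick up the Xet-backed path without code changes:

```python
# Setup (shell): pip install -U huggingface_hub hf_xet
# Assumes you are logged in (e.g. via `huggingface-cli login`).
# With hf_xet present, the standard huggingface_hub calls below use the
# Xet-backed transfer path transparently; no code changes are required.
from huggingface_hub import HfApi, snapshot_download

api = HfApi()

# Upload a local file; "your-username/your-model" is a placeholder repo id.
api.upload_file(
    path_or_fileobj="model.safetensors",
    path_in_repo="model.safetensors",
    repo_id="your-username/your-model",
)

# Download a full repo snapshot exactly as before.
local_dir = snapshot_download(repo_id="your-username/your-model")
print(local_dir)
```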

What do you think?
  • 4 replies
replied to their post 15 days ago

we're still optimising the >50 GB path, so at least right now I'd recommend keeping shards under 50 GB, but this might change soon and then we can work out a plan
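
For context, a minimal sketch (my addition, not part of the reply) of one way to respect that limit when saving or pushing a model with transformers; the repo id is a placeholder, and max_shard_size is the knob that caps each shard file:

```python
# Cap serialized shards below ~50 GB when saving/pushing a large model.
# "my-org/my-big-model" is a placeholder repo id.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("my-org/my-big-model")

# max_shard_size keeps each shard file under the given limit.
model.save_pretrained("my-big-model-local", max_shard_size="48GB")

# push_to_hub forwards the same argument when uploading to the Hub:
# model.push_to_hub("my-org/my-big-model", max_shard_size="48GB")
```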

replied to their post 15 days ago

perfect! can you try and join the waitlist via hf.co/join/xet please!

posted an update 16 days ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 💥

as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. now that we're certain the backend can scale even with big models like Llama 4 / Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the hub over. as you're a big part of the open source ML community, we'd love to onboard you next and create some excitement about it in the community too!

in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.

p.s. you'd need the latest hf_xet alongside the latest huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage

p.p.s. this is fully backwards compatible so everything will work as it should! 🤗
reacted to fdaudens's post with 👍 23 days ago
The rapid progress in small audio models is mind-blowing! 🤯 Just tested OuteTTS v0.2 - cloned my voice from a 10s clip with impressive accuracy and natural prosody.

At 500M parameters, it's efficient enough to run on basic hardware but powerful enough for professional use.

This could transform how we produce audio content for news - think instant translated interviews keeping original voices, or scaled audio article production!

Demo and Model on the Hub: OuteAI/OuteTTS-0.2-500M h/t @reach-vb
  • 3 replies
reacted to clem's post with ❤️👀 2 months ago
reacted to lbourdois's post with 🔥❤️ 2 months ago
We introduce FAT5 (Flash Attention T5) ⚡

An implementation of T5 in PyTorch with the UL2 objective, optimized for GPGPU for both training and inference thanks to 13 different optimizations.
The main one is that we designed a CUDA kernel that extends Flash Attention by @tridao with RPE biases and supports other PEs such as RoPE, ALiBi or FIRE.
The resulting kernel is 2 times faster than an SDPA implementation.
We also use Triton kernels to optimize certain parts of the architecture, such as the cross-entropy and RMSNorm layers.

The various kernels have been carefully built to be compatible with BF16 and torch.compile to go even faster and achieve efficient pretraining.

All other optimizations are described in a 📝 subsequent blog post available on @huggingface 🤗: CATIE-AQ/FAT5-report.

This methodology enabled us to efficiently pretrain, as a proof of concept, a FAT5 with 147M parameters in French in a reasonable time (1,461 hours for 419B tokens), with limited resources (1 A100, i.e. a computational budget of ~€1,900) and a low carbon footprint (13.5 kg eq CO2).

The model's weights are also available on Hugging Face: CATIE-AQ/FAT5-small.
It's not very useful in practice, as it's a PoC and not an instructed model (that's planned for later).
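
For reference, a speculative sketch of how one might try the released checkpoint; the loading path is an assumption on my part (custom architectures on the Hub usually need trust_remote_code), so check the model card and the GitHub repo below for the exact entry point:

```python
# Speculative loading sketch: FAT5 is a custom T5 variant, so the repo likely
# ships its own modeling code (hence trust_remote_code=True). Verify the actual
# class and usage in the CATIE-AQ/FAT5-small model card.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("CATIE-AQ/FAT5-small")
model = AutoModel.from_pretrained("CATIE-AQ/FAT5-small", trust_remote_code=True)
```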

All the code is available on GitHub if you want to pretrain your own model in your own language or for a specific domain: https://github.com/catie-aq/flashT5 ⭐

To conclude, this was a joint project with @BorisAlbar at hf.co/CATIE-AQ.
reacted to julien-c's post with 🚀🔥 3 months ago
Important notice 🚨

For Inference Providers who have built support for our Billing API (currently: Fal, Novita, HF-Inference – with more coming soon), we've started enabling Pay as you go (=PAYG)

What this means is that you can use those Inference Providers beyond the free included credits, and they're charged to your HF account.

You can see it on this view: any provider that does not have a "Billing disabled" badge is PAYG-compatible.
reacted to PranjaliJoshi's post with ❤️👀 3 months ago
🌍 Have you tried Cosmos world foundation models on Hugging Face? Because more updates are coming! πŸš€

Cosmos world foundation models (WFMs) are generative pretrained models for synthetic data generation, used to train AI models for robot or autonomous vehicle development.

🛠️ If you are building generative VLMs or foundation models for physical AI, such as policy models, there are new updates coming at NVIDIA GTC.

GTC is NVIDIA’s biggest annual event (March 17-21) - it will have deep dives, training labs, and researcher-led sessions on Cosmos.

Plus, Jensen Huang’s keynote! 🎤

🎟️ 20% off GTC registration β†’ Use code HUGGINGFACE20
πŸ”— https://www.nvidia.com/gtc/
πŸ“ Happening in person at the San Jose Convention Center and online.
Explore all Cosmos sessions at GTC: https://nvda.ws/41yBkmY

Try the existing Cosmos WFMs:

🔗 Hugging Face models: nvidia/cosmos-6751e884dc10e013a0a0d8e6

🛠️ Post-training scripts: https://github.com/NVIDIA/Cosmos/blob/main/cosmos1/models/POST_TRAINING.md
  • 1 reply
reacted to AdinaY's post with 🚀🔥😎 3 months ago
Exciting releases from the Chinese community this February 🔥
👉 https://huggingface.co/collections/zh-ai-community/2025-february-67a35aaa68e97812def5b6ef

MLLM:
✨ Ovis2 by Alibaba
AIDC-AI/ovis2-67ab36c7e497429034874464
✨ Step Audio Chat by StepFun AI
stepfun-ai/step-audio-67b33accf45735bb21131b0b

Audio:
✨ Step Audio TTS by StepFunAI
stepfun-ai/Step-Audio-TTS-3B
✨ InspireMusic by Alibaba
FunAudioLLM
✨ Baichuan Audio by BaichuanAI
baichuan-inc/Baichuan-Audio-Instruct

Video:
✨ Wan2.1 by Alibaba_Wan
Wan-AI/Wan2.1-T2V-14B
✨ Stepvideo-T2V by StepFun AI
stepfun-ai/stepvideo-t2v
✨ SkyReels-V1 by Skywork
Skywork/skyreels-v1-67b34676ff65b4ec02d16307
✨ LLaDA-8B by RenminUniversity
GSAI-ML/LLaDA-8B-Instruct

MoE:
✨ Moonlight-16B by MoonshotAI (Kimi)
moonshotai/Moonlight-16B-A3B-Instruct

Reasoning:
✨ TinyR1-32B by Qihoo360
qihoo360/TinyR1-32B-Preview

Dataset:
✨ Chinese DeepSeek R1-Distill data -110k
Congliu/Chinese-DeepSeek-R1-Distill-data-110k
replied to lysandre's post 3 months ago
reacted to lysandre's post with 🚀❤️ 3 months ago
SmolVLM-2 and SigLIP-2 are now part of transformers in dedicated releases!

They're added on top of the v4.49.0 release, and can be installed from the following tags: v4.49.0-SmolVLM-2 and v4.49.0-SigLIP-2.

This marks a new beginning for the release process of transformers. For the past five years, we've been doing monthly releases featuring many models (v4.49.0, the latest release, features 9 new architectures).

Starting with SmolVLM-2 & SigLIP2, we'll now additionally release tags supporting new models on a stable branch. These models are therefore directly available for use by installing from the tag itself. These tags will continue to be updated with fixes applied to these models.

Going forward, continue expecting software releases following semantic versioning: v4.50.0 will have ~10 new architectures compared to v4.49.0, as well as a myriad of new features, improvements and bug fixes. Accompanying these software releases, we'll release tags offering brand new models as fast as possible, to make them accessible to all immediately.
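
As an illustration (my addition), installing from one of those model-specific tags and loading a model could look like the sketch below; the checkpoint id is an assumption, so substitute the one referenced in the release notes:

```python
# Install transformers from the model-specific stable tag (shell):
#   pip install git+https://github.com/huggingface/transformers@v4.49.0-SmolVLM-2
# Then load the model through the usual API; the checkpoint id below is assumed
# for illustration only.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="HuggingFaceTB/SmolVLM2-2.2B-Instruct",  # assumed checkpoint id
)
```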
  • 1 reply
replied to Keltezaa's post 4 months ago