AI & ML interests

ONNX is an open ecosystem for interoperable AI models

Recent Activity

echarlaix - new activity 17 days ago in onnx/export:tag-onnx
echarlaix - new activity 17 days ago in onnx/export:write-discussion
echarlaix - new activity 17 days ago in onnx/export:torch-requirements

onnx's activity

Xenova posted an update 2 days ago
NEW: Real-time conversational AI models can now run 100% locally in your browser! 🀯

πŸ” Privacy by design (no data leaves your device)
πŸ’° Completely free... forever
πŸ“¦ Zero installation required, just visit a website
⚑️ Blazingly fast WebGPU-accelerated inference

Try it out: webml-community/conversational-webgpu

For those interested, here's how it works (a rough code sketch follows below):
- Silero VAD for voice activity detection
- Whisper for speech recognition
- SmolLM2-1.7B for text generation
- Kokoro for text to speech

Powered by Transformers.js and ONNX Runtime Web! πŸ€— I hope you like it!
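
For those curious how these pieces can fit together, here is a minimal sketch of the speech-to-speech loop using the Transformers.js pipeline API and kokoro-js (introduced in a later post below). It skips the Silero VAD stage and all microphone/playback plumbing, and the model IDs are illustrative assumptions rather than the demo's actual configuration.

import { pipeline } from "@huggingface/transformers";
import { KokoroTTS } from "kokoro-js";

// Speech recognition (Whisper) and text generation (SmolLM2) via Transformers.js.
// Repo IDs below are assumptions, not necessarily what the demo loads.
const asr = await pipeline(
  "automatic-speech-recognition",
  "onnx-community/whisper-base", // assumed ONNX export
  { device: "webgpu" },
);
const llm = await pipeline(
  "text-generation",
  "HuggingFaceTB/SmolLM2-1.7B-Instruct", // assumed to ship ONNX weights
  { device: "webgpu" },
);
// Text-to-speech (Kokoro) via kokoro-js.
const tts = await KokoroTTS.from_pretrained("onnx-community/Kokoro-82M-ONNX", {
  dtype: "q8",
});

// `audio` is a 16 kHz Float32Array, e.g. captured after Silero VAD detects
// the end of an utterance (VAD and audio capture are omitted here).
async function respond(audio) {
  const { text } = await asr(audio);                            // speech -> text
  const messages = [{ role: "user", content: text }];
  const output = await llm(messages, { max_new_tokens: 128 });  // text -> reply
  const reply = output[0].generated_text.at(-1).content;
  return await tts.generate(reply, { voice: "af_sky" });        // reply -> speech
}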
  • 2 replies
Β·
echarlaix in onnx/export 17 days ago
- tag-onnx (#18, opened 17 days ago by echarlaix)
- write-discussion (#17, opened 17 days ago by echarlaix)
- torch-requirements (#16, opened 17 days ago by echarlaix)
- hf-oauth (#15, opened 17 days ago by echarlaix)
- fix (#14, opened 18 days ago by echarlaix)
echarlaix in onnx/export 18 days ago
- onnx-export-fix (#13, opened 21 days ago by echarlaix)
reach-vb posted an update 18 days ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! πŸ’₯

as you know, we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub. now that we're certain the backend can scale even with big models like Llama 4 / Qwen 3, we're moving to the next phase of inviting impactful orgs and users on the hub over. as you're a big part of the open source ML community, we'd love to onboard you next and create some excitement about it in the community too!

in terms of actual steps - it should be as simple as one of the org admins joining hf.co/join/xet - we'll take care of the rest.

p.s. you'd need to have the latest hf_xet version of the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage

p.p.s. this is fully backwards compatible so everything will work as it should! πŸ€—
Xenova posted an update about 1 month ago
Introducing the ONNX model explorer: Browse, search, and visualize neural networks directly in your browser. 🀯 A great tool for anyone studying Machine Learning! We're also releasing the entire dataset of graphs so you can use them in your own projects! πŸ€—

Check it out! πŸ‘‡
Demo: onnx-community/model-explorer
Dataset: onnx-community/model-explorer
Source code: https://github.com/xenova/model-explorer
Xenova posted an update about 2 months ago
Reasoning models like o3 and o4-mini are advancing faster than ever, but imagine what will be possible when they can run locally in your browser! 🀯

Well, with πŸ€— Transformers.js, you can do just that! Here's Zyphra's new ZR1 model running at over 100 tokens/second on WebGPU! ⚑️

Giving models access to browser APIs (like File System, Screen Capture, and more) could unlock an entirely new class of web experiences that are personalized, interactive, and run locally in a secure, sandboxed environment.

For now, try out the demo! πŸ‘‡
webml-community/Zyphra-ZR1-WebGPU
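
As a rough illustration of what running such a model in the browser involves, the sketch below loads a text-generation pipeline on WebGPU with Transformers.js and streams tokens as they are produced. The repo ID and generation settings are placeholders, not the demo's actual configuration.

import { pipeline, TextStreamer } from "@huggingface/transformers";

// Placeholder repo ID: substitute the ONNX export of ZR1 that the demo uses.
const generator = await pipeline(
  "text-generation",
  "onnx-community/ZR1-1.5B-ONNX",
  { device: "webgpu", dtype: "q4f16" },
);

// Stream tokens to the page as they are generated instead of waiting
// for the full answer (this is where the ~100 tokens/second shows up).
const streamer = new TextStreamer(generator.tokenizer, {
  skip_prompt: true,
  callback_function: (token) => document.body.append(token),
});

const messages = [
  { role: "user", content: "What is the sum of the first 100 positive integers?" },
];
const output = await generator(messages, { max_new_tokens: 512, streamer });
console.log(output[0].generated_text.at(-1).content);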
  • 1 reply
Β·
XenovaΒ 
posted an update 4 months ago
view post
Post
13432
We did it. Kokoro TTS (v1.0) can now run 100% locally in your browser w/ WebGPU acceleration. Real-time text-to-speech without a server. ⚑️

Generate 10 seconds of speech in ~1 second for $0.

What will you build? πŸ”₯
webml-community/kokoro-webgpu

The most difficult part was getting the model running in the first place, but the next steps are simple:
βœ‚οΈ Implement sentence splitting, allowing for streamed responses
🌍 Multilingual support (only phonemization left)

Who wants to help?
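
One possible approach to the sentence-splitting step, built on the kokoro-js API shown in the next post: split the text into sentences with the browser's built-in Intl.Segmenter, synthesize them one at a time, and hand each audio chunk to the player as soon as it is ready. This is a sketch under those assumptions, not the project's actual implementation.

import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained("onnx-community/Kokoro-82M-ONNX", {
  dtype: "q8",
});

// Split text into sentences and yield each synthesized chunk as soon as
// it is ready, instead of waiting for the whole passage.
async function* streamSpeech(text, voice = "af_sky") {
  const segmenter = new Intl.Segmenter("en", { granularity: "sentence" });
  for (const { segment } of segmenter.segment(text)) {
    const sentence = segment.trim();
    if (sentence.length === 0) continue;
    yield await tts.generate(sentence, { voice });
  }
}

// Usage: queue chunks for playback (or save them) as they arrive.
for await (const chunk of streamSpeech("First sentence. Then a second one.")) {
  // e.g. enqueue `chunk` for playback, or chunk.save("part.wav")
}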
Xenova posted an update 5 months ago
Introducing Kokoro.js, a new JavaScript library for running Kokoro TTS, an 82 million parameter text-to-speech model, 100% locally in the browser w/ WASM. Powered by πŸ€— Transformers.js. WebGPU support coming soon!
πŸ‘‰ npm i kokoro-js πŸ‘ˆ

Try it out yourself: webml-community/kokoro-web
Link to models/samples: onnx-community/Kokoro-82M-ONNX

You can get started in just a few lines of code!
import { KokoroTTS } from "kokoro-js";

const tts = await KokoroTTS.from_pretrained(
  "onnx-community/Kokoro-82M-ONNX",
  { dtype: "q8" }, // fp32, fp16, q8, q4, q4f16
);

const text = "Life is like a box of chocolates. You never know what you're gonna get.";
const audio = await tts.generate(text,
  { voice: "af_sky" }, // See `tts.list_voices()`
);
audio.save("audio.wav");

Huge kudos to the Kokoro TTS community, especially taylorchu for the ONNX exports and Hexgrad for the amazing project! None of this would be possible without you all! πŸ€—

The model is also extremely resilient to quantization. The smallest variant is only 86 MB in size (down from the original 326 MB), with no noticeable difference in audio quality! 🀯
Xenova posted an update 5 months ago
First project of 2025: Vision Transformer Explorer

I built a web app to interactively explore the self-attention maps produced by ViTs. This explains what the model is focusing on when making predictions, and provides insights into its inner workings! 🀯

Try it out yourself! πŸ‘‡
webml-community/attention-visualization

Source code: https://github.com/huggingface/transformers.js-examples/tree/main/attention-visualization
akhaliq posted an update 6 months ago
Google drops Gemini 2.0 Flash Thinking

a new experimental model that unlocks stronger reasoning capabilities and shows its thoughts. The model plans (with its thoughts visible), can solve complex problems at Flash speeds, and more.

now available in anychat, try it out: https://huggingface.co/spaces/akhaliq/anychat
Xenova posted an update 6 months ago
Introducing Moonshine Web: real-time speech recognition running 100% locally in your browser!
πŸš€ Faster and more accurate than Whisper
πŸ”’ Privacy-focused (no data leaves your device)
⚑️ WebGPU accelerated (w/ WASM fallback)
πŸ”₯ Powered by ONNX Runtime Web and Transformers.js

Demo: webml-community/moonshine-web
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/moonshine-web
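
A minimal sketch of what using Moonshine through Transformers.js could look like, preferring WebGPU and falling back to WASM. The repo ID and sample audio URL are assumptions for illustration; the demo's real-time microphone handling is omitted.

import { pipeline } from "@huggingface/transformers";

// Prefer WebGPU when the browser supports it, otherwise fall back to WASM.
const device = navigator.gpu ? "webgpu" : "wasm";

const transcriber = await pipeline(
  "automatic-speech-recognition",
  "onnx-community/moonshine-base-ONNX", // assumed repo ID
  { device },
);

// Input can be an audio URL or a 16 kHz Float32Array of PCM samples.
const audio = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav";
const { text } = await transcriber(audio);
console.log(text);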
Xenova posted an update 6 months ago
Introducing TTS WebGPU: The first ever text-to-speech web app built with WebGPU acceleration! πŸ”₯ High-quality and natural speech generation that runs 100% locally in your browser, powered by OuteTTS and Transformers.js. πŸ€— Try it out yourself!

Demo: webml-community/text-to-speech-webgpu
Source code: https://github.com/huggingface/transformers.js-examples/tree/main/text-to-speech-webgpu
Model: onnx-community/OuteTTS-0.2-500M (ONNX), OuteAI/OuteTTS-0.2-500M (PyTorch)