AI & ML interests

The Hugging Face Inference Endpoints Images repository allows AI Builders to collaborate on creating awesome inference deployments.

Recent Activity

AdinaY 
posted an update 3 days ago
Kimi-K2 is now available on the Hub🔥🚀
This is a trillion-parameter MoE model focused on long context, code, reasoning, and agentic behavior.

moonshotai/kimi-k2-6871243b990f2af5ba60617d

✨ Base & Instruct
✨ 1T total / 32B active - Modified MIT License
✨ 128K context length
✨ Muon optimizer for stable trillion-scale training
jbilcke-hf 
posted an update 5 days ago
Are you looking to run a robot simulator, maybe run long robot policy training tasks, but don't have a GPU at home?

Well.. you can run MuJoCo inside a Hugging Face space!

All you have to do is to clone this space:
jbilcke-hf/train-robots-with-mujoco

Don't forget to pick an Nvidia GPU for your Space, to be able to get some nice OpenGL renders!

Are you new to MuJoCo and/or JupyterLab notebooks?

You can get started with this tutorial (select "Open from URL" then paste the URL to this notebook):
jbilcke-hf/train-robots-with-mujoco
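
If you're already in the notebook and just want a quick smoke test, here's a minimal sketch (assuming the mujoco Python bindings are available in the environment; the scene XML is a made-up toy example):

import mujoco

# A tiny scene: a box free-falling onto a plane.
XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body pos="0 0 1">
      <freejoint/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

# Step the physics for one second of simulated time.
while data.time < 1.0:
    mujoco.mj_step(model, data)

# Offscreen render (this is where the Nvidia GPU and OpenGL matter).
renderer = mujoco.Renderer(model)
renderer.update_scene(data)
pixels = renderer.render()  # (height, width, 3) uint8 array
print(pixels.shape)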

Happy robot hacking! 🦾
a-r-r-o-w 
posted an update 6 days ago
Caching is an essential technique used in diffusion inference serving for speeding up image/video generations. Diffusers just added support for another caching method: First Block Cache - a technique developed by @chengzeyi building upon the ideas of TeaCache.

The idea in short is: if the model predictions do not vary much over successive inference steps, we can skip the steps where the prediction difference is small. To figure out whether an inference step will make a significant improvement to the overall velocity/noise prediction, we calculate the relative difference of the output of the first transformer block at timestep $t$ with $t-1$, and compare it against a selected threshold. If the difference is lower than the threshold, we skip the step. A higher threshold leads to more steps being skipped. However, skipping too many steps is bad because it can throw off the model predictions, so the threshold needs to be tested and selected based on the acceptable quality-speed tradeoff for every model it is used with.
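
As a toy illustration (not the actual Diffusers internals), the skip decision boils down to something like this:

import torch

def should_skip_step(prev_out: torch.Tensor, curr_out: torch.Tensor, threshold: float = 0.2) -> bool:
    # prev_out/curr_out: first transformer block outputs at timesteps t-1 and t.
    # If the relative change is below the threshold, the remaining blocks are
    # skipped for this step and the cached result is reused instead.
    rel_diff = (curr_out - prev_out).abs().mean() / prev_out.abs().mean()
    return rel_diff.item() < threshold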

Diffusers usage with CogView4:

import torch
from diffusers import CogView4Pipeline
from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Enable First Block Cache on the transformer; a higher threshold skips more steps.
apply_first_block_cache(pipe.transformer, FirstBlockCacheConfig(threshold=0.2))

prompt = "A photo of an astronaut riding a horse on mars"
image = pipe(prompt, generator=torch.Generator().manual_seed(42)).images[0]
image.save("output.png")


Below, you'll find the benchmarks and visualizations of the predicted output at different blocks of the Flux DiT.

Docs: https://huggingface.co/docs/diffusers/main/en/optimization/cache
PR: https://github.com/huggingface/diffusers/pull/11180

References:
- First Block Cache: https://github.com/chengzeyi/ParaAttention
- TeaCache: https://github.com/ali-vilab/TeaCache
AdinaY 
posted an update 6 days ago
The tech report of RoboBrain 2.0 is now available on the Daily Papers page🔥

It's an embodied brain model that sees, thinks, and plans for many robots.

Leave your insights or questions, the authors are happy to respond.
RoboBrain 2.0 Technical Report (2507.02029)
AdinaY 
posted an update 6 days ago
POLAR🐻‍❄️ New reward models by Shanghai AI Lab

internlm/polar-68693f829d2e83ac5e6e124a

✨ 1.8B/7B - Apache 2.0
✨ Scalable policy discriminative pretraining
✨ Easy RLHF with minimal preference data
AdinaY 
posted an update 12 days ago
The Chinese Open Source Heatmap is live 🔥
You can now track the companies, research labs, and communities powering China’s open source AI movement.

zh-ai-community/model-release-heatmap-zh

Some highlights:

✨Tech giants are investing more in open source.
-Alibaba: Full stack open ecosystem
-Tencent: Hunyuan image/video/3D
-Bytedance: Catching up fast in 2025
-Baidu: New player in open LLMs

✨New players emerging post–DeepSeek moment.
-Xiaomi
-Red Note
-Bilibili
-MiniMax
-Moonshot AI

✨Startup list is shifting fast! Those who find a direction aligned with their strengths are the ones who endure.
-DeepSeek
-MiniMax
-StepFun
-Moonshot AI
-Zhipu AI
-OpenBMB

✨Research Lab & Community are making key contributions.
-BAAI
-Shanghai AI Lab
-OpenMOSS
-MAP
AdinaY 
posted an update 12 days ago
🔥 June highlights from China’s open source ecosystem.

zh-ai-community/june-2025-open-works-from-the-chinese-community-683d66c188f782dc5570ba15

✨Baidu & MiniMax both launched open foundation models
- Baidu: Ernie 4.5 (from 0.3B to 424B) 🤯
- MiniMax: MiniMax-M1 (hybrid MoE reasoning model)

✨Multimodal AI is moving from fusion to full-stack reasoning: unified Any-to-Any pipelines across text, vision, audio, and 3D
- Baidu: ERNIE-4.5-VL-424B
- Moonshot AI: Kimi-VL-A3B
- Alibaba: Ovis-U1
- BAAI: Video-XL-2/OmniGen2
- AntGroup: Ming-Lite-Omni
- Chinese Academy of Sciences: Stream-Omni
- Bytedance: SeedVR2-3B
- Tencent: Hunyuan 3D 2.1/ SongGeneration
- FishAudio: Openaudio-s1-mini

✨Domain specific models are rapidly emerging
- Alibaba DAMO: Lingshu-7B (medical MLLM)
- BAAI: RoboBrain (Robotics)

✨ So many small models!
- OpenBMB: MiniCPM4 (on-device)
- Qwen: Embedding/Reranker (0.6B)
- Alibaba: Ovis-U1-3B
- Moonshot AI: Kimi-VL-A3B
- Bytedance: SeedVR2-3B
AdinaY 
posted an update 12 days ago
MTVCraft 🔥 A Veo3-style audio-video model by BAAI

Model:
BAAI/MTVCraft
Demo:
BAAI/MTVCraft

✨ Text > [Speech + SFX + BGM] > Synchronized Video
✨ Built with Qwen3 + ElevenLabs + MTV
AdinaY 
posted an update 13 days ago
GLM-4.1V-Thinking 🔥 New open vision reasoning model by Zhipu AI

THUDM/glm-41v-thinking-6862bbfc44593a8601c2578d

✨ 9B base & Thinking - MIT license
✨ CoT + RL with Curriculum Sampling
✨ 64k context, 4K image, any aspect ratio
✨ Support English & Chinese
✨ Outperforms GPT-4o (2024/11/20)
arthurbresnu 
posted an update 13 days ago
‼️Sentence Transformers v5.0 is out! The biggest update yet introduces sparse embedding models, encode method improvements, the Router module & much more. Sparse + Dense = 🔥 hybrid search performance!

1️⃣ Sparse Encoder Models - New support for sparse embeddings (30k+ dims, <1% non-zero)

* Full SPLADE, Inference-free SPLADE, CSR support
* 4 new modules, 12 losses, 9 evaluators
* Integration with elastic, opensearch-project, Qdrant, ibm-granite
* Decode interpretable embeddings (see the sketch below)
* Hybrid search integration
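
A minimal sketch of the new SparseEncoder API (the model name and decode signature follow the release notes; treat the exact arguments as assumptions and check the docs):

from sentence_transformers import SparseEncoder

# Load a pretrained SPLADE model as a sparse encoder.
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

embeddings = model.encode([
    "The weather is lovely today.",
    "It's so sunny outside!",
])

# Sparse embeddings are interpretable: inspect the top weighted tokens.
print(model.decode(embeddings[0], top_k=5))

# Sparse-to-sparse similarity works like the dense case.
print(model.similarity(embeddings, embeddings))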

2️⃣ Enhanced Encode Methods

* encode_query & encode_document with auto prompts (see the sketch below)
* Direct device list passing to encode()
* Cleaner multi-processing
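
A hedged sketch of the new encode methods (this particular model defines no prompts, so they fall back to plain encode()):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# encode_query / encode_document apply the model's "query" / "document"
# prompts automatically when the model defines them.
query_emb = model.encode_query("How do I bake sourdough bread?")
doc_embs = model.encode_document([
    "Sourdough needs a starter fed with flour and water.",
    "The Eiffel Tower is in Paris.",
])
print(model.similarity(query_emb, doc_embs))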

3️⃣ Router Module & Training

* Different paths for queries vs documents
* Custom learning rates per parameter group
* Composite loss logging
* Perfect for two-tower architectures

4️⃣ Documentation & Training

* New Training/Loss Overview docs
* 6 training example pages
* Search engine integration examples

Read the comprehensive blogpost about training sparse embedding models: https://huggingface.co/blog/train-sparse-encoder

See the full release notes here: https://github.com/UKPLab/sentence-transformers/releases/v5.0.0

What's next? We would love to hear from the community! What sparse encoder models would you like to see? And what new capabilities should Sentence Transformers handle - multimodal embeddings, late interaction models, or something else? Your feedback shapes our roadmap!

I'm incredibly excited to see the community explore sparse embeddings and hybrid search! The interpretability alone makes this a game-changer for understanding what your models are actually doing.

🙏 Thanks to @tomaarsen for this incredible opportunity!

AdinaY 
posted an update 15 days ago
Baidu kept its promise, releasing 10 open models on the very last day of June🚀 Let's meet ERNIE 4.5 🔥

baidu/ernie-45-6861cd4c9be84540645f35c9

✨ From 0.3B to 424B total params
✨ Includes 47B & 3B active param MoE models + a 0.3B dense model
✨ Apache 2.0
✨ 128K context length
✨ Text+Vision co-training with ViT & UPO
a-r-r-o-w 
posted an update 18 days ago
As you might have already heard, FLUX.1-Kontext-dev is now released and has taken the generative community by storm!

In case you haven't come across it, you can get started with Kontext using 🤗 diffusers. See the official [model](black-forest-labs/FLUX.1-Kontext-dev) and [docs](https://huggingface.co/docs/diffusers/main/en/api/pipelines/flux#flux).

Want to know how inference companies like Fal & Replicate are able to run the model so fast and in under 2 seconds per image? Check out this [gist](https://gist.github.com/a-r-r-o-w/d08c37e8bd3e9c26b4ce80360be148c6) for some details!
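
For a quick start, here's a minimal sketch based on the linked docs (the input image URL is a placeholder; check the Flux docs page for the exact pipeline arguments):

import torch
from diffusers import FluxKontextPipeline
from diffusers.utils import load_image

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Kontext edits an existing image according to the prompt.
init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png")
image = pipe(image=init_image, prompt="Turn the cat into a tiger").images[0]
image.save("kontext_output.png")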
AdinaY 
posted an update 18 days ago
Hunyuan-A13B 🔥 New MoE LLM by TencentHunyuan

tencent/Hunyuan-A13B-Instruct

✨80B total / 13B active params
✨256K context window
✨Dual-mode reasoning: fast & slow thinking
✨Efficient inference (GQA + quantization)
jsulz 
posted an update 18 days ago
It's been a bit since I took a step back and looked at the xet-team's progress migrating Hugging Face from Git LFS to Xet, but every time I do it boggles the mind.

A month ago there were 5,500 users/orgs on Xet with 150K repos and 4PB. Today?
🤗 700,000 users/orgs
📈 350,000 repos
🚀 15PB

Meanwhile, our migrations have pushed throughput to numbers that are bonkers. In June, we hit upload speeds of 577Gb/s (crossing 500Gb/s for the first time).

These are hard numbers to put into context, but let's try:

The latest run of the Common Crawl from commoncrawl was 471 TB.

We now have ~32 crawls stored in Xet. At peak upload speed we could move the latest crawl into Xet in about two hours.
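
The back-of-the-envelope math for that claim (assuming decimal units):

# 471 TB at a peak upload speed of 577 Gb/s:
crawl_bytes = 471e12
peak_bits_per_second = 577e9
hours = crawl_bytes * 8 / peak_bits_per_second / 3600
print(f"{hours:.1f} hours")  # ~1.8 hours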

We're moving to a new phase in the process, so stay tuned.

This shift in gears means it's also time to roll up our sleeves and look at all the bytes we have and the value we're adding to the community.

I already have some homework from @RichardErkhov to look at the dedupe across their uploads, and I'll be doing the same for other early adopters, big models/datasets, and frequent uploaders (looking at you @bartowski 👀)

Let me know if there's anything you're interested in; happy to dig in!
freddyaboulton 
posted an update 19 days ago