Andrew Reed

andrewrreed

AI & ML interests

Applied ML, Practical AI, Inference & Deployment, LLMs, Multi-modal Models, RAG

Organizations

Hugging Face, Demo Corp, Atmos Bank, Hugging Test Lab, HuggingFaceM4, Cloudera Fast Forward Labs, Code Llama, Xlscout Ltd, Olto, Enterprise Explorers, Navigate360, Ryght AI, Sanofi, Social Post Explorers, Xsolla, open/ acc, wut?, Langfuse, Inference Endpoints Images

andrewrreed's activity

upvoted an article 1 day ago

Dell Enterprise Hub is all you need to build AI on premises
By jeffboudier and 7 others (18 upvotes)
published an article 12 days ago

Dell Enterprise Hub is all you need to build AI on premises
By jeffboudier and 7 others (18 upvotes)
commented on How to Build an MCP Server with Gradio about 1 month ago
upvoted an article about 1 month ago
upvoted 2 articles about 1 month ago

17 Reasons Why Gradio Isn't Just Another UI Library
By ysharma and 1 other (37 upvotes)

An Introduction to AI Model Optimization Techniques
By PrunaAI and 1 other (28 upvotes)
upvoted an article about 2 months ago
reacted to jsulz's post with 🔥 about 2 months ago
Huge week for the xet-team: Llama 4 is the first major model on Hugging Face uploaded with Xet as the backing storage! Every byte downloaded comes through our infrastructure.

Using Xet on Hugging Face is the fastest way to download and iterate on open-source models, and we've proven it with Llama 4, which delivered a boost of ~25% across all models.

We expect builders on the Hub to see even more improvements, helping power innovation across the community.

With the models on our infrastructure, we can peer in and see how well our dedupe performs across the Llama 4 family. On average we're seeing ~25% deduplication, providing huge savings to the community as they iterate on these state-of-the-art models. The attached image shows a few selected models and how they perform on Xet.

Thanks to the meta-llama team for launching on Xet!
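The dedupe figure in the post above can be made concrete with a toy sketch. This is not Xet's actual algorithm (Xet uses content-defined chunking over a shared chunk store); the `dedupe_ratio` function, the fixed-size chunking, and the synthetic "base vs. fine-tune" data below are illustrative assumptions only:

```python
import hashlib

def dedupe_ratio(files: dict[str, bytes], chunk_size: int = 64) -> float:
    """Fraction of total bytes saved by storing each unique chunk only once.

    Simplified sketch: fixed-size chunks. Real systems such as Xet use
    content-defined chunking, so an insertion does not shift every chunk
    boundary that follows it.
    """
    total = 0
    seen: set[str] = set()
    unique_bytes = 0
    for data in files.values():
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            total += len(chunk)
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in seen:
                seen.add(digest)
                unique_bytes += len(chunk)
    return 1 - unique_bytes / total if total else 0.0

# Deterministic pseudorandom "base model" plus a variant with one byte
# changed, standing in for a fine-tune that shares most of its bytes
# with the base checkpoint.
base = b"".join(hashlib.sha256(str(i).encode()).digest() for i in range(128))
variant = base[:3500] + bytes([base[3500] ^ 0xFF]) + base[3501:]
print(round(dedupe_ratio({"base": base, "variant": variant}), 2))  # → 0.49
```

Nearly half the combined bytes dedupe away here because only the single chunk containing the edit needs to be stored twice; the ~25% figure in the post is the analogous savings measured across the real Llama 4 family.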
upvoted an article 2 months ago

The New and Fresh analytics in Inference Endpoints
By erikkaum and 4 others (21 upvotes)