Luc Georges

mcpotato

AI & ML interests

None yet

Organizations

Hugging Face, Scanned Tokens, Hugging Face H4, Inference Endpoints, Blog-explorers, hf-security, Dev Mode Explorers, Hugging Face Discord Community, HF MC Players

mcpotato's activity

upvoted 2 articles 1 day ago

Introducing smolagents: simple agents that write actions in code.

Welcome to Inference Providers on the Hub 🔥
upvoted an article 4 months ago

Fine-tuning LLMs to 1.58bit: extreme quantization made easy
upvoted 2 articles 5 months ago

2024 Security Feature Highlights

Hugging Face partners with TruffleHog to Scan for Secrets
upvoted an article 7 months ago

Binary and Scalar Embedding Quantization for Significantly Faster & Cheaper Retrieval
reacted to dvilasuero's post with 🚀🔥 8 months ago

Today is a huge day in Argilla’s history. We couldn’t be more excited to share this with the community: we’re joining Hugging Face!

We’re embracing a larger mission, becoming part of a brilliant and kind team and a shared vision about the future of AI.

Over the past year, we've been collaborating with Hugging Face on countless projects: becoming a launch partner of Docker Spaces, empowering the community to clean Alpaca translations into Spanish and other languages, launching argilla/notus-7b-v1 building on Zephyr's learnings, the Data is Better Together initiative with hundreds of community contributors, and releasing argilla/OpenHermesPreferences, one of the largest open preference tuning datasets.

After more than 2,000 Slack messages and over 60 people collaborating for over a year, it already felt like we were part of the same team, pushing in the same direction. After a week of the smoothest transition you can imagine, we’re now the same team.

To those of you who’ve been following us, this won’t be a huge surprise, but it will be a big deal in the coming months. This acquisition means we’ll double down on empowering the community to build and collaborate on high quality datasets, we’ll bring full support for multimodal datasets, and we’ll be in a better place to collaborate with the Open Source AI community. For enterprises, this means that the Enterprise Hub will unlock highly requested features like single sign-on and integration with Inference Endpoints.

As a founder, I am proud of the Argilla team. We're now part of something bigger, a larger team with the same values, culture, and goals. Grateful to have shared this journey with my beloved co-founders Paco and Amélie.

Finally, huge thanks to the Chief Llama Officer @osanseviero for sparking this and being such a great partner during the acquisition process.

Would love to answer any questions you have so feel free to add them below!
New activity in dev-mode-explorers/README 8 months ago
upvoted an article 8 months ago

Space secrets security update
published an article 9 months ago
upvoted an article 10 months ago

Making thousands of open LLMs bloom in the Vertex AI Model Garden
reacted to trisfromgoogle's post with 🚀 10 months ago

Very excited to share the first two official Gemma variants from Google! Today at Google Cloud Next, we announced cutting-edge models for code and research!

First, google/codegemma-release-66152ac7b683e2667abdee11 - a new set of code-focused Gemma models at 2B and 7B, in both pretrained and instruction-tuned variants. These exhibit outstanding performance on academic benchmarks and (in my experience) real-life usage. Read more in the excellent Hugging Face blog: https://huggingface.co/blog/codegemma

Second, google/recurrentgemma-release-66152cbdd2d6619cb1665b7a, which is based on the outstanding Google DeepMind research on Griffin: https://arxiv.org/abs/2402.19427. RecurrentGemma is a research variant that enables higher throughput and vastly improved memory usage. We are excited about new architectures, especially in the lightweight Gemma sizes, where innovations like RecurrentGemma can scale modern AI to many more use cases.

For details on the launches of these models, check out our launch blog -- and please do not hesitate to send us feedback. We are excited to see what you build with CodeGemma and RecurrentGemma!

Huge thanks to the Hugging Face team for helping ensure that these models work flawlessly in the Hugging Face ecosystem at launch!