Lain

not-lain

AI & ML interests

custom AI models with HF integration, multimodal RAG, and open-source contributions

not-lain's activity

reacted to DavidGF's post with 🔥 8 days ago
🎉 Celebrating One Year of #SauerkrautLM with Two Groundbreaking Releases!

We're thrilled to announce the release of SauerkrautLM-v2-14b in two specialized versions: VAGOsolutions/SauerkrautLM-v2-14b-SFT and VAGOsolutions/SauerkrautLM-v2-14b-DPO. Built on the robust Qwen2.5-14B foundation, these models represent a significant leap forward in multilingual AI capabilities.

🔬 Technical Breakthroughs:
💠 Innovative three-phase fine-tuning approach
💠 Two-step Spectrum SFT + one-step Spectrum DPO optimization phase for enhanced performance
💠 Balance of German and English language capabilities
💠 Advanced function calling - almost on par with Claude-3.5-Sonnet-20240620

🇩🇪 German Language Excellence:
What sets this release apart is our unique achievement in simultaneously improving both German and English capabilities. Through our specialized training approach with over 1.2B tokens across two phases, we've managed to:
💠 Enhance German language understanding and generation (SFT version > DPO version)
💠 Maintain authentic German linguistic nuances
💠 Improve cross-lingual capabilities
💠 Preserve cultural context awareness

📊 Training Innovation:
Our three-phase approach targeted specific layer percentages (15%, 20% and 25%) with carefully curated datasets (a layer-targeting sketch follows this post), including:
💠 Mathematics-focused content (proprietary classifier-selected)
💠 High-quality German training data
💠 Specialized function calling datasets
💠 Premium multilingual content

🎁 Community Contribution:
We're also releasing two new datasets in a few days:
1️⃣ SauerkrautLM-Fermented-GER-DPO: 3,300 high-quality German training samples
2️⃣ SauerkrautLM-Fermented-Irrelevance-GER-DPO: 2,000 specialized samples for optimized function-call irrelevance handling

Thank you to our incredible community and partners who have supported us throughout this journey. Here's to another year of AI innovation! 🚀
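The layer-percentage targeting mentioned above can be illustrated by training only a fraction of a model's transformer layers per phase. A minimal sketch, assuming a Qwen2.5-style decoder and a hypothetical 25% phase; note that Spectrum proper selects layers by signal-to-noise analysis rather than simply taking the top of the stack:

```python
# Sketch: train only a chosen fraction of transformer layers, as in the
# 15% / 20% / 25% phases above. Spectrum selects layers by signal-to-noise
# analysis; unfreezing the top 25% here is a simplified stand-in.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-14B")

layers = model.model.layers               # decoder blocks of a Qwen2-style LM
n_target = max(1, int(len(layers) * 0.25))

for param in model.parameters():          # freeze everything first
    param.requires_grad = False
for layer in layers[-n_target:]:          # then unfreeze the chosen fraction
    for param in layer.parameters():
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable params: {trainable:,}")
```
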
reacted to reach-vb's post with 🔥🚀 9 days ago
Smol models ftw! AMD released AMD OLMo 1B - beats OpenELM and TinyLlama on MT-Bench and AlpacaEval - Apache 2.0 licensed 🔥

> Trained with 1.3 trillion tokens (Dolma 1.7) on 16 nodes, each with 4 MI250 GPUs

> Three checkpoints:

- AMD OLMo 1B: Pre-trained model
- AMD OLMo 1B SFT: Supervised fine-tuned on Tulu V2, OpenHermes-2.5, WebInstructSub, and Code-Feedback datasets
- AMD OLMo 1B SFT DPO: Aligned with human preferences using Direct Preference Optimization (DPO) on UltraFeedback dataset

Key Insights:
> Pre-trained with less than half the tokens of OLMo-1B
> Post-training steps include two-phase SFT and DPO alignment
> Data for SFT:
- Phase 1: Tulu V2
- Phase 2: OpenHermes-2.5, WebInstructSub, and Code-Feedback

> Model checkpoints on the Hub & integrated with Transformers ⚡️ (see the loading sketch below)

Congratulations & kudos to AMD on a brilliant smol model release! 🤗

amd/amd-olmo-6723e7d04a49116d8ec95070
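The checkpoints load with the stock Transformers API; a minimal generation sketch, assuming the repo id amd/AMD-OLMo-1B-SFT from the collection above:

```python
# Minimal generation with the SFT checkpoint (repo id assumed from the
# collection link above; the base and DPO variants load the same way).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Smol models are great because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
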
reacted to nroggendorff's post with 🤯 10 days ago
Did you guys know that if you try to link a prepaid card to huggingface it won't work, but then if you press the button again it links anyway? Then you can lock the card (deny any charges), and get resources for free? You're welcome :P
reacted to merve's post with ❤️🔥 12 days ago
Another great week in open ML!
Here's a small recap 🫰🏻

Model releases
⏯️ Video Language Models
AI at Meta released Vision-CAIR/LongVU_Qwen2_7B, a new state-of-the-art long-video language model based on DINOv2, SigLIP, Qwen2 and Llama 3.2

💬 Small language models
Hugging Face released HuggingFaceTB/SmolLM2-1.7B, a new family of smol language models with an Apache 2.0 license that comes in 135M, 360M and 1.7B sizes, along with datasets (quick-try sketch after this recap).
Meta released facebook/MobileLLM-1B, a new family of on-device LLMs of sizes 125M, 350M and 600M

πŸ–ΌοΈ Image Generation
Stability AI released stabilityai/stable-diffusion-3.5-medium, a 2B model with commercially permissive license

πŸ–ΌοΈπŸ’¬Any-to-Any
gpt-omni/mini-omni2 is closest reproduction to GPT-4o, a new LLM that can take image-text-audio input and output speech is released!

Dataset releases
πŸ–ΌοΈ Spawning/PD12M, a new captioning dataset of 12.4 million examples generated using Florence-2
reacted to vikhyatk's post with 🔥 16 days ago
Pushed a new update to vikhyatk/moondream2 today. TextVQA up from 60.2 to 65.2, DocVQA up from 61.9 to 70.5.

Space has been updated to the new model if you want to try it out! vikhyatk/moondream2
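For reference, moondream2 runs through custom code shipped in the repo; a minimal VQA sketch, assuming the encode_image/answer_question helpers its model card documents (the API may change between revisions, so pin one in practice):

```python
# Minimal VQA sketch for moondream2, assuming the helpers documented on the
# model card (encode_image / answer_question); pin a revision in practice.
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image

model_id = "vikhyatk/moondream2"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

image = Image.open("receipt.png")   # any local test image
encoded = model.encode_image(image)
print(model.answer_question(encoded, "What is the total amount?", tokenizer))
```
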
reacted to vikhyatk's post with 🔥👍 16 days ago
Just released a dataset with 7000+ hours of synthetically generated lo-fi music. vikhyatk/lofi
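A sketch for sampling the dataset without pulling all 7,000+ hours down at once, using the datasets library's streaming mode (the post doesn't list field names, so the sketch just inspects them):

```python
# Stream a few samples instead of downloading 7000+ hours of audio up front.
from datasets import load_dataset

ds = load_dataset("vikhyatk/lofi", split="train", streaming=True)
for sample in ds.take(3):
    print(sample.keys())  # inspect the audio/metadata fields first
```
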
replied to merve's post 16 days ago
reacted to merve's post with ❤️👍🔥 16 days ago
The Hugging Face Hub Python library now comes with easy inference for vision language models! ✨

$ pip install huggingface_hub 🤗
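A minimal sketch of what that looks like with InferenceClient, assuming a hosted vision chat model (the model id and image URL here are placeholders, not from the post):

```python
# Sketch: VLM inference through huggingface_hub's InferenceClient.
# Model id and image URL are placeholders, not from the post.
from huggingface_hub import InferenceClient

client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
response = client.chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": "https://example.com/cat.png"}},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }],
    max_tokens=128,
)
print(response.choices[0].message.content)
```
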
reacted to nroggendorff's post with 😎 17 days ago
@echo off
echo hello world
pause

reacted to fdaudens's post with ❀️ 18 days ago
🤯 Plot twist: Size isn't everything in AI! A lean 32B-parameter model just showed up to the party and outperformed a 70B one. Efficiency > Scale? The AI world just got more interesting...

Cohere For AI released Aya Expanse, a new family of multilingual models (8B and 32B) spanning 23 popular languages.

Models: CohereForAI/c4ai-aya-expanse-671a83d6b2c07c692beab3c3
Blog post: https://huggingface.co/blog/aya-expanse
Demo: CohereForAI/aya_expanse
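A minimal multilingual chat sketch for the smaller variant, assuming the repo id CohereForAI/aya-expanse-8b (the collection above lists the exact model names):

```python
# Minimal multilingual chat with Aya Expanse 8B (repo id assumed; check the
# collection linked above for the exact model names).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/aya-expanse-8b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [{"role": "user", "content": "¿Qué tiempo hace hoy en Madrid?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
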
reacted to merve's post with ❤️👀 19 days ago
Lotus 🪷 is a new foundation model for monocular depth estimation ✨
Compared to previous diffusion-based MDE models, Lotus is modified for dense prediction tasks.
Authors also released a model for normal prediction 🤗
Find everything in this collection merve/lotus-6718fb957dc1c85a47ca1210
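Lotus ships its own diffusion-based pipeline via the collection above; as a generic point of comparison, monocular depth estimation is also available through the stock transformers pipeline. A sketch with a stand-in model (depth-anything, not Lotus):

```python
# Generic monocular depth estimation for comparison. Stand-in model, NOT
# Lotus: Lotus uses its own diffusion-based pipeline from the collection.
from transformers import pipeline

depth = pipeline(
    "depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf"
)
result = depth("room.jpg")              # local image path or URL
result["depth"].save("depth_map.png")   # PIL image of the predicted depth
```
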
reacted to thomwolf's post with 🚀❤️ 19 days ago
Parents in the 1990s: Teach the kids to code
Parents now: Teach the kids to fix the code when it starts walking around 🤖✨
reacted to fdaudens's post with 👀 20 days ago
Just watched @thomwolf tear down the over-hyped AGI narrative in 30 seconds - and it's refreshingly grounded.

No wild speculation about superintelligence timelines or consciousness. Just practical insights from someone who really understands the technology.

This is the kind of level-headed perspective that helps us focus on what AI can actually do today (which is already transformative) rather than getting lost in AGI fantasy. Worth your time if you want to understand AI progress without the hype.

Watch the full interview at CogX here: https://www.youtube.com/watch?v=IjL_6Th6Ea0