alkinun (AtAndDev)

AI & ML interests

LLMs, Alignment, Merging, Unsloth, DPO, SFT, ORPO, SPIN…

Organizations

ESPnet, CVPR Demo Track, BigScience Biomedical Datasets, ONNXConfig for all, Gradio-Themes-Party, video-p2p-library, Gradio-Blocks-Party, scikit-learn, lora concepts library, OpenBuddy Community, Open-Source AI Meetup, ECCV 2022, Kornia AI, Tune a video concepts library, SIGGRAPH 2022, Interspeech2022, Stable Diffusion concepts library, SIGGRAPH Asia 2022 Demos, Stable Diffusion Dreambooth Concepts Library, Musika, Blog-explorers, OpenSky, ICCV2023, ICML2023, huggingPartyParis, Multi🤖Transformers, Team Tonic, That Time I got Reincarnated as a Hugging Face Organization, ZeroGPU Explorers, Pirates Party for all software open source, MLX Community, recipe research, Narra, Social Post Explorers, Cognitive Computations, M4-ai, Spinner-GPT-4, Dev Mode Explorers, Stable Diffusion Community (Unofficial, Non-profit), Hugging Face Discord Community, Nerdy Face, OpenEndedLM, open/ acc, Data Is Better Together Contributor, None yet

AtAndDev's activity

replied to merve's post 27 minutes ago

More like sir cartier cash carti yung carti king vamp carti baby boi guapo

reacted to merve's post with 🔥 3 days ago
New GUI model by Salesforce AI & Uni HK: Jedi
tianbaoxiexxx/Jedi · xlangai/Jedi-7B-1080p 🤗
Based on Qwen2.5-VL with Apache 2.0 license

prompt with the attached screenshot → select "find more"
  • 3 replies
replied to merve's post 3 days ago
reacted to attackerElvies's post with 🤗 3 days ago
HALOOO MY COMMUNITY
posted an update 9 days ago
deepseek-ai/DeepSeek-R1-0528

This is the end
  • 1 reply
reacted to mlabonne's post with 🚀😎❤️🔥👍 9 days ago
reacted to codelion's post with 🔥 9 days ago
Introducing AutoThink: Adaptive reasoning for LLMs that improves performance by 43% on reasoning benchmarks!

Instead of using fixed thinking budgets, AutoThink:
- Classifies query complexity (HIGH/LOW) using adaptive classification
- Dynamically allocates thinking tokens based on complexity
- Uses steering vectors derived from Pivotal Token Search to guide reasoning patterns
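
As a rough sketch of that routing idea (the keyword heuristic, budget values, and the max_thinking_tokens parameter below are illustrative assumptions, not the optillm API):

```python
# Hypothetical sketch of adaptive thinking-budget allocation in the spirit of
# AutoThink. The classifier heuristic, budgets, and generation parameter are
# made-up placeholders, not the optillm implementation.

LOW_BUDGET = 256     # assumed cap on thinking tokens for simple queries
HIGH_BUDGET = 2048   # assumed cap for complex queries

def classify_complexity(query: str) -> str:
    """Stand-in for the adaptive classifier: route analytic queries to HIGH."""
    markers = ("prove", "derive", "step by step", "why")
    is_complex = len(query) > 200 or any(m in query.lower() for m in markers)
    return "HIGH" if is_complex else "LOW"

def generate_with_adaptive_budget(model, query: str) -> str:
    budget = HIGH_BUDGET if classify_complexity(query) == "HIGH" else LOW_BUDGET
    # `max_thinking_tokens` is a made-up parameter standing in for however the
    # backend caps the reasoning segment before the final answer.
    return model.generate(query, max_thinking_tokens=budget)
```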

Results on DeepSeek-R1-Distill-Qwen-1.5B:
- GPQA-Diamond: 31.06% vs 21.72% baseline (+9.34 points)
- MMLU-Pro: 26.38% vs 25.58% baseline (+0.8 points)
- Uses fewer tokens than baseline approaches

Works with any local reasoning model: DeepSeek, Qwen, Llama, custom models. The technique combines our Pivotal Token Search (PTS) implementation with our adaptive classification framework.

Paper: AutoThink: efficient inference for reasoning LLMs
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5253327

Code and examples:
https://github.com/codelion/optillm/tree/main/optillm/autothink

PTS implementation and technical details:
https://github.com/codelion/pts
https://huggingface.co/blog/codelion/pts

Adaptive classifier framework:
https://github.com/codelion/adaptive-classifier

Would love to hear your thoughts on adaptive resource allocation for LLM reasoning! Have you experimented with similar approaches?
  • 5 replies
reacted to merve's post with 🚀 10 days ago
emerging trend: models that can understand image + text and generate image + text

don't miss out ⬇️
> MMaDA: single 8B diffusion model aligned with CoT (reasoning!) + UniGRPO Gen-Verse/MMaDA
> BAGEL: 7B MoT model based on Qwen2.5, SigLIP-so-400M, Flux VAE ByteDance-Seed/BAGEL
both by ByteDance! 😱

I keep track of all any-input → any-output models here: https://huggingface.co/collections/merve/any-to-any-models-6822042ee8eb7fb5e38f9b62
  • 1 reply
reacted to ProCreations's post with 🚀 10 days ago
Eyyyy 50 followers 🤯
  • 1 reply
reacted to m-ric's post with 🔥 10 days ago
A new research paper from KAIST builds on smolagents to push the boundaries of distillation 🥳
➡️ "Distilling LLM Agent into Small Models with Retrieval and Code Tools" shows that, when distilling reasoning capability from a strong LLM (the "teacher") into a smaller one (the "student"), it is much better to use agent traces than CoT traces.

Advantages are:
1. Improved generalization
Intuitively, this is because the agent can encounter more "surprising" results by interacting with its environment: for example, a web search called by the LLM teacher in agent mode can return results that the teacher would not have generated in CoT.

2. Reduced hallucinations
The trace won't hallucinate tool call outputs!
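
To make the contrast concrete, here is a hedged sketch of what the two kinds of distillation targets might look like as training records (the field names and example content are illustrative, not the paper's actual format):

```python
# CoT trace: the teacher's free-form reasoning, produced without tools.
# Any "fact" in it (like the birth year) comes from the teacher's weights.
cot_record = {
    "prompt": "What year was the lead actor of Movie X born?",
    "target": "Movie X stars Actor Y. Actor Y was born in 1970. Answer: 1970",
}

# Agent trace: interleaved thoughts, tool calls, and real tool outputs.
# The observations come from the environment, so the student learns to rely
# on retrieval instead of memorized (possibly hallucinated) facts.
agent_record = {
    "prompt": "What year was the lead actor of Movie X born?",
    "target": [
        {"thought": "I should look up the cast of Movie X."},
        {"tool_call": {"name": "web_search", "args": {"query": "Movie X cast"}}},
        {"observation": "Movie X (1995) stars Actor Y ..."},
        {"tool_call": {"name": "web_search", "args": {"query": "Actor Y birth year"}}},
        {"observation": "Actor Y (born 1970) is an actor known for ..."},
        {"answer": "1970"},
    ],
}
```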

Thank you @akseljoonas for mentioning this paper!
reacted to AdinaY's post with 👍❤️🚀 11 days ago
Orsta 🔥: vision language models trained with V-Triune, a unified reinforcement learning system by MiniMax AI

One-RL-to-See-Them-All/one-rl-to-see-them-all-6833d27abce23898b2f9815a

✨ 7B & 32B with MIT license
✨ Masters 8 visual tasks: math, science QA, charts, puzzles, object detection, grounding, OCR, and counting
✨ Uses Dynamic IoU rewards for better visual understanding
✨ Strong performance in visual reasoning and perception
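
The post doesn't detail how the dynamic IoU reward works, but it builds on the standard intersection-over-union metric; here is a minimal sketch (the threshold schedule below is purely an assumption for illustration):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def iou_reward(pred_box, gt_box, step, warmup_steps=1000):
    """Hypothetical 'dynamic' reward: loose threshold early, tighter later."""
    threshold = 0.5 if step < warmup_steps else 0.75  # assumed schedule
    return 1.0 if iou(pred_box, gt_box) >= threshold else 0.0
```
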
reacted to clem's post with 🤗 11 days ago
It's just become easier to share your apps on the biggest AI app store (aka HF Spaces), with unlimited storage, more visibility, and community interactions.

Just pick a React, Svelte, or Vue template when you create your Space, or add app_build_command: npm run build and app_file: build/index.html to your README's YAML block.
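
Concretely, the README front matter would look something like this (only app_build_command and app_file are named in the post; the title value is a placeholder):

```yaml
---
title: my-app                      # placeholder
sdk: static                        # static Space, per the link below
app_build_command: npm run build   # build step from the post
app_file: build/index.html         # built entry point from the post
---
```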

Or follow this link: https://huggingface.co/new-space?sdk=static

Let's build!
  • 1 reply
reacted to Kseniase's post with 😎🚀 11 days ago
12 Types of JEPA

JEPA, or Joint Embedding Predictive Architecture, is an approach to building AI models introduced by Yann LeCun. Unlike generative models, it predicts the representation of a missing or future part of the input, rather than the next token or pixel. This encourages conceptual understanding, not just low-level pattern matching, so JEPA allows teaching AI to reason abstractly.
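
A minimal toy sketch of that idea in PyTorch (illustrative only, not any of the papers' implementations): a context encoder sees the visible part, a target encoder embeds the masked part, and a predictor is trained to match the target embedding rather than reconstruct raw pixels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim_in, dim_emb = 256, 128

def make_encoder():
    return nn.Sequential(nn.Linear(dim_in, dim_emb), nn.ReLU(),
                         nn.Linear(dim_emb, dim_emb))

context_encoder = make_encoder()
target_encoder = make_encoder()      # in practice an EMA copy, never backpropped
predictor = nn.Linear(dim_emb, dim_emb)

x_visible = torch.randn(32, dim_in)  # stand-in for visible patches
x_masked = torch.randn(32, dim_in)   # stand-in for the masked region

# Predict the *representation* of the masked part, not its pixels
pred = predictor(context_encoder(x_visible))
with torch.no_grad():
    target = target_encoder(x_masked)

loss = F.mse_loss(pred, target)      # distance in embedding space
loss.backward()
```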

Here are 12 types of JEPA you should know about:

1. I-JEPA -> Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture (2301.08243)
A non-generative, self-supervised learning framework for images. It works by masking parts of an image and predicting the representations of the masked parts from the visible context

2. MC-JEPA -> MC-JEPA: A Joint-Embedding Predictive Architecture for Self-Supervised Learning of Motion and Content Features (2307.12698)
Simultaneously interprets video data, both dynamic elements (motion) and static details (content), using a shared encoder

3. V-JEPA -> Revisiting Feature Prediction for Learning Visual Representations from Video (2404.08471)
Presents vision models trained by predicting future video features, without pretrained image encoders, text, negative sampling, or reconstruction

4. UI-JEPA -> UI-JEPA: Towards Active Perception of User Intent through Onscreen User Activity (2409.04081)
Masks unlabeled UI sequences to learn abstract embeddings, then adds a fine-tuned LLM decoder for intent prediction.

5. Audio-based JEPA (A-JEPA) -> A-JEPA: Joint-Embedding Predictive Architecture Can Listen (2311.15830)
Masks spectrogram patches with a curriculum, encodes them, and predicts hidden representations.

6. S-JEPA -> S-JEPA: towards seamless cross-dataset transfer through dynamic spatial attention (2403.11772)
Signal-JEPA is used in EEG analysis. It adds a spatial block-masking scheme and three lightweight downstream classifiers

7. TI-JEPA -> TI-JEPA: An Innovative Energy-based Joint Embedding Strategy for Text-Image Multimodal Systems (2503.06380)
Text-Image JEPA uses self-supervised, energy-based pre-training to map text and images into a shared embedding space, improving cross-modal transfer to downstream tasks

Find more types below 👇

Also, explore the basics of JEPA in our article: https://www.turingpost.com/p/jepa

If you liked it, subscribe to the Turing Post: https://www.turingpost.com/subscribe
  • 1 reply