More like sir cartier cash carti yung carti king vamp carti baby boi guapo
alkinun
AtAndDev
AI & ML interests
LLMs, Alignment, Merging, Unsloth, DPO, SFT, ORPO, SPIN..
Recent Activity
liked a model about 2 hours ago: NousResearch/Genstruct-7B
Organizations
AtAndDev's activity

upvoted an article 3 days ago:
System Prompt Learning: Teaching LLMs to Learn Problem-Solving Strategies from Experience
reacted to merve's post with 🔥 3 days ago:
New GUI model by Salesforce AI & Uni HK: Jedi
tianbaoxiexxx/Jedi xlangai/Jedi-7B-1080p 🤗
Based on Qwen2.5-VL with Apache 2.0 license
prompt with below screenshot → select "find more"
L playlist sorry

reacted to attackerElvies's post with 🤗 3 days ago:
HALOOO MY COMMUNITY

reacted to mlabonne's post with ❤️, 🔥, and other reactions 9 days ago:
✂️ AutoAbliteration
I made a Colab notebook to automatically abliterate models.
It's quite general, so you can do interesting stuff like blocking a given language in the model outputs.
💻 Colab: https://colab.research.google.com/drive/1RmLv-pCMBBsQGXQIM8yF-OdCNyoylUR1?usp=sharing
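The core idea behind abliteration, at a high level, is to estimate a "refusal" direction in activation space (e.g. as the difference of mean activations on refused vs. answered prompts) and project it out of the model's weights. A minimal toy sketch of that projection step, using NumPy and random data in place of real model activations (all function names here are illustrative, not the notebook's actual API):

```python
import numpy as np

def refusal_direction(refused_acts, answered_acts):
    """Estimate the 'refusal' direction as the normalized difference of
    mean hidden activations on refused vs. answered prompts."""
    direction = refused_acts.mean(axis=0) - answered_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate_direction(weight, direction):
    """Project the direction out of a weight matrix's output space:
    W' = W - d d^T W, so outputs can no longer point along d."""
    return weight - np.outer(direction, direction) @ weight

rng = np.random.default_rng(0)
hidden = 64
# Synthetic stand-ins: 'refused' activations are shifted along dimension 0
refused = rng.normal(size=(32, hidden)) + 2.0 * np.eye(hidden)[0]
answered = rng.normal(size=(32, hidden))

d = refusal_direction(refused, answered)
W = rng.normal(size=(hidden, hidden))
W_abl = ablate_direction(W, d)

# The ablated matrix has (numerically) zero component along d
print(np.abs(d @ W_abl).max())
```

Since d is unit-norm, d @ W_abl = d @ W - (d·d)(d @ W) = 0 up to floating-point error; the same rank-one projection applied across a model's weight matrices is what removes the behavior globally.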

reacted to codelion's post with 🔥 9 days ago:
Introducing AutoThink: Adaptive reasoning for LLMs that improves performance by 43% on reasoning benchmarks!
Instead of using fixed thinking budgets, AutoThink:
- Classifies query complexity (HIGH/LOW) using adaptive classification
- Dynamically allocates thinking tokens based on complexity
- Uses steering vectors derived from Pivotal Token Search to guide reasoning patterns
Results on DeepSeek-R1-Distill-Qwen-1.5B:
- GPQA-Diamond: 31.06% vs 21.72% baseline (+9.34 points)
- MMLU-Pro: 26.38% vs 25.58% baseline (+0.8 points)
- Uses fewer tokens than baseline approaches
Works with any local reasoning model - DeepSeek, Qwen, Llama, custom models. The technique combines our research on Pivotal Token Search (PTS) implementation and adaptive classification frameworks.
Paper: AutoThink: efficient inference for reasoning LLMs
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5253327
Code and examples:
https://github.com/codelion/optillm/tree/main/optillm/autothink
PTS implementation and technical details:
https://github.com/codelion/pts
https://huggingface.co/blog/codelion/pts
Adaptive classifier framework:
https://github.com/codelion/adaptive-classifier
Would love to hear your thoughts on adaptive resource allocation for LLM reasoning! Have you experimented with similar approaches?
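The classify-then-allocate loop described above can be sketched in a few lines. This is a toy stand-in, not codelion's actual adaptive classifier: the keyword heuristic, function names, and budget values are all illustrative assumptions.

```python
def classify_complexity(query: str) -> str:
    """Toy proxy for an adaptive complexity classifier: flag queries with
    math/multi-step markers as HIGH complexity, otherwise LOW."""
    high_markers = ("prove", "derive", "step by step", "integral", "optimize")
    return "HIGH" if any(m in query.lower() for m in high_markers) else "LOW"

def thinking_budget(query: str, low: int = 256, high: int = 2048) -> int:
    """Allocate a thinking-token budget based on predicted complexity,
    instead of spending a fixed budget on every query."""
    return high if classify_complexity(query) == "HIGH" else low

print(thinking_budget("What is the capital of France?"))  # 256
print(thinking_budget("Prove the series converges."))     # 2048
```

In a real setup the budget would cap the reasoning model's generation (e.g. via a max-new-tokens limit on the thinking segment), which is how the approach saves tokens on LOW-complexity queries while spending more on HIGH ones.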

upvoted a paper 9 days ago