P.M.SALMAN KHAN (salmankhanpm)
4 followers · 99 following
https://salmankhanpm.co
salmankhanpm154
SALMANKHANPM
salmankhanpm786
AI & ML interests
NLP - LLM - AI SAFETY
Recent Activity
- liked a dataset about 17 hours ago: google/IndicGenBench_flores_in
- liked a dataset about 17 hours ago: PrimeIntellect/SYNTHETIC-1
- reacted to Kseniase's post with 👍 about 18 hours ago:
10 awesome advanced LoRA approaches

Low-Rank Adaptation (LoRA) is the go-to method for efficient model fine-tuning: it adds small low-rank matrices instead of retraining full models. The field isn’t standing still – new LoRA variants push the limits of efficiency, generalization, and personalization. So we’re sharing 10 of the latest LoRA approaches you should know about:

1. Mixture-of-LoRA-Experts → https://huggingface.co/papers/2509.13878
Adds multiple low-rank adapters (LoRAs) into a model’s layers, with a routing mechanism that activates the most suitable ones for each input. This lets the model adapt better to new, unseen conditions.

2. Amortized Bayesian Meta-Learning for LoRA (ABMLL) → https://huggingface.co/papers/2508.14285
Balances global and task-specific parameters within a Bayesian framework to improve uncertainty calibration and generalization to new tasks without high memory or compute costs.

3. AutoLoRA → https://huggingface.co/papers/2508.02107
Automatically retrieves and dynamically aggregates public LoRAs for stronger text-to-image (T2I) generation.

4. aLoRA (Activated LoRA) → https://huggingface.co/papers/2504.12397
Applies LoRA only after invocation, letting the model reuse the base model’s KV cache instead of recomputing the full turn’s KV cache. Efficient in multi-turn conversations.

5. LiLoRA (LoRA in LoRA) → https://huggingface.co/papers/2508.06202
Shares the LoRA matrix A across tasks and additionally low-rank-decomposes matrix B to cut parameters in continual vision-text MLLMs.

6. Sensitivity-LoRA → https://huggingface.co/papers/2509.09119
Dynamically assigns ranks to weight matrices based on their sensitivity, measured using second-order derivatives.

Read further below ↓
Also, subscribe to the Turing Post: https://www.turingpost.com/subscribe
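To ground the variants listed above, here is a minimal sketch of the basic LoRA update they all build on. The shapes, names, and scaling below are illustrative assumptions, not the API of any of the linked papers: a frozen weight W gets a trainable low-rank correction B @ A, scaled by alpha / r.

```python
import numpy as np

# Minimal LoRA sketch (illustrative, not a real library API):
# instead of updating the frozen weight W, train a low-rank update
# B @ A so the effective weight becomes W + (alpha / r) * (B @ A).

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 2, 4   # rank r << min(d_out, d_in)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = 0.01 * rng.standard_normal((r, d_in))   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Base path plus the scaled low-rank path; only A and B are trained.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((3, d_in))

# Zero-initializing B makes the adapter a no-op at the start of training:
assert np.allclose(lora_forward(x), x @ W.T)

# After training, the adapter can be merged into W so inference
# pays no extra cost (B here stands in for trained values):
B = rng.standard_normal((d_out, r))
W_merged = W + (alpha / r) * (B @ A)
assert np.allclose(lora_forward(x), x @ W_merged.T)
```

The parameter saving is the whole point: training touches only r * (d_in + d_out) values instead of d_in * d_out, which is why the variants above can afford many adapters (mixtures, per-task copies) at once.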
spaces (2)
- Telugu Vocab Inspector & Evaluation ✐ • Tokenize text with various models and compare results • Sleeping
- AutoTrain Advanced 🚀 • Runtime error
models (45)
- salmankhanpm/whisper-te-2e-u • Automatic Speech Recognition • Updated Jul 19 • 10
- salmankhanpm/whisper-te-lora-2e • Updated Jul 19
- salmankhanpm/whisper-te-16-2e • Automatic Speech Recognition • 0.0B • Updated Jul 19 • 8
- salmankhanpm/whisper-te-2e • Updated Jul 19
- salmankhanpm/w-lora_model-de-l • Updated Jul 18
- salmankhanpm/w-lora_model-de-16 • Automatic Speech Recognition • 0.0B • Updated Jul 18 • 5
- salmankhanpm/w-lora_model-de • Updated Jul 18
- salmankhanpm/gemma-3-4b-it-Q4_K_M-GGUF • Image-Text-to-Text • 4B • Updated Jul 16 • 1
- salmankhanpm/gemma-3n-E4B-it-Q4_K_M-GGUF • Image-Text-to-Text • 7B • Updated Jul 7 • 23
- salmankhanpm/sarvam-1-2b-Instruct • Text Generation • 3B • Updated Jun 26 • 4
datasets (3)
- salmankhanpm/beavertails-en-te-30k-cleaned • Viewer • Updated Apr 25 • 30.2k • 35
- salmankhanpm/beavertails-en-te • Updated Apr 25 • 7
- salmankhanpm/beavertails-en-te-30k • Viewer • Updated Apr 25 • 30.2k • 17