Rexschwert

AI & ML interests: AI, Big Data, Data Science, Machine Learning, Computer Vision, Natural Language Processing

Recent Activity
Reacted to Kseniase's post with 🔥, ❤️, and 🚀 (8 days ago):
13 New types of LoRA
LoRA (Low-Rank Adaptation) is a popular lightweight method for fine-tuning AI models. Instead of updating the full model, it adds small trainable components (low-rank matrices) while keeping the original weights frozen. Only these adapters are trained.
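To make the mechanism concrete, here is a minimal PyTorch sketch of a LoRA-style linear layer; the class name, rank, and scaling values are illustrative assumptions, not taken from any of the papers below. The base weight stays frozen and only the two small matrices A and B receive gradients.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (B @ A)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():           # original weights stay frozen
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, zero-init
        self.scale = alpha / rank

    def forward(self, x):
        # y = W x + (alpha / r) * B A x  -- only A and B are trained
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), rank=8)
print(layer(torch.randn(4, 768)).shape)  # torch.Size([4, 768])
```

Because B is zero-initialized, the low-rank term starts at zero and training begins from the unmodified base model; wrapping an existing nn.Linear like this is the usual pattern.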
Recently, many interesting new LoRA variations have come out, so it’s a great time to take a look at these 13 clever approaches:
1. T-LoRA → https://huggingface.co/papers/2507.05964
A timestep-dependent LoRA method for adapting diffusion models with a single image. It dynamically adjusts updates across diffusion timesteps and uses orthogonal initialization to reduce overlap between adapter components, achieving a better fidelity–alignment balance than standard LoRA.
2. SingLoRA → https://huggingface.co/papers/2507.05566
Simplifies LoRA by using only one small matrix instead of the usual two, and multiplying it by its own transpose (like A × Aᵀ). It uses half the parameters of standard LoRA and avoids the scale mismatch between two different matrices (a minimal sketch of this construction follows the list).
3. LiON-LoRA → https://huggingface.co/papers/2507.05678
Improves control and precision in video diffusion models when training data is limited. It builds on LoRA, adding 3 key principles: linear scalability, orthogonality, and norm consistency. A controllable token and a modified self-attention mechanism enable smooth adjustment of motion.
4. LoRA-Mixer → https://huggingface.co/papers/2507.00029
Combines LoRA and mixture-of-experts (MoE) to adapt LLMs for multiple tasks. It dynamically routes task-specific LoRA experts into the linear projections of attention modules, supporting both joint training and frozen expert reuse.
5. QR-LoRA → https://huggingface.co/papers/2507.04599
Separates content and style when combining multiple LoRA adapters. It applies QR decomposition to structure parameter updates: the orthogonal Q matrix reduces interference between features, while the R matrix captures the specific transformations (see the rough sketch after this list).
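As promised in item 2, here is a small sketch of the SingLoRA-style construction: a single matrix A whose update is A × Aᵀ. The class name, square-layer assumption, and scaling are my own simplifications; the paper also ramps the update in gradually during training, which is omitted here.

```python
import torch
import torch.nn as nn

class SingLoRALinear(nn.Module):
    """Single-matrix LoRA: the low-rank update is A @ A.T (square layer assumed)."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        assert base.in_features == base.out_features, "sketch assumes a square weight"
        self.base = base
        for p in self.base.parameters():            # base weight stays frozen
            p.requires_grad = False
        # the only adapter matrix -- roughly half the parameters of a B @ A pair
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.scale = alpha / rank

    def forward(self, x):
        # delta_W = A @ A.T: a symmetric low-rank update built from one matrix
        delta = self.A @ self.A.T
        return self.base(x) + self.scale * (x @ delta.T)

layer = SingLoRALinear(nn.Linear(512, 512), rank=4)
print(layer(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```

Since the update is A × Aᵀ, there is no separate down- and up-projection whose scales could drift apart, which is the mismatch the paper aims to avoid.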
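And a rough, heavily simplified sketch of the QR-LoRA idea from item 5: factor the frozen weight as W = QR, keep the orthogonal Q fixed as a shared feature basis, and train only a small update to R. This is my reading of the abstract (the class name, low-rank ΔR, and other details are assumptions), not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class QRLoRALinear(nn.Module):
    """Sketch: W = Q R with Q frozen as a shared orthogonal basis; only a
    low-rank update dR to the R factor is trained (illustrative, not the paper's method)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # base weight stays frozen
            p.requires_grad = False
        Q, _R = torch.linalg.qr(base.weight.data)   # W = Q R (square weight assumed)
        self.register_buffer("Q", Q)                # orthogonal basis, not trained
        self.dR_up = nn.Parameter(torch.zeros(base.out_features, rank))   # zero-init
        self.dR_down = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)

    def forward(self, x):
        # effective weight: Q (R + dR) = W + Q dR, with dR = dR_up @ dR_down
        delta_w = self.Q @ (self.dR_up @ self.dR_down)
        return self.base(x) + x @ delta_w.T

layer = QRLoRALinear(nn.Linear(256, 256), rank=8)
print(layer(torch.randn(3, 256)).shape)  # torch.Size([3, 256])
```

Keeping Q fixed across adapters is what (in this reading) lets multiple LoRAs share a feature basis and interfere less when combined.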
Read further in the comments 👇
If you like it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe