RichardForests's Collections

Lora variations

updated May 21, 2024

  • GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

    Paper • 2403.03507 • Published Mar 6, 2024 • 189

  • Flora: Low-Rank Adapters Are Secretly Gradient Compressors

    Paper • 2402.03293 • Published Feb 5, 2024 • 6

  • PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation

    Paper • 2401.11316 • Published Jan 20, 2024 • 1

  • MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning

    Paper • 2405.12130 • Published May 20, 2024 • 51

  • LoRA Learns Less and Forgets Less

    Paper • 2405.09673 • Published May 15, 2024 • 89

  • Chameleon: Mixed-Modal Early-Fusion Foundation Models

    Paper • 2405.09818 • Published May 16, 2024 • 131
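The papers above all revolve around low-rank adapter updates in the style of LoRA. As a point of reference for the shared idea (not the method of any specific paper listed), here is a minimal sketch of a LoRA-style layer in NumPy, where a frozen weight `W` is augmented by trainable low-rank factors `B @ A` scaled by `alpha / r`; all dimensions and names here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a LoRA-style low-rank update: the frozen weight W is
# augmented by a trainable low-rank product B @ A, scaled by alpha / r.
# Shapes and init follow the common LoRA convention (A small random, B zero),
# not any specific variant from this collection.

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4   # toy dimensions; r << d_in, d_out

W = rng.normal(size=(d_out, d_in))   # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))             # trainable up-projection, zero-initialized

def forward(x):
    # Base path plus low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B initialized to zero, the adapter path contributes nothing,
# so the adapted layer starts out identical to the frozen base layer.
assert np.allclose(forward(x), W @ x)
```

The zero initialization of `B` is what lets fine-tuning start from the pretrained model exactly; the variants in this collection differ mainly in how the low-rank factors are constructed, pruned, or grown during training.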