KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)

  • More information is available at https://github.com/Beomi/KoAlpaca
  • This repository contains fine-tuned (LoRA) KoAlpaca model weights based on the LLAMA 65B model.
    • Note: this repo contains only the LoRA adapter weights; the base model must be obtained separately (see the usage sketch below).
  • Both a Korean dataset and an English dataset were used to train the model.
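
Because only the LoRA adapter is provided here, it has to be applied on top of the original LLAMA 65B weights at load time. Below is a minimal sketch using 🤗 Transformers and PEFT; the base-model path, the adapter repo id (`beomi/KoAlpaca-65B-LoRA`), and the prompt format are assumptions and should be adjusted to match your setup.

```python
# Minimal sketch: load the base LLAMA 65B model and attach the KoAlpaca LoRA adapter.
# The base-model path and adapter repo id below are assumptions, not confirmed values.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-65b-hf"        # hypothetical path to converted LLAMA 65B weights
LORA_WEIGHTS = "beomi/KoAlpaca-65B-LoRA"   # assumed repo id for this adapter

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",  # shard across available GPUs; a 65B model needs substantial memory
)
model = PeftModel.from_pretrained(model, LORA_WEIGHTS)  # apply the LoRA adapter
model.eval()

# Assumed Alpaca-style Korean prompt format ("### 질문:" / "### 답변:")
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```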