KoAlpaca: Korean Alpaca Model based on Stanford Alpaca (feat. LLAMA and Polyglot-ko)
- More information at https://github.com/Beomi/KoAlpaca
- This repository contains fine-tuned (LoRA) KoAlpaca model weights based on the LLAMA 65B model.
- Note: This repo contains only the LoRA adapter weights; the base LLAMA 65B weights must be obtained separately (see the loading sketch below).
- The model was trained on both Korean and English datasets.
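Below is a minimal loading sketch showing how LoRA adapter weights like these are typically attached to a base model with `peft` and `transformers`. The base model path, the adapter repo ID (`beomi/KoAlpaca-65B-LoRA`), and the prompt template are assumptions for illustration; substitute the actual IDs from this model page and the KoAlpaca repo.

```python
# Minimal sketch: load base LLAMA 65B weights, then attach the KoAlpaca LoRA adapter.
# BASE_MODEL and LORA_WEIGHTS below are placeholders, not confirmed repo IDs.
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

BASE_MODEL = "path/to/llama-65b"          # base weights, obtained separately
LORA_WEIGHTS = "beomi/KoAlpaca-65B-LoRA"  # this repo (placeholder ID)

tokenizer = LlamaTokenizer.from_pretrained(BASE_MODEL)
model = LlamaForCausalLM.from_pretrained(
    BASE_MODEL,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the LoRA adapter on top of the frozen base model.
model = PeftModel.from_pretrained(model, LORA_WEIGHTS)
model.eval()

# Assumed Alpaca-style Korean prompt format; check the KoAlpaca repo for the exact template.
prompt = "### 질문: 한국의 수도는 어디인가요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```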