LLaMA 2, instruction-tuned on the cleaned Alpaca dataset using QLoRA. For languages other than English, the Alpaca data was first translated using NLLB-1.3B.
Anjishnu Mukherjee (iamshnoo)
AI & ML interests
NLP, bias, interpretability, fairness, low-resource languages, multilingual NLP
Recent Activity
- New activity, 22 days ago: thuml/sundial-base-128m ("Exogenous variables and finetuning")
- Liked a model, 22 days ago: thuml/sundial-base-128m
- Upvoted a paper, 4 months ago: "Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula"
Organizations
None yet