base model:
- google/gemma-2-9b
dataset:
- ayoubkirouane/Small-Instruct-Alpaca_Format
Get Started:
- Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
```
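Once loaded, the model can be prompted with `generate`. A minimal sketch follows; the Alpaca-style instruction template is an assumption based on the fine-tuning dataset named above, not a documented prompt format, and the example instruction is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def build_prompt(instruction: str) -> str:
    # Alpaca-style template (an assumption based on the fine-tuning dataset)
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

model_id = "ayoubkirouane/gemma-2-9b-alpaca-small-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize the prompt and generate a bounded continuation
inputs = tokenizer(build_prompt("Explain what a tokenizer does."), return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`max_new_tokens` caps the response length; for a 9B model, loading on GPU (e.g. with `device_map="auto"` and `accelerate` installed) is advisable.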
- Use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")
```
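Calling the pipeline returns a list of dicts with a `generated_text` key. A short usage sketch, again assuming an Alpaca-style prompt (the template and instruction are illustrative, not confirmed by the model card):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="ayoubkirouane/gemma-2-9b-alpaca-small-Instruct")

# Alpaca-style prompt (assumed format based on the fine-tuning dataset)
prompt = "### Instruction:\nWrite a haiku about the sea.\n\n### Response:\n"
result = pipe(prompt, max_new_tokens=64, do_sample=False)
print(result[0]["generated_text"])
```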