# Jacob's Personal Book Advisor (Merged Model)
This is a merged model combining Llama-3.1-8B-Instruct with a LoRA adapter trained on Jacob's personal book library.
**Ready for the Inference API** – this merged model works directly with the Hugging Face Inference API.
## Features
- Personalized book recommendations from Jacob's library
- Content questions and summaries
- Reading advice based on actual book collection
## Usage

### With Inference API
```python
from huggingface_hub import InferenceClient

client = InferenceClient(model="jacobpmeyer/book-advisor-merged")

response = client.text_generation(
    "### Instruction:\nRecommend a science fiction book\n\n### Response:\n",
    max_new_tokens=300,
    temperature=0.7,
)
print(response)
```
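The prompt string above follows the Alpaca instruction template described under Training Details. A small helper (hypothetical, not shipped with this repo) keeps prompts consistent across calls:

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format a request in the Alpaca instruction style the model was trained on."""
    if input_text:
        return (
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return f"### Instruction:\n{instruction}\n\n### Response:\n"
```

The returned string can be passed directly as the first argument to `client.text_generation(...)`.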
### Direct Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jacobpmeyer/book-advisor-merged")
model = AutoModelForCausalLM.from_pretrained("jacobpmeyer/book-advisor-merged")

prompt = "### Instruction:\nWhat's a good book for vacation reading?\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
# do_sample=True is required for temperature to take effect during generation
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
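Note that decoding `outputs[0]` returns the prompt followed by the completion, since `generate` echoes the input tokens. A minimal sketch for isolating just the model's answer (assuming the Alpaca-style prompt shown above):

```python
def extract_response(decoded: str, prompt: str) -> str:
    """Return only the generated answer, stripping the echoed prompt."""
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    # Fall back to splitting on the response marker from the Alpaca template.
    return decoded.split("### Response:")[-1].strip()
```

This is plain string handling, so it works the same whether the text came from `tokenizer.decode` or the Inference API.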
## Training Details
- Base Model: meta-llama/Llama-3.1-8B-Instruct
- Method: LoRA fine-tuning + merging
- Training Data: Personal epub book collection
- Format: Instruction-following (Alpaca style)
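An Alpaca-style record has `instruction`, optional `input`, and `output` fields. The example below is purely illustrative (the actual training data is Jacob's private epub collection and is not published); it shows how such a record is typically flattened into a single training string:

```python
# Illustrative record only -- not taken from the real training set.
example = {
    "instruction": "Summarize this book from the library.",
    "input": "Title: Dune by Frank Herbert",
    "output": "Dune follows Paul Atreides on the desert planet Arrakis...",
}

# Records are commonly flattened into one prompt/response string for training:
text = (
    f"### Instruction:\n{example['instruction']}\n\n"
    f"### Input:\n{example['input']}\n\n"
    f"### Response:\n{example['output']}"
)
```

The `### Instruction:` / `### Response:` markers in this template are the same ones used in the inference prompts above.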
## Model Performance
This merged model combines the instruction-following capabilities of Llama-3.1-8B-Instruct with knowledge of Jacob's book library, so its recommendations and summaries are grounded in the actual collection rather than generic suggestions.
## Model tree for jacobpmeyer/book-advisor-merged

- Base model: meta-llama/Llama-3.1-8B
- Finetuned from: meta-llama/Llama-3.1-8B-Instruct