How to use the output model as an LLM

#3
by narsisfa

Hi. I have fine-tuned a model and need to use it in the code below, replacing the gpt-3.5-turbo LLM. Thanks for the help.

from transformers import AutoTokenizer, AutoModelForCausalLM  # for loading my fine-tuned model
import torch
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

model_name = "gpt-3.5-turbo"
llm = ChatOpenAI(model_name=model_name)  # <- this is what I want to replace

# `db` is an existing vector store and `query` is the user question
retrieval_chain = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=db.as_retriever())
retrieval_chain.run(query)

I tried this and got the following error; thanks for helping:
--code--------------
llm = "lora_model"  # NOTE: this is a plain Python string, not a model object
from langchain.chains import RetrievalQA
retrieval_chain = RetrievalQA.from_chain_type(llm, chain_type="stuff", retriever=db1.as_retriever())
retrieval_chain.run("hi")
--error------------------
ValidationError: 2 validation errors for LLMChain
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)

That error occurs because RetrievalQA expects an actual LLM object (a Runnable), not the string "lora_model". For fine-tuning, you can use the sample Colab notebooks here:
https://huggingface.co/datasets/unsloth/notebooks/tree/main
