---
base_model: Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- llama
- trl
- sft
---

# Uploaded model

- **Developed by:** huzaifa525
- **License:** apache-2.0
- **Finetuned from model:** Meta-Llama-3.1-8B-bnb-4bit

# **Llama 3.1 8B Fine-tuned on a Medical Dataset**

### **Model Overview**
This is a fine-tuned version of the **Meta-Llama-3.1-8B-bnb-4bit** model adapted to the medical domain. It was trained on a dataset covering **diseases, symptoms, and treatments**, making it suitable for AI-powered healthcare tools such as medical chatbots, virtual assistants, and diagnostic support systems.

### **Key Features**
- **Disease Diagnosis**: Suggests likely diseases based on the symptoms a user describes.
- **Symptom Analysis**: Breaks down and interprets symptoms to provide a structured medical overview.
- **Treatment Recommendations**: Suggests treatments and remedies for the identified conditions.

### **Dataset**
The model is fine-tuned on 2,000 rows sampled from a dataset of **272k rows** that pairs diseases and symptoms with their corresponding treatments. Future releases will continue training on the remaining data to improve accuracy and coverage.

### **Model Applications**
- **Medical Chatbots**: Real-time interaction between patients and virtual medical agents.
- **Healthcare Virtual Assistants**: Symptom checking, health guidance, and first-level triage.
- **Diagnostic Tools**: Assisting healthcare professionals in diagnosing conditions based on symptoms.
- **Patient Self-Assessment**: Symptom checkers that help patients understand their health.

### **How to Use**

#### Use a pipeline as a high-level helper
```python
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="huzaifa525/Doctoraifinetune-3.1-8B")
print(pipe(messages))
```

#### Load model directly
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huzaifa525/Doctoraifinetune-3.1-8B")
model = AutoModelForCausalLM.from_pretrained("huzaifa525/Doctoraifinetune-3.1-8B")
```

A fuller end-to-end sketch with 4-bit loading and generation is included at the bottom of this card.

## **Planned Updates**
- **Full Dataset Training**: The model will be updated with the full **272k rows** of data to improve disease identification, symptom analysis, and treatment recommendations.
- **Enhanced Accuracy**: Ongoing training and feedback-driven improvements will continue to refine the model's performance.
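
### **Example: 4-bit Inference Sketch**
The snippet below is a minimal sketch of loading the checkpoint with bitsandbytes 4-bit quantization and generating a reply through the tokenizer's chat template. The quantization settings, the generation parameters, the example prompt, and the assumption that the uploaded tokenizer ships a Llama 3.1 chat template are all illustrative choices, not values published with this model.

```python
# Sketch only: assumed 4-bit settings and generation parameters, not official defaults.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "huzaifa525/Doctoraifinetune-3.1-8B"

# NF4 quantization config in the style of bnb-4bit checkpoints (assumed settings).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Hypothetical user query for illustration.
messages = [
    {"role": "user", "content": "I have a fever, headache, and joint pain. What could this be?"},
]

# Assumes the uploaded tokenizer provides a chat template.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Loading in 4-bit keeps memory usage close to the quantized base model this checkpoint was fine-tuned from; loading in full precision with the plain `from_pretrained` call shown above also works if enough GPU memory is available.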