Manoj21k committed
Commit 31e6358 · 1 Parent(s): 812a6f6

Delete README.md

Files changed (1): README.md +0 -42
README.md DELETED
@@ -1,42 +0,0 @@
---
license: mit
---

# Fine-Tuned Model: Manoj21k/microsoft-phi-2-finetuned

## Alpaca Dataset Instruction Fine-Tuning

We are pleased to introduce Manoj21k/microsoft-phi-2-finetuned, a version of Microsoft's Phi-2 fine-tuned on Alpaca datasets with instructional objectives. The fine-tuning aims to improve the model's ability to understand instructions and generate responses that follow them. Key details about the model are below.

## Fine-Tuning Details

### Datasets Used

The model was fine-tuned on Alpaca datasets, which are curated for instruction following. These datasets provide diverse examples and scenarios that improve the model's ability to follow instructions accurately; an illustrative record is sketched below.
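
For reference, Alpaca-style records follow a simple instruction/input/output schema. The record below is illustrative only; the actual training data used for this fine-tune is not reproduced in this README.

```python
# An illustrative Alpaca-style record (the standard public schema). The
# concrete records used for this fine-tune are not shown here.
record = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Phi-2 is a small transformer language model released by Microsoft Research.",
    "output": "Phi-2 is a compact language model from Microsoft Research.",
}
```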

### Instructional Objectives

The fine-tuning process emphasizes the model's proficiency in understanding and responding to prompts written in an instructional format. This includes scenarios where explicit instructions are given, allowing the model to generate more contextually relevant and task-specific outputs.
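
At training time, each record is typically flattened into a single instruction-style prompt. The template below is an assumption based on the "Instruct:" convention used in the integration example later in this README; the exact format used during fine-tuning is not documented here.

```python
# Render an Alpaca-style record into one training prompt. The template is
# illustrative; this README does not document the exact fine-tuning format.
def render_prompt(rec: dict) -> str:
    if rec.get("input"):
        return (
            f"Instruct: {rec['instruction']}\n"
            f"Input: {rec['input']}\n"
            f"Output: {rec['output']}"
        )
    return f"Instruct: {rec['instruction']}\nOutput: {rec['output']}"

print(render_prompt({
    "instruction": "Explain instruction fine-tuning in one sentence.",
    "input": "",
    "output": "Instruction fine-tuning trains a model on instruction/response pairs.",
}))
```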

## Intended Use Cases

### Instruction-Based Tasks

The fine-tuned model is particularly well-suited to tasks where the prompt itself carries instructions, such as generating detailed responses, following specific guidelines, or addressing instructional queries.

### Enhanced Controllability

Users can expect improved controllability, making the model a valuable asset for applications where precise adherence to instructions is crucial.
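
One way to exercise this controllability is to give the model a prompt with explicit, checkable constraints and decode greedily so runs are comparable. A minimal sketch; the prompt wording and the "Instruct:/Output:" framing are illustrative, not a documented requirement.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model once; the identifier matches the integration example below.
model_id = "Manoj21k/microsoft-phi-2-finetuned"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# An instruction with explicit, checkable constraints.
prompt = "Instruct: List exactly three benefits of unit testing, one per line.\nOutput:"
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False)

# Greedy decoding (do_sample=False) keeps runs reproducible, which makes
# it easy to verify that the output actually follows the constraints.
output = model.generate(
    **inputs,
    max_new_tokens=120,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(output)[0])
```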

## Integration Example

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model_id = "Manoj21k/microsoft-phi-2-finetuned"
finetuned_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize an instruction-style input
input_text = "Instruct: Provide a detailed explanation of..."
inputs = tokenizer(input_text, return_tensors="pt", return_attention_mask=False)

# Generate a completion
output = finetuned_model.generate(**inputs, max_length=200)

# Decode and print the generated text
decoded_output = tokenizer.batch_decode(output)[0]
print(decoded_output)
```
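
Note that `generate` returns the prompt tokens followed by the completion, so the decoded string above echoes the input text. To keep only the model's answer, you can slice off the prompt; this reuses the `inputs` and `output` variables from the example above.

```python
# Keep only the newly generated tokens, dropping the echoed prompt.
prompt_length = inputs["input_ids"].shape[1]
completion = tokenizer.decode(output[0][prompt_length:], skip_special_tokens=True)
print(completion)
```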

## Note

- The fine-tuned model is specialized for instruction-based tasks and may outperform the base Phi-2 model in scenarios that require adherence to explicit instructions.
- Users are encouraged to experiment with various instructional prompts to leverage the model's capabilities effectively.
- As always, we appreciate user feedback to continue refining and improving the model for a wide range of applications.