remove how to use models
README.md CHANGED
@@ -30,41 +30,6 @@ It supports the **GGUF format**, making it ideal for running on various hardware
 - 🌐 **Supports English language**
 - 🏋️ **Trained using Unsloth for high performance**
 
-## Model Usage
-
-### Install Dependencies
-To use this model, install the required libraries:
-```bash
-pip install transformers text-generation gguf unsloth
-```
-
-### Load the Model
-```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model_name = "deepakkumar07/Llama-3.2-3B-Instruct"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForCausalLM.from_pretrained(model_name)
-
-input_text = "What is the capital of France?"
-inputs = tokenizer(input_text, return_tensors="pt")
-
-output = model.generate(**inputs)
-print(tokenizer.decode(output[0], skip_special_tokens=True))
-```
-
-### GGUF Inference
-For GGUF-based inference, use **llama.cpp** or **text-generation-inference**:
-```bash
-pip install llama-cpp-python
-```
-```python
-from llama_cpp import Llama
-
-llm = Llama(model_path="Llama-3.2-3B-Instruct.gguf")
-response = llm("Tell me a joke.")
-print(response)
-```
 
 ## License
 This model is licensed under **Apache 2.0**.
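
For anyone who still needs the removed transformers instructions: that snippet fed a bare string to an instruct-tuned checkpoint and called `generate()` with no length cap. A minimal sketch of the same flow that routes the prompt through the chat template instead, assuming the repo id `deepakkumar07/Llama-3.2-3B-Instruct` from the removed text is still available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id taken from the removed README text; it may have moved since.
model_name = "deepakkumar07/Llama-3.2-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Instruct checkpoints expect the chat template rather than a bare prompt.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Cap the number of generated tokens; generate() otherwise falls back
# to a short default length.
output = model.generate(input_ids, max_new_tokens=64)

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```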
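The removed GGUF example likewise passed a raw prompt straight to `Llama(...)`. llama-cpp-python also exposes an OpenAI-style chat call that applies the chat template stored in the GGUF metadata when the file provides one. A minimal sketch, assuming a locally downloaded GGUF file; the filename here is illustrative, not necessarily what this repo ships:

```python
from llama_cpp import Llama

# The model path is an assumption; point it at whichever GGUF file you downloaded.
llm = Llama(model_path="Llama-3.2-3B-Instruct.gguf", n_ctx=2048)

# Chat-style call; the library formats the messages with the model's
# chat template when the GGUF metadata includes one.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a joke."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```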