# GGUF Quantizations for [Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large)

**Virtuoso-Large (72B)** is our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.
- Architecture Base: Qwen2.5-72B
- Parameter Count: 72B
- License: [Apache-2.0](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF#license)
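A minimal sketch of loading one of these quantizations with the `llama-cpp-python` bindings. The quantization filename pattern, context size, and GPU settings below are placeholder assumptions, not values published by this repo; check the repository's file listing for the quantization files actually available.

```python
def load_virtuoso_gguf(quant_pattern: str = "*Q4_K_M*.gguf", n_ctx: int = 4096):
    """Download and load a Virtuoso-Large GGUF quantization.

    quant_pattern is a placeholder glob (assumption): check this repo's
    file listing for the quantization you want (Q4_K_M, Q5_K_M, ...).
    Requires: pip install llama-cpp-python huggingface_hub
    """
    # Imported lazily: llama-cpp-python is a heavy optional dependency.
    from llama_cpp import Llama

    return Llama.from_pretrained(
        repo_id="arcee-ai/Virtuoso-Large-GGUF",
        filename=quant_pattern,  # glob is matched against files in the repo
        n_ctx=n_ctx,             # context window; raise if you have the RAM
        n_gpu_layers=-1,         # offload all layers to the GPU when available
    )


if __name__ == "__main__":
    llm = load_virtuoso_gguf()
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize GGUF in one sentence."}]
    )
    print(out["choices"][0]["message"]["content"])
```

Note that a 72B model is large even when quantized; pick a quantization level that fits your RAM/VRAM budget.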
### Use Cases