bartowski committed (verified) · Commit 7b2ec4f · 1 Parent(s): 2c345ee

Update README.md

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -7,7 +7,7 @@ library_name: transformers
 
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6435718aaaef013d1aec3b8b/aaUsAuDk79RMvG3ShWuMY.png)
 
- GGUF Quantizations for Virtuoso-Large
+ GGUF Quantizations for [Virtuoso-Large](https://huggingface.co/arcee-ai/Virtuoso-Large)
 
 **Virtuoso-Large (72B)** is our most powerful and versatile general-purpose model, designed to excel at handling complex and varied tasks across domains. With state-of-the-art performance, it offers unparalleled capability for nuanced understanding, contextual adaptability, and high accuracy.
 
@@ -15,7 +15,7 @@ GGUF Quantizations for Virtuoso-Large
 
 - Architecture Base: Qwen2.5-72B
 - Parameter Count: 72B
- - License: [Apache-2.0](https://huggingface.co/arcee-ai/Virtuoso-Large#license)
+ - License: [Apache-2.0](https://huggingface.co/arcee-ai/Virtuoso-Large-GGUF#license)
 
 ### Use Cases
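
For context on how the GGUF quantizations in this repo are typically consumed, here is a minimal sketch of downloading one quant file and running it locally. The repo id is taken from the license link in the diff; the `.gguf` filename, the `llama-cpp-python` runtime, and the prompt are illustrative assumptions, not part of this commit.

```python
# Minimal sketch (not part of this commit): fetch one GGUF quant and run it locally.
# The repo id comes from the license link in the diff; the filename is a hypothetical
# example -- check the repository's file listing for the actual quant names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # provided by the llama-cpp-python package

model_path = hf_hub_download(
    repo_id="arcee-ai/Virtuoso-Large-GGUF",
    filename="Virtuoso-Large-Q4_K_M.gguf",  # hypothetical quant filename
)

# Load the quantized model and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Explain what GGUF quantization is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```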