Added Metrics Explanation
- README-huggingface.md +14 -0
- README.md +14 -0
README-huggingface.md CHANGED

@@ -113,6 +113,20 @@ The model shows strong performance across key metrics:
 - **Response Coherence:** 82.1%
 - **Peer Network Efficiency:** 91.2%
 
+### Understanding the Metrics
+
+- **Training Progress**: Two complete passes over the training data, totaling 20,000 batched steps (10,000 steps per epoch with 8 samples per batch).
+
+- **Model Scale**: The core neural network and distributed coordination components total 1.82 GB, including parameter tensors and peer-synchronization data structures.
+
+- **Convergence Metric**: The final validation phase showed a token-level cross-entropy loss of 7.11 between model predictions and reference sequences.
+
+- **Token Precision**: In out-of-sample testing, 78.5% of the model's next-token selections matched the reference completions across all validation sequences.
+
+- **Output Quality**: An automated coherence score of 82.1% reflects the generated text's internal consistency, measuring how well each new statement connects to and builds on the ones before it.
+
+- **Network Performance**: Distributed training achieved 91.2% task throughput, i.e., the proportion of computation tasks successfully coordinated across the peer-to-peer node network.
+
 ## Limitations & Biases
 
 1. **Current Limitations:**
README.md CHANGED

@@ -105,6 +105,20 @@ Initial testing shows promising results:
 - **Response Coherence:** 82.1%
 - **Peer Network Efficiency:** 91.2%
 
+### Metrics Explanation
+
+- **Training Progress**: Two complete passes over the training data, totaling 20,000 batched steps (10,000 steps per epoch with 8 samples per batch).
+
+- **Model Scale**: The core neural network and distributed coordination components total 1.82 GB, including parameter tensors and peer-synchronization data structures.
+
+- **Convergence Metric**: The final validation phase showed a token-level cross-entropy loss of 7.11 between model predictions and reference sequences.
+
+- **Token Precision**: In out-of-sample testing, 78.5% of the model's next-token selections matched the reference completions across all validation sequences.
+
+- **Output Quality**: An automated coherence score of 82.1% reflects the generated text's internal consistency, measuring how well each new statement connects to and builds on the ones before it.
+
+- **Network Performance**: Distributed training achieved 91.2% task throughput, i.e., the proportion of computation tasks successfully coordinated across the peer-to-peer node network.
+
 ## Limitations & Biases
 
 1. **Current Limitations:**
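For anyone sanity-checking the Training Progress figures added above, the arithmetic works out as in the quick sketch below; all variable names are illustrative rather than taken from the training code.

```python
# Back-of-the-envelope check of the "Training Progress" figures.
# All names here are illustrative; they do not come from the training code.
epochs = 2            # two complete passes over the training data
steps_per_epoch = 10_000
batch_size = 8        # samples per batched step

total_steps = epochs * steps_per_epoch   # 20,000 batched steps
samples_seen = total_steps * batch_size  # 160,000 batched samples processed

print(f"total steps:  {total_steps:,}")   # -> 20,000
print(f"samples seen: {samples_seen:,}")  # -> 160,000
```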
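The Convergence Metric bullet describes a token-level cross-entropy between model predictions and reference tokens. Below is a minimal sketch of how such a number is typically computed, assuming a PyTorch-style setup; the project's actual evaluation code is not shown in this commit, and all shapes and names are assumptions.

```python
import torch
import torch.nn.functional as F

# Token-level cross-entropy, as in the "Convergence Metric" bullet.
# Shapes and random tensors are illustrative stand-ins, not the real pipeline.
batch, seq_len, vocab = 4, 16, 32_000
logits = torch.randn(batch, seq_len, vocab)       # model predictions
targets = torch.randint(vocab, (batch, seq_len))  # reference token ids

# Average negative log-likelihood per token across the validation batch.
loss = F.cross_entropy(logits.view(-1, vocab), targets.view(-1))
print(f"token-level cross-entropy: {loss.item():.2f}")  # reported value was 7.11
```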
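Token Precision is next-token accuracy: the share of positions where the model's top-ranked token matches the reference completion. A sketch under the same PyTorch assumption, again with random tensors standing in for real outputs:

```python
import torch

# Next-token accuracy, as in the "Token Precision" bullet.
# Random tensors stand in for real model outputs and references.
batch, seq_len, vocab = 4, 16, 32_000
logits = torch.randn(batch, seq_len, vocab)
targets = torch.randint(vocab, (batch, seq_len))

predictions = logits.argmax(dim=-1)               # greedy next-token choices
accuracy = (predictions == targets).float().mean()
print(f"token precision: {accuracy.item():.1%}")  # reported value was 78.5%
```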
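Neither README says which automated analysis produced the 82.1% Output Quality figure. One plausible family of approaches scores the similarity of adjacent sentences; the sketch below illustrates that idea with simple bag-of-words cosine similarity and should be read as an assumed stand-in, not the project's actual method.

```python
from collections import Counter
from math import sqrt

# Illustrative coherence score: mean cosine similarity between adjacent
# sentences' bag-of-words vectors. This is an assumed stand-in; the README
# does not specify which automated analysis produced the 82.1% figure.
def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def coherence(sentences: list[str]) -> float:
    bags = [Counter(s.lower().split()) for s in sentences]
    pairs = list(zip(bags, bags[1:]))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

text = [
    "The model was trained on peer-to-peer nodes.",
    "Each node synchronized its parameters with its peers.",
    "Synchronization overhead stayed low throughout training.",
]
print(f"coherence: {coherence(text):.1%}")
```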
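Network Performance reads as a success ratio: coordination tasks completed over tasks attempted across the peer-to-peer network. A toy illustration with invented task counts; only the 91.2% ratio comes from the README.

```python
# Task throughput as in the "Network Performance" bullet: the share of
# distributed computation tasks that completed successfully. The counts
# below are invented for illustration; only the 91.2% ratio is reported.
tasks_attempted = 1000
tasks_completed = 912

throughput = tasks_completed / tasks_attempted
print(f"peer network efficiency: {throughput:.1%}")  # -> 91.2%
```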