Update README.md
README.md CHANGED
@@ -101,3 +101,40 @@ print("Predictions:")
```python
for i, prob in enumerate(verdict.predictions):
    print(f" Label {i}: {prob * 100:.2f}%")
```

## Labels

The model can detect the following labels:

- **AI_GEN**: Is the video AI-generated or not?
- **ANIME_1D**: Is the video in 2D anime style?
- **ANIME_2D**: Is the video in 3D anime style?
- **VIDEO_GAME**: Does the video look like a video game?
- **KLING**: Is the video generated by Kling?
- **HIGGSFIELD**: Is the video generated by Higgsfield?
- **WAN**: Is the video generated by Wan?
- **MIDJOURNEY**: Is the video generated using images from Midjourney?
- **HAILUO**: Is the video generated by Hailuo?
- **RAY**: Is the video generated by Ray?
- **VEO**: Is the video generated by Veo?
- **RUNWAY**: Is the video generated by Runway?
- **SORA**: Is the video generated by Sora?
- **CHATGPT**: Is the video generated using images from ChatGPT?
- **PIKA**: Is the video generated by Pika?
- **HUNYUAN**: Is the video generated by Hunyuan?
- **VIDU**: Is the video generated by Vidu?

> **Note**: The **AI_GEN** label is the most accurate as it has the most training data. Other labels have limited training data and may be less accurate.
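
Because `verdict.predictions` is indexed by position, it can be convenient to pair each probability with its label name instead of printing a bare index. The snippet below is a minimal sketch under two assumptions not confirmed by this README: that the labels appear in `verdict.predictions` in the order listed above, and that the `LABELS` list (a hypothetical helper defined here, not part of the library) is an acceptable way to name them.

```python
# Hypothetical label order; assumed to match the output order of the model.
LABELS = [
    "AI_GEN", "ANIME_1D", "ANIME_2D", "VIDEO_GAME", "KLING", "HIGGSFIELD",
    "WAN", "MIDJOURNEY", "HAILUO", "RAY", "VEO", "RUNWAY", "SORA",
    "CHATGPT", "PIKA", "HUNYUAN", "VIDU",
]


def print_named_predictions(predictions):
    """Print each probability next to its (assumed) label name."""
    for name, prob in zip(LABELS, predictions):
        print(f" {name}: {prob * 100:.2f}%")


# Usage with the example above (assuming `verdict` was already obtained):
# print_named_predictions(verdict.predictions)
```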

## Accuracy

The precision-recall (PR) curve of the model is shown below:

<p align="center">
  <img src="https://github.com/LaunchPlatform/cakelens-v5/raw/master/assets/pr-curve.png?raw=true" alt="PR Curve" />
</p>

At a threshold of 0.5, the model has a precision of 0.77 and a recall of 0.74.
The dataset consists of 5,093 videos for training and 498 videos for validation.
Please note that the model is not perfect and may make mistakes.
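
If you need a yes/no answer rather than a probability, one option is to binarize the score at the operating point above. The sketch below is only an illustration: the 0.5 threshold and the precision/recall figures come from this section, while the assumption that index 0 of `verdict.predictions` corresponds to **AI_GEN** is ours, not a documented guarantee.

```python
AI_GEN_INDEX = 0  # Assumed position of the AI_GEN label in the predictions.
THRESHOLD = 0.5   # Operating point discussed above (precision 0.77, recall 0.74).


def is_ai_generated(predictions, threshold=THRESHOLD):
    """Return True if the AI_GEN probability meets or exceeds the threshold."""
    return predictions[AI_GEN_INDEX] >= threshold


# Usage with the example above (assuming `verdict` was already obtained):
# print("Likely AI-generated" if is_ai_generated(verdict.predictions) else "Likely real")
```

You can trade precision for recall by moving the threshold along the PR curve: a higher threshold flags fewer videos, but with more confidence.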