Update Dataset test #1
opened by carlosj934
README.md
DELETED
@@ -1,166 +0,0 @@
---
license: mit
task_categories:
- image-classification
- keypoint-detection
tags:
- martial-arts
- bjj
- brazilian-jiu-jitsu
- pose-detection
- sports-analysis
- submissions
- grappling
- computer-vision
language:
- en
size_categories:
- n<1K
version: 0.0.1
---

# BJJ Positions & Submissions Dataset

## Dataset Description

This dataset contains pose keypoint annotations **and compressed video clips** for Brazilian Jiu-Jitsu (BJJ) combat positions and submissions. It includes 2D keypoint coordinates for up to 2 athletes per image, labeled with specific BJJ positions and submission attempts, as well as a short video segment for each position/submission. The videos are optimized for use in video transformer models such as ViViT.

### Dataset Summary

- **Total samples**: 1
- **Position classes**: 1 unique BJJ position
- **Keypoint format**: MS-COCO (17 keypoints per person)
- **Video format**: MP4, H.264, 360p/480p, 15 FPS, compressed for ML
- **Data format**: [x, y, confidence] for each keypoint, plus an associated video
- **Last updated**: 2025-07-21
- **Version**: 0.0.1

### Supported Tasks

- BJJ position classification
- Submission detection
- Multi-person pose estimation
- Combat sports analysis
- **Video action recognition in grappling (ViViT, etc.)**

## Recent Updates

### Version 0.0.1 (2025-07-21)

- Added 1 sample (1 total)
- Improved data structure for better compatibility
- Enhanced position annotations

### Position Distribution

- `closed_guard1`: 1 sample

## Dataset Structure

### Data Fields

- `id`: Unique sample identifier
- `image_name`: Name of the source image
- `position`: BJJ position/submission label
- `frame_number`: Frame number from the source video
- `pose1_keypoints`: 17 keypoints for athlete 1, as [[x, y, confidence], ...]
- `pose1_num_keypoints`: Number of visible keypoints for athlete 1
- `pose2_keypoints`: 17 keypoints for athlete 2, as [[x, y, confidence], ...]
- `pose2_num_keypoints`: Number of visible keypoints for athlete 2
- `num_people`: Number of people detected (1 or 2)
- `total_keypoints`: Total visible keypoints across both athletes
- `date_added`: Date the sample was added to the dataset
- **`video_path`**: Relative path to the associated compressed video clip (MP4, suitable for ViViT and other video models)
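
To make the schema concrete, here is a hypothetical record matching the fields above. All values are invented for illustration, except `video_path`, which uses the clip path that appears in this repository; the keypoint lists are truncated to two triplets:

```python
# Hypothetical sample record illustrating the schema (values invented).
sample = {
    "id": "closed_guard1_000001",            # identifier format is assumed
    "image_name": "closed_guard1_frame42.jpg",
    "position": "closed_guard1",
    "frame_number": 42,
    # 17 [x, y, confidence] triplets per athlete; only 2 shown here
    "pose1_keypoints": [[312.5, 188.0, 0.97], [305.1, 174.3, 0.92]],
    "pose1_num_keypoints": 15,
    "pose2_keypoints": [[410.8, 220.4, 0.95], [402.2, 206.7, 0.90]],
    "pose2_num_keypoints": 14,
    "num_people": 2,
    "total_keypoints": 29,
    "date_added": "2025-07-21",
    "video_path": "bjj_dataset/closed_guard/ClosedGuard1_compressed.mp4",
}
```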

### Keypoint Format

Uses the MS-COCO 17-keypoint format:

0. nose, 1. left_eye, 2. right_eye, 3. left_ear, 4. right_ear
5. left_shoulder, 6. right_shoulder, 7. left_elbow, 8. right_elbow
9. left_wrist, 10. right_wrist, 11. left_hip, 12. right_hip
13. left_knee, 14. right_knee, 15. left_ankle, 16. right_ankle

Each keypoint is [x, y, confidence], where confidence ranges from 0.0 to 1.0.
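
A minimal sketch of turning one athlete's keypoint list into a name-indexed dictionary (the name ordering follows the list above; the 0.5 confidence threshold is an arbitrary choice, not part of the dataset):

```python
# MS-COCO keypoint names in index order (0-16), matching the list above.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def keypoints_by_name(keypoints, min_confidence=0.5):
    """Map keypoint names to (x, y), dropping low-confidence detections."""
    return {
        name: (x, y)
        for name, (x, y, conf) in zip(COCO_KEYPOINTS, keypoints)
        if conf >= min_confidence
    }

# e.g. joints = keypoints_by_name(sample["pose1_keypoints"])
```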

### Video Format

- **Format**: MP4 (H.264), 360p or 480p, 15 FPS, compressed for efficient ML training
- **Usage**: Each sample links to a short video clip showing the position/submission, suitable for direct use in video transformer models (e.g., ViViT)

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("carlosj934/BJJ_Positions_Submissions")

# Access samples
sample = dataset['train'][0]
print(f"Position: {sample['position']}")
print(f"Number of people: {sample['num_people']}")
print(f"Athlete 1 keypoints: {len(sample['pose1_keypoints'])}")
print(f"Video path: {sample['video_path']}")

# Example: load every frame of the clip for ViViT preprocessing
# (note: video_path is relative to the dataset root)
import cv2

cap = cv2.VideoCapture(sample['video_path'])
frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()
print(f"Loaded {len(frames)} frames for ViViT input.")

# Filter by specific positions
guard_samples = dataset['train'].filter(lambda x: 'guard' in x['position'])
print(f"Guard positions: {len(guard_samples)} samples")
```
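
ViViT-style models typically expect a fixed number of frames per clip, so a common preprocessing step is uniform temporal sampling rather than loading every frame as above. A minimal sketch under that assumption (the frame count of 32 is a typical choice, not a requirement of this dataset):

```python
import cv2
import numpy as np

def sample_frames(video_path, num_frames=32):
    """Uniformly sample `num_frames` RGB frames from a video clip."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ret, frame = cap.read()
        if not ret:
            break
        # OpenCV decodes BGR; most video models expect RGB.
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return np.stack(frames)  # shape: (num_frames, H, W, 3)
```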

## Data Collection Progress

The dataset is continuously updated with new BJJ position and submission samples, including both pose annotations and video clips. Each position is being captured from multiple angles and with different athletes to improve model generalization and support robust video-based learning.

### Collection Goals

- **Target**: 50+ samples per position (900+ total)
- **Current**: 1 sample
- **Coverage**: 1 of 18+ positions represented
- **Focus**: High-quality pose annotations and video clips for training robust BJJ classifiers and video models (ViViT, etc.)

## Applications

This dataset can be used for:

- **Position Classification**: Automatically identify BJJ positions in videos
- **Technique Analysis**: Analyze athlete positioning and technique execution
- **Training Feedback**: Provide real-time feedback on position quality
- **Competition Analysis**: Automatically score and analyze BJJ matches
- **Educational Tools**: Interactive learning applications for BJJ students
- **Video Action Recognition**: Train ViViT and other video transformer models for grappling action recognition

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{bjj_positions_submissions_2025,
  title={BJJ Positions and Submissions Dataset},
  author={Carlos J},
  year={2025},
  version={0.0.1},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/carlosj934/BJJ_Positions_Submissions}
}
```

## License

MIT License - See LICENSE file for details.

## Contact

For questions or contributions, please reach out through the Hugging Face dataset page.

bjj_dataset/closed_guard/ClosedGuard1_compressed.mp4
DELETED
@@ -1,3 +0,0 @@
version https://git-lfs.github.com/spec/v1
oid sha256:91b57a38c276560c7dc8f5c4dbfebfb080fadb69574e86d76fe69a7ee3d98b3d
size 258226