---
license: mit
task_categories:
- image-classification
- keypoint-detection
tags:
- martial-arts
- bjj
- brazilian-jiu-jitsu
- pose-detection
- sports-analysis
- submissions
- grappling
- computer-vision
language:
- en
size_categories:
- n<1K
version: 0.0.1
---
# BJJ Positions & Submissions Dataset
## Dataset Description
This dataset contains pose keypoint annotations **and compressed video clips** for Brazilian Jiu-Jitsu (BJJ) combat positions and submissions. It includes 2D keypoint coordinates for up to 2 athletes per image, labeled with specific BJJ positions and submission attempts, as well as short video segments for each position/submission. The videos are optimized for use in video transformer models such as ViViT.
### Dataset Summary
- **Total samples**: 1
- **Position classes**: 1 unique BJJ position
- **Keypoint format**: MS-COCO (17 keypoints per person)
- **Video format**: MP4, H.264, 360p/480p, 15 FPS, compressed for ML
- **Data format**: [x, y, confidence] for each keypoint, plus associated video
- **Last updated**: 2025-07-21
- **Version**: 0.0.1
### Supported Tasks
- BJJ position classification
- Submission detection
- Multi-person pose estimation
- Combat sports analysis
- **Video action recognition (ViViT, etc.)**
- Action recognition in grappling
## Recent Updates
### Version 0.0.1 (2025-07-21)
- Added 1 sample
- Improved data structure for better compatibility
- Enhanced position annotations
### Position Distribution
- `closed_guard1`: 1 sample
## Dataset Structure
### Data Fields
- `id`: Unique sample identifier
- `image_name`: Name of the source image
- `position`: BJJ position/submission label
- `frame_number`: Frame number from source video
- `pose1_keypoints`: 17 keypoints for athlete 1 [[x, y, confidence], ...]
- `pose1_num_keypoints`: Number of visible keypoints for athlete 1
- `pose2_keypoints`: 17 keypoints for athlete 2 [[x, y, confidence], ...]
- `pose2_num_keypoints`: Number of visible keypoints for athlete 2
- `num_people`: Number of people detected (1 or 2)
- `total_keypoints`: Total visible keypoints across both athletes
- `date_added`: Date when sample was added to dataset
- **`video_path`**: Relative path to the associated compressed video clip (MP4, suitable for ViViT and other video models)
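For orientation, a single record might look like the sketch below. All values are hypothetical and the keypoint lists are truncated; only the field names and types follow the schema above.

```python
# Hypothetical example record (values are illustrative, not taken from the dataset)
example_record = {
    "id": "closed_guard1_0001",
    "image_name": "closed_guard1_frame_0042.jpg",
    "position": "closed_guard1",
    "frame_number": 42,
    "pose1_keypoints": [[312.5, 188.0, 0.97], [318.2, 181.4, 0.95], ...],  # 17 entries in total
    "pose1_num_keypoints": 15,
    "pose2_keypoints": [[401.0, 220.3, 0.92], [396.7, 214.8, 0.90], ...],  # 17 entries in total
    "pose2_num_keypoints": 14,
    "num_people": 2,
    "total_keypoints": 29,
    "date_added": "2025-07-21",
    "video_path": "videos/closed_guard1_0001.mp4",
}
```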
### Keypoint Format
Uses MS-COCO 17-keypoint format:
- 0: nose, 1: left_eye, 2: right_eye, 3: left_ear, 4: right_ear
- 5: left_shoulder, 6: right_shoulder, 7: left_elbow, 8: right_elbow
- 9: left_wrist, 10: right_wrist, 11: left_hip, 12: right_hip
- 13: left_knee, 14: right_knee, 15: left_ankle, 16: right_ankle

Each keypoint is stored as [x, y, confidence], where confidence ranges from 0.0 to 1.0.
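As a sketch of how this index order can be used in practice, the snippet below maps indices to names and looks up a single joint. The `get_keypoint` helper and the 0.5 confidence threshold are illustrative, not part of the dataset.

```python
# MS-COCO 17-keypoint index-to-name mapping (standard COCO ordering)
COCO_KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def get_keypoint(keypoints, name, min_confidence=0.5):
    """Return (x, y) for a named keypoint, or None if below the confidence threshold."""
    x, y, conf = keypoints[COCO_KEYPOINT_NAMES.index(name)]
    return (x, y) if conf >= min_confidence else None

# Example (hypothetical): left wrist of athlete 1 from a loaded sample
# wrist = get_keypoint(sample["pose1_keypoints"], "left_wrist")
```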
### Video Format
- **Format**: MP4 (H.264), 360p or 480p, 15 FPS, compressed for efficient ML training
- **Usage**: Each sample links to a short video clip showing the position/submission, suitable for direct use in video transformer models (e.g., ViViT)
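Video transformer models generally expect a fixed number of frames per clip. Below is a minimal frame-sampling sketch; the 32-frame count, the 224x224 resize, and the `sample_frames` helper are assumptions chosen for illustration, not dataset requirements.

```python
import cv2
import numpy as np

def sample_frames(video_path, num_frames=32, size=(224, 224)):
    """Uniformly sample `num_frames` RGB frames from a clip and resize them."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    if not frames:
        raise ValueError(f"No frames decoded from {video_path}")
    # Pick evenly spaced frame indices, then resize each selected frame
    idx = np.linspace(0, len(frames) - 1, num_frames).astype(int)
    return np.stack([cv2.resize(frames[i], size) for i in idx])
```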
## Usage
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("carlosj934/BJJ_Positions_Submissions")
# Access samples
sample = dataset['train'][0]
print(f"Position: {sample['position']}")
print(f"Number of people: {sample['num_people']}")
print(f"Athlete 1 keypoints: {len(sample['pose1_keypoints'])}")
print(f"Video path: {sample['video_path']}")
# Example: Load video for ViViT preprocessing
import cv2
cap = cv2.VideoCapture(sample['video_path'])
frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()
print(f"Loaded {len(frames)} frames for ViViT input.")
# Filter by specific positions
guard_samples = dataset['train'].filter(lambda x: 'guard' in x['position'])
print(f"Guard positions: {len(guard_samples)} samples")
```
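To go from decoded frames to ViViT inputs, the clips can be run through the video processor in `transformers`. The sketch below assumes the `google/vivit-b-16x2-kinetics400` checkpoint (which expects 32 frames per clip) and reuses the hypothetical `sample_frames` helper and the `sample` record from the snippets above; it is one possible preprocessing path, not the only one.

```python
# Sketch: preparing a clip for a ViViT model (assumes `pip install transformers torch`
# and the `sample_frames` helper sketched in the Video Format section)
from transformers import VivitImageProcessor, VivitForVideoClassification

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")
model = VivitForVideoClassification.from_pretrained("google/vivit-b-16x2-kinetics400")

frames = sample_frames(sample['video_path'])            # 32 RGB frames, 224x224
inputs = processor(list(frames), return_tensors="pt")   # pixel_values for the model
outputs = model(**inputs)
print(outputs.logits.shape)  # (1, 400) with the pretrained Kinetics-400 head
```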
## Data Collection Progress
The dataset is continuously updated with new BJJ position and submission samples, including both pose annotations and video clips. Each position is being captured from multiple angles and with different athletes to improve model generalization and support robust video-based learning.
### Collection Goals
- **Target**: 50+ samples per position (900+ total)
- **Current**: 1 sample
- **Coverage**: 1/18+ positions represented
- **Focus**: High-quality pose annotations and video clips for training robust BJJ classifiers and video models (ViViT, etc.)
## Applications
This dataset can be used for:
- **Position Classification**: Automatically identify BJJ positions in videos
- **Technique Analysis**: Analyze athlete positioning and technique execution
- **Training Feedback**: Provide real-time feedback on position quality
- **Competition Analysis**: Automatically score and analyze BJJ matches
- **Educational Tools**: Interactive learning applications for BJJ students
- **Video Action Recognition**: Train ViViT and other video transformer models for grappling action recognition
## Citation
If you use this dataset in your research, please cite:
```bibtex
@dataset{bjj_positions_submissions_2025,
title={BJJ Positions and Submissions Dataset},
author={Carlos J},
year={2025},
version={0.0.1},
publisher={Hugging Face},
url={https://huggingface.co/datasets/carlosj934/BJJ_Positions_Submissions}
}
```
## License
MIT License - See LICENSE file for details.
## Contact
For questions or contributions, please reach out through the Hugging Face dataset page.