carlosj934 committed
Commit 1867477 · 1 Parent(s): 92b3aa8

Changing the dataset so it's raw videos that will be used to train ViViT models

Files changed (4)
  1. README.md +29 -7
  2. data-00000-of-00001.arrow +0 -3
  3. dataset_info.json +0 -64
  4. state.json +0 -13
README.md CHANGED
@@ -16,23 +16,24 @@ language:
 - en
 size_categories:
 - n<1K
-version: 1.2.0
+version: 0.0.1
 ---
 
 # BJJ Positions & Submissions Dataset
 
 ## Dataset Description
 
-This dataset contains pose keypoint annotations for Brazilian Jiu-Jitsu (BJJ) combat positions and submissions. It includes 2D keypoint coordinates for up to 2 athletes per image, labeled with specific BJJ positions and submission attempts.
+This dataset contains pose keypoint annotations **and compressed video clips** for Brazilian Jiu-Jitsu (BJJ) combat positions and submissions. It includes 2D keypoint coordinates for up to 2 athletes per image, labeled with specific BJJ positions and submission attempts, as well as short video segments for each position/submission. The videos are optimized for use in video transformer models such as ViViT.
 
 ### Dataset Summary
 
 - **Total samples**: 1
 - **Position classes**: 1 unique BJJ positions
 - **Keypoint format**: MS-COCO (17 keypoints per person)
-- **Data format**: [x, y, confidence] for each keypoint
+- **Video format**: MP4, H.264, 360p/480p, 15 FPS, compressed for ML
+- **Data format**: [x, y, confidence] for each keypoint, plus associated video
 - **Last updated**: 2025-07-21
-- **Version**: 1.2.0
+- **Version**: 0.0.1
 
 ### Supported Tasks
 
@@ -40,6 +41,7 @@ This dataset contains pose keypoint annotations for Brazilian Jiu-Jitsu (BJJ) co
 - Submission detection
 - Multi-person pose estimation
 - Combat sports analysis
+- **Video action recognition (ViViT, etc.)**
 - Action recognition in grappling
 
 ## Recent Updates
@@ -69,6 +71,7 @@ This dataset contains pose keypoint annotations for Brazilian Jiu-Jitsu (BJJ) co
 - `num_people`: Number of people detected (1 or 2)
 - `total_keypoints`: Total visible keypoints across both athletes
 - `date_added`: Date when sample was added to dataset
+- **`video_path`**: Relative path to the associated compressed video clip (MP4, suitable for ViViT and other video models)
 
 ### Keypoint Format
 
@@ -80,6 +83,11 @@ Uses MS-COCO 17-keypoint format:
 
 Each keypoint: [x, y, confidence] where confidence 0.0-1.0
 
+### Video Format
+
+- **Format**: MP4 (H.264), 360p or 480p, 15 FPS, compressed for efficient ML training
+- **Usage**: Each sample links to a short video clip showing the position/submission, suitable for direct use in video transformer models (e.g., ViViT)
+
 ## Usage
 
 ```python
@@ -93,6 +101,19 @@ sample = dataset['train'][0]
 print(f"Position: {sample['position']}")
 print(f"Number of people: {sample['num_people']}")
 print(f"Athlete 1 keypoints: {len(sample['pose1_keypoints'])}")
+print(f"Video path: {sample['video_path']}")
+
+# Example: Load video for ViViT preprocessing
+import cv2
+cap = cv2.VideoCapture(sample['video_path'])
+frames = []
+while True:
+    ret, frame = cap.read()
+    if not ret:
+        break
+    frames.append(frame)
+cap.release()
+print(f"Loaded {len(frames)} frames for ViViT input.")
 
 # Filter by specific positions
 guard_samples = dataset['train'].filter(lambda x: 'guard' in x['position'])
@@ -101,14 +122,14 @@ print(f"Guard positions: {len(guard_samples)} samples")
 
 ## Data Collection Progress
 
-The dataset is continuously updated with new BJJ position and submission samples. Each position is being captured from multiple angles and with different athletes to improve model generalization.
+The dataset is continuously updated with new BJJ position and submission samples, including both pose annotations and video clips. Each position is being captured from multiple angles and with different athletes to improve model generalization and support robust video-based learning.
 
 ### Collection Goals
 
 - **Target**: 50+ samples per position (900+ total)
 - **Current**: 1 total samples
 - **Coverage**: 1/18+ positions represented
-- **Focus**: High-quality pose annotations for training robust BJJ classifiers
+- **Focus**: High-quality pose annotations and video clips for training robust BJJ classifiers and video models (ViViT, etc.)
 
 ## Applications
 
@@ -119,6 +140,7 @@ This dataset can be used for:
 - **Training Feedback**: Provide real-time feedback on position quality
 - **Competition Analysis**: Automatically score and analyze BJJ matches
 - **Educational Tools**: Interactive learning applications for BJJ students
+- **Video Action Recognition**: Train ViViT and other video transformer models for grappling action recognition
 
 ## Citation
 
@@ -129,7 +151,7 @@ If you use this dataset in your research, please cite:
   title={BJJ Positions and Submissions Dataset},
   author={Carlos J},
   year={2025},
-  version={1.2.0},
+  version={0.0.1},
   publisher={Hugging Face},
   url={https://huggingface.co/datasets/carlosj934/BJJ_Positions_Submissions}
 }
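The usage example added above stops once the raw frames are collected. As a hedged follow-on sketch, not part of this commit: those frames can be packed into model inputs with the transformers port of ViViT. The checkpoint name, the 32-frame clip length, and the `clips/example.mp4` path below are assumptions taken from the google/vivit-b-16x2-kinetics400 release, not something the dataset prescribes.

```python
# Hedged sketch: turn an OpenCV-decoded clip into ViViT inputs.
# Assumes the transformers ViViT port; the checkpoint and 32-frame
# length follow google/vivit-b-16x2-kinetics400. Path is hypothetical.
import cv2
import numpy as np
from transformers import VivitImageProcessor

cap = cv2.VideoCapture("clips/example.mp4")  # hypothetical clip path
frames = []
while True:
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()

processor = VivitImageProcessor.from_pretrained("google/vivit-b-16x2-kinetics400")

# Sample 32 evenly spaced frames and convert OpenCV's BGR to RGB.
indices = np.linspace(0, len(frames) - 1, num=32, dtype=int)
clip = [cv2.cvtColor(frames[i], cv2.COLOR_BGR2RGB) for i in indices]

inputs = processor(clip, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 32, 3, 224, 224])
```

Even spacing keeps full temporal coverage of a clip even when, at 15 FPS, it holds far more than 32 frames.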
data-00000-of-00001.arrow DELETED
@@ -1,3 +0,0 @@
-version https://git-lfs.github.com/spec/v1
-oid sha256:5f60dad2bd584d70e0c118c8492d231825296a0547e2f08b7aca5c350552d861
-size 3328
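Note: the deleted .arrow file was tracked with Git LFS, so the three removed lines are the LFS pointer stub (spec version, SHA-256 object ID, and size in bytes) rather than the Arrow data itself.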
dataset_info.json DELETED
@@ -1,64 +0,0 @@
-{
-  "citation": "",
-  "description": "",
-  "features": {
-    "id": {
-      "dtype": "string",
-      "_type": "Value"
-    },
-    "image_name": {
-      "dtype": "string",
-      "_type": "Value"
-    },
-    "position": {
-      "dtype": "string",
-      "_type": "Value"
-    },
-    "frame_number": {
-      "dtype": "int32",
-      "_type": "Value"
-    },
-    "pose1_keypoints": {
-      "feature": {
-        "feature": {
-          "dtype": "float32",
-          "_type": "Value"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    },
-    "pose1_num_keypoints": {
-      "dtype": "int32",
-      "_type": "Value"
-    },
-    "pose2_keypoints": {
-      "feature": {
-        "feature": {
-          "dtype": "float32",
-          "_type": "Value"
-        },
-        "_type": "List"
-      },
-      "_type": "List"
-    },
-    "pose2_num_keypoints": {
-      "dtype": "int32",
-      "_type": "Value"
-    },
-    "num_people": {
-      "dtype": "int32",
-      "_type": "Value"
-    },
-    "total_keypoints": {
-      "dtype": "int32",
-      "_type": "Value"
-    },
-    "date_added": {
-      "dtype": "string",
-      "_type": "Value"
-    }
-  },
-  "homepage": "",
-  "license": ""
-}
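The removed dataset_info.json is the serialized feature schema of the old Arrow dataset. For reference, a minimal sketch of the equivalent declaration with the datasets library; `Sequence` is assumed here, and serializes to the same nested Arrow list-of-list layout as the "List" entries above.

```python
# Sketch of the removed schema, declared with the datasets library.
from datasets import Features, Sequence, Value

features = Features({
    "id": Value("string"),
    "image_name": Value("string"),
    "position": Value("string"),
    "frame_number": Value("int32"),
    # Keypoints per athlete: a list of [x, y, confidence] triples.
    "pose1_keypoints": Sequence(Sequence(Value("float32"))),
    "pose1_num_keypoints": Value("int32"),
    "pose2_keypoints": Sequence(Sequence(Value("float32"))),
    "pose2_num_keypoints": Value("int32"),
    "num_people": Value("int32"),
    "total_keypoints": Value("int32"),
    "date_added": Value("string"),
})
```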
state.json DELETED
@@ -1,13 +0,0 @@
-{
-  "_data_files": [
-    {
-      "filename": "data-00000-of-00001.arrow"
-    }
-  ],
-  "_fingerprint": "67ff4e686df11486",
-  "_format_columns": null,
-  "_format_kwargs": {},
-  "_format_type": null,
-  "_output_all_columns": false,
-  "_split": null
-}
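Note: state.json (shard list, fingerprint, format flags) and dataset_info.json are the metadata files that datasets.Dataset.save_to_disk() writes alongside the Arrow shard. Deleting all three removes the serialized Arrow dataset, consistent with the commit's switch to hosting raw videos for ViViT training.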